Automatically deploy Dell OMSA with Puppet

I’m pretty sure that nowadays every sysadmin out there still managing bare metal hardware, and in particular Dell servers, knows and uses OpenManage Server Administrator (OMSA), which is a very nice and convenient piece of software. The problem with OMSA is that it can be a little cumbersome to install, with all its repos and packages (especially on not-officially-supported distros like Ubuntu, Debian or CentOS). Being the Puppet user I am, I was looking for a decent module to install the most recent OMSA version on all my machines, but I didn’t find any.

So, I wrote this brand new Open Manage Server Administrator puppet module!

It sports Ubuntu & CentOS support, granular package install and SNMP integration.
If you want the SNMP integration, make sure that your /etc/snmp/snmpd.conf contains these lines:

view systemview included .1.3.6.1.4.1.674.10892
view systemview included .1.3.6.1.4.1.674.10893
smuxpeer .1.3.6.1.4.1.674.10892.1
smuxpeer .1.3.6.1.4.1.674.10893.1

As I said in the module’s README, you can enable all of these lines through the Puppet snmp module only if you use this PR; otherwise you won’t get the second smuxpeer line (whose OID is the StorageServices one).
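
Once the module is in place, using it from a node definition boils down to something like this (the class name and parameter below are illustrative; check the module’s README for the real interface):

# hypothetical node definition: class and parameter names are illustrative,
# the real ones are documented in the module's README
node 'dell-server.example.com' {
  class { 'omsa':
    snmp => true,  # also manage the snmpd integration
  }
}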

So, after installing the OMSA module with SNMP support, you can, for example, easily check all your hardware with this Zabbix template or, if you are a Nagios user, with this plugin.

Puppet + Vagrant VS librarian-puppet

Only recently (call me old-fashioned) have I started working with Vagrant to test my puppet manifests first, instead of committing everything, pushing to the puppet master and using test environments on live machines (VMs or bare metal, it doesn’t matter).
At the same time, I wanted to solve another outstanding problem I had with puppet: external modules. For months/years I’ve suffered from the pain of using git submodules to integrate external modules with the in-house ones, but enough is enough. Searching for a possible solution, I stumbled upon librarian-puppet which seemed to do everything I needed.

Following Librarian’s installation instructions, I moved my modules/ directory to a separate git repository and created a nice and tidy Puppetfile with all my modules (which at that moment were all internal modules). Nice, everything worked as expected: my modules were installed in my local puppet repo with `librarian-puppet install` and I could even add external modules from the Forge with all the dependencies automagically resolved. Happy unicorns were puking colorful rainbows and everything, buuuut…

What happens when I want to edit some internal module in my new, separated modules repository? And what about adding a new role with a couple of profiles that use external modules (defined in the Puppetfile)? BUMMER.

I have to commit everything in my puppet-modules repo and push it somewhere before testing it in Vagrant, because librarian-puppet doesn’t support raw directories. Moreover, I need to edit my Puppetfile with the new development branch!!
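
Concretely, every module under development would need a Puppetfile entry pointing at the work-in-progress branch, something like this (module name, URL and branch are made up):

# illustrative Puppetfile entry: name, git URL and branch are made up
mod 'webserver_role',
  :git => 'git@git.example.com:puppet/webserver_role.git',
  :ref => 'feature/new-role'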

That’s not fine at all.

But thankfully both Vagrant and Puppet support more than one modules directory at once, so:

  • first, I’ve added back my in-house modules to the original puppet repo, under modules/
  •  then, I’ve set up librarian-puppet to use a different path to install the modules it manages:
librarian-puppet config path modules-contrib --global

(in my puppet repo, remember to gitignore modules-contrib/!)

  • relevant fragment of my Vagrantfile:
    config.vm.provision :puppet do |puppet|
      puppet.manifests_path    = "#{$puppet_dir}/manifests"
      puppet.manifest_file     = "site.pp"
      puppet.module_path       = [ "#{$puppet_dir}/modules", "#{$puppet_dir}/modules-contrib" ]
      puppet.hiera_config_path = "#{$puppet_dir}/hiera.yaml"
    end
    
  • and finally, if, like me, you’re using a puppet master, add this line to puppet.conf:
    modulepath = /etc/puppet/modules:/etc/puppet/modules-contrib

Now, it’s just a matter of running `librarian-puppet install` before provisioning Vagrant, or right after pushing changes to the Puppet master.
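
From then on, the day-to-day loop is roughly this (run from the root of the puppet repo; same idea in /etc/puppet on the master):

# refresh the librarian-managed modules, then test the change in Vagrant
librarian-puppet install
vagrant up --provision        # or just 'vagrant provision' if the VM is already up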

nginx and the too many open files limit

So, nginx is fast, nginx is light, nginx is great but… nginx can be nasty too, with undocumented, unexpected behaviors. What happened today? We were putting into production a new reverse proxy based on nginx 1.4.1, and one of the obvious things you do when putting it into production is to raise the nofile limit from the standard 1024 (at least on Debian). You would expect nginx to inherit those pam_limits numbers but… no!! If you check /proc/$nginx_worker_PID/limits you will see that nofile is still 1024. So, obviously, someone is cheating. Looking at the nginx documentation about nofile you will see that the interesting option worker_rlimit_nofile has no default value, so one would think that it is inherited from the system configuration but, as you have already figured out, that’s not the case. You have to set it explicitly, for example:

worker_rlimit_nofile 100000;

in the main (top-level) context of nginx.conf to get the limit you want. BTW this overrides limits.conf even if the value there is lower, so if you’re using nginx >= 1.4 just tune this configuration option to solve the “too many open files” problem.
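
To double-check that the new limit is actually picked up after a reload, you can inspect a worker process again; a quick sketch:

# grab one nginx worker PID and look at its effective nofile limit
pid=$(pgrep -f 'nginx: worker process' | head -n 1)
grep 'open files' /proc/$pid/limits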

Xen, XFS and the barrier error

Probably nobody else is going to hit this, but since I’ve lost half a morning puzzling over this error, I think it’s worth blogging about it.
We have some old Xen machines running Debian Lenny with Xen 3.2 and, while upgrading a VM with an XFS root partition from Debian Squeeze to Wheezy, I got this nice error:

[ 5.024330] blkfront: xvda1: barrier or flush: disabled
[ 5.024338] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024343] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024360] XFS (xvda1): metadata I/O error: block 0x400a34 ("xlog_iodone") error 5 buf count 3072
[ 5.024369] XFS (xvda1): xfs_do_force_shutdown(0x2) called from line 1007 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_log.c. Return address = 0xffffffffa009fed5
[ 5.024394] XFS (xvda1): Log I/O Error Detected. Shutting down filesystem
[ 5.024401] XFS (xvda1): Please umount the filesystem and rectify the problem(s)
[ 5.024411] XFS (xvda1): xfs_log_force: error 5 returned.
[ 5.024419] XFS (xvda1): xfs_do_force_shutdown(0x1) called from line 1033 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_buf.c. Return address = 0xffffffffa005f8a7

This strange error simply means that the underlying device doesn’t provide barrier support! The simple, quick fix (once you know it) is to add the nobarrier option in fstab. I think I hit this problem because in Linux 3.2 the barrier option is enabled by default for XFS, while it was off in previous kernel versions.
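
For reference, the relevant fstab line ends up looking something like this (the device and the other mount options are just an example):

# /etc/fstab: disable write barriers on the XFS root (illustrative entry)
/dev/xvda1  /  xfs  defaults,nobarrier  0  1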

How to remove a port from a port-channel in a Dell PowerConnect switch

This is a “note to self” type of post, written basically because Google seems unable to find a direct answer to this simple question.
So, it’s as simple as this:


# configure
(config)# interface ethernet NN
(config-if)# no channel-group

et voilà, the Ethernet port no longer belongs to the port channel. It should work with the PowerConnect 5324, PowerConnect 5424, PowerConnect 5448, PowerConnect 5548 etc.
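
If you want to double-check the result afterwards, something along these lines should list the remaining members (the exact show command may vary between PowerConnect models and firmware versions):

(config-if)# exit
(config)# exit
# show interfaces port-channel 1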

Monitor NDB memory usage with Nagios

We are planning to put a MySQL NDB Cluster online soon (more on this in another post), and one thing you have to do before putting anything in production is to monitor it for problems. In the case of an NDB cluster, you should care about monitoring your limited resources – basically because it is an in-memory database – and be alerted when your developers are filling up the dedicated tablespace.

You can do this in two ways: performing a SELECT on the ndbinfo database through the MySQL interface to NDB or parsing the ndb_mgm output. I prefer the latter because maybe I’m using another frontend to the data (native NDB API, memcached etc) and I don’t want to maintain a MySQL server frontend just to check how much space I still have free.
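
For the record, the script parses the memory report you get from the management client; the ndbinfo alternative is a simple SELECT. Roughly (the host name is illustrative, and the ndbinfo columns may vary slightly between versions):

# via the management client (this is the output the Nagios plugin parses)
ndb_mgm -c ndb-mgmd.example.com -e "all report memoryusage"

# or via SQL, if you do keep a MySQL frontend around:
# mysql> SELECT node_id, memory_type, used, total FROM ndbinfo.memoryusage;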

So, you can use this script on GitHub

https://github.com/vide/nagios-ndb/blob/master/check_ndb_usage.sh

to parse its output and know whether your tablespace is OK, warning or critical. Feel free to post any comment, fork it and send patches! :)

/etc/hosts and the thousand-characters-long line

This is a note to self, in case I encounter another strange behaviour like this. We were experiencing a strange problem with MySQL and DNS. I was trying to do this:

$ mysql -h server.mysql
Unknown MySQL server host 'server.mysql' (-1)

but both dig and a normal ping (which in turn uses libc and nsswitch to do the name resolution) were working:

$ dig +short server.mysql
192.168.10.1

$ ping server.mysql
PING server.mysql (192.168.10.1) 56(84) bytes of data.
64 bytes from server.mysql (192.168.10.1): icmp_req=1 ttl=64 time=0.399 ms

and obviously connecting with the MySQL client using the IP address worked. So, what was happening? The smarter amongst you may have already guessed the problem: a very, very long line in /etc/hosts was driving the mysql client crazy (but not ping). Removing the “files” database from the hosts entry in /etc/nsswitch.conf showed where the problem lay, and fixing the bad-ass line fixed the problem.
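
Should this happen again, a quick way to spot the offender is to look for abnormally long lines (the 1024-character threshold is arbitrary):

# print any suspiciously long line in /etc/hosts (the threshold is arbitrary)
awk 'length($0) > 1024 { print FILENAME ": line " NR " is " length($0) " chars" }' /etc/hosts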

Very slow Hadoop on PowerEdge R815

We have a little internal Hadoop cluster for development and testing: two very powerful Dell PowerEdge R815s with Debian and a bunch of Xen VMs to reproduce a production environment. The problem is that the cluster, even with a relatively small amount of data, was sloooow. And when I say slow I mean almost unusable for Hadoop development (a mapreduce job on a small dataset took 5x longer than on the big one in production). Even an insignificant

$ hadoop fs -ls

took more than 4s to list the contents of HDFS. strace was showing tons of wait() syscalls for no apparent reason, while on the production system the same operation takes 1s with no wait() at all.
After trying almost everything (even dropping Xen and running Hadoop on bare metal), I changed a Power Management option in the R815 BIOS by chance. By default it was set to Active Power Controller; changing it to Maximum Performance did the trick! The ls now takes about a second, just like in the production environment. My guess is that the default value (which is some kind of automagical load detection) wasn’t able to see that the machine really needed power when running Hadoop, leaving the CPUs underclocked to save energy. Maximum Performance is probably not so green, but it solved the problem.
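
A quick way to see whether the CPUs are actually being clocked down (and whether the BIOS change took effect) is to look at the current core frequencies; a rough sketch, assuming the usual Linux interfaces are exposed:

# current clock of every core: with Maximum Performance these should sit near the nominal frequency
grep 'cpu MHz' /proc/cpuinfo | sort | uniq -c

# scaling governor, if the kernel (and not only the BIOS) is driving the frequency
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null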

PHP 5.3 max_input_vars and big forms

Starting from PHP 5.3.9 there is a brand new php.ini option: max_input_vars. You can read about it in the PHP documentation. But what you probably don’t know is that if you are using the Suhosin patch (for example if you’re using dotdeb packages), then you need to tweak 2 other variables to increase the max number of POST variables accepted by your PHP.

So, if you want to increase this number to, say, 3000 from the default of 1000, you have to put these lines in your php.ini:


max_input_vars = 3000
suhosin.post.max_vars = 3000
suhosin.request.max_vars = 3000

The other suitable option is to fix your form and make it saner :)
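
You can then verify that all three settings are in place (keep in mind the CLI may read a different php.ini than mod_php or FPM, so check the right SAPI):

# check the effective values; the CLI SAPI may use a different php.ini than Apache
php -i | grep -E 'max_input_vars|suhosin\.(post|request)\.max_vars'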

Apache2: seg fault or similar nasty error detected in the parent process

If you happen to see a message like this

seg fault or similar nasty error detected in the parent process

when reloading Apache2, and if you’re using PHP5 through mod_php5, then it may be related to an extension that is loaded via php.ini but not actually present on the system. That was my case with a redis extension (redis.so), and I banged my head for a day before finding it.
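
A quick way to hunt for this kind of culprit is to compare the extensions referenced by the ini files with what is actually on disk; a rough sketch, assuming Debian’s php5 layout (adjust the paths for your distro):

# flag every extension= entry whose .so file is not in the extension dir
ext_dir=$(php -r 'echo ini_get("extension_dir");')
grep -h '^[[:space:]]*extension[[:space:]]*=' /etc/php5/apache2/php.ini /etc/php5/conf.d/*.ini 2>/dev/null \
  | sed 's/^[[:space:]]*extension[[:space:]]*=[[:space:]]*//' \
  | while read -r so; do
      [ -f "$ext_dir/$so" ] || echo "missing: $ext_dir/$so"
    done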