
nginx and the too many open files limit


So, nginx is fast, nginx is light, nginx is great but… nginx can be nasty too, with undocumented, unexpected behaviors. What happened today? We were putting into production a new reverse proxy based on nginx 1.4.1, and one of the obvious things you do when putting it into production is to raise the nofile limit from the standard 1024 (at least on Debian). You would expect nginx to inherit those little pam_limits numbers but… no!! If you check /proc/$nginx_worker_PID/limits you will see that nofile is still 1024. So, obviously someone is cheating. Looking at the nginx documentation for worker_rlimit_nofile you will see that this interesting option has no default value, so one would think the value would be inherited from the system configuration but, as you have already figured out, it doesn’t work that way. You have to set it explicitly, for example:

worker_rlimit_nofile 100000;

in the main (top-level) context of nginx.conf to get the limit you want. By the way, this overrides limits.conf even if you put a lower value in limits.conf, so if you’re using nginx >= 1.4 just tune this configuration option to solve the “too many open files” problem.
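To verify that the new limit actually reached the workers, check /proc again after reloading nginx. A minimal check could look like this (the PID shown is just an example):

$ pgrep -f "nginx: worker" | head -n 1
12345
$ grep "Max open files" /proc/12345/limits
Max open files            100000               100000               files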

Xen, XFS and the barrier error


Probably nobody else is going to hit this, but since I’ve lost half a morning puzzling over this error, I think it’s worth blogging about it.
We have some old Xen machines running Debian Lenny with Xen 3.2, and while upgrading a VM with an XFS root partition from Debian Squeeze to Wheezy, I got this nice error:

[ 5.024330] blkfront: xvda1: barrier or flush: disabled
[ 5.024338] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024343] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024360] XFS (xvda1): metadata I/O error: block 0x400a34 ("xlog_iodone") error 5 buf count 3072
[ 5.024369] XFS (xvda1): xfs_do_force_shutdown(0x2) called from line 1007 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_log.c. Return address = 0xffffffffa009fed5
[ 5.024394] XFS (xvda1): Log I/O Error Detected. Shutting down filesystem
[ 5.024401] XFS (xvda1): Please umount the filesystem and rectify the problem(s)
[ 5.024411] XFS (xvda1): xfs_log_force: error 5 returned.
[ 5.024419] XFS (xvda1): xfs_do_force_shutdown(0x1) called from line 1033 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_buf.c. Return address = 0xffffffffa005f8a7

This strange error simply means that the underlying device doesn’t provide barrier support! The simple, quick fix (once you know it) is to add the nobarrier option in fstab. I think I only hit this problem now because in Linux 3.2 the barrier option is enabled by default for XFS, while it was off in previous kernel versions.
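For the record, a minimal fstab entry with the workaround could look like this (device, mount point and the other options are just placeholders for your own setup):

/dev/xvda1    /    xfs    defaults,nobarrier    0    1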

How to remove a port from a port-channel in a Dell PowerConnect switch


This is a “note to self” type post, basically because Google seems unable to find a direct answer to this simple question.
So, it’s as simple as this:


# configure
(config)# interface ethernet NN
(config-if)# no channel-group

et voilà, the Ethernet port no longer belongs to the port-channel. It should work on PowerConnect 5324, PowerConnect 5424, PowerConnect 5448, PowerConnect 5548, etc.
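If you want to double-check, you can ask the switch to show the port-channel membership; the exact command and output format vary by model and firmware, so treat the line below as an assumption to verify against your switch’s CLI guide. The port you removed should no longer appear among the members.

# show interfaces port-channel 1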

Monitor NDB memory usage with Nagios


We are planning to put a MySQL NDB Cluster online soon (more on this in another post), and one thing you have to do before putting anything in production is to monitor it for problems. In the case of an NDB cluster, you should take care to monitor your limited resources – basically because it is an in-memory database – and be alerted when your developers are filling up the dedicated tablespace.

You can do this in two ways: performing a SELECT on the ndbinfo database through the MySQL interface to NDB, or parsing the ndb_mgm output. I prefer the latter because I might be using another frontend to the data (native NDB API, memcached, etc.) and I don’t want to maintain a MySQL server frontend just to check how much free space I have left.

So, you can use this script on GitHub

https://github.com/vide/nagios-ndb/blob/master/check_ndb_usage.sh

to parse its output and know if your tablespace is OK, warning or critical. Feel free to post any comment, fork it and send patches! :)
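The core of the check is just a matter of asking the management node for a memory report and comparing the usage percentage against thresholds. A minimal sketch of the idea (not the actual script from the repo, and assuming the management node is reachable with the default connectstring) could look like this:

#!/bin/bash
# Sketch: query the NDB management node for memory usage and compare
# the highest "Data usage" percentage against warning/critical thresholds.
WARN=80
CRIT=90

# "all report memoryusage" prints lines like:
# Node 2: Data usage is 42%(1234 32K pages of total 2936)
USAGE=$(ndb_mgm -e "all report memoryusage" 2>/dev/null \
        | awk '/Data usage/ { sub(/.*is /, ""); sub(/%.*/, ""); if ($0 + 0 > max + 0) max = $0 } END { print max }')

if [ -z "$USAGE" ]; then
    echo "UNKNOWN: could not query ndb_mgm"
    exit 3
elif [ "$USAGE" -ge "$CRIT" ]; then
    echo "CRITICAL: NDB data memory usage at ${USAGE}%"
    exit 2
elif [ "$USAGE" -ge "$WARN" ]; then
    echo "WARNING: NDB data memory usage at ${USAGE}%"
    exit 1
else
    echo "OK: NDB data memory usage at ${USAGE}%"
    exit 0
fi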

/etc/hosts and the thousand-characters-long line


This is a self-note in case I encounter another strange behaviour like this. We were experiencing a strange problem with MySQL and DNS. I was trying to do this:

$ mysql -h server.mysql
Unknown MySQL server host 'server.mysql' (-1)

but both dig and a normal ping (which in turn uses libc and nsswitch to do the name resolution) were working:

$ dig +short server.mysql
192.168.10.1

$ ping server.mysql
PING server.mysql (192.168.10.1) 56(84) bytes of data.
64 bytes from server.mysql (192.168.10.1): icmp_req=1 ttl=64 time=0.399 ms

and obviously connecting with the MySQL client and the IP address worked. So, what was happening? The smarter among you may have already guessed the problem: a very, very long line in /etc/hosts was driving the mysql client crazy (but not ping). Removing the “files” database from the hosts entry in /etc/nsswitch.conf showed where the problem lay, and fixing the bad-ass line fixed the problem.
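For the record, a quick way to exercise the same libc/nsswitch resolution path the MySQL client uses (as opposed to dig, which talks to DNS directly) is getent; the output below is what a healthy setup should print:

$ getent hosts server.mysql
192.168.10.1    server.mysql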

Very slow Hadoop on PowerEdge R815


We have a little internal Hadoop cluster for development and testing: two very powerful Dell PowerEdge R815 with Debian and a bunch of Xen VMs to reproduce a production environment. The problem is that the cluster, even with a relatively small amount of data, was sloooow. And when I say slow I mean almost unusable for Hadoop development (a MapReduce job on a small dataset took 5x longer than on the big one in production). Even an insignificant

$ hadoop fs -ls

took more than 4s to list the contents of HDFS. strace was showing tons of wait() syscalls for no apparent reason, while on the production system the same operation takes 1s with no wait() at all.
After trying almost everything (even dropping Xen and running Hadoop on bare metal), I changed by chance a Power Management option in the R815 BIOS. By default it was set to Active Power Controller; changing it to Maximum Performance did the trick! The ls now takes about a second, just like in the production environment. My guess is that the default value (which is some kind of automagical load detection) wasn’t able to see that the machine really needed power when running Hadoop, leaving the CPUs underclocked to save energy. Maximum Performance is probably not so green, but it solved the problem.
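If you suspect a similar power-management issue, a quick sanity check is to look at the effective CPU clock while the cluster is busy; frequencies far below the nominal one under load are a strong hint. The output here is just illustrative:

$ grep "cpu MHz" /proc/cpuinfo | sort | uniq -c
     48 cpu MHz         : 800.000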

PHP 5.3 max_input_vars and big forms


Starting from PHP 5.3.9 there is a brand new php.ini option: max_input_vars. You can read about it in the PHP documentation. But what you probably don’t know is that if you are using the Suhosin patch (for example if you’re using dotdeb packages), then you need to tweak two other variables to increase the maximum number of POST variables accepted by your PHP.

So, if you want to increase this number from the default of 1000 to, say, 3000, you have to put these lines in your php.ini:


max_input_vars = 3000
suhosin.post.max_vars = 3000
suhosin.request.max_vars = 3000
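To check that the new values are actually picked up, something like this will do (it checks the CLI configuration; your web SAPI may read a different php.ini, so double-check there with phpinfo()):

$ php -i | grep -E "max_input_vars|suhosin\.(post|request)\.max_vars"
max_input_vars => 3000 => 3000
suhosin.post.max_vars => 3000 => 3000
suhosin.request.max_vars => 3000 => 3000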

The other sensible option is to fix your form and make it saner :)
