nginx and the too many open files limit

So, nginx is fast, nginx is light, nginx is great but… nginx can be nasty too, with undocumented, unexpected behaviors. What happened today? We were putting into production a new reverse proxy based on nginx 1.4.1, and one of the obvious things you do when putting it into production is to raise the nofile limit from the standard 1024 (at least on Debian). So, you expect that nginx will inherit those pam_limits numbers but… no!! If you check /proc/$nginx_worker_PID/limits you will see that nofile is still 1024. So, obviously someone is cheating. Looking at the nginx documentation for worker_rlimit_nofile you will see that this interesting option has no default value, so one would think the value would be inherited from the system configuration but, as you have already figured out, it’s not that way. You have to set it explicitly, for example:

worker_rlimit_nofile 100000;

in the main context of nginx.conf to get the limit you want. BTW this overrides limits.conf even if you put a lower value in limits.conf, so if you’re using nginx >= 1.4 just tune this configuration option to solve the “too many open files” problem.
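
To double-check that the worker processes actually picked up the new limit, a quick sketch (the pgrep pattern may need adjusting to your setup):

$ for pid in $(pgrep -f 'nginx: worker'); do grep 'open files' /proc/$pid/limits; done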

Xen, XFS and the barrier error

Probably nobody is going to hit this but since I’ve lost half a morning puzzling my mind about this error, I think it would be useful to blog about it.
We have some old Xen machines running Debian Lenny with Xen 3.2, and while upgrading a VM with an XFS root partition to Debian Wheezy (it was running Squeeze), I got this nice error:

[ 5.024330] blkfront: xvda1: barrier or flush: disabled
[ 5.024338] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024343] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024360] XFS (xvda1): metadata I/O error: block 0x400a34 ("xlog_iodone") error 5 buf count 3072
[ 5.024369] XFS (xvda1): xfs_do_force_shutdown(0x2) called from line 1007 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_log.c. Return address = 0xffffffffa009fed5
[ 5.024394] XFS (xvda1): Log I/O Error Detected. Shutting down filesystem
[ 5.024401] XFS (xvda1): Please umount the filesystem and rectify the problem(s)
[ 5.024411] XFS (xvda1): xfs_log_force: error 5 returned.
[ 5.024419] XFS (xvda1): xfs_do_force_shutdown(0x1) called from line 1033 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_buf.c. Return address = 0xffffffffa005f8a7

This strange error simply means that the underlying device doesn’t provide barrier support! The simple, quick fix (once you know it) is to specify the nobarrier option in fstab. I think I ran into this problem because in Linux 3.2 the barrier option is enabled by default, while it was off in previous kernel versions.
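
For reference, the relevant fstab line would look roughly like this (the device and the other mount options are illustrative):

/dev/xvda1  /  xfs  defaults,nobarrier  0  1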

Very slow Hadoop on PowerEdge R815

We have a little internal Hadoop cluster for development and testing: two very powerful Dell PowerEdge R815 machines with Debian and a bunch of Xen VMs to reproduce a production environment. The problem is that the cluster, even with a relatively small amount of data, was sloooow. And when I say slow I mean almost unusable for Hadoop development (a MapReduce job on a small dataset took 5x longer than on the big cluster in production). Even an insignificant

$ hadoop fs -ls

took more than 4s to list the contents of HDFS. strace was showing tons of wait() syscalls for no apparent reason, while on the production system the same operation takes 1s with no wait() at all.
After trying almost everything (even dropping Xen and running Hadoop on the bare metal), I changed by chance a Power Management option in the R815 BIOS. By default it was set to Active Power Controller. Changing it to Maximum Performance did the trick! The ls now takes about a second, just like in the production environment. My guess is that the default value (which is some kind of automagical load detection) wasn’t able to see that the machine really needed power when running Hadoop, leaving the CPUs underclocked to save energy. Maximum Performance is probably not so green, but it solved the problem.
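
If you suspect the same issue, a quick way to see whether the cores are being clocked down is to look at the CPU frequencies while a job runs (the sysfs path assumes a cpufreq driver is loaded):

$ grep MHz /proc/cpuinfo
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq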

PHP 5.3 max_input_vars and big forms

Starting from PHP 5.3.9 there is a brand new php.ini option: max_input_vars. You can read about it in the PHP documentation. But what you probably don’t know is that if you are using the Suhosin patch (for example if you’re using dotdeb packages), then you need to tweak 2 other variables to increase the maximum number of POST variables accepted by your PHP.

So, if you want to increase this number to, say, 3000 from the default of 1000, you have to put these lines in your php.ini:


max_input_vars = 3000
suhosin.post.max_vars = 3000
suhosin.request.max_vars = 3000
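
To verify that the new values were picked up (keeping in mind the CLI may read a different php.ini than your web SAPI):

$ php -i | grep -E 'max_input_vars|suhosin.(post|request).max_vars'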

The other sensible option is to fix your form and make it saner :)

*fdisk and the 1.5TB partition size limit

If you have a large volume (like a disk array or a next-generation SATA disk) and you’re trying to create a single, giant partition for whatever reason, you should know that fdisk (for DOS compatibility reasons, I suppose) cannot create partitions bigger than ~1.5TB, although it won’t throw any error or complain. So if you want to create a bigger partition, use parted (or one of its frontends). The limitation applies to fdisk, cfdisk and the whole *fdisk family.

EDIT: in parted you have to change the partition table type to something like gpt, or you still won’t be able to create a partition bigger than 1.5TB.
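
For reference, creating the GPT label and the partition in parted goes roughly like this (the exact mkpart syntax varies between parted versions):

server:~# parted /dev/mapper/mpath1
(parted) mklabel gpt
(parted) mkpart primary 0 6000GB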
Nonetheless, once the very large partition is created, I still haven’t found a way to format it, mount it and get all my terabytes. I’m still stuck with 1.5TB. Look at this:

server:~# parted /dev/mapper/mpath1
GNU Parted 1.7.1
Using /dev/mapper/mpath1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Disk /dev/mapper/mpath1: 6000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  6000GB  6000GB  ext2
server:~# df -h |grep mpath1p1
/dev/mapper/mpath1p1 1.5T 5.1M 1.5T 1% /mnt/logs

I’m stuck with this. Any idea, dear lazyweb?

FreeBSD6 and nfsd gotchas

If you’re using FreeBSD 6 as an NFS server, you may find these quick tips about /etc/exports syntax useful, because otherwise you will be stuck with a generic

mountd[321]: bad exports list

in your logs.
So, what went wrong here?
One possible cause is that in your exports file you’re trying to export a symlink instead of a real directory as an NFS share. NFS doesn’t like that at all and will simply not work.
Another curious glitch I found is that if you have two resources on two separate lines with the same options, the latter will fail.
Example of /etc/exports:

/path/share1
/path/share2 -network 192.168.1.0
/path/share3 -network 192.168.1.0

in this case share1 and share2 will work, while share3 won’t work and you’ll get a

mountd[321]: can't change attributes for /path/share3
mountd[321]: bad exports list line /path/share3

but if you change the network value in share3 (and only that), it will work!
Maybe there’s an explanation for this (I didn’t read the whole exports(5) manpage), but anyway it’s a little bit strange.
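A possible explanation (my assumption, not verified against the full exports(5) manpage): FreeBSD allows only one export line per filesystem and network pair, so if share2 and share3 live on the same filesystem they would have to be listed on a single line, like this:

/path/share2 /path/share3 -network 192.168.1.0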

Using Wanem to simulate a wide-area network

If you (or your company) are in the web-development business, one thing you need when testing your application, besides trying it in different browsers, is trying the user experience at different connection speeds.

This can be achieved easily with Wanem, a live CD that lets you create a gateway for your test computer which slows down the network experience (or introduces errors, jitter, random disconnections). Wanem uses the almighty iproute2 tools (more specifically, tc) to accomplish its tasks.
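
For the curious, the kind of rule Wanem sets up under the hood looks roughly like this tc/netem sketch (not Wanem’s literal commands; the interface and numbers are illustrative):

# 200ms delay with 20ms jitter and 1% packet loss on eth0
tc qdisc add dev eth0 root netem delay 200ms 20ms loss 1%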

The easiest way to try Wanem is to boot it in a virtual environment (for example, VMware Server, for easy management). Its use is quite straightforward, so I won’t nag you here and I’ll just invite you to read the documentation the project provides, but let me point out a couple of things/errors I found while using it (I’m going to report them to the developers as well):

  • The remote administration interface, which is basically a web page, is written in such bad HTML that it only works with Internet Explorer 6. Firefox will render a completely useless mess instead of the simple, plain HTML table it is supposed to be. I can’t understand how this is still possible in 2007 from people using Linux (it’s a Knoppix-based live CD!). So, be careful with the advanced settings.
  • Even if you specify a static IP address on startup (after all it’s meant to be a gateway, so DHCP is almost useless), there will always be a “pump” (DHCP client) process active in memory, resetting your IP from time to time if you are in a DHCP’ed environment. To solve this, you have to do a couple of tricks, because by default Wanem only gives you access to a limited control shell.
    So, access Wanem from a remote ssh with these options:

    ssh perc@$WANEM_IP -t /bin/bash

    and enter the password you created at boot time. Now you’re in the live CD and you may try to kill the pump process. But you can’t, since you don’t have enough permissions! And sudo/su ask you for a nonexistent root password. The solution is the “dosu” executable found in the home directory you’ve just landed in. In short:

    dosu killall pump

Keepalived and TCP_CHECK problem

Today I was debugging a problem with keepalived not noticing that a real server behind a virtual IP it manages had died.

The problem was really strange because the check was very, very simple:

real_server 192.168.1.65 3306
{
    TCP_CHECK
    {
        connect_port 3306
        bindto 192.168.1.65
        connect_timeout 2
    }
}

This configuration was created after reading the keepalived.conf man page, which mentions these 3 options for TCP_CHECK without going into deeper detail. So I assumed that bindto IPADDR indicates which IP address to connect to for the check. But I was wrong, because with this configuration, if the real server behind dies, keepalived doesn’t notice anything at all. This is because the “bindto” option, I guess, is used to choose which local (to the LVS director) IP address to bind to when checking the external IP:port.
So, changing the configuration to look like this:


real_server 192.168.1.65 3306
{
    TCP_CHECK
    {
        connect_port 3306
        connect_timeout 2
    }
}

fixed the problem. Keepalived is a great product and works quite well, but its documentation is a bit disappointing.
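
As a sanity check after a change like this, you can watch the LVS table on the director and verify that a dead real server actually disappears (assuming ipvsadm is installed):

$ watch ipvsadm -L -n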

Problems with SATA disks and Dell PE SC1435: how to fix

We have got a couple of Dell PowerEdge SC1435 (dual Opteron) machines with an lspci output like this:


00:01.0 PCI bridge: Broadcom HT1000 PCI/PCI-X bridge
00:02.0 Host bridge: Broadcom HT1000 Legacy South Bridge
00:02.1 IDE interface: Broadcom HT1000 Legacy IDE controller
00:02.2 ISA bridge: Broadcom HT1000 LPC Bridge
00:03.0 USB Controller: Broadcom HT1000 USB Controller (rev 01)
00:03.1 USB Controller: Broadcom HT1000 USB Controller (rev 01)
00:03.2 USB Controller: Broadcom HT1000 USB Controller (rev 01)
00:04.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)
00:07.0 PCI bridge: Broadcom Unknown device 0140 (rev a2)
00:08.0 PCI bridge: Broadcom Unknown device 0142 (rev a2)
00:09.0 PCI bridge: Broadcom Unknown device 0144 (rev a2)
00:0a.0 PCI bridge: Broadcom Unknown device 0142 (rev a2)
00:0b.0 PCI bridge: Broadcom Unknown device 0144 (rev a2)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express (rev 21)
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express (rev 21)
03:0d.0 PCI bridge: Broadcom HT1000 PCI/PCI-X bridge (rev c0)
03:0e.0 IDE interface: Broadcom BCM5785 (HT1000) PATA/IDE Mode

it may happen that, when there is disk activity, the SATA disk just disconnects, causing the processes using the disk to freeze for 30-60 seconds. The output in /var/log/messages may look something like this:


ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x40000000 action 0x2 frozen
ata1.00: cmd ec/00:00:00:00:00/00:00:00:00:00/00 tag 0 cdb 0x0 data 512 in
res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
ata1: port is slow to respond, please be patient (Status 0xd0)
ata1: soft resetting port
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.00: configured for UDMA/133
ata1: EH complete
SCSI device sda: 312500000 512-byte hdwr sectors (160000 MB)
sda: Write Protect is off
sda: Mode Sense: 00 3a 00 00
SCSI device sda: write cache: enabled, read cache: enabled, doesn't support DPO or FUA

The solution is to put

pci=noacpi

in your Grub/Lilo configuration as a parameter to the kernel you’re using. I’ve experienced this problem with kernels 2.6.18 and 2.6.20, both 32 and 64 bit.
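
For example, with GRUB legacy the kernel line in menu.lst would look roughly like this (kernel version and root device are illustrative):

kernel /boot/vmlinuz-2.6.18-5-amd64 root=/dev/sda1 ro pci=noacpi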

EDIT:

I spoke too early: it seems the trick doesn’t work, so here we are again with this SATA problem on these machines. Any ideas from the web?