Xen, XFS and the barrier error

Probably nobody else is going to hit this, but since I’ve lost half a morning puzzling over this error, I think it’s worth blogging about it.
We have some old Xen machines running Debian Lenny with Xen 3.2, and while upgrading a VM with an XFS root partition from Debian Squeeze to Wheezy, I got this nice error:

[ 5.024330] blkfront: xvda1: barrier or flush: disabled
[ 5.024338] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024343] end_request: I/O error, dev xvda1, sector 4196916
[ 5.024360] XFS (xvda1): metadata I/O error: block 0x400a34 (“xlog_iodone”) error 5 buf count 3072
[ 5.024369] XFS (xvda1): xfs_do_force_shutdown(0x2) called from line 1007 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_log.c. Return address = 0xffffffffa009fed5
[ 5.024394] XFS (xvda1): Log I/O Error Detected. Shutting down filesystem
[ 5.024401] XFS (xvda1): Please umount the filesystem and rectify the problem(s)
[ 5.024411] XFS (xvda1): xfs_log_force: error 5 returned.
[ 5.024419] XFS (xvda1): xfs_do_force_shutdown(0x1) called from line 1033 of file /build/linux-s5x2oE/linux-3.2.46/fs/xfs/xfs_buf.c. Return address = 0xffffffffa005f8a7

This strange error simply means that the underlying device doesn’t provide barrier support! The simple, quick fix (once you know it) is to add the nobarrier option to the filesystem’s entry in /etc/fstab. I think I only ran into this now because in Linux 3.2 the barrier option is enabled by default, while it was off in previous kernel versions.
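
For reference, this is what the fstab entry looks like with the option added (the device, mount point and other options here are just an example, adapt them to your setup):

/dev/xvda1    /    xfs    defaults,nobarrier    0    1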

Very slow Hadoop on PowerEdge R815

We have a little internal Hadoop cluster for development and testing: two very powerful Dell PowerEdge R815 servers running Debian and a bunch of Xen VMs to reproduce a production environment. The problem was that the cluster, even with a relatively small amount of data, was sloooow. And when I say slow I mean almost unusable for Hadoop development (a MapReduce job on a small dataset took 5x longer than the same job on the much bigger dataset in production). Even an insignificant

$ hadoop fs -ls

took more than 4 seconds to list the contents of HDFS. strace showed tons of wait() syscalls for no apparent reason, while on the production system the same operation takes about 1 second with no wait() at all.
After trying almost everything (even dropping Xen and running Hadoop on the bare metal), I changed by chance a Power Management option in the R815 BIOS. By default it was set to Active Power Controller; changing it to Maximum Performance did the trick! The ls now takes about a second, just like in the production environment. My guess is that the default value (some kind of automagic load detection) wasn’t able to see that the machine really needed power when running Hadoop, leaving the CPUs underclocked to save energy. Maximum Performance is probably not so green, but it solved the problem.
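
If you suspect the same issue, a quick sanity check is to look at the actual CPU clock while the cluster is busy; this assumes the cpufrequtils package is installed (reading /proc/cpuinfo works without it):

# grep MHz /proc/cpuinfo | sort | uniq -c
# cpufreq-info -p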

HOWTO: install puppet-dashboard on Debian Squeeze

This should apply to Ubuntu Server as well (10.10, 11.04), but it’s tested to work 100% on Debian Squeeze 6.0.
Puppet Dashboard is a neat piece of software, really useful if you are managing a good number of hosts with Puppet.

First of all, install the required deps:

# aptitude install ruby rake dbconfig-common libdbd-mysql-ruby mysql-client rubygems libhttpclient-ruby1.8

you’ll probably have lots of them already installed if you are running the Puppet master on the same host (which, by the way, is not mandatory).
Then, download and install the deb package:

# wget http://downloads.puppetlabs.com/dashboard/puppet-dashboard_1.2.0-1_all.deb
# dpkg -i puppet-dashboard_1.2.0-1_all.deb

Enable the daemon by editing the default file /etc/default/puppet-dashboard, then customize your database definition by editing /etc/puppet-dashboard/database.yml, which should look something like this:

production:
  database: puppet_dashboard
  host: your.database.host
  username: puppet_dashboard
  password: secret_password
  encoding: utf8
  adapter: mysql

if you plan to use MySQL as a backend. Remember to create the database and grant the appropriate privileges to the user:

GRANT ALL PRIVILEGES ON puppet_dashboard.* TO 'puppet_dashboard'@'%' IDENTIFIED BY 'secret_password';
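
If the database doesn’t exist yet, something along these lines (run on your MySQL server) creates it with the right encoding:

CREATE DATABASE puppet_dashboard CHARACTER SET utf8;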

Now we have to populate the database, the Rails way:

# cd /usr/share/puppet-dashboard/
# rake RAILS_ENV=production db:migrate

Now you can start /etc/init.d/puppet-dashboard and /etc/init.d/puppet-dashboard-workers, and you should already be able to access http://your-host.yourdomain.tld:3000 and see the Puppet Dashboard.
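
That is, using the init scripts shipped with the package:

# /etc/init.d/puppet-dashboard start
# /etc/init.d/puppet-dashboard-workers start
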
You just have to do two more things before you can see any actual data in it: enable report sending on the Puppet clients and tell the Puppet master to push those reports to the Dashboard via HTTP.

So, edit /etc/puppet/puppet.conf on the clients (I suggest you do it via Puppet if you do not already have this setting in place) and add

[agent]
# ... whatever you already have
report=true

and on the Master side

[master]
# ... whatever you already have
reports = store, http
reporturl = http://your-host.yourdomain.tld:3000/reports/upload
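
For the new report settings to take effect, restart the Puppet master (the agents will pick up report=true at their next run); on a stock Debian install that’s usually:

# /etc/init.d/puppetmaster restart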

That’s it!

Disable directory listing in Apache with Debian

If you find one of your servers with the ugly directory listing enabled, there’s a quick way to disable it on Debian:

# echo autoindex | a2dismod
# /etc/init.d/apache2 restart

For Apache installations on other distros, you can simply find the directory listing option (the Indexes option handled by mod_autoindex) in your config file, remove it manually and restart Apache.
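
For example, a snippet like this in the relevant vhost or <Directory> block turns listings off (just a sketch assuming a fairly standard Apache layout; adjust the path to your DocumentRoot):

<Directory /var/www/>
    # dropping Indexes (or using -Indexes) disables directory listing
    Options -Indexes FollowSymLinks
</Directory>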

EDIT: as the comments section points out, a cleaner and more elegant way to achieve the same is

# a2dismod autoindex

thanks :)

HOWTO: Poor man’s VPN in Debian/Ubuntu with OpenSSH

If you are managing a remote Linux network and you are tired of NATting or two ssh hops to enter a remote server, but OpenVPN poses too much overhead, you can use ssh tunneling to easily create a workstation-to-site VPN.
I’ve tested this with Ubuntu 9.10 Karmic Koala as the workstation and Debian 5.0 Lenny as the server, but it should work identically with older Ubuntu and Debian (both server or workstation).

I’ve been inspired by these two tutorials; neither worked 100% for me, but joining the pieces did the trick, so here I am :)

Software prerequisites:

  • Standard Debian or Ubuntu
  • openssh-server on the remote side of the VPN
  • openssh-client on the local side of the VPN (your PC)

Network configuration (as an example)

  • Workstation LAN: 192.168.0.0/24
  • Server LAN: 192.168.10.0/24 on eth1
  • VPN: 10.1.0.0/24
  • Remote server public address: 1.2.3.4 on eth0

First of all, on the workstation generate a dedicated key (it must be a dedicated one because the server decides whether you are bringing up a tunnel based on the key you use to connect) with

# ssh-keygen -f /root/.ssh/VPNkey -b 2048

Now edit /etc/network/interfaces and create a new stanza like this one (remember to change the IP addresses according to your own network configuration)

iface tun0 inet static
# note: the whole pre-up ssh command below goes on a single line
pre-up ssh -i /root/.ssh/VPNkey -S /var/run/ssh-vpn-tunnel-control -M -f -w 0:0 1.2.3.4 true
pre-up sleep 5
address 10.1.0.2
pointopoint 10.1.0.1
netmask 255.255.255.0
up route add -net 192.168.10.0 netmask 255.255.255.0 gw 10.1.0.1 tun0
post-down ssh -i /root/.ssh/VPNkey -S /var/run/ssh-vpn-tunnel-control -O exit 1.2.3.4

Just a couple of notes: address is your local VPN endpoint (your workstation), while pointopoint is the remote VPN address (your server); these are the tunnel’s two endpoints.

Now let’s go to the server.

Edit /etc/ssh/sshd_config, add the line
PermitTunnel point-to-point

and restart your sshd instance.
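
On Debian that’s simply:

# /etc/init.d/ssh restart
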
Now edit (or create) /root/.ssh/authorized_keys (remember, we are on the server now, not your workstation) and add a line like

tunnel="0",command="/sbin/ifdown tun0; /sbin/ifup tun0" ssh-rsa HERE IT GOES YOUR VPNkey.pub FROM YOUR WORKSTATION

Now edit /etc/network/interfaces and add this stanza:

iface tun0 inet static
address 10.1.0.1
netmask 255.255.255.0
pointopoint 10.1.0.2
post-up /sbin/sysctl -w net.ipv4.ip_forward=1
post-up /sbin/iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -o eth1 -j MASQUERADE
post-down /sbin/iptables -t nat -D POSTROUTING -s 10.1.0.0/24 -o eth1 -j MASQUERADE
post-down /sbin/sysctl -w net.ipv4.ip_forward=0

The post-up and post-down commands enable network sharing between the VPN server endpoint and the remote LAN (this is called masquerading), so you can reach the remote LAN from your workstation and not only the remote server. Obviously you need to instruct your workstation with a dedicated static route to reach the remote LAN, and that is the route add -net line in your workstation config.

Now, bring up the tunnel on the workstation with

# ifup tun0

and you should be able to reach any server on your remote LAN, with traffic secured by OpenSSH encryption.
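
To verify that everything works, from the workstation you can ping the server’s tunnel endpoint and then a host on the remote LAN (addresses follow the example above; 192.168.10.1 is just a hypothetical LAN host):

$ ping -c 3 10.1.0.1
$ ping -c 3 192.168.10.1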

HOWTO: Ethernet bonding in Debian Lenny

In an older post I explained how to create a bond interface in Debian Etch… now, this doesn’t work anymore due to some changes in Lenny.

So, long story short, first of all, install ifenslave

# apt-get install ifenslave-2.6

edit /etc/network/interfaces and add the bond0 config:

auto bond0
iface bond0 inet static
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 192.168.1.1
up /sbin/ifenslave bond0 eth0 eth1
down /sbin/ifenslave -d bond0 eth0 eth1

Now edit /etc/modprobe.d/arch/x86_64 (change the filename depending on your architecture) and add these lines:

alias bond0 bonding
options bonding mode=1 miimon=100 downdelay=200 updelay=200

Brief explanation:

  • miimon N: check whether the slave links are alive every N milliseconds
  • downdelay N: wait N milliseconds after a detected link failure to consider the link down
  • updelay N: wait N milliseconds after a detected link restoration to consider the link up
  • mode N: 1 means active-backup (master/slave) configuration, so only one interface is active at a time. If its link fails, a slave takes over.

For a more complete description of all the possible parameters, refer to the kernel’s Documentation/networking/bonding.txt

After this, you can restart networking (or reboot if you are working remotely) and it should work without a problem. It did for me :)
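
To check that the bond came up and that both slaves were enslaved correctly, you can inspect the status the kernel exposes for the bonding driver:

# cat /proc/net/bonding/bond0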