HOWTO: install puppet-dashboard on Debian Squeeze

This should apply to Ubuntu Server as well (10.10, 11.04) but it’s tested to work 100% on Debian Squeeze 6.0.
Puppet Dashboard is a neat piece of software, really useful if you are managing a good number of hosts with Puppet.

First of all, install the required deps:

# aptitude install ruby rake dbconfig-common libdbd-mysql-ruby mysql-client rubygems libhttpclient-ruby1.8

You’ll probably have lots of them installed already if you are running the Puppet master on the same host (which, by the way, is not mandatory).
Then, download and install the deb package:

# wget http://downloads.puppetlabs.com/dashboard/puppet-dashboard_1.2.0-1_all.deb
# dpkg -i puppet-dashboard_1.2.0-1_all.deb

Enable the daemon by editing the defaults file /etc/default/puppet-dashboard, then customize your database definition by editing /etc/puppet-dashboard/database.yml, which should look something like this:

production:
  database: puppet_dashboard
  host: your.database.host
  username: puppet_dashboard
  password: secret_password
  encoding: utf8
  adapter: mysql

if you plan to use MySQL as a backend. Remember to create the database and grant the appropriate privileges to the user:

CREATE DATABASE puppet_dashboard CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON puppet_dashboard.* TO 'puppet_dashboard'@'%' IDENTIFIED BY 'secret_password';

Now we have to populate the database, the Rails way:

# cd /usr/share/puppet-dashboard/
# rake RAILS_ENV=production db:migrate

Now you can start /etc/init.d/puppet-dashboard and /etc/init.d/puppet-dashboard-workers, and you should already be able to access http://your-host.yourdomain.tld:3000 and see the Puppet Dashboard.
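In case you have never touched these init scripts before, starting both is simply:

# /etc/init.d/puppet-dashboard start
# /etc/init.d/puppet-dashboard-workers start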
You just have to do two more things before you can see any actual data in it: enable report sending in the Puppet clients and tell the Puppet master to push those reports to the Dashboard via HTTP.

So, edit /etc/puppet/puppet.conf on the clients (I suggest you do it via Puppet itself if you do not already have this setting in it) and add

[agent]
# ... whatever you already have
report=true

and on the Master side

[master]
# ... whatever you already have
reports = store, http
reporturl = http://your-host.yourdomain.tld:3000/reports/upload
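The master reads its configuration only at startup, so remember to restart it for the new report settings to take effect (assuming the stock Debian puppetmaster init script):

# /etc/init.d/puppetmaster restart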

That’s it!


HOWTO: Poor man’s VPN in Debian/Ubuntu with OpenSSH

If you are managing a remote Linux network and you are tired of NAT or two SSH hops to reach a remote server, but OpenVPN poses too much overhead, you can use SSH tunneling to easily create a workstation-to-site VPN.
I’ve tested this with Ubuntu 9.10 Karmic Koala as the workstation and Debian 5.0 Lenny as the server, but it should work identically with older Ubuntu and Debian (both server or workstation).

I’ve been inspired by a couple of other tutorials; neither worked 100% for me, but joining the pieces did the trick, so here I am :)

Software prerequisites:

  • Standard Debian or Ubuntu
  • openssh-server on the remote side of the VPN
  • openssh-client on the local side of the VPN (your PC)

Network configuration (as an example)

  • Workstation LAN: 192.168.0.0/24
  • Server LAN: 192.168.10.0/24 on eth1
  • VPN: 10.1.0.0/24
  • Remote server public address: 1.2.3.4 on eth0

First of all, on the workstation generate a dedicated key (it has to be dedicated, because the server will recognize that you are going to bring up a tunnel based on the key you use to connect) with

# ssh-keygen -f /root/.ssh/VPNkey -b 2048

Now edit /etc/network/interfaces and create a new stanza like this one (remember to change the IP addresses according to your network configuration):

iface tun0 inet static
# from pre-up to true on the same line
pre-up ssh -i /root/.ssh/VPNkey -S /var/run/ssh-vpn-tunnel-control -M -f -w 0:0 1.2.3.4 true
pre-up sleep 5
address 10.1.0.2
pointopoint 10.1.0.1
netmask 255.255.255.0
up route add -net 192.168.10.0 netmask 255.255.255.0 gw 10.1.0.1 tun0
post-down ssh -i /root/.ssh/VPNkey -S /var/run/ssh-vpn-tunnel-control -O exit 1.2.3.4

Just a couple of notes: address is your local VPN endpoint address (your workstation) while pointopoint is the remote VPN address (your server); these are the two endpoints of the tunnel.

Now let’s go to the server.

Edit /etc/ssh/sshd_config, add the line
PermitTunnel point-to-point

and restart your sshd instance.
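On Debian, restarting sshd is just a matter of using the stock init script:

# /etc/init.d/ssh restart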
Now edit (or create) /root/.ssh/authorized_keys (remember, we are on the server now, not your workstation) and add a line like

tunnel="0",command="/sbin/ifdown tun0; /sbin/ifup tun0" ssh-rsa HERE IT GOES YOUR VPNkey.pub FROM YOUR WORKSTATION

Now edit /etc/network/interfaces and add this stanza:

iface tun0 inet static
address 10.1.0.1
netmask 255.255.255.0
pointopoint 10.1.0.2
post-up /sbin/sysctl -w net.ipv4.ip_forward=1
post-up /sbin/iptables -t nat -A POSTROUTING -s 10.1.0.0/24 -o eth1 -j MASQUERADE
post-down /sbin/iptables -t nat -D POSTROUTING -s 10.1.0.0/24 -o eth1 -j MASQUERADE
post-down /sbin/sysctl -w net.ipv4.ip_forward=0

The post-up and post-down commands enable network sharing (masquerading) between the VPN server endpoint and the remote LAN, so you can reach the whole remote LAN from your workstation and not only the remote server. Obviously you need to instruct your workstation with a dedicated static route to reach the remote LAN, and that is the route add -net line in your workstation config.

Now, bring up the tunnel on the workstation with
# ifup tun0
and you should be able to reach a remote server on your remote LAN, with traffic secured by OpenSSH encryption.
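As a quick test you can ping the remote tunnel endpoint and then a host on the remote LAN (192.168.10.1 here is just a placeholder, use a real address from your network):

# ping -c 3 10.1.0.1
# ping -c 3 192.168.10.1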

HOWTO: Ethernet bonding in Debian Lenny

In an older post I explained how to create a bond interface in Debian Etch… now, this doesn’t work anymore due to some changes in Lenny.

So, long story short, first of all, install ifenslave

# apt-get install ifenslave-2.6

edit /etc/network/interfaces and add the bond0 config:

auto bond0
iface bond0 inet static
address 192.168.1.2
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 192.168.1.1
up /sbin/ifenslave bond0 eth0 eth1
down /sbin/ifenslave -d bond0 eth0 eth1

Now edit /etc/modprobe.d/arch/x86_64 (change the filename depending on your architecture) and add these lines:


alias bond0 bonding
options bonding mode=1 miimon=100 downdelay=200 updelay=200

Brief explanation:

  • miimon N: check if the active interface(s) is alive every N milliseconds
  • downdelay N: wait N milliseconds after a detected link failure to consider the link down
  • updelay N: wait N milliseconds after a detected link restoration to consider the link up
  • mode N: 1 means active-backup (master/slave) configuration, so only one interface is active at a time. If its link fails, a slave takes over.

For a more complete description of all the possible parameters, refer to the Linux kernel’s Documentation/networking/bonding.txt

After this, you can restart networking (or reboot, if you are working remotely) and it should work without a problem. It did for me :)
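Once the interface is up, you can check which slave is currently active (and the link status of each one) by reading the bonding proc file:

# cat /proc/net/bonding/bond0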

HOWTO: Install MySQL 5.1 for SPARC64 under Debian Lenny

If you happen to own a SPARC64 box, you probably already know that even if the kernel is 64-bit the userland comes from the normal SPARC Debian port, so it is 32-bit. MySQL is no exception, with all the 32-bit limitations – mainly the 4GB-of-RAM-per-process limit.

This is really a PITA, because if you have a SPARC64 box it probably has plenty of RAM and you want to use it at its full potential, without having to mess around with Solaris (yeah, I don’t like it very much, I’m sorry).

This guide covers Mysql 5.1 installation in Debian Lenny, so we have to use SID repositories.


# echo "deb http://ftp.de.debian.org/debian/ sid main" >> /etc/apt/sources.list
# echo "deb-src http://ftp.de.debian.org/debian/ sid main" >> /etc/apt/sources.list

then let’s edit our apt preferences to avoid a massive update on the next dist-upgrade :)

# vim /etc/apt/preferences
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=sid
Pin-Priority: 100

and then update our repo list

# aptitude update
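You can check that the pinning does what we want with apt-cache policy: the stable version should show priority 900 and the sid one priority 100:

# apt-cache policy mysql-server-5.1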

And here we go:

# apt-get build-dep mysql-server-5.1
# mkdir /tmp/mysql-build; cd /tmp/mysql-build
# apt-get source mysql-server-5.1
# vim mysql-dfsg-5.1*/debian/rules

Here we have to tweak the build rules a little, because a couple of things are not going to work by default.

The MAKE_J variable doesn’t work very well here, so you can modify the grep to look for “CPU” instead of “processor”, or you could hardcode it to the number of processors you have. This will make compilation a lot faster.

MAKE_J = -j$(shell if [ -f /proc/cpuinfo ] ; then grep -c "CPU" /proc/cpuinfo ; else echo 1 ; fi)

Then edit the CFLAGS variable, because it is used to compile some libraries that would otherwise ignore the environment variables we are going to set later in this howto.

CFLAGS=$${MYSQL_BUILD_CFLAGS:-"-O3 -DBIG_JOINS=1 -m64 -mcpu=niagara2 ${FORCE_FPIC_CFLAGS}"} \

It should be around line 73. Please note that -m64 is what makes the build 64-bit, so it is mandatory, while the -mcpu flag optimizes the executable for your CPU. In my case it is a niagara2 chip, but you can target another CPU as well; check the GCC documentation for more details.
Save and quit, and then we can start the compilation process:

# export CFLAGS="-m64 -mcpu=niagara2 -O2 -g"
# export CXXFLAGS="-m64 -mcpu=niagara2 -O2 -g"
# export CPPFLAGS="-m64 -mcpu=niagara2 -O2 -g"
# export LDFLAGS="-m64 -mcpu=niagara2 -O2 -g"
# export DEB_BUILD_OPTIONS="nocheck"
# debuild -us -uc --preserve-env

That’s it. After some minutes (depending on your HW), you should have in /tmp/mysql-build all your new DEBs, which you can install with dpkg -i. I advise installing the stock mysql-server-5.1 with aptitude first to get all the dependencies in place, then you can use dpkg with your new DEBs.
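A minimal sketch of that last step (the exact .deb filenames depend on the version you built, so the wildcard here is just an assumption):

# aptitude install mysql-server-5.1   # stock 32-bit packages, only to pull in the dependencies
# cd /tmp/mysql-build
# dpkg -i *.deb                       # overwrite with the freshly built 64-bit packages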

HOWTO: install Transifex with MySQL on Debian Lenny

Transifex is a not-so-well-known open source localization platform, written in Python and running on Django (a Python web framework, if you don’t know it). So, being not so well known, there isn’t a lot of documentation about it, and how to install it under Debian 5 Lenny is almost undocumented. So, here we go.

First of all, you have to install some packages. Luckily Lenny ships a lot of them, although not all of the needed ones:

# aptitude install python-django python-urlgrabber python-setuptools python-pygments python-openid python-markdown python-httplib2
# aptitude install subversion
# aptitude install python-mysqldb
# aptitude install build-essential python-dev

These should be all the Transifex dependencies that are available as deb packages. Now let’s install the remaining ones through easy_install:

# easy_install django-authopenid django-pagination
# easy_install -f http://transifex.org/files/eggs/ contact_form tagging
# easy_install django-notification
# easy_install mercurial

Now the last package, django_evolution, which is, AFAIK, only available as an SVN checkout from Google Code:

# svn checkout http://django-evolution.googlecode.com/svn/trunk /tmp/django-evol
# mv /tmp/django-evol/django_evolution /usr/lib/python2.5/site-packages/
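A quick sanity check that Python can actually find the module after the copy:

# python -c 'import django_evolution'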

Now we can download the Transifex tarball

# cd /tmp && wget http://transifex.org/files/transifex-0.6.tar.gz
# tar xzvf transifex-0.6.tar.gz
# cp -a transifex-0.6/transifex /var/www

Now we have to edit some configuration files located in /var/www/transifex/settings, paying particular attention to the database backend configuration stored in 20-engines.conf. Take this as an example:

DATABASE_ENGINE = 'mysql'
DATABASE_NAME = 'transifex'
DATABASE_USER = 'transifex'
DATABASE_PASSWORD = 'secret_password'
DATABASE_HOST = 'ADDRESS-OF-YOUR-DB'             # Set to empty string for local socket
DATABASE_PORT = '3306'             # Set to empty string for default

Obviously you must create a database (called ‘transifex’ in this example) on your database server and give full permissions to a dedicated user (‘transifex’ with ‘secret_password’ as password in this example). You can do it with these commands in your mysql console:

CREATE DATABASE transifex;
GRANT ALL ON transifex.* to 'transifex'@'%' IDENTIFIED BY 'secret_password';

Now we can run the configuration scripts, located in Transifex’s base dir:

# cd /var/www/transifex
# ./manage.py syncdb
# ./manage.py txcreatedirs
# ./manage.py runserver

Now we can start a server instance listening on address $IPADDRESS and port 8088, and then access it from http://$IPADDRESS:8088 in our web browser. Remember to use nohup if you want to detach it from the console:

# ./manage.py runserver $IPADDRESS:8088
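For example, to keep it running after you log out (nohup as mentioned above; substitute a real IP for $IPADDRESS):

# nohup ./manage.py runserver $IPADDRESS:8088 &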

HOWTO: Debian and SCSI multipathing with multipath-tools

After getting iSCSI working on Debian Etch the next thing to do is to set up multipath to get redundancy in case one path from the SCSI client to the SCSI target fails.

First, let’s dig a bit more in depth into what a path is, what can go wrong and what we can do to prevent it. Usually in a simple iSCSI environment there are two network interfaces dedicated to the remote storage, each one connected to a distinct ethernet switch, and each switch connected to a distinct ethernet interface on the SAN host. There you have two separate controller cards (let’s call them A and B) which connect to the same logical volume (a RAID array... so there redundancy is already covered). I repeat, this is the simplest redundant scenario, in which you get redundancy, good fault-tolerance, and can parallelize the requests from the initiator to the target via round-robin.

HOWTO: the definitive guide to Debian Etch open-iscsi (take 2)

I guess the fact that I’m here writing again on this topic says a lot about that “definitive” I put in the title the first time :) So obviously it was not so definitive, and here we are again with a, I hope, better and improved version.
This time we are going to use the backports repository and the Etch’n’half kernel, because they provide better and far more stable iSCSI support under Debian (Etch).

So, first of all add the backports repository:


echo "deb http://www.backports.org/debian etch-backports main contrib non-free" >> /etc/apt/sources.list

and do some basic stuff:

# aptitude update
# aptitude install debian-backports-keyring
# aptitude update

Now, let’s install the newer 2.6.24 kernel from the Debian Etch’n’half project (note: it is present in the official Debian repository, it doesn’t come from the backports.org one)


# aptitude install linux-image-2.6-amd64-etchnhalf # remove amd64 if you're on x86_32

Now, if you are a Broadcom NetXtreme II user (lsmod | grep bnx2), be careful and remember to install this NEW package before rebooting, or you will have an unpleasant surprise:


# aptitude install firmware-bnx2

This is due to a change in newer Linux versions: the bnx2 firmware is no longer shipped inside the kernel and has been split out into a separate package.

Then reboot, cross your fingers and then install the newer open-iscsi package:

# aptitude install -t etch-backports open-iscsi

Everything should be ok and this time you should have all the config files in the right place, a proper script to mount/unmount iSCSI target devices at boot time and so on…
Anyway, I still prefer the old-school config file, so usually I replace the Debian stock one with something like this:


node.active_cnx = 1
#node.startup = manual
node.startup = automatic
#node.session.auth.username = dima
#node.session.auth.password = aloha
#node.session.timeo.replacement_timeout = 15
node.session.timeo.recovery_timeout = 15
node.session.err_timeo.abort_timeout = 10
node.session.err_timeo.reset_timeout = 30
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Wait = 0
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.MaxConnections = 0
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.DataDigest = None
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536

One of those parameters (the recovery/replacement timeout) deserves a note: it is the timeout after which an iSCSI device is considered dead, and thus that path discarded (we’ll talk about paths later).

So, time to discover new devices now:

# /etc/init.d/open-iscsi restart
# iscsiadm -m discovery -t sendtargets -p $SAN_IP_ADDRESS
# /etc/init.d/open-iscsi restart

Check your dmesg output and look for new /dev/sdX devices.
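You can also list the active sessions directly with iscsiadm:

# iscsiadm -m session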
Some partitioning and formatting later, you can edit your fstab with something like this


/dev/sdb1 /mnt/files ext3 defaults,auto,_netdev 0 0

and you should be done!
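If you don’t want to reboot, create the mount point from the fstab line above and mount it right away:

# mkdir -p /mnt/files
# mount /mnt/files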