After getting iSCSI working on Debian Etch, the next thing to do is to set up multipathing, to get redundancy in case one path from the iSCSI initiator to the target fails.
First, let’s dig a bit more in depth into what a path is, what can go wrong and what we can do to prevent it. In a simple iSCSI environment there are usually two network interfaces dedicated to the remote storage, each one connected to a distinct ethernet switch, and each switch connected to a distinct ethernet interface on the SAN. On the SAN side there are two separate controller cards (let’s call them A and B) which connect to the same logical volume (a RAID array, so redundancy at the disk level is already covered). I repeat, this is the simplest redundant scenario: it gives you redundancy and good fault tolerance, and lets you parallelize requests from the initiator to the target via round-robin.
So, let’s imagine we have configured both interfaces (and all the needed connections) on our server (the initiator) and we send an iSCSI discovery request on both interfaces:
:~# iscsiadm -m discovery -t sendtargets -p 172.16.1.10
172.16.1.10:3260,1 iqn.2002-10.com.infortrend:raid.sn7612996.101
:~# iscsiadm -m discovery -t sendtargets -p 172.16.11.10
172.16.11.10:3260,1 iqn.2002-10.com.infortrend:raid.sn7612961.112
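Note that discovery only lists the available targets; the kernel won’t see any disks until we log in to each portal. A minimal sketch using the IQNs returned above (your setup may need authentication options too):
:~# iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn7612996.101 -p 172.16.1.10:3260 --login
:~# iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn7612961.112 -p 172.16.11.10:3260 --login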
As said, both interfaces are connected to the exact same data volume(s), because we want some fault tolerance in case one path fails. But wait… what’s happening in kernel land?
:~# dmesg|grep "SCSI device"| grep -v sda # sda is the local disk
SCSI device sdb: 1638400000 512-byte hdwr sectors (838861 MB)
SCSI device sdb: drive cache: write back
SCSI device sdb: 1638400000 512-byte hdwr sectors (838861 MB)
SCSI device sdb: drive cache: write back
SCSI device sdc: 11717947392 512-byte hdwr sectors (5999589 MB)
SCSI device sdc: drive cache: write back
SCSI device sdc: 11717947392 512-byte hdwr sectors (5999589 MB)
SCSI device sdc: drive cache: write back
SCSI device sdd: 1638400000 512-byte hdwr sectors (838861 MB)
SCSI device sdd: drive cache: write back
SCSI device sdd: 1638400000 512-byte hdwr sectors (838861 MB)
SCSI device sdd: drive cache: write back
SCSI device sde: 11717947392 512-byte hdwr sectors (5999589 MB)
SCSI device sde: drive cache: write back
SCSI device sde: 11717947392 512-byte hdwr sectors (5999589 MB)
SCSI device sde: drive cache: write back
We have two volumes exported from the SAN, but our server is detecting four, or to be precise, two pairs of identical volumes. This is quite normal: we are exporting the same volumes on every path, so our initiator sees each of them once per path and creates a distinct block device each time. One solution could be to mount sdb and sdc and then, if something goes wrong, manually mount sdd and sde on the same mount points. But obviously this is something we should avoid, because it would create unwanted downtime. So, you need multipath.
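If you want to check by hand which devices are actually the same volume, you can compare their SCSI WWIDs, which is exactly the identifier multipath uses to group paths. A quick sketch using the scsi_id invocation of the Etch/udev era (the flags changed in later versions):
:~# /sbin/scsi_id -g -u -s /block/sdb
:~# /sbin/scsi_id -g -u -s /block/sdd
If the two commands print the same identifier, sdb and sdd are two paths to the same volume.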
In Debian, you can install it with a simple:
aptitude install multipath-tools
and get a very basic configuration by editing /etc/multipath.conf with something like this:
blacklist {
        devnode "sda"
}
defaults {
        user_friendly_names yes
}
Restart the multipath-tools service and you’ll get your new devices as /dev/mapper/mpath*. These are absolutely ordinary block devices, so you can partition, format and mount them as if they were normal local disks.
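For example, on Etch that means something like this (assuming the first volume came up as mpath0 and that /mnt/storage exists; the mpath names depend on detection order):
:~# /etc/init.d/multipath-tools restart
:~# multipath -ll
:~# mkfs.ext3 /dev/mapper/mpath0
:~# mount /dev/mapper/mpath0 /mnt/storage
multipath -ll lists the multipath maps and the state of every path behind them, which is a handy sanity check before formatting anything.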
When a path fails, multipathd will automatically exclude it from the dispatching algorithm, and you won’t even notice the failure happened.
As a final side note, remember that you can use multipath with any number of block devices, and they don’t have to be iSCSI devices: it could be a DAS with redundant controllers as well, for example (the Dell M3000 comes to mind).
How do I set up a path between two SCSI devices and purposely break it, for a demo of multipathing?
@gaurav: personally I do an “ifdown ethX” where ethX is one of the NICs connected to the SAN
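Something like this, assuming eth1 is one of the NICs connected to the SAN, lets you watch a path fail and come back:
:~# ifdown eth1
:~# multipath -ll # the paths going through eth1 should now be marked failed
:~# ifup eth1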
Thanks for this helpful blog post.
I have a question though if you don’t mind. Does anything need to be done on the target end to allow for multipathing? Or is it just a matter of installing 2 NICs on the target (each with a different IP in a different subnet)?
Thanks
It seems like your “SAN” is an iSCSI target linux box, yes?
I don’t understand why ethernet bonding wouldn’t achieve the same aim without all the hassle of multipath and the duplicate iSCSI config.
If you were using an off-the-shelf iSCSI appliance that would be different but I’d be surprised if any iSCSI appliance didn’t also use NIC bonding for the same reason.