[aklug] Re: OK, akluggers, riddle me this (FC SAN/switching question)

From: Arthur Corliss <acorliss@nevaeh-linux.org>
Date: Fri May 07 2010 - 09:26:53 AKDT

On Thu, 6 May 2010, adam bultman wrote:

> Well, these servers are HP Bladeservers - so I automatically have a port
> on each switch directly connected. The 'new' servers I'm getting will
> have external switches, which makes things more complicated.
> As for the number of switches - it's for redundancy purposes. If I have
> all my servers on one switch, and the switch resets, dies, etc. - I'm
> toast. If I have multipathing, and two switches, I'll suffer a path
> loss, but I'll still be OK. (Correct me if I misinterpreted what you
> were saying.)

Gotcha, should have seen that one coming. I've got IBM BladeCenters over
here with FC switches (in the process of moving from Qlogic to Cisco so I
can enable ISL trunking to my core switches). I understand why you want two
switches, I just didn't understand the need for four. Sounds like your new
servers must be standalone units. Dual fabrics are the only way to mesh.
:-)

> Network Appliance's clustering is Active-Active, so all paths are
> active. A particular head will 'own' a volume, LUN, etc, but accesses
> made to the "wrong" head will go through the cluster interconnect to the
> 'right head'. Accesses made to the 'wrong' head give you a warning on
> the SAN, but Linux, VMware, etc. know how to choose the preferred path
> so as to avoid the wrong head unless the "owner" head stops responding. If
> the "owner" head dies, the partner head takes over those luns. (Also,
> the two cluster nodes mirror each other's NVRAM, so you don't lose any
> writes or data.)
> (I have an HP MSA array that is 'active/active', but only because they
> sandwiched two active/passive controllers together. It doesn't work
> well at all.)

Good deal. You should be able to round-robin I/O through both HBAs to
improve throughput. Note that the extra paths on each individual HBA buy
you redundancy, not performance, but using both HBAs should increase your
throughput.
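
For what it's worth, the device-mapper side of that usually comes down to a
stanza like the following in /etc/multipath.conf. This is a sketch from
memory, not gospel: the NETAPP vendor/product strings and the ontap prio
callout are what I'd expect for a NetApp box on RHEL5-era multipath-tools,
so check them against the output of "multipath -ll" and your array docs
before trusting it.

    devices {
        device {
            # Assumed NetApp identifiers; use what your LUNs actually report.
            vendor               "NETAPP"
            product              "LUN"
            # Group paths by priority: the owner head's paths (one per
            # HBA/fabric) form the preferred group and get round-robined.
            path_grouping_policy group_by_prio
            prio_callout         "/sbin/mpath_prio_ontap /dev/%n"
            failback             immediate
            # Queue I/O instead of erroring out during a head takeover.
            features             "1 queue_if_no_path"
        }
    }

With group_by_prio you get both behaviors at once: round-robin across the
two paths to the owning head, and automatic failover to the partner head's
paths when the owner stops responding.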

My setup is active-passive, but which controller is active is set on a
per-LUN basis. I still get to load-balance across HBAs, though, since both
SAN controllers have a connection to both fabrics.
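
On my array the equivalent stanza looks roughly like this -- again a sketch,
and the vendor/product strings below are my assumption for an IBM DS4x00-class
RDAC box, so substitute whatever yours reports:

    devices {
        device {
            # Hypothetical RDAC array identifiers; yours will differ.
            vendor               "IBM"
            product              "1815"
            # Same grouping trick: the active controller's paths (one per
            # fabric) win the priority contest and carry the I/O.
            path_grouping_policy group_by_prio
            prio_callout         "/sbin/mpath_prio_rdac /dev/%n"
            # Use the kernel's RDAC hardware handler for controller failover.
            hardware_handler     "1 rdac"
            path_checker         rdac
            failback             immediate
        }
    }

Since half my LUNs prefer one controller and half the other, both HBAs stay
busy even though any single LUN is active-passive.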

> Oh, I never expected to get my full throughput. I just don't want to
> end up hamstrung in the future. The performance I get even through
> 2Gbit FC exceeds multipathed iSCSI over gigE, so 4Gbit FC is another
> step up, and multipathed 4Gbit FC is better yet! Although if I fiddle
> with my multipath.conf too much, I cut my throughput via a multipathed
> connection to less than a single FC connection. Go figure (and go back
> to defaults).

Yeah, it took me a bit to figure out that configuring for RDAC wouldn't
actually work unless I loaded the RDAC device mapper module *before* the HBA
driver. There are more than a few oddities depending on the hardware you're
using.
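
In case it saves anyone the same head-scratching, the fix on a RHEL5-style
box is an install hook in modprobe.conf so the handler always loads first.
qla2xxx here is just an example HBA driver, so swap in lpfc or whatever you
actually run; scsi_dh_rdac is the handler's name on newer kernels, while
older ones shipped it as a device-mapper module, so adjust to taste.

    # /etc/modprobe.conf -- pull in the RDAC device handler before the HBA
    # driver so the passive paths get claimed correctly at discovery time.
    install qla2xxx /sbin/modprobe scsi_dh_rdac; /sbin/modprobe --ignore-install qla2xxx

You also have to rebuild the initrd afterward (mkinitrd on RHEL5), or the
ordering won't hold for anything you boot from.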

> What'll be even MORE interesting is when I connect the second ports on
> my tape drives to the new switches, which will give me multipathing on
> THOSE suckers. That'll be a party to configure with Veritas...

:-) Sounds like we have some of the same To Do list. I've been sitting on
some FC LTOs to replace the SCSI LTOs for a while now. Just have to get some
cabling finished first.

         --Arthur Corliss
           Live Free or Die