[aklug] Re: adding disks to a volset.

From: Arthur Corliss <acorliss@nevaeh-linux.org>
Date: Tue Apr 20 2010 - 10:47:42 AKDT

On Mon, 19 Apr 2010, adam bultman wrote:

> If the hardware RAID controller is a good one, then use it. If it's a
> crappy one, or a slow one, then don't. I can't speak for SATA RAID
> cards, but I usually prefer hardware RAID over software RAID. Lots of
> people will swear up and down that linux software RAID is super fast,
> and that it's almost as good as hardware RAID - well, in the 10 years
> I've been a system admin, I've never found a software RAID that performs
> even close to the crappiest RAID card I've ever come across. The only
> benefit that I know of for linux software RAID1 is that you have two
> mountable filesystems with identical data.

FYI: my experience differs a bit. I've seen plenty of crappy RAID
hardware implementations that suck wind compared to Linux software RAID.
But software RAID is definitely inferior, performance-wise, to a good hardware
implementation. Even so, depending on the hardware involved I still use
software RAID extensively, because there you have total visibility into RAID
events that are sometimes available under Linux only through proprietary
binaries. And some of those binaries come with ridiculous dependencies (I'm
looking at you, IBM & Compaq).
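
That visibility is the point: with md RAID the kernel reports array health
directly in /proc/mdstat, no vendor tool required. A minimal sketch of spotting
a degraded array (the mdstat text below is a made-up sample, not from a live
system; a missing member shows as "_" in the bracketed status field):

```shell
# Sample /proc/mdstat contents: md0 is healthy ([UU]), md1 has lost a
# member ([_U]). Written to a file so the check can run anywhere.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      9767424 blocks [2/2] [UU]
md1 : active raid1 sdd1[1]
      9767424 blocks [2/1] [_U]

unused devices: <none>
EOF

# Flag any array whose status field contains "_" (a failed/missing member);
# -B1 pulls in the preceding "mdN : ..." line so awk can name the array.
grep -B1 '\[.*_.*\]' /tmp/mdstat.sample | awk '/^md/ {print $1 " is degraded"}'
# prints: md1 is degraded
```

On a real box you'd point the same check at /proc/mdstat itself, or just let
mdadm --monitor mail you.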

At work I typically use software RAID on internal system disks for the OS.
If I have extra storage requirements I go to the SAN where I use hardware
RAID.

> An aside: AFAIK, there's no problem with creating an additional RAID
> with differently sized drives. ( Even if you had a drive fail in an
> existing hardware RAID and you replaced it with a drive that was larger
> than the failed disk, the RAID card would simply not use all of the
> disk. From what I've found, the days of 'you need a disk of the same
> manufacturer with the same cylinders/heads/sectors or it won't work' are
> long over. )

That's always been the case with good hardware RAID. The problem is that
you'll end up with unused space on the larger drives. If they're fairly
close it's not a big deal. But you definitely don't want to add a 300GB
drive to a RAID array built on 9GB drives.
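
The waste is easy to quantify: every member contributes only as much as the
smallest drive. A back-of-the-envelope sketch for a hypothetical 5-disk RAID5
of 9GB drives where one member was replaced with a 300GB drive:

```shell
# Usable capacity is bounded by the smallest member; RAID5 also gives up
# one drive's worth of space to parity.
smallest=9        # GB, size of the smallest member
members=5         # total drives in the array
usable=$(( (members - 1) * smallest ))   # (5-1) * 9 = 36 GB usable
wasted=$(( 300 - smallest ))             # 291 GB of the big drive sits idle
echo "usable: ${usable} GB, wasted on big drive: ${wasted} GB"
# prints: usable: 36 GB, wasted on big drive: 291 GB
```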

> All that being said, I'm not a very big fan of taking new disk space
> (RAID or not) and using it as physical extents to expand the size of an
> existing Logical Volume; it's just 'dirty'. While both of your Physical
> Volumes are protected by RAID, it's certainly not a fun situation if one
> of your PVs goes offline for some reason and the other one stays up.
> You're looking at corruption problems at the very least - imagine a bus
> reset on your two newer drives, or a bus reset on your older three
> drives. Part of your filesystem disappears, perhaps mid-write! Eek!

True, but there isn't an LVM scenario where that isn't a problem. Lose a PV
and you're fairly well screwed unless you've been very particular with your
LV allocations. That said, make sure your PVs are redundant and you use
journaling filesystems, and you stand an excellent chance of recovery. If HA
is a concern, you definitely want to look at SAN setups with dual FC HBAs and
multipath SCSI.
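
One way to be "particular with your LV allocations" is to pin each LV to a
single PV at creation time, so losing a PV takes out whole LVs rather than
pieces of several. A sketch only, with hypothetical device and volume names
(/dev/md0, /dev/md1, vg0) — don't run this against disks you care about:

```shell
# Both PVs are md RAID1 arrays, so each is redundant on its own.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# Naming a PV at the end of lvcreate confines that LV's extents to it,
# so a failure of /dev/md1 can't yank extents out from under lv_home
# mid-write -- it only takes lv_data with it.
lvcreate -L 20G -n lv_home vg0 /dev/md0
lvcreate -L 40G -n lv_data vg0 /dev/md1

# Verify which PV each LV actually landed on.
lvs -o lv_name,devices vg0
```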

         --Arthur Corliss
           Live Free or Die
Received on Tue Apr 20 10:47:54 2010

This archive was generated by hypermail 2.1.8 : Tue Apr 20 2010 - 10:47:54 AKDT