Re: Raid 5, xfs or reiserfs


Subject: Re: Raid 5, xfs or reiserfs
From: volz (volz@koyukuk.at.uaa.alaska.edu)
Date: Wed Feb 11 2004 - 14:25:25 AKST


>Date: Wed, 11 Feb 2004 12:56:30 -0900 (AKST)
>From: Arthur Corliss <acorliss@nevaeh-linux.org>
>To: Mike Tibor <tibor@lib.uaa.alaska.edu>
>Cc: aklug@aklug.org
>Subject: Re: Raid 5, xfs or reiserfs
>
>
>On Wed, 11 Feb 2004, Mike Tibor wrote:
>
>> One thing I would point out is that as soon as you introduce Linux
>> software raid into the picture, your chances for encountering problems go
>> up significantly. That's not to say that Linux software raid isn't
>> good--I personally think it's awesome--however I've had problems with at
>> least raid+smp, raid+nfs, raid+xfs, raid+jfs and raid+reiser3. For each
>> of those cases, one thing might work perfectly fine, but as soon as I
>> brought software raid into the picture, I had problems (kernel panics, fs
>> corruption, etc.) Note none of my comments apply to hardware raid, just
>> Linux software raid.
>
>Good point; I don't think it was specified which type. Having followed the xfs
>mailing lists, though, all of the problems I've heard about lately were
>very hardware-specific. You might want to check the archives beforehand.
>That said, I haven't heard of any recent problems using LVM + XFS, which is
>what I use. That supports striping and mirroring as well.
>

Software raid with (I hope) an NFS-exported volume, on an SMP system. I thought
LVM was pretty mature and stable, but Mike's point is well made: we would be
adding several wrinkles. xfs sounds good in theory, but we would be doing
xfs+nfs+lvm. I will check the lists.
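
Just so I'm thinking about it concretely, the stack would look roughly like
this (device names and sizes are made up; this is only a sketch):

  # software raid 5 across three ide drives
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hde1 /dev/hdg1 /dev/hdi1

  # LVM on top of the md device
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 500G -n export vg0

  # xfs on the logical volume, exported over nfs
  mkfs.xfs /dev/vg0/export
  mount /dev/vg0/export /export
  echo "/export *.uaa.alaska.edu(rw,sync)" >> /etc/exports
  exportfs -a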

>> Most people aren't aware of them, but there are mount options for ext3
>> that can really help performance. "data=journal" is supposed to help
>> hugely in most situations. The options are described in the mount man
>> page.
>
>XFS has the same capability, as well as specifying other devices for raw I/O
>storage, etc. Having only used JFS on AIX boxes, I can't say for sure what
>your options are with that.
>
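
For my own notes: the ext3 option Mike mentions is just a mount option, and
xfs can take an external log at mkfs time. Something along these lines should
do it -- device names are hypothetical:

  # ext3 with full data journalling, e.g. a line in /etc/fstab
  /dev/md0   /export   ext3   defaults,data=journal   0 2

  # xfs with its log on a separate device
  mkfs.xfs -l logdev=/dev/hdc1,size=32m /dev/md0
  mount -t xfs -o logdev=/dev/hdc1 /dev/md0 /export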

 This is sort of a side project and an experiment to replace a crappy hardware
raid that runs Windows, so we could do an evaluation before we commit too much
data. Or, for a little more money, we could run two: one with ext3 and one with xfs.

The major downside of the IDE raid? With only two drives per controller and two
controllers per board, the possibilities are limited and costs are higher. Still,
by putting three 300G drives in 2 of these units, I can get a T+ for $1.19/G.
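(My arithmetic, for what it's worth: 2 units x 3 x 300G is 1800G raw, or
somewhere around 1200-1500G usable depending on how the raid 5 is laid out,
so over a terabyte either way.)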

Arthur-

We are decommissioning the cluster you built, and I am looking for a way to use
the old systems for NAS. The new PowerEdge servers we bought last year for our
current cluster have gigabit networking and cut model runs from 3:20 to 1:05.

- Karl



