Re: Raid 5, xfs or reiserfs


Subject: Re: Raid 5, xfs or reiserfs
From: Mac Mason (macmasta@ak.net)
Date: Wed Feb 11 2004 - 15:16:59 AKST


I'd actually argue strongly *against* ext3, given that I personally have
lost data to ext3 foul-ups, but never to reiserfs foul-ups.

I run a four-partition raid-5 (using EVMS, which is pretty much
equivalent to linux software raid) and have been very impressed with it,
both speed- and stability-wise.
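
For reference, I set mine up through EVMS, but a plain md version of the
same layout would look roughly like this; mdadm and the device names
below are just illustrative, not my actual setup:

  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/hda5 /dev/hdb5 /dev/hdc5 /dev/hdd5
  mkreiserfs /dev/md0      # or mkfs.xfs /dev/md0, depending on your pick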

I've used XFS a wee bit, and it's really very nice; I would say the
choice here is going to depend on the types of files you will be
storing. Reiserfs is the fastest thing out there, bar none, for small
files; xfs scales much better to larger files.

And yes, raid-5 does cost you some extra disk in order to pay for
redundancy. My four drives each have a 30-gig partition on the raid-5,
so I see 90 gigs (unformatted) in the raid device.
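
(The rule of thumb: raid-5 gives you (number of drives - 1) times the
partition size, so here that is (4 - 1) x 30 gigs = 90 gigs, with the
remaining drive's worth of space going to parity.)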

Remember that filesystem metadata takes up room, too; reiserfs costs me
about 1 gig of metadata per 15 gigs of disk; I don't know the stats for
the others.

~Mac~

On Wed, 2004-02-11 at 15:25, volz wrote:
>
> >Date: Wed, 11 Feb 2004 12:56:30 -0900 (AKST)
> >From: Arthur Corliss <acorliss@nevaeh-linux.org>
> >To: Mike Tibor <tibor@lib.uaa.alaska.edu>
> >Cc: aklug@aklug.org
> >Subject: Re: Raid 5, xfs or reiserfs
> >
> >
> >On Wed, 11 Feb 2004, Mike Tibor wrote:
> >
> >> One thing I would point out is that as soon as you introduce Linux
> >> software raid into the picture, your chances for encountering problems go
> >> up significantly. That's not to say that Linux software raid isn't
> >> good--I personally think it's awesome--however I've had problems with at
> >> least raid+smp, raid+nfs, raid+xfs, raid+jfs and raid+reiser3. For each
> >> of those cases, one thing might work perfectly fine, but as soon as I
> >> brought software raid into the picture, I had problems (kernel panics, fs
> >> corruption, etc.). Note that none of my comments apply to hardware raid, just
> >> Linux software raid.
> >
> >Good point; I don't think the type was ever specified. Having followed the xfs
> >mailing lists, though, all of the problems I've heard about lately were
> >very hardware-specific. You might want to check the archives beforehand.
> >That said, I haven't heard of any recent problems using LVM + XFS, which is
> >what I use. That supports striping and mirroring as well.
> >
>
> Software raid with (I hope) an nfs-exported volume, on an smp system. I thought
> LVM was pretty mature and stable. However, Mike's point is well made that we
> would be adding several wrinkles: xfs sounds good in theory, but we would be
> doing xfs+nfs+lvm. I will check the lists.
>
> >> Most people aren't aware of them, but there are mount options for ext3
> >> that can really help performance. "data=journal" is supposed to help
> >> hugely in most situations. The options are described in the mount man
> >> page.
> >
> >XFS has the same capability, as well as letting you specify other devices for
> >raw I/O storage, etc. Having only used JFS on AIX boxes, I can't say for sure
> >what your options are with that.
> >
>
> This is sort of a side project and an experiment to replace a crappy hardware
> raid that runs Windows, so we could do an evaluation before we commit too much
> data. Or, for a little more money, we could even run two: one with ext3 and one
> with xfs.
>
> The major downside of the ide raid? With only two drives per controller and two
> controllers per board, the possibilities are limited and the costs are higher.
> Still, by putting three 300G drives in two of these units, I can get a
> terabyte-plus for $1.19/G.
>
> Arthur-
>
> We are decommissioning the cluster you built, and I am looking for a way to use
> the old systems for NAS. The new PowerEdge servers we bought last year for our
> current cluster have gigabit networking and cut model runs from 3:20 to 1:05.
>
>
> - Karl
>
>

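For what it's worth, the ext3 mount option Mike mentions above and the
separate-device option Arthur alludes to for XFS would look roughly like
this; the devices and mount point here are only placeholders, not a
tested setup:

  # /etc/fstab entry putting ext3 into full data-journaling mode
  /dev/md0   /data   ext3   noatime,data=journal   0 2

  # XFS with its journal on a separate device, chosen at mkfs time
  mkfs.xfs -l logdev=/dev/hde1,size=32m /dev/md0
  mount -t xfs -o logdev=/dev/hde1 /dev/md0 /data
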
---------
To unsubscribe, send email to <aklug-request@aklug.org>
with 'unsubscribe' in the message body.


