[Bioclusters] Re: 3ware 7850 RAID 5 performance
Dan Yocum
bioclusters@bioinformatics.org
Wed, 04 Sep 2002 09:01:39 -0500
Hi all,
I subscribe to the ALINKA clustering newsletter (a weekly summary of several
mailing lists) rather than to Bioclusters itself, and saw your discussion of
the 3ware RAID controllers there (so please cc me directly if you have any
more questions).
First off, there's a linux-ide-array mailing list at lists.math.uh.edu. It's
run by majordomo, so the standard subscription commands apply.
Secondly, we (SDSS) have ~30 3ware controllers in-house, which account for
~20TB of storage space. Other groups here at Fermi have more; some have
less. There's a bunch of us, in any case.
Using RAID50 (2 controllers: hardware RAID5 on each, software RAID0 across
them) we're getting ~110MB/s block writes and >200MB/s block reads for large
(2-4GB) files. However, these numbers are from the SuperMicro P4DC6+ mobo,
which has the broken i860 chipset (broken in that PCI bandwidth is limited to
something like 220MB/s; there's a kludgy fix, but that only ups it to
~300MB/s).
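For what it's worth, the software RAID0 half of a setup like ours can be
sketched with mdadm (device names below are illustrative; each device would
be the hardware RAID5 unit exported by one 3ware controller, and the chunk
size is just an example, not a recommendation):

```shell
# Stripe (RAID0) across the two hardware RAID5 units, one per 3ware card.
# /dev/sda and /dev/sdb are hypothetical -- substitute whatever your
# controllers actually export.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 \
      /dev/sda /dev/sdb

# Rough sequential throughput check on the raw array (destructive -- do
# this before making a filesystem). Use a size larger than RAM so the
# page cache doesn't inflate the numbers.
dd if=/dev/zero of=/dev/md0 bs=1M count=4096   # block writes
dd if=/dev/md0 of=/dev/null bs=1M count=4096   # block reads
```

(The older raidtools/raidtab route works too; mdadm is just the more
convenient tool.)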
One of the members on the linux-ide-array list has reported >325MB/s reads
using the SuperMicro P4DPE mobo, which has the E7500 chipset.
Anyway, I've been maintaining a technical document about my experiences
(patch sets, compilation parameters, benchmarks, etc.). I try to keep it
complete but concise:
http://home.fnal.gov/~yocum/storageServerTechnicalNote.html
and another group at the Lab has their own findings here:
http://mit.fnal.gov/~msn/cdf/caf/server_evaluation.html
They've been having some problems with multiple clients hitting the machines
and are considering trying another FS (they currently use ext3). I use XFS
and am VERY happy with it, but only a few (<5) processes hit the machines at
a time.
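In case it's useful, putting XFS on one of these arrays is just the stock
xfsprogs tools; the device and mount point names below are illustrative, and
the mount options are an example rather than a tuning recommendation:

```shell
# Make an XFS filesystem on the striped array and mount it.
# -f forces mkfs to overwrite any existing signature on the device.
mkfs.xfs -f /dev/md0

# noatime avoids an inode write on every read, which helps a busy
# file server a bit.
mount -t xfs -o noatime /dev/md0 /mnt/raid
```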
Hope this helps some.
Cheers,
Dan
--
Dan Yocum
Sloan Digital Sky Survey, Fermilab 630.840.6509
yocum@fnal.gov, http://www.sdss.org
SDSS. Mapping the Universe.