[Bioclusters] 3ware 7850 RAID 5 performance

Joseph Landman bioclusters@bioinformatics.org
26 Aug 2002 20:06:52 -0400

Hi Simon:

  RAID 5 (and parity RAID in general) has to convert a partial-stripe
write request into a "read-modify-write" cycle: read the old data and
old parity, recompute the parity, then write both back.  This tends to
reduce the overall throughput of writes.  Some controllers (3ware
included) try to cache large writes in on-board memory so as to reduce
the effect of the cycle.
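To make the cycle concrete: RAID5 parity is just the XOR of the data
blocks in a stripe, which is why a partial-stripe write forces reads of
the old data and old parity first.  A toy sketch (the block values are
made up, not from any real array):

```shell
# RAID5 parity is the XOR of the data blocks in a stripe.
# Toy stripe with two data "blocks" (values are arbitrary):
D1=170   # 10101010
D2=204   # 11001100
P=$(( D1 ^ D2 ))   # parity the controller stores alongside the data

# Partial-stripe update of D1: the controller must READ the old D1 and
# old P to compute the new parity (the read-modify-write cycle):
NEW_D1=85          # 01010101
NEW_P=$(( P ^ D1 ^ NEW_D1 ))
echo "$NEW_P"

# Same answer as recomputing parity over the whole stripe, which is
# only possible without reads when the full stripe is written anyway:
echo $(( NEW_D1 ^ D2 ))
```

Both echoes print the same parity, but the first route cost two extra
reads; a full-stripe write avoids them, which is one trick a caching
controller can play.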

  I went to the 3ware site and couldn't find the 7850.  I did find the
7500-8.  Is this the card?  I am looking at
http://www.3ware.com/products/pdf/7500SelectionGuide7-26.pdf .  The
parallel-ata sheet on http://www.3ware.com/products/parallel_ata.asp
states RAID 0, 1, 10, 5 and JBOD as the options.  RAID5 will always be
slower than RAID1, and RAID1 will always be slower than RAID0.  My guess
is the numbers they are quoting are JBOD reads and writes.  JBOD (aka
Just a Bunch Of Disks) doesn't generally require the parity computation,
the data layout, and the other processing which limit the performance of
the RAID modes.

  Of course, according to the data sheet, their accelerated RAID 5 does
about 5x normal RAID5, which probably means that they pull some tricks
to write at near-JBOD speed, similar to how the NetApp boxes and the
WAFL (Write Anywhere File Layout) file system handle it.

  RAID on these systems is going to be limited to the speed of the
slowest disk.  If the disks are in PIO mode rather than a UDMA mode,
then I could imagine writes that slow.  It is also possible, if you are
using a journaling file system such as XFS and pointing the log at a
single disk somewhere else, that the log disk is your bottleneck.
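A quick way to see what transfer mode a drive reports (the active mode
is marked with a '*' in the output; /dev/hda below is just an example
device name, and hdparm needs root):

```shell
# Grep the identify info for the DMA mode lines; fall back gracefully
# on machines where the device or hdparm isn't available.
MODE=$(hdparm -i /dev/hda 2>/dev/null | grep -i 'dma' \
       || echo "hdparm/device not available here")
echo "$MODE"
```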

  Which file system are you using?  What is the nature of your test
(large block reads/writes), and specifically how are you testing?  What
is the machine the card is plugged into?  What is the reported speed for

	hdparm -tT /dev/raid_device

where /dev/raid_device is the device that appears to be your big single
disk.  Are you using LVM?  Software RAID atop a JBOD? ???
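On the software-RAID question, one quick check is whether the md driver
is layered in at all (the output depends entirely on the machine):

```shell
# /proc/mdstat lists any md (software RAID) devices and their member
# disks; it is absent or empty on a box with no md driver loaded.
MDSTAT=$(cat /proc/mdstat 2>/dev/null || echo "no md driver loaded")
echo "$MDSTAT"
```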

If you run the following, how long does it take?

	/usr/bin/time --verbose dd if=/dev/zero of=big bs=10240000 count=100

On my wimpy single spindle file system, this takes 42 wall clock
seconds, and 7 system seconds.  This corresponds to a write speed of
about 24.4 MB/s.

If you run the following after creating the 1 GB file, how long does it
take?

	/usr/bin/time --verbose md5sum big

On the same wimpy single spindle file system, this takes 50 seconds for
a read of about 20 MB/s. 
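For reference, the arithmetic behind both of those figures (100 blocks
of 10,240,000 bytes = 1,024,000,000 bytes total):

```shell
# Convert the dd/md5sum timings above to decimal MB/s.
awk 'BEGIN {
    bytes = 10240000 * 100
    printf "write: %.1f MB/s\n", bytes / 42 / 1e6   # 42 s wall clock
    printf "read:  %.1f MB/s\n", bytes / 50 / 1e6   # 50 s for md5sum
}'
```

which works out to 24.4 MB/s and 20.5 MB/s, matching the numbers quoted
above.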

Using hdparm, I find

    [landman@squash /work3]# hdparm -tT /dev/hdb
     Timing buffer-cache reads:   128 MB in  0.71 seconds =180.28 MB/sec
     Timing buffered disk reads:  64 MB in  2.75 seconds = 23.27 MB/sec

If you could report some of these, it might give us more of a clue.


On Mon, 2002-08-26 at 19:03, Vsevolod Ilyushchenko wrote:
> Hi,
> Is anyone using a 3ware 7850 hardware RAID card set to RAID 5? 
> According to the docs, it's using something called "accelerated RAID 5" 
> which should give read *and write* speeds in excess of 100 Mb/s. Several 
> tests on a dual Linux P3 1.26 Ghz box with ext3 indicate much lower 
> speeds: 47 Mb/s sequential read and 12 (twelve!) Mb/s sequential write. 
> There are 5 drives in the RAID 5 configuration.
> Has anyone benchmarked this card under Linux? What numbers are people 
> seeing? Can anyone suggest parameters to tune?
> Thanks,
> Simon
> -- 
> Simon (Vsevolod ILyushchenko)   simonf@cshl.edu
> http://www.simonf.com          simonf@simonf.com
> Even computers need analysts these days!
> 				("Spider-Man")
> _______________________________________________
> Bioclusters maillist  -  Bioclusters@bioinformatics.org
> https://bioinformatics.org/mailman/listinfo/bioclusters
-- 
Joseph Landman, Ph.D
Scalable Informatics LLC
email: landman@scalableinformatics.com
  web: http://scalableinformatics.com
phone: +1 734 612 4615