[Bioclusters] Re: file server for cluster
Joe Landman
bioclusters@bioinformatics.org
23 Apr 2002 15:27:50 -0400
On Tue, 2002-04-23 at 13:53, Ivo Grosse wrote:
> Yes, and the web page I quoted uses Mb/s consistently. Do you think
> it's a typo?
Possibly... if you think about it, 470 Mb/s would be on the order of 59
MB/s, which is nothing terribly impressive. So I suspect a typo here (Mb
vs. MB is one of the most common typos I have seen out there :( ).
> > It is going to be bandwidth limited by the number of PCI busses, the
> > arrangement of IO, etc. The specs say 3 PCI slots, but if they are on a
> > single PCI bus, you will be able to swamp the PCI with either 1 Ultra160
> > (for PCI-33), or 1 Ultra160+1 GigE card, or 2 Ultra160s.
>
> Could you explain that (to a dummy)?
Or a smarty? :)
> 66 MHz x 64 bit = 4 Gb/s.
~= 4.2 Gb/s for PCI 64 bit 66 MHz. For the 33 MHz variety you would
get:
32 bit: 1.05 Gb/s
64 bit: 2.1 Gb/s
Each Ultra160 can consume 160 MB/s -> 1.3 Gb/s
Each GigE card can consume 1.0 Gb/s.
You generally cannot run the PCI bus at full throttle; you start getting
contention, locking, or collision nightmares. Call it anywhere between
60% and 80% of the available bandwidth, depending upon the chipset, the
cards, the driver implementations, and the service request patterns. To
be reasonable, assume that 70% of the bandwidth is available.
0.7 * 4.2 Gb/s ~ 2.9 Gb/s realizable on one PCI-66.
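If it helps, here is a quick back-of-the-envelope sketch in Python (the
0.7 derating factor is just the working assumption from above, not a
measured number):

    # rough PCI bandwidth estimate: clock (MHz) x width (bits)
    def pci_bw_gbps(clock_mhz, width_bits, derate=1.0):
        return clock_mhz * 1e6 * width_bits * derate / 1e9

    print(pci_bw_gbps(66, 64))        # ~4.2 Gb/s theoretical PCI-66/64
    print(pci_bw_gbps(66, 64, 0.7))   # ~2.9 Gb/s realizable
    print(pci_bw_gbps(33, 32))        # ~1.05 Gb/s PCI-33/32
    print(pci_bw_gbps(33, 64))        # ~2.1 Gb/s PCI-33/64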
Now, let's look at the combos (call this I/O accounting 101 :) ):
                                        BW (Gb/s)
Bus combo                           committed    free
----------------------------------------------------------------
PCI-33     Ultra160                    1.3       -0.25
PCI-33     GigE                        1.0        0.05
PCI-66/64  Ultra160                    1.3        1.6
PCI-66/64  Ultra160+GigE               2.3        0.6
PCI-66/64  2xUltra160                  2.6        0.3
PCI-66/64  2xUltra160+GigE             3.6       -0.7
(The PCI-33 rows are measured against the theoretical 1.05 Gb/s; the
PCI-66/64 rows against the 2.9 Gb/s realizable figure.)
Negative bandwidth is not really well defined. It is better to call the
excess available BW 0, and talk about bus contention. Bus contention is
a throughput killer.
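To make the accounting mechanical, here is a minimal sketch of the same
bookkeeping (card figures as above; the bus figures are the theoretical
1.05 Gb/s for PCI-33 and the derated 2.9 Gb/s for PCI-66/64):

    ULTRA160 = 1.3   # Gb/s each Ultra160 channel can commit
    GIGE     = 1.0   # Gb/s each GigE card can commit

    def account(bus_gbps, cards):
        committed = sum(cards)
        # negative "free" bandwidth really means bus contention,
        # so clamp it at 0
        return committed, max(bus_gbps - committed, 0.0)

    for name, bus, cards in [
        ("PCI-33    Ultra160",        1.05, [ULTRA160]),
        ("PCI-66/64 2xUltra160+GigE", 2.9,  [ULTRA160, ULTRA160, GIGE]),
    ]:
        committed, free = account(bus, cards)
        print(name, committed, free)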
Assume each card comes with a budget, an allocation of I/O bandwidth. So
a 160 MB/s card means you can theoretically fill up at most 160 MB/s of
bus bandwidth before the contention issue kicks in. So if you have
disks which stream at 40 MB/s, you need 160/40 = 4 disks to completely
fill that channel. Of course, the problem is that you really don't have
the full 160 MB/s (protocol and signalling overhead), so you need to
make allowances for this.
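As a sketch (the 40 MB/s per-disk streaming rate and the 80% protocol
efficiency are assumptions for illustration, not measurements):

    import math

    channel_mbs = 160    # nominal Ultra160 channel bandwidth, MB/s
    disk_mbs    = 40     # assumed sustained streaming rate per disk
    efficiency  = 0.8    # assumed allowance for protocol/signalling overhead

    usable = channel_mbs * efficiency      # ~128 MB/s actually fillable
    disks  = math.ceil(usable / disk_mbs)  # 4 disks saturate the channel
    print(usable, disks)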
> Note the "small" b. :-)
>
> Of course 4 Gb/s is purely theoretical. What is the overhead? 20%?
> 50%?
See above. It actually depends, to a degree, on the nature of your
transactions. Large sequential transactions have lower overhead in toto
than the equivalent payload spread among many small transactions.
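One way to see this is a toy model with a fixed per-transaction cost
(the 0.5 KB overhead figure is illustrative, not measured):

    def xfer_efficiency(payload_kb, overhead_kb=0.5):
        # fixed per-transaction cost amortized over the payload
        return payload_kb / (payload_kb + overhead_kb)

    print(xfer_efficiency(4))      # many 4 KB transactions: ~89% efficient
    print(xfer_efficiency(1024))   # one 1 MB transaction:   ~99.95% efficient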
> Even if it were 50%, that would leave 2 Gb/s? How can that be
> saturated with 2 Ultra160s? What is the w/r flux of one Ultra160?
See above.
This is why you need multiple PCI busses.
>
> Thanks!
>
> Ivo
--
Joseph Landman, Ph.D.
Senior Scientist,
MSC Software High Performance Computing
email : joe.landman@mscsoftware.com
messaging : page_joe@mschpc.dtw.macsch.com
Main office : +1 248 208 3312
Cell phone : +1 734 612 4615
Fax : +1 714 784 3774