[Bioclusters] gigabit ethernet performance

Chen Peng bioclusters@bioinformatics.org
Sat, 10 Jul 2004 15:55:17 +0800


Thanks to all of you for the discussion.

We have double-checked the link status of each connected port, and 
all show up as "1000base TX full". Disk I/O has been tested at over 
45 MB/s by copying files between the two internal drives.
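
(For reference, that figure came from simply timing a large copy; in 
rough Python terms, something like the sketch below, where the paths 
are just placeholders and the file should be much larger than RAM so 
the cache doesn't flatter the number.)

  # Time a large file copy between the two internal drives.
  import os, shutil, time

  src = "/Volumes/DriveA/big.file"   # placeholder paths
  dst = "/Volumes/DriveB/big.file"
  t0 = time.time()
  shutil.copyfile(src, dst)
  print("%.1f MB/s" % (os.path.getsize(src) / (time.time() - t0) / 1e6))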

Since most of the feedback so far concerns gigabit ethernet on other 
hardware: is anyone out there using Xserves with gigabit ethernet and 
achieving 30+ MB/s?

Cheers
--
Chen Peng
<chenpeng@alumni.nus.edu.sg>

On Jul 10, 2004, at 10:33 AM, Joe Landman wrote:

> I have also seen 60 - 80 MB/s for real applications on e325's
> using the bcm5700 drivers (the tg3 driver doesn't work as well).
>
> Generally there are many reasons why gigabit performance can be bad.
> Switch performance is one of them.  Network settings are another.
>
> The original query was about NFS, and how the Xserves were getting
> ~12 MB/s over gigabit.  This sounds suspiciously like someone
> somewhere is locked into 100 Mb/s mode on a port they think is
> running at 1000 Mb/s.  When running at full tilt, a good NFS server
> implementation on a 100 Mb/s link can source about 11.7 - 12 MB/s.
> You would see similar performance from rcp in this case.
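>
> A quick back-of-the-envelope check makes that concrete (a rough
> Python sketch; the header sizes assume plain Ethernet + IPv4 + TCP
> at the standard 1500-byte MTU):
>
>   # Rough ceiling on TCP payload throughput over Ethernet.
>   # Per frame: MTU minus 40 B of IP+TCP headers is payload; the wire
>   # also carries 14 B Ethernet header, 4 B FCS, 8 B preamble, 12 B gap.
>   def payload_mb_s(link_mbit, mtu=1500):
>       wire_bytes = mtu + 14 + 4 + 8 + 12
>       payload = mtu - 40
>       return link_mbit * 1e6 / 8 * payload / wire_bytes / 1e6
>
>   print(payload_mb_s(100))    # ~11.9 -> the "12ish MB/s" signature
>   print(payload_mb_s(1000))   # ~119  -> what the wire itself allows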
>
> If this is not the problem (and I recommend sanity checks, such as
> checking what both sides of the connection report, since many
> switches are known to autonegotiate incorrectly), start looking at
> things like MTU (can you use jumbo frames?), TCP-based NFS, larger
> read/write sizes, ...
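>
> On the Linux side that first check can be scripted; a hypothetical
> helper like the one below just shells out to ethtool ("eth0" is only
> an example interface name) and reports what the NIC actually
> negotiated, independent of what the switch's LEDs claim:
>
>   # Ask the NIC what it negotiated; run the same on the far end.
>   import re, subprocess
>
>   def negotiated_speed(iface="eth0"):
>       out = subprocess.run(["ethtool", iface],
>                            capture_output=True, text=True).stdout
>       m = re.search(r"Speed:\s*(\S+)", out)
>       return m.group(1) if m else "unknown"
>
>   print(negotiated_speed())   # "100Mb/s" here would explain everything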
>
> Some of the IDE RAID systems we have set up have been able to
> sink/source upwards of 60 MB/s without working hard at tuning, and we
> have seen a sustained 70 +/- MB/s over a two-day run at a customer
> site.  We cannot speak to the Xserve as we don't normally use or spec
> it.
>
> Look in the usual spots, and make sure you leave nothing to
> assumption.  It is possible that you will run into driver issues,
> network stack quirks, bad switches...
>
> Joe
>
> On Fri, 2004-07-09 at 21:46, Farul M. Ghazali wrote:
>> On Fri, 9 Jul 2004, Tim Cutts wrote:
>>
>>> ~30 MB/sec sounds about right to me.  We get 36 MB/sec between
>>> our AlphaServer ES45 boxes; that's GBit ethernet, and HP
>>> StorageWorks HSV RAID controllers; there's no way the disk is the
>>> limiting factor in our setup - we can get about 200 MB/sec on the
>>> HSV controllers.  I think the low 30's is about what GBit can
>>> sustain.
>>
>> 30MB/sec is pretty bad for gigabit if that's the case. I've used
>> netperf and gotten 70-80MB/sec on my IBM x325s, but I don't have
>> fast enough disks to really test the system.
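>>
>> (If netperf isn't handy, even a crude socket probe takes the disks
>> out of the picture; a minimal Python sketch - run one side with
>> "serve", the other with the server's address - pushes zeros over
>> TCP and reports the rate:)
>>
>>   # Minimal TCP throughput probe; no disk I/O involved.
>>   import socket, sys, time
>>
>>   PORT, CHUNK = 5001, 64 * 1024
>>   if sys.argv[1] == "serve":            # receiver: drain and discard
>>       srv = socket.socket(); srv.bind(("", PORT)); srv.listen(1)
>>       conn, _ = srv.accept()
>>       while conn.recv(CHUNK):
>>           pass
>>   else:                                 # sender: argv[1] = server host
>>       s = socket.create_connection((sys.argv[1], PORT))
>>       buf, sent, t0 = b"\0" * CHUNK, 0, time.time()
>>       while time.time() - t0 < 5:       # stream for ~5 seconds
>>           s.sendall(buf); sent += CHUNK
>>       print("%.1f MB/s" % (sent / (time.time() - t0) / 1e6))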
>>
>> Does the switch come into play in this? I've used el-cheapo Netgear
>> switches and we are looking into getting some demo Foundry switches
>> to test out.
>>
> -- 
> Joseph Landman, Ph.D
> Scalable Informatics LLC,
> email: landman@scalableinformatics.com
> web  : http://scalableinformatics.com
> phone: +1 734 612 4615
>
> _______________________________________________
> Bioclusters maillist  -  Bioclusters@bioinformatics.org
> https://bioinformatics.org/mailman/listinfo/bioclusters
