<br><font size=2 face="sans-serif">Depending on your applications (can you give us an idea of which ones?), you might want to look at a scalable multiprocessor system. If you are not planning on expanding much, which it sounds like given that you are going without a switch, you might get more bang for your buck with an 8-way box. I would guess you can get an 8-way Opteron box within your budget. There would be some memory latency issues with more than 4 processors, but it will still be faster than InfiniBand. IBM offers some interesting hardware that might suit your needs: the xSeries 445 scales from 4 processors up to 32 (8 processors per 4U) and expands via an IBM interconnect, giving you a large NUMA box. I am not sure whether they offer this with the Nocona chips, so it might not work for you. I fear we are way off the general topic of bioclusters now in any case :).</font>
<br>
<br><font size=2 face="sans-serif">My 1/2 cents.</font>
<br>
<br><font size=2 face="sans-serif">Jason</font>
<br>
<br>
<br>
<table width=100%>
<tr valign=top>
<td>
<td><font size=1 face="sans-serif"><b>Chris Dagdigian <dag@sonsorol.org></b></font>
<br><font size=1 face="sans-serif">Sent by: bioclusters-bounces+jason.calvert=pharma.novartis.com@bioinformatics.org</font>
<p><font size=1 face="sans-serif">03/15/2005 12:42 PM</font>
<br><font size=1 face="sans-serif">Please respond to "Clustering, compute farming & distributed computing in life science informatics"</font>
<br>
<td><font size=1 face="Arial"> </font>
<br><font size=1 face="sans-serif"> To: "Clustering, compute farming & distributed computing in life science informatics" <bioclusters@bioinformatics.org></font>
<br><font size=1 face="sans-serif"> cc: (bcc: Jason Calvert/PH/Novartis)</font>
<br><font size=1 face="sans-serif"> Subject: Re: [Bioclusters] Direct connect infiniband/quadrics?</font></table>
<br>
<br>
<br><font size=2 face="Courier New"><br>
It comes down to this:<br>
<br>
There are very few applications in most areas of life science <br>
informatics that are (a) parallel-aware enough to take advantage of a <br>
high-speed, low-latency interconnect like Infiniband/Quadrics and (b) <br>
actually written well enough to take advantage of the faster <br>
interconnect. There are some MPI codes out there that just use MPI for <br>
simple stuff, or make API calls that do not actually run faster via the <br>
special interconnect layer.<br>
<br>
So the usual case in bioclusters is "you don't need these sorts of <br>
interconnects at all because your science can't take advantage of them. <br>
They may be sexy to management types but generally it's a waste of money...".<br>
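A crude cost model makes the point concrete: moving one message takes roughly t = latency + size/bandwidth. All numbers in the sketch below are my own illustrative 2005-era ballpark assumptions (not benchmarks) -- say ~60 us latency / ~1 Gbit/s for gigabit Ethernet versus ~5 us / ~8 Gbit/s usable for 4x InfiniBand. The low-latency fabric is a big win only for codes exchanging many tiny messages; a coarse-grained job that communicates rarely sees little or no wall-clock difference no matter how fast the wire is.

```python
# Back-of-the-envelope transfer time: t = latency + size / bandwidth.
# All figures are illustrative 2005-era assumptions, not measurements.

def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Time to move one message of size_bytes over an interconnect."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

# Assumed link characteristics (bandwidth converted from bits to bytes/s):
gige = dict(latency_s=60e-6, bandwidth_bytes_per_s=1e9 / 8)   # gigabit Ethernet
ib   = dict(latency_s=5e-6,  bandwidth_bytes_per_s=8e9 / 8)   # 4x InfiniBand

for size in (1_000, 1_000_000, 100_000_000):  # 1 KB, 1 MB, 100 MB messages
    speedup = transfer_time(size, **gige) / transfer_time(size, **ib)
    print(f"{size:>11} bytes: InfiniBand ~{speedup:.1f}x faster per message")
```

The per-message ratio is largest for the 1 KB case, where latency dominates; for big transfers the gap is just the bandwidth ratio, and for a job that spends 99% of its time computing, even that buys essentially nothing overall.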
<br>
The exceptions are:<br>
<br>
o people doing in-house parallel development who want to use these <br>
interconnects. These are typically smart people coding in Fortran or C++, <br>
and relatively few life sci in-house software development groups have the <br>
skills to write true HPC codes.<br>
<br>
o sites doing other "stuff" with the interconnect, like clustering for HA <br>
or perhaps running a global/cluster filesystem over it<br>
<br>
o people doing computational chemistry and molecular dynamics :)<br>
<br>
<br>
You have personally found the big exception -- there are lots of <br>
commercial and non-commercial codes in the chemistry and molecular <br>
modeling spaces that actually are written for MPI and can (for most use <br>
cases) take advantage of the faster, lower-latency interconnects.<br>
<br>
My $.02<br>
<br>
-Chris<br>
<br>
<br>
Farul Mohd Ghazali wrote:<br>
> <br>
> Has anyone had any experience with a direct connect/point-to-point <br>
> implementation of Quadrics or Infiniband? I talked to a small lab doing <br>
> some computational chemistry and molecular dynamics work and they're <br>
> interested in setting up a cluster but there is a need to justify the <br>
> cost of a cluster before the budget can be approved.<br>
> <br>
> During the discussion, the idea of using direct connect infiniband or <br>
> quadrics on two dual or quad Opteron nodes came up as a testbed platform <br>
> to justify to management. From a price point of view, this is very <br>
> attractive since it'll probably cost less than $40,000 (two quad <br>
> Opterons, two Quadrics cards) for a testbed system. Money is tight...<br>
> <br>
> So, is this setup workable? In theory this should be faster than a <br>
> gigabit based interconnect, even if it's just two nodes but I'd welcome <br>
> any other ideas/suggestions. Thanks.<br>
> <br>
> <br>
> -- "Leadership & Life-long Learning" --<br>
> <br>
> Farul Mohd. Ghazali<br>
> Manager, Systems & Bioinformatics<br>
> Open Source Systems Sdn. Bhd.<br>
> www.aldrich.com.my Tel: +603-8656 0139/29 Fax: +603-8656 0132<br>
> <br>
> _______________________________________________<br>
> Bioclusters maillist - Bioclusters@bioinformatics.org<br>
> https://bioinformatics.org/mailman/listinfo/bioclusters<br>
</font>
<br><font size=2 face="Courier New">-- <br>
Chris Dagdigian, <dag@sonsorol.org><br>
BioTeam - Independent life science IT & informatics consulting<br>
Office: 617-665-6088, Mobile: 617-877-5498, Fax: 425-699-0193<br>
PGP KeyID: 83D4310E iChat/AIM: bioteamdag Web: http://bioteam.net<br>
_______________________________________________<br>
Bioclusters maillist - Bioclusters@bioinformatics.org<br>
https://bioinformatics.org/mailman/listinfo/bioclusters<br>
</font>
<br>
<br>