[Bioclusters] Direct connect infiniband/quadrics?

Farul M. Ghazali farul at aldrich.com.my
Tue Mar 15 18:19:22 EST 2005


Hi all,

Thanks for the responses so far. The applications in use are molecular
dynamics applications (mostly amber, some gromacs) and Autodock. There
will be others as well but they won't be taking up a lot of CPU time. Just
going from a dual to a quad-CPU machine on Amber makes a big improvement
as far as I can tell.

I'll most likely be using Rocks on this cluster, or set it up manually if
need be. For the time being at least it's small enough to manage (2 nodes
:-)

In terms of expansion, I'm trying to push for quad-CPU nodes over duals
to minimize the cost of the interconnect, if the interconnects prove to
be faster. I haven't looked at 8-or-more-CPU Opterons yet, but I'd like
to keep away from non-standard configurations, e.g. the Cray XD1. My
experiences with the quad-Opteron Tyan and Sun V40z platforms in other
non-MD apps over gigabit have been quite good, and Rocks works fine on
these machines. Looking at the Quadrics price list, I guess I'll have to
hit the 8 or 32 node sweet spot to get good pricing too.
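
For anyone weighing the same decision: before committing budget it's worth
measuring the round-trip latency the application actually sees on each
fabric. Below is a minimal sketch in Python (my own illustration, not
anything from this thread) that ping-pongs a small message and reports the
mean round trip; it runs over loopback here, but pointed at a peer node's IP
it gives a rough number to compare gigabit against an IB/Quadrics IP layer.
A real comparison would of course use an MPI ping-pong benchmark over the
native interconnect transport.

```python
import socket
import threading
import time

def echo_server(listener):
    # Accept one connection and echo every message straight back.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(host, port, iterations=1000, size=8):
    # Ping-pong a small payload and return the mean round trip in microseconds.
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
        payload = b"x" * size
        start = time.perf_counter()
        for _ in range(iterations):
            s.sendall(payload)
            got = 0
            while got < size:  # wait for the full echo before the next round
                got += len(s.recv(64))
        return (time.perf_counter() - start) / iterations * 1e6

# Demo over loopback; across real nodes, bind the server to the interface
# on the fabric under test and point the client at that address.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

rtt_us = measure_rtt("127.0.0.1", port)
print("mean RTT: %.1f us" % rtt_us)
```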

Thanks again everyone.



On Tue, 15 Mar 2005 jason.calvert at novartis.com wrote:

> Depending on your applications (can you give us an idea of which ones?),
> you might want to look at a scalable multiprocessor system.  If you are
> not planning on expanding much, as it looks without a switch, you might
> get more bang for your buck with an 8-way box.  You can get an 8-way
> Opteron box within your budget, I would guess.  There would be some memory
> latency issues with more than 4 processors, but it will be faster than
> Infiniband.  IBM offers some interesting hardware that might suit your
> needs.  The xSeries 445 scales from 4 processors up to 32 processors (8
> processors per 4U) and expands via an IBM interconnect, giving you a large
> NUMA box.  I am not sure if they offer this with the Nocona chipset, so it
> might not work for you.  I fear we are way off the general topic of
> bioclusters now in any case :).
>
> My 1/2 cents.
>
> Jason
>
>
>
>
> Chris Dagdigian <dag at sonsorol.org>
> Sent by:
> bioclusters-bounces+jason.calvert=pharma.novartis.com at bioinformatics.org
> 03/15/2005 12:42 PM
> Please respond to "Clustering,  compute farming & distributed computing in
> life science informatics"
>
>
>         To:     "Clustering,  compute farming & distributed computing in life science
> informatics" <bioclusters at bioinformatics.org>
>         cc:     (bcc: Jason Calvert/PH/Novartis)
>         Subject:        Re: [Bioclusters] Direct connect infiniband/quadrics?
>
>
>
> It comes down to this:
>
> There are very few applications in most areas of life science
> informatics that are (a) parallel aware enough to take advantage of a
> high-speed low latency interconnect like infiniband/quadrics and (b)
> actually written well enough to take advantage of the faster
> interconnect. There are some MPI codes out there that just use MPI for
> simple stuff or API calls that do not actually run faster via the
> special interconnect layer.
>
> So the usual case in bioclusters is "you don't need these sorts of
> interconnects at all because your science can't take advantage of them.
> They may be sexy to management types but generally it's a waste of
> money...".
>
> The exceptions are:
>
> o people doing in-house parallel development who want to use these
> interconnects. These are typically smart people coding in Fortran or C++,
> and relatively few life sci in-house software development groups have the
> skills to write true HPC codes.
>
> o people doing other "stuff" with the interconnect, like clustering for
> HA or perhaps running a global/cluster filesystem over it
>
> o people doing computational chemistry and molecular dynamics :)
>
>
> You have personally found the big exception -- there are lots of
> commercial and non-commercial codes in the chemistry and molecular
> modeling spaces that actually are written for MPI and can (for most use
> cases) take advantage of the faster, lower-latency interconnects.
>
> My $.02
>
> -Chris
>
>
> Farul Mohd Ghazali wrote:
> >
> > Has anyone had any experience with a direct connect/point-to-point
> > implementation of Quadrics or Infiniband? I talked to a small lab doing
> > some computational chemistry and molecular dynamics work and they're
> > interested in setting up a cluster but there is a need to justify the
> > cost of a cluster before the budget can be approved.
> >
> > During the discussion, the idea of using direct connect infiniband or
> > quadrics on two dual or quad Opteron nodes came up as a testbed platform
> > to justify to management. From a price point of view, this is very
> > attractive since it'll probably cost less than $40,000 (two quad
> > Opterons, two Quadrics cards) for a testbed system. Money is tight...
> >
> > So, is this setup workable? In theory this should be faster than a
> > gigabit based interconnect, even if it's just two nodes but I'd welcome
> > any other ideas/suggestions. Thanks.
> >
> >
> > -- "Leadership & Life-long Learning" --
> >
> > Farul Mohd. Ghazali
> > Manager, Systems & Bioinformatics
> > Open Source Systems Sdn. Bhd.
> > www.aldrich.com.my  Tel: +603-8656 0139/29   Fax: +603-8656 0132
> >
> > _______________________________________________
> > Bioclusters maillist  -  Bioclusters at bioinformatics.org
> > https://bioinformatics.org/mailman/listinfo/bioclusters
>
> --
> Chris Dagdigian, <dag at sonsorol.org>
> BioTeam  - Independent life science IT & informatics consulting
> Office: 617-665-6088, Mobile: 617-877-5498, Fax: 425-699-0193
> PGP KeyID: 83D4310E iChat/AIM: bioteamdag  Web: http://bioteam.net

