Hi,

From my experience with AMBER, GROMACS, etc. on clusters and SMPs, I
would suggest going for SMPs rather than clusters. Since a lot of
communication is required at short intervals, scaling across many
processors in a cluster is difficult.

Jithesh

P.V. Jithesh
Bioinformatician
Belfast e-Science Centre
The Queen's University of Belfast
UK

On Wed, 2005-03-16 at 12:18, Tim Cutts wrote:
> On 15 Mar 2005, at 11:19 pm, Farul M. Ghazali wrote:
>
> > Hi all,
> >
> > Thanks for the responses so far. The applications in use are
> > molecular dynamics applications (mostly AMBER, some GROMACS) and
> > AutoDock. There will be others as well, but they won't be taking up
> > a lot of CPU time. Just going from a dual- to a quad-CPU machine on
> > AMBER makes a big improvement as far as I can tell.
> >
> > I'll most likely be using Rocks on this cluster, or set it up
> > manually if need be. For the time being at least, it's small enough
> > to manage (2 nodes :-)
> >
> > In terms of expansion, I'm trying to push through quad-CPU nodes
> > over duals to minimize the cost of the interconnect, if the
> > interconnects prove to be faster. I haven't looked at 8-or-more-CPU
> > Opterons yet, but I'd like to keep away from non-standard
> > configurations, e.g. the Cray XD1.
>
> Did you consider an 8-way SGI Altix 350?
>
> It might suit your application well, and might even fit in that
> budget (just).
>
> The NUMAlink interconnect is very fast indeed, although I'd probably
> be tempted to go for a few more nodes and use GBit for the
> interconnect.
>
> How many jobs do these people typically want to run at a time? If
> it's more than one at a time, then buy four quad-CPU Opteron nodes,
> GBit connected, and run up to four 4-CPU jobs at a time. The MPI then
> has no need to use the interconnect, and the throughput of the
> cluster will be very good (although individual job turnaround will be
> slower). It's worth considering.
>
> Tim