blade servers (was Re: [Bioclusters] Any experiences with these guys? reeks)

Michael Gutteridge bioclusters@bioinformatics.org
Tue, 4 Nov 2003 09:43:05 -0800


We looked at IBM's BladeCenter, the Sun blades, and RLX (someone else 
was asking about cheaper blades).  The only knock we had against the 
Sun blades was that we don't have much need for SPARC blades (they 
seem to be aimed at web farms): most of our Sun boxes run big 
SMP/multi-user applications (8 CPUs and up), and the premium didn't 
seem worth it for exclusively Intel-based applications.  RLX has a 
solid offering: their Transmeta chips could be really interesting for 
some applications, and their management tools seemed very polished.  
They've also been in the blade business for a while.

We ended up with the BladeCenter, and we've been pretty happy with it. 
The switch (to finally get around to your question) connects all of 
the blades internally, so no cabling is necessary.  You then have four 
gigabit ports to connect to your LAN.  The switch supports VLAN 
tagging and trunking, so you can aggregate those uplinks.
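To make the VLAN side concrete, here's a rough sketch of how a blade
could tag its traffic onto a VLAN that rides those trunked uplinks.
It's a modern iproute2-style example driven from Python (back in the
day you'd have used vconfig), and the interface name, VLAN ID, and
address are all made up for illustration - nothing BladeCenter-specific:

    # Hypothetical sketch: put a blade's traffic on VLAN 20 so it is
    # carried over the tagged/trunked uplinks.  All names and numbers
    # here are assumptions, not anything from the BladeCenter switch.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    PARENT = "eth0"            # blade interface facing switch module 1
    VLAN_ID = 20               # assumed VLAN carried on the uplinks
    VLAN_IF = f"{PARENT}.{VLAN_ID}"

    # Create the tagged subinterface, give it a (made-up) address, bring it up.
    run(["ip", "link", "add", "link", PARENT, "name", VLAN_IF,
         "type", "vlan", "id", str(VLAN_ID)])
    run(["ip", "addr", "add", "10.0.20.11/24", "dev", VLAN_IF])
    run(["ip", "link", "set", VLAN_IF, "up"])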

Note that you have to buy two switch modules to get both interfaces on 
the blades: switch module 1 connects to ethernet0 on each blade, and 
switch module 2 connects to ethernet1.
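Since each blade ends up with one interface on each switch module, the
obvious host-side move is to bond the two for failover.  A minimal
sketch, again in modern iproute2 terms (in that era this was done with
ifenslave) and with every name assumed:

    # Hypothetical sketch: active-backup bond across the two blade
    # interfaces, one per switch module, so losing a switch module
    # doesn't take the blade off the network.  All names are assumptions.
    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    run(["ip", "link", "add", "bond0", "type", "bond",
         "mode", "active-backup"])
    for slave in ("eth0", "eth1"):   # eth0 -> switch module 1, eth1 -> module 2
        run(["ip", "link", "set", slave, "down"])   # slaves must be down to enslave
        run(["ip", "link", "set", slave, "master", "bond0"])
    run(["ip", "addr", "add", "10.0.30.11/24", "dev", "bond0"])  # made-up address
    run(["ip", "link", "set", "bond0", "up"])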

Best of luck

Michael

Michael Gutteridge                  Fred Hutchinson Cancer
Unix System Administrator                  Research Center
mgutteri@fhcrc.org              Research Computing Support
- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Views expressed do not necessarily represent those of my
employer. Caveat emptor, your mileage may vary, warranty
not valid if seal is broken.  Share and enjoy.


On Monday, Nov 3, 2003, at 15:13 US/Pacific, Marcia A. Goetsch wrote:

> I am looking at Sun's blade servers.  Does anyone have any good/bad
> things to say about them and the N1 provisioning software?  Also, how
> does the gigabit ethernet switch in a blade chassis (blade center)
> work with the switches for the company LAN?  Do you connect the
> blades to the internal switch, and the switch's uplink ports to the
> company switch?  Or is it something more complex?
>
> Marcia
>
> On Mon, 2003-11-03 at 14:07, Goran Ceric wrote:
>> We've had a rack full of IBM blades for 3-4 months now, and I am
>> really impressed. The management module has very nice administration
>> and monitoring capabilities, including console redirection,
>> environmental monitoring, and collecting data from the individual
>> management processors. Blade centers also come with a copy of IBM
>> Director 4, which I think is a great piece of software. We didn't
>> get their deployment software since I didn't want to pay for it;
>> SystemImager still works well enough for me. Cabling is extremely
>> neat since blade centers have built-in gigabit ethernet switches,
>> built-in power supplies, etc. The 14 blades in a single 7U blade
>> center are networked internally, and there are 4 external ports on
>> each switch.  Also, instead of 84 power cables, there are only 24 in
>> a whole rack (six 7U blade centers per rack hold 84 blades, which as
>> discrete 1U boxes would mean 84 cords; at four cords per blade
>> center that drops to 24), and half of those come from the redundant
>> power supplies. Considering that you get all this redundancy and the
>> gigabit switches included, plus avoid the cabling nightmare, and get
>> a system that is extremely easy to manage, I don't think IBM blades
>> cost much more (with our pricing) than a comparable setup of
>> standard 1U machines from a major hardware vendor. Another neat
>> thing: when IBM eventually comes out with Power blades (or possibly
>> Opteron and/or Itanium), they will plug into the same chassis as the
>> P4 blades (probably with different power supplies, though). So, to
>> answer your question: yes, I believe it's worth paying a little
>> premium for them.
>>
>> Goran
>>
>> Chris Dagdigian wrote:
>>
>>>
>>> Rackable denies sending the astroturf message and points to its big
>>> sales numbers (and order backlog) as reasons why even aggressive
>>> salespeople would not be out trying to drum up more business. I've
>>> been "joe jobbed" by spammers using one of my domain names to forge
>>> email, so I am open to the possibility that some other sneaky stuff
>>> may be going on here by people who are not part of Rackable.
>>> Interesting stuff, but not on-topic for this list :)
>>>
>>> Ok back on topic..
>>>
>>> You bring up an interesting point about blades that I'd love to get
>>> some discussion going on.
>>>
>>> My take is this:
>>>
>>> (1) The real value of blade servers is mostly in the management,
>>> monitoring, and provisioning software that you get with the blade
>>> platform. When done right, it is the software that delivers the
>>> actual savings in operational burden, and that is what really tips
>>> the scales in favor of paying extra $$ for blades.
>>>
>>> (2) Of the blade platforms I've looked at, I found that I only
>>> really enjoyed working with the IBM BladeCenter and RLX product
>>> offerings. Both had excellent software tools, IMHO.
>>>
>>> What do others think? Is the management and provisioning software
>>> just as important as the form factor and wiring-density savings?
>>>
>>> -Chris
>>>
>>> Philip MacMenamin wrote:
>>>
>>>> First off, this mail smells bogus, and hats off to the boys who
>>>> did some Sherlock Holmesing.
>>>> Second, blades are ALSO cool because they plug into a chassis:
>>>> all very easy, nice and tidy. The fact that they do not have miles
>>>> of cable hanging out of the back is a genuine advantage. I didn't
>>>> read all of this mail too closely (see the first point), but there
>>>> still seem to be miles of cable hanging out of the back of these
>>>> units. So I am still a blade convert...
>>>>
>>>
>>
> -- 
> Marcia A. Goetsch <marcia.goetsch@channing.harvard.edu>
> Channing Laboratory
> 617-525-2765
>
> _______________________________________________
> Bioclusters maillist  -  Bioclusters@bioinformatics.org
> https://bioinformatics.org/mailman/listinfo/bioclusters
>