On Mon, 3 Nov 2003, Chris Dagdigian wrote:

> Rackable denies sending the astroturf message and points to big sales
> numbers (and order backlog) as reasons why even aggressive salespeople

The same "astroturf" message was sent to the Beowulf mailing list, where
it was moderated out. There had been no previous posting by that person,
and it fits the profile of similar marketing attempts by other vendors.

> My take is this:
>
> (1) The real value for blade servers is mostly in the management,
> monitoring and provisioning software that you get with the blade
> platform.

The only tie-in between the blade hardware platform and the management
software is the physical monitoring and management: power, thermal/fan,
reset, and perhaps console management. Each of these has a functional
equivalent on non-blade systems.

To put it another way: blades are just a different hardware package.
There is no reason blades should dictate the software platform.

> When done right it is the software that delivers the actual
> savings in terms of operational burden and that is what really tips the
> scales over in favor of paying extra $$ for blades.

Here I agree completely. The software architecture is what reduces the
big costs:
 - administrative and user time,
 - training,
 - and the ability to transparently scale up and move to newer hardware.

> What do others think? Is the management and provisioning software just
> as important as the form factor and wiring density savings?

The software is far more important.

- Starting two years ago, we have seen many hardware platforms that are
  near the air-cooled thermal density limits of most computer rooms.
  Compute density is no longer a vital issue.

- Wiring complexity is somewhat influenced by the software platform:
  full OS installs on each machine tend to require KVM switches or
  serial concentrators for maintenance. But non-blade platforms are
  frequently installed and operated with only power and network
  connections.
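As a concrete sketch of power/network-only management: a node's baseboard
management controller can be reached over the ordinary Ethernet with a tool
such as ipmitool. The hostname and credentials below are placeholders, and
the exact interface type depends on the BMC firmware.

```shell
# Query a node's power state over IPMI-on-LAN -- no KVM or serial
# concentrator needed.  "node01", "admin", and "secret" are placeholders.
ipmitool -I lan -H node01 -U admin -P secret chassis power status

# Read the thermal and fan sensors remotely.
ipmitool -I lan -H node01 -U admin -P secret sdr list

# Hard-reset a wedged node from the head node.
ipmitool -I lan -H node01 -U admin -P secret chassis power reset
```

All of this traffic shares the cluster's existing network wiring, which is
the point being made about wiring complexity.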
With the right on-motherboard NIC, IPMI management can run invisibly
over the same Fast/Gigabit Ethernet used for application communication.
Unless/until the machines use Myrinet, cLAN, or IB, network wiring
complexity isn't a significant issue.

-- 
Donald Becker                           becker@scyld.com
Scyld Computing Corporation             http://www.scyld.com
914 Bay Ridge Road, Suite 220           Scyld Beowulf cluster system
Annapolis MD 21403                      410-990-9993