On Sat, 18 May 2002 19:29:41 -0500 Mike Coleman <mkc@mathdogs.com> wrote:

> Andrew Shewmaker <shewa@inel.gov> writes:
> > I think you are right that you are using the word 'distribution'
> > differently than Don.  My opinion is that Scyld is its own distribution,
> > or is at least becoming one in the same way that Mandrake has.
>
> I agree that this is what Scyld is doing.  As a potential user, though, it
> concerns me.  If I run into a problem with one of the many hundreds of
> packages in the Scyld distribution, who do I talk to about it?  I wouldn't
> expect Scyld to have any special knowledge about most of these packages
> (since their expertise is in Beowulf and clustering).  I'd prefer to get
> clustering support from Scyld and distribution support from the
> distribution that I choose.

I thought you might be interested in this article, which compares the
support available from Linuxcare, HP, IBM, and Caldera:

http://www.networkcomputing.com/1309/1309f3.html

> Because there's little reason not to, given that Bproc already requires
> that some files (shared libraries and files the application accesses by
> name) must be replicated anyway.  I think it's a lot easier to explain to
> users that each node has a virtually complete (read-only) image of the
> master's files, versus trying to explain that some files are there and
> some are not.

I see what you are saying.  Some users may find the pseudo-single system
image concept confusing.  Note that you can still mount filesystems like
NFS or PVFS on the slave nodes.  PVFS was designed to be used with the
MPI-IO interface and is supposed to get very good performance (someone on
the beowulf list posted some numbers).

Projects like www.opencf.org may alleviate your concerns that the Bproc
patch tampers with the vanilla kernel too much.  This group is working to
standardize a cluster framework for the kernel.  Something like Bproc will
likely get in some day.
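To make the slave-node mounting point concrete, here is a rough sketch of
how it might look using BProc's bpsh to run the mount on a slave.  The
node number 0, the hostname "master", and the export path "/home" are all
made-up examples, not anything from Scyld's actual setup:

```shell
# Run these on the front-end.  bpsh executes a command on a slave node;
# node 0, "master", and "/home" are hypothetical placeholders.
bpsh 0 mkdir -p /home
bpsh 0 mount -t nfs master:/home /home

# Check the mount from the slave's point of view.
bpsh 0 mount
```

PVFS mounts work along the same lines once the pvfsd/pvfs kernel module
pieces are in place on the slaves.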
The Compaq SSI project (http://ssic-linux.sourceforge.net) uses parts of
both Bproc and Mosix, which are the two leading projects for unifying the
process space on Linux clusters, if I'm not mistaken.

> I'd like to, too, which is why I'm asking all these (hopefully not too)
> rude questions.

Your questions and insights are excellent and not rude.  I hope that I
have not come off sounding like a know-it-all.

-- 
Andrew Shewmaker
Associate Engineer
Phone: 208.526.1415
Idaho National Engineering and Environmental Laboratory
2525 Fremont Ave.  P.O. Box 1625, MS 3605
Idaho Falls, ID 83414-3605