[Bioclusters] Large numbers of files (again...?)

elijah wright bioclusters@bioinformatics.org
Wed, 28 Jan 2004 06:42:26 -0600 (CST)

> I accidentally created a directory with 300,000 files, and it was practically a
> death trap for the system.
> Does anyone have any suggestions about how to handle this kind of situation? For
> example, I was looking into hashing functions using directories as nodes in the hash.

> Is any FS implemented this way? It is frustrating when mysql can easily handle
> millions of records, but my file system starts to complain at about 5000 files in
> one directory.

ReiserFS is intended to cope with situations like this one: it indexes directory
entries in a balanced tree, so lookups in huge directories stay fast.  Try that :)

ext2/FFS/UFS are likely to be disastrous, as you say; they scan directory entries
linearly, so every lookup costs time proportional to the size of the directory.
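If switching filesystems isn't an option, the directory-hashing scheme you mention is easy to do in userspace.  A minimal sketch in Python (the function names `hashed_path` and `store` are just illustrative, not from any library): hash each filename and use the leading hex digits of the digest as nested subdirectory names, so the files spread evenly across many small directories.

```python
import hashlib
import os

def hashed_path(root, filename, levels=2):
    """Map a filename to root/ab/cd/filename, where 'ab' and 'cd'
    are the first hex digits of the filename's MD5 digest."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * 2 : i * 2 + 2] for i in range(levels)]
    return os.path.join(root, *parts, filename)

def store(root, filename, data):
    """Write data under the hashed path, creating subdirs as needed."""
    path = hashed_path(root, filename)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
```

With two levels you get 256 * 256 = 65,536 buckets, so 300,000 files average under five entries per directory, which even ext2 handles comfortably.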