[Bioclusters] recompiling redhat source rpms from .spec files to add large file support?

Chris Dagdigian bioclusters@bioinformatics.org
Wed, 08 May 2002 15:43:34 -0400


Hey folks,

Hopefully this is a trivial question for someone on this list.

Support for large files has been in modern kernels and filesystems for a 
while now. What I constantly run into, though, are programs that bomb out 
when faced with large files -- particularly when I'm trying to build 
large BLAST databases :)

I'm seeing this right now with RedHat 7.2 -- the /bin/zcat program will 
dump core if you try to get it to process the latest nonredundant 'nt.Z' 
database from the NCBI. 'uncompress' and '/bin/cat' work just fine.

The easy solution is just to recompile the programs with large file 
support enabled as you discover them. I did this all the time back at my 
prior job.

My problem is that, despite a quick Google search and a trawl through my 
own notes, I can't remember the specific compiler arguments you need to 
pass through make to enable large-file support on RH Linux systems. It 
was something really simple like "-D64_BIT_OFFSET -DENABLE_LARGE_FILES".

Can anyone help jog my memory? I want to start collecting .spec files 
for the various problematic utilities so I can roll my own RPMs whenever 
I need them on a project.

Thanks!
-Chris


-- 
Chris Dagdigian, <dag@sonsorol.org>
Independent life science IT & research computing consulting
Office: 617-666-6454, Mobile: 617-877-5498, Fax: 425-699-0193
Work: http://BioTeam.net PGP KeyID: 83D4310E  Yahoo IM: craffi