[Bioclusters] Memory Usage for Blast - question

Lucas Carey lcarey at odd.bio.sunysb.edu
Wed Mar 9 18:06:44 EST 2005


Hi Dina,
I don't know how many of the results you actually need, but you may free up some memory by tightening the e-value cutoff (-e) and limiting the number of one-line descriptions (-v) and alignments (-b) reported per query:
blastall -e 0.0001 -b 25 -v 25
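Combined with the options from your message below, that would look something like this (the cutoffs here are only a starting point; tune them to what you actually need downstream):

blastall -i $input -o $output -d $db -p blastp -m 8 -F F -e 0.0001 -b 25 -v 25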
Another option, if you can limit Condor to a single job per machine, would be to run 'blastall -a 2' to use both CPUs from a single process; that way the node holds one copy of the job's working data instead of two. A sketch of one way to set that up follows.
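I haven't used your site's setup, but if the admin is willing, one way to get one job per machine is to have each execute node advertise a single slot, e.g. in the node's condor_config (check the exact knob against your Condor version's manual):

# In condor_config on each dual-CPU execute node: advertise a
# single slot so Condor schedules only one job per machine.
NUM_CPUS = 1

After a condor_restart on the nodes, each machine should accept one job at a time, and that job can use both CPUs via 'blastall -a 2'.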
-Lucas

On Wednesday, March 09, 2005 at 16:09 -0600, Dinanath Sulakhe wrote:
> Hi,
> I am not sure if this is the right place to ask this question!
> I am running NCBI BLAST in parallel on a cluster with 80 nodes (I am
> running NCBI NR against itself). Each node is a dual-processor machine.
> 
> I am using Condor to submit the jobs to this cluster. The problem I am
> running into is that whenever two BLAST jobs (each with 100 sequences)
> are assigned to one node (one per processor), the node cannot handle
> the amount of memory used by the two jobs. The PBS mom daemon on the
> node cannot allocate the memory it needs to monitor the jobs, so it
> fails and kills them.
> 
> Condor doesn't recognize this failure and assumes the job completed
> successfully, but in fact only a few sequences get processed before the
> job is killed.
> 
> Now the site admin is asking me whether it's possible to reduce the
> amount of memory these BLAST jobs use. He says the jobs are requesting
> about 600-700 MB of RAM each, and he wants that brought down to at most
> 500 MB.
> 
> Is it possible to reduce the amount of RAM BLAST requests by tweaking
> any of its parameters?
> 
> My BLAST options are:
> 
> blastall -i $input -o $output -d $db -p blastp -m 8 -F F
> 
> Please let me know,
> Thank you,
> Dina

