[Bioclusters] Memory Usage for Blast - question

Lucas Carey lcarey at odd.bio.sunysb.edu
Wed Mar 9 18:33:38 EST 2005


On Wednesday, March 09, 2005 at 17:23 -0600, Dinanath Sulakhe wrote:
> At 05:06 PM 3/9/2005, Lucas Carey wrote:
> >Hi Dina,
> >I don't know how many of the results you actually need. You may free up 
> >some memory by limiting e-value, returned results, and aligned results.
> >blastall -e 0.0001 -b 25 -v 25
> 
> Would limiting the e-value and other parameters reduce the RAM usage?
With mpiBLAST, -e and -b can limit the memory usage on the master node. I don't have a free CPU right now to check the effect on plain blastall.
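For example, starting from your original command line, something like this (a sketch only -- query.fa, out.txt, and nr are placeholder names, and the e-value cutoff and result counts are just illustrative; how much memory it actually saves will depend on your query set and database):

	blastall -p blastp -d nr -i query.fa -o out.txt -m 8 -F F \
		-e 0.0001 -b 25 -v 25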
> 
> >Another option, if you can limit Condor to a single job per machine, would 
> >be to run 'blastall -a 2' to use both CPUs with only one process.
> 
> These jobs are assigned by the scheduler. Initially I had used the '-a 2' 
> option, but when such a job is running on a node, the scheduler would assign 
> another job from some other user to the same node, assuming the other 
> processor to be free, and then blast would starve that job. So we 
> can't use the '-a n' option here.
I used to use an OpenPBS cluster that behaved the same way, but it allowed me to specify which nodes I wanted to run on. I would start up my compute job on one processor, and run
	while (1) {
		sleep(1000);    # idle loop just to hold the second CPU slot
	}
on the second.
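In practice that meant submitting two jobs to the same node: the real compute job plus a trivial sleeper that holds the second CPU slot. A rough sketch (the node name, script names, and qsub options here are hypothetical; adjust for your scheduler):

	# sleeper.sh -- dummy job that just holds a CPU slot so the
	# scheduler won't place another user's job on this node
	while true; do
		sleep 1000
	done

	# submit both jobs to the same node
	qsub -l nodes=node17 blast_job.sh   # real blast job on the chosen node
	qsub -l nodes=node17 sleeper.sh     # placeholder holding the second CPU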
-Lucas

> 
> 
> Thanks,
> Dina
> 
> >-Lucas
> >
> >On Wednesday, March 09, 2005 at 16:09 -0600, Dinanath Sulakhe wrote:
> >> Hi,
> >> I am not sure if this is the right place to ask this question !!
> >> I am running BLAST (NCBI) in parallel on a cluster with 80 nodes (I am
> >> running NCBI NR against itself). Each node is a dual-processor machine.
> >>
> >> I am using Condor to submit the jobs to this cluster. The problem I am
> >> running into is that whenever two blast jobs (each with 100
> >> sequences) are assigned to one node (one on each processor), the node
> >> cannot handle the amount of memory used by the two blast jobs. The PBS mom
> >> daemon on the node cannot allocate the memory it needs to monitor the
> >> jobs, so it fails, killing the jobs.
> >>
> >> Condor doesn't recognize this failure and assumes that the job was
> >> successfully completed, but in fact only a few sequences get processed
> >> before the job is killed.
> >>
> >> Now the admin of the site is asking me if it's possible to reduce the
> >> amount of memory these blast jobs use. He says these jobs are requesting
> >> about 600-700MB of RAM, and he is asking me to reduce it to at most 500MB.
> >>
> >> Is it possible to reduce the amount of RAM it is requesting by tweaking
> >> any of the parameters in blast?
> >>
> >> My blast options are :
> >>
> >> blastall -i $input -o $output -d $db -p blastp -m 8 -F F
> >>
> >> Please let me know,
> >> Thank you,
> >> Dina
> >>
> 
> ===============================
> Dinanath Sulakhe
> Mathematics & Computer Science Division
> Argonne National Laboratory
> Ph: (630)-252-7856      Fax: (630)-252-5986
> 

