[BioBrew Users] Unable to get mpiblast running

Glen Otero glen at callident.com
Tue Apr 11 20:26:08 EDT 2006


Glad to hear you've found a solution with mpiBLAST 1.3 and LAM/MPI.
Please post whatever you learn about mpiBLAST 1.4 and MPICH on the
mpiBLAST list back to this list.

What size hard drives, and how much memory, do you have on the head
node and on the compute nodes?
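
For instance, the output of the following two commands, run on the
head node and on one compute node, would cover both (both are standard
on Rocks systems):

  df -h    # hard drive sizes and usage, per filesystem
  free -m  # total and used memory, in MB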

Glen

On Apr 11, 2006, at 1:20 AM, Bastian Friedrich wrote:

> Hi,
>
> On Tuesday 11 April 2006 06:50, Glen Otero wrote:
>> Bastian-
>>
>> Any progress running mpiBLAST?
>
> Yesterday I compiled mpiBLAST 1.3, linked against LAM/MPI (lam-gnu
> 7.1.1, as included in the Rocks "hpc" roll). It seems to be running
> fine so far.
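>
> For reference, the build went roughly as follows; the LAM path is
> what I believe the Rocks "hpc" roll installs, and the NCBI toolbox
> path is a placeholder for wherever yours lives:
>
>   export PATH=/opt/lam/gnu/bin:$PATH     # pick up LAM's mpicc/mpiCC
>   ./configure --with-ncbi=/path/to/ncbi  # point at the NCBI toolbox
>   make && make install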
>
> As I said, your mpiBLAST 1.4 with MPICH worked for the "mini data
> set" you described (p53/p53db). I did not succeed in getting it to
> run with any other data set I tested: neither the Hs.seq.uniq/IL2RA
> combination from your tutorial nor GenBank. This seems to be an MPICH
> problem I'm encountering (although it is unclear whether OpenMPI
> would run even if this problem did not exist... :)
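>
> (For reference, I formatted the larger database roughly like this;
> the fragment count of 28 matches the debug output further down, but
> treat the exact flags as from memory:
>
>   mpiformatdb -N 28 -i Hs.seq.uniq -p F  # split the nucleotide db into 28 fragments
> )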
>
> When compiling mpiBLAST 1.4 with LAM/MPI, I get an interesting
> result: the Hs.seq.uniq/IL2RA combination _does_ run (I only found
> out about the --debug flag yesterday), but it takes more than 8 hours
> (!) to finish. That is a little too long, I think ;) Here is a small
> snippet of its output:
> =============================
> [...]
> [9]     26.7763 rank 9 exiting successfully
> [9]     26.7763 MPI startup  time is 0
> [0]     26.9766 Processing result_id: 0
> [0]     26.9767 Query 0 now has 2 / 28 fragments searched
> [0]     26.9767 Got result count of 1
> [0]     26.9767 receiveResults to read 1 records
> [0]     27.4961 Read SeqAnnot
> [0]     27.4961 Reading bioseqs for frag/query 4/0
> [0]     108.928 Receive was successful -- about to merge
> [0]     108.94  Query results have been merged
> [0]     108.952 Processing result_id: 0
> [0]     108.952 Query 0 now has 3 / 28 fragments searched
> [0]     108.952 Got result count of 1
> [0]     108.953 receiveResults to read 1 records
> [0]     109.473 Read SeqAnnot
> [0]     109.473 Reading bioseqs for frag/query 5/0
> [0]     245.66  Receive was successful -- about to merge
> [...]
> =============================
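>
> (The run that produced this output was started roughly as follows;
> the database and query names follow your tutorial, the process count
> matches ranks 0-9 above, and the exact file names may differ:
>
>   lamboot                # start the LAM daemons on all nodes
>   mpirun -np 10 mpiblast -p blastn -d Hs.seq.uniq -i IL2RA.fa \
>       -o results.txt --debug
> )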
>
> The actual BLAST jobs seem to finish in a reasonable time (the ~26
> seconds are more or less the same as with mpiBLAST 1.3), but
> receiving and merging the results takes hours. The load on the
> compute nodes drops to zero, while it stays high (around 1) on the
> head node. I have not investigated this further, as even a rather
> small job takes hours to finish :)
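>
> (On Rocks, a quick way to watch this is the stock cluster-fork
> helper:
>
>   cluster-fork uptime  # load averages on every compute node
>   uptime               # and on the head node itself
> )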
>
> This topic, or a similar one, is currently being discussed on the
> mpiBLAST mailing list, which I subscribed to yesterday.
>
> When compiling mpiBLAST 1.4 with OpenMPI, the load on all nodes
> (head and compute) stays high for hours; I have not fully
> investigated this, but there seems to be some incompatibility between
> mpiBLAST and OpenMPI... mpiBLAST 1.3 does not work with it either.
> Since OpenMPI is not actively supported by mpiBLAST, I'll stick with
> the alternatives for now.
>
> So, once more: mpiBLAST 1.3.0 with LAM/MPI works well; all the
> alternatives have failed so far. We will see later today whether
> mpiBLAST 1.3.0 is sufficient for our needs.
>
> If there is any more information I can provide, feel free to ask.
>
> Thx again, Regards,
>    Bastian
>
> -- 
>  Bastian Friedrich                  bastian at bastian-friedrich.de
>  Address & phone available on my homepage  http://www.bastian-friedrich.de/
> \~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\
> \ ... All the world's a stage, and I missed rehearsal.
> _______________________________________________
> BioBrew-Users mailing list
> BioBrew-Users at bioinformatics.org
> https://bioinformatics.org/mailman/listinfo/BioBrew-Users



