[Bioclusters] limiting job memory size on G5s
Barry J Mcinnes
Barry.J.Mcinnes at noaa.gov
Wed Dec 15 16:27:22 EST 2004
We have a test cluster of G5s running 10.3.6 and GE 6.
We want to be able to limit physical memory usage per job, which
works for us with SGE under Solaris.
The current limits for the queue are set via qmon, and are printed out
for each job as
cputime unlimited
filesize unlimited
datasize 2097152 kbytes
stacksize 65536 kbytes
coredumpsize 0 kbytes
memoryuse 1048576 kbytes
descriptors 1024
memorylocked unlimited
maxproc 100
When we run our memory-allocating test job, which grabs n times 4
bytes, the following happens: n = 2, 2.5, and 2.75 x 10**8 all run
fine. For 3.00 x 10**8, GE tries running the job on a node and puts
that node in E state, tries another node and puts it in E state too,
then actually runs on a third node. All G5s are identical hardware.
By my calculation, 2.75 x 4 x 10**8 bytes (~1.1 GB) already exceeds
the 1048576 kbyte memoryuse limit and should fail.
Anyway, is there a way to hard-limit a job to 1 GB of memory so that
the job itself fails with an error, without the node being put in
Error state and locked out for other jobs?
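What we have tried so far is queue-level limits via qmon; the
command-line equivalent would be something like this (a sketch, and
the queue name is illustrative; whether h_vmem is actually enforced
as a per-process rlimit on 10.3.6 is exactly what seems not to work):

```shell
# Set a hard virtual-memory limit on the queue itself:
#   qconf -mq <queue>  then set  h_vmem  to 1G in the editor
qconf -mq test.q

# Or make h_vmem a consumable complex with a default, so the
# scheduler accounts for it per job:
#   qconf -mc  then set  h_vmem ... consumable YES, default 1G
qconf -mc

# A job can also request/accept its own limit at submit time:
qsub -l h_vmem=1G memgrab.sh
```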
thanks barry