[slurm-users] Checking memory requirements in job_submit.lua
pbisbal at pppl.gov
Wed Jun 13 11:59:31 MDT 2018
In my environment, we have several partitions that are 'general access',
with each partition providing different hardware resources (IB, large
mem, etc). Then there are other partitions that are for specific
departments/projects. Most of this configuration is historical, and I
can't just rearrange the partition layout, etc., which would allow Slurm
to apply its own logic to redirect jobs to the appropriate nodes.
For the general access partitions, I've decided to apply some of this logic
in my job_submit.lua script. This logic would look at some of the job
specifications and change the QOS/Partition for the job as appropriate.
One thing I'm trying to do is have large memory jobs be assigned to my
large memory partition, which is named mque for historical reasons.
To do this, I have added the following logic to my job_submit.lua script:
if job_desc.pn_min_mem > 65536 then
   slurm.user_msg("NOTICE: Partition switched to mque due to memory request")
   job_desc.partition = 'mque'
   job_desc.qos = 'mque'
end
This works when --mem is specified, but doesn't seem to work when
--mem-per-cpu is specified. What is the best way to check the memory
request in that case? Logically, one would have to calculate
mem per node = ntasks_per_node * ( ntasks_per_core / min_mem_per_cpu )
Is this correct? If so, are there any flaws in the logic/variable names
above? Also, is this quantity automatically calculated in Slurm by a
variable that is accessible by job_submit.lua at this point, or do I
need to calculate this myself?
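For reference, here is a sketch of the kind of check I have in mind. This is a fragment for job_submit.lua, not standalone code, and it rests on my assumptions from reading other job_submit examples: that slurm.MEM_PER_CPU is exposed to the plugin and that a --mem-per-cpu request is signaled by that flag bit being set in pn_min_mem. Corrections welcome.

```lua
-- Sketch only (untested): assumes slurm.MEM_PER_CPU, slurm.NO_VAL16, and
-- the job_desc fields below are available in this Slurm version.
-- Memory values are in MB.
local mem_per_node
if job_desc.pn_min_mem >= slurm.MEM_PER_CPU then
   -- Assumption: --mem-per-cpu was used, so the per-CPU value is encoded
   -- with the MEM_PER_CPU flag bit set; strip the flag to get the number.
   local mem_per_cpu = job_desc.pn_min_mem - slurm.MEM_PER_CPU
   -- My guess at CPUs per node: ntasks_per_node * cpus_per_task,
   -- defaulting each to 1 when unset.
   local tasks = job_desc.ntasks_per_node
   if tasks == nil or tasks == slurm.NO_VAL16 then tasks = 1 end
   local cpt = job_desc.cpus_per_task
   if cpt == nil or cpt == slurm.NO_VAL16 then cpt = 1 end
   mem_per_node = mem_per_cpu * tasks * cpt
else
   -- --mem was used: pn_min_mem is already a per-node value.
   mem_per_node = job_desc.pn_min_mem
end

if mem_per_node > 65536 then
   slurm.user_msg("NOTICE: Partition switched to mque due to memory request")
   job_desc.partition = 'mque'
   job_desc.qos = 'mque'
end
```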