[slurm-users] MaxJobs-limits

Renfro, Michael Renfro at tntech.edu
Tue Jan 28 14:04:22 UTC 2020


For the first question: you should be able to define each node’s core count, hyperthreading, or other details in slurm.conf. That would allow Slurm to schedule (well-behaved) tasks to each node without anything getting overloaded.
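As a rough sketch (the node names, core counts, and memory sizes below are made up for illustration, and exact syntax can vary by Slurm version), the definitions could look something like:

    # slurm.conf -- illustrative values only
    NodeName=node01 Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=64000
    NodeName=node02 Sockets=1 CoresPerSocket=4 ThreadsPerCore=1 RealMemory=16000
    PartitionName=main Nodes=node01,node02 Default=YES State=UP

    # Track individual cores rather than whole nodes
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core

With consumable-resource scheduling at the core level, Slurm only places as many jobs on a node as its cores allow, which effectively caps jobs per node according to each node's specs.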

For the second question about jobs that aren’t well-behaved (a job requesting 1 CPU, but starting multiple parallel threads, or multiple MPI processes), you’ll also want to set up cgroups to constrain each job’s processes to its share of the node (so a 1-core job starting N threads will end up with each thread getting a 1/N share of a CPU).
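A minimal sketch of the relevant settings (check the slurm.conf and cgroup.conf man pages for your version; defaults and available options vary):

    # slurm.conf -- use cgroups for process tracking and task containment
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf -- confine jobs to their allocated cores and memory
    ConstrainCores=yes
    ConstrainRAMSpace=yes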

On Jan 28, 2020, at 6:12 AM, zz <anand633r at gmail.com> wrote:

Hi,

I am testing Slurm for a small cluster. I just want to know whether there is any way I could set a max job limit per node; I have nodes with different specs running under the same QOS. Please ignore this if it is a stupid question.

Also, I would like to know what will happen when a process running on a dual-core system requires, say, 4 cores at some step.

Thanks