[slurm-users] MaxJobs-limits

zz anand633r at gmail.com
Wed Jan 29 04:19:08 UTC 2020


Hi Michael,

Thanks for the quick response. What happens if we submit multiple jobs
without specifying cores or threads? As I understand it, all jobs will run
in parallel depending on the CPUs available in the node, and when no
resources are available the jobs will wait in the queue as pending. So if I
have 10 CPUs in a node and I submit 8 jobs simultaneously (each requiring
at most 1 CPU), all 8 will run on that single node. Is there any way to
limit the number of such jobs per node? That is, even when resources are
available, can we say that a node will only accept N jobs and keep all
later jobs pending in the queue (like MaxJobs, but per node rather than at
the cluster level)? Or is cgroups the only solution, as I suppose?
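(One idea I came across, not mentioned in the thread: the partition-level
MaxCPUsPerNode option in slurm.conf. If every job asks for exactly 1 CPU,
capping the CPUs a partition may use on each node would in effect cap the
number of jobs per node. Untested on my side; the sketch below uses
placeholder node and partition names.)

    # slurm.conf sketch (placeholder names/values):
    # at most 4 CPUs of each node may be used by this partition, so at most
    # 4 one-CPU jobs run per node even if more CPUs are free
    PartitionName=limited Nodes=node01,node02 MaxCPUsPerNode=4 Default=YES MaxTime=INFINITE State=UP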

On Tue, Jan 28, 2020 at 7:42 PM Renfro, Michael <Renfro at tntech.edu> wrote:

> For the first question: you should be able to define each node’s core
> count, hyperthreading, or other details in slurm.conf. That would allow
> Slurm to schedule (well-behaved) tasks to each node without anything
> getting overloaded.
>
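(To make sure I follow: I take this to mean NodeName and select-plugin
lines roughly like the sketch below -- hostnames, core counts and memory
are placeholders, not my real hardware.)

    # slurm.conf sketch (placeholder values):
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core
    NodeName=node01 Sockets=1 CoresPerSocket=10 ThreadsPerCore=1 RealMemory=32000 State=UNKNOWN
    NodeName=node02 Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=64000 State=UNKNOWN
    PartitionName=main Nodes=node01,node02 Default=YES MaxTime=INFINITE State=UP
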
> For the second question about jobs that aren’t well-behaved (a job
> requesting 1 CPU, but starting multiple parallel threads, or multiple MPI
> processes), you’ll also want to set up cgroups to constrain each job’s
> processes to its share of the node (so a 1-core job starting N threads will
> end up with each thread getting a 1/N share of a CPU).
>
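(For the cgroup part, is the following roughly what you mean? This is what
I was planning to try -- only a sketch, please correct me if I have the
wrong options.)

    # slurm.conf sketch:
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/affinity,task/cgroup

    # cgroup.conf sketch:
    ConstrainCores=yes
    ConstrainRAMSpace=yes
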
> On Jan 28, 2020, at 6:12 AM, zz <anand633r at gmail.com> wrote:
>
> Hi,
>
> I am testing Slurm for a small cluster, and I just want to know whether
> there is any way to set a maximum job limit per node. I have nodes with
> different specs running under the same QOS. Please ignore this if it is a
> stupid question.
>
> Also, I would like to know what will happen when a process running on a
> dual-core system requires, say, 4 cores at some step.
>
> Thanks
>
>