[slurm-users] Limit number of CPU in a partition

Nicolò Parmiggiani nicolo.parmiggiani at gmail.com
Fri Jan 5 04:40:21 MST 2018


For instance, I have two nodes:

1) 30 CPUs
2) 70 CPUs

so in total I have 100 CPUs.

I have two partitions:

1) low priority
2) high priority

I also have two data-processing pipelines. The first one uses the low-priority
partition and can use all available CPUs. The second one uses the high-priority
partition and triggers analyses from time to time. I have implemented preemption,
so when the high-priority pipeline starts an analysis, Slurm suspends the
low-priority jobs and starts the high-priority ones. How can I limit the CPUs
used by the second pipeline to, for instance, 50 (independently of the node)?
That way at least 50 CPUs are always working for the low-priority pipeline.
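
(For reference, a minimal sketch of one way such a cap is commonly expressed: a
QOS with a GrpTRES limit attached to the high-priority partition. The QOS,
partition, and node names below are made up, and it assumes the accounting
database is in use with limit enforcement enabled.)

# Create a QOS that caps the aggregate CPUs of the high-priority partition
sacctmgr -i add qos highprio
sacctmgr -i modify qos highprio set GrpTRES=cpu=50

# slurm.conf (excerpt): enforce limits and attach the QOS to the partition
AccountingStorageEnforce=limits
PartitionName=high Nodes=node[1-2] PriorityTier=10 QOS=highprio
PartitionName=low  Nodes=node[1-2] PriorityTier=1  Default=YES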

As a second step, I would like the limit to be relative to the total number of
available CPUs; otherwise, if the second node goes down, I only have 30 CPUs,
and with the fixed limit of 50 the high-priority pipeline would use all 30 of
them.
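
(There does not appear to be a built-in percentage-style limit; a rough
workaround, reusing the assumed "highprio" QOS above, would be a small script
run periodically that recomputes the cap from the CPUs currently usable. Only a
sketch, not a tested tool.)

#!/bin/bash
# Hypothetical helper: cap the high-priority QOS at half of the CPUs that are
# currently allocated or idle; run it from cron or a systemd timer.
usable=$(sinfo -h -o "%C" | awk -F/ '{a+=$1; i+=$2} END {print a+i}')
sacctmgr -i modify qos highprio set GrpTRES=cpu=$(( usable / 2 ))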

If I want to reserve some CPUs for database and web operations, do I need to
set a lower CPU count on the node line in slurm.conf?
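
(Two possible readings, sketched below with made-up node names and counts:
either advertise fewer CPUs than the node really has on its NodeName line, or
reserve cores explicitly with CoreSpecCount, which needs a task plugin such as
task/cgroup to be enforced. Only a sketch, not a tested configuration.)

# slurm.conf (excerpt): keep some cores free for local services
# Option 1: advertise fewer CPUs than the hardware provides
NodeName=node1 CPUs=26 State=UNKNOWN
# Option 2: reserve cores for system use (requires a task plugin)
NodeName=node2 CPUs=70 CoreSpecCount=4 State=UNKNOWN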

Thank you!




2018-01-05 12:15 GMT+01:00 Ade Fewings <ade.fewings at hpcwales.co.uk>:

> Are the partitions dynamic?  I.e., do they float across a larger pool of
> nodes, with the aim of limiting a particular partition to a maximum share
> of those nodes?
>
>
>
> ~~
> Ade
>
>
>
>
>
>
>
> From: slurm-users [mailto:slurm-users-bounces at lists.schedmd.com] On
> Behalf Of Nicolò Parmiggiani
> Sent: 05 January 2018 09:56
> To: slurm-users at googlegroups.com
> Cc: slurm-users at lists.schedmd.com
> Subject: Re: [slurm-users] Limit number of CPU in a partition
>
>
>
> Hi,
>
>
>
> Can someone help me? How can I limit the maximum number of CPUs that a
> partition can use?
>
>
>
> Thank You.
>
>
>
> 2018-01-02 18:28 GMT+01:00 Nicolò Parmiggiani <
> nicolo.parmiggiani at gmail.com>:
>
> I have only one server and two data-analysis pipelines: one for standard
> jobs and another for high-priority jobs that can be triggered from time to time.
>
>
>
> My first solution was to split the CPUs of the server into two partitions,
> one for each pipeline.
>
>
>
> A more complex (but, I suppose, better) solution could be to use one
> partition and manage the priorities of the two pipelines, so that jobs from
> the high-priority pipeline start immediately, temporarily suspending
> standard jobs.
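
(A minimal slurm.conf/sacctmgr sketch of that single-partition approach,
assuming QOS-based preemption with job suspension; the partition, node, and
QOS names are invented:)

# slurm.conf (excerpt): one partition, QOS-based preemption by suspension
PreemptType=preempt/qos
PreemptMode=SUSPEND,GANG
PartitionName=all Nodes=node[1-2] Default=YES

# QOS setup: jobs submitted with --qos=high may suspend jobs running with --qos=low
sacctmgr -i add qos low
sacctmgr -i add qos high
sacctmgr -i modify qos high set Preempt=low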
>
>
>
> What solution do you recommend? And how can I implement it?
>
>
>
> Thank You.
>
>
>
> 2018-01-02 13:33 GMT+01:00 Ole Holm Nielsen <Ole.H.Nielsen at fysik.dtu.dk>:
>
> On 01/02/2018 12:59 PM, Nicolò Parmiggiani wrote:
>
> My problem is that I have, for instance, 100 CPUs, and I want to create two
> partitions, each with a maximum usage of 50 CPUs. In this way I can submit
> jobs to both partitions independently.
>
>
> I wonder what you really want to achieve?  Why do you want to divide your
> 100 CPUs (nodes or CPU cores?) into 2 partitions?  The queue will be much
> more flexible if identical nodes constitute 1 partition.
>
> If you really insist on splitting up your nodes into 2 partitions, you can
> configure that in slurm.conf, for example with 100 nodes named a001..a100:
>
> PartitionName=partition1 Nodes=a[001-050]
> PartitionName=partition2 Nodes=a[051-100]
>
> If you wish, you can look at my Slurm Wiki pages for inspiration:
> https://wiki.fysik.dtu.dk/niflheim/SLURM
> https://wiki.fysik.dtu.dk/niflheim/Slurm_configuration
>
> 2018-01-02 11:29 GMT+01:00 Nicolò Parmiggiani <nicolo.parmiggiani at gmail.com>:
>
>     Hi,
>
>     how can I limit the number of CPUs that a partition can use?
>
>     For instance, when a partition reaches its maximum number of CPUs, you
>     can still submit new jobs, but they are put in the queue.
>
>
> /Ole
>