[slurm-users] Creating a partition with memory and CPU limits

Calvin Dodge caldodge at gmail.com
Fri Jul 19 20:58:21 UTC 2019


I'm trying to create a partition with memory and CPU limits enforced by
cgroups. The goal is to limit jobs in the partition to 1/4 of the CPUs and
memory available on a single node.
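
(For what it's worth, my reading of the slurm.conf man page is that a cap
like this could also be written directly on the partition line, roughly as
in the sketch below; the node list is a placeholder and the numbers assume
a 4-CPU / 2G node. I'd still prefer to do it through a QoS if that can be
made to work.)

PartitionName=light Nodes=<nodelist> MaxCPUsPerNode=1 MaxMemPerNode=400 State=UP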

I've created a QoS with CPU and memory limits, and then a partition which
specifies that QoS. But when I run jobs in the partition, the cgroup memory
limit is set to just under the node's total memory (2G), rather than the
lower amount (400M) specified in the QoS.
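
(In case I'm reading the wrong knob, this is how I'm checking the limit;
the cgroup path below is what I see on these CentOS 7 nodes and may differ
on other setups. It reports a value just under 2G instead of the ~400M I
expected.)

srun -p light sh -c 'cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_${SLURM_JOB_ID}/memory.limit_in_bytes'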

I've also created an association for my user, the account, and the
partition, with that same memory limit and the QoS attached. Still no
effect on the cgroup memory limit.
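
(For reference, sacctmgr commands along these lines would produce the QoS
and association shown below; this is a reconstruction rather than a paste
of exactly what I ran, so the precise invocations may have differed:)

sacctmgr add qos quartertest
sacctmgr modify qos where name=quartertest set MaxTRES=cpu=1,mem=400M
sacctmgr add user calvin account=science partition=light
sacctmgr modify user where name=calvin account=science partition=light set GrpTRES=mem=400M MaxTRES=mem=400M QOS=quartertest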

What am I missing? Is this even possible?

/etc/slurm/cgroup.conf
CgroupAutomount=yes
ConstrainCores=yes
ConstrainRAMSpace=yes

excerpts from /etc/slurm/slurm.conf
EnforcePartLimits=yes
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup
AccountingStorageEnforce=limits,associations
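
(The partition itself is tied to the QoS in slurm.conf with something like
the following; node list and time limits trimmed:)

PartitionName=light Nodes=<nodelist> QOS=quartertest State=UP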

QoS (only the non-empty columns shown; output was wrapped by my mail client):
       Name Priority GraceTime PreemptMode UsageFactor        MaxTRES
quartertest        0  00:00:00     cluster    1.000000 cpu=1,mem=400M

Association (only the non-empty columns shown):
Cluster Account   User Partition Share  GrpTRES  MaxTRES         QOS
  linux science calvin     light     1 mem=400M mem=400M quartertest


This is Slurm 18.08, running on an OpenHPC (OHPC) cluster with CentOS 7.6.

Sincerely,

Calvin Dodge