[slurm-users] cgroup limits not created for jobs

Paul Raines raines at nmr.mgh.harvard.edu
Fri Jul 24 16:48:35 UTC 2020


I am not seeing any cgroup limits being put in place on the nodes
when jobs run.  I am running Slurm 20.02 on CentOS 7.8.

In slurm.conf I have

ProctrackType=proctrack/cgroup
TaskPlugin=task/affinity,task/cgroup
SelectTypeParameters=CR_Core_Memory
JobAcctGatherType=jobacct_gather/cgroup
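
(Assuming slurmd picked these up after a restart; one way I know of to
double-check the values the running daemons are actually using is

   scontrol show config | grep -iE 'ProctrackType|TaskPlugin|JobAcctGather'

but for what follows I am taking it as given that these are in effect.)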

and cgroup.conf has

CgroupAutomount=yes
ConstrainCores=yes
ConstrainDevices=yes
ConstrainRAMSpace=yes

But when I run a job, I can find no evidence on the node where it runs
of any limits being set in the cgroup filesystem.

Example job:

mlscgpu1[0]:~$ salloc -n1 -c3 -p batch --gres=gpu:quadro_rtx_6000:1 --mem=1G
salloc: Granted job allocation 17
mlscgpu1[0]:~$ echo $$
137112
mlscgpu1[0]:~$
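
(One thing I am unsure of: as far as I know the shell salloc hands me is
not launched by slurmd, so maybe it is expected that this PID is not
confined.  To rule that out I could launch an actual step and look at its
cgroup from the inside, roughly

   srun --pty /bin/bash
   cat /proc/self/cgroup

but the output below is for the salloc shell's PID.)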

But in the cgroup filesystem:

[root at mlscgpu1 slurm]# find /sys/fs/cgroup/ -name slurm 
[root at mlscgpu1 slurm]#
[root at mlscgpu1 slurm]# find /sys/fs/cgroup/ -name tasks -exec grep -l 137112 {} \;
/sys/fs/cgroup/pids/user.slice/tasks
/sys/fs/cgroup/memory/user.slice/tasks
/sys/fs/cgroup/cpuset/tasks
/sys/fs/cgroup/blkio/user.slice/tasks
/sys/fs/cgroup/cpu,cpuacct/user.slice/tasks
/sys/fs/cgroup/devices/user.slice/tasks
/sys/fs/cgroup/net_cls,net_prio/tasks
/sys/fs/cgroup/perf_event/tasks
/sys/fs/cgroup/hugetlb/tasks
/sys/fs/cgroup/freezer/tasks
/sys/fs/cgroup/systemd/user.slice/user-5829.slice/session-80624.scope/tasks
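
If I understand the task/cgroup plugin correctly, I would have expected a
slurm hierarchy under each constrained controller, something along the
lines of (uid and job id taken from the example above)

   /sys/fs/cgroup/memory/slurm/uid_5829/job_17/
   /sys/fs/cgroup/cpuset/slurm/uid_5829/job_17/
   /sys/fs/cgroup/devices/slurm/uid_5829/job_17/

but as the find above shows, there is no slurm directory anywhere under
/sys/fs/cgroup, and the shell's PID only shows up in the user.slice and
systemd cgroups.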


---------------------------------------------------------------
Paul Raines                     http://help.nmr.mgh.harvard.edu
MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging
149 (2301) 13th Street     Charlestown, MA 02129	    USA





