[slurm-users] Tell me about MaxKmemPercent.

fj2770fj at fujitsu.com fj2770fj at fujitsu.com
Tue Dec 24 05:36:32 UTC 2019


Hi, All

I want to confirm the behavior of MaxKmemPercent in cgroup.conf.
With the configuration below, I expected memory.kmem.limit_in_bytes to be set to 1G via MaxKmemPercent, since the node has RealMemory=1024.
However, memory.kmem.limit_in_bytes was set to the value of AllowedKmemSpace (104857600) instead.
Could someone tell me how to confirm that MaxKmemPercent is being applied?
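
For reference, this is the arithmetic behind my expectation (a rough sketch; the exact rounding Slurm uses internally may differ):

# Expected limit from MaxKmemPercent (sketch; Slurm's internal rounding may differ)
# AllocMem for the job is 1024 MB and MaxKmemPercent=100
expected=$(( 1024 * 1024 * 1024 * 100 / 100 ))   # 1073741824 bytes (1G)
observed=104857600                               # the AllowedKmemSpace value that was applied
echo "expected=${expected} observed=${observed}"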

[root@ohpc137pbsop-sms ~]# scontrol show node ohpc137pbsop-c001
NodeName=ohpc137pbsop-c001 Arch=x86_64 CoresPerSocket=12
   CPUAlloc=2 CPUTot=48 CPULoad=0.84
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=(null)
   NodeAddr=ohpc137pbsop-c001 NodeHostName=ohpc137pbsop-c001 Version=18.08
   OS=Linux 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019
   RealMemory=1024 AllocMem=1024 FreeMem=190328 Sockets=2 Boards=1
   State=MIXED ThreadsPerCore=2 TmpDisk=4096 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=normal
   BootTime=2019-12-24T13:40:27 SlurmdStartTime=2019-12-24T14:21:40
   CfgTRES=cpu=48,mem=1G,billing=48
   AllocTRES=cpu=2,mem=1G
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
   
[root@ohpc137pbsop-sms ~]# cat /etc/slurm/cgroup.conf
###
#
# Slurm cgroup support configuration file
#
# See man slurm.conf and man cgroup.conf for further
# information on cgroup configuration parameters
#--
ConstrainCores=yes
TaskAffinity=no
CgroupMountpoint=/cgroup
CgroupAutomount=yes
ConstrainRAMSpace=yes
ConstrainKmemSpace=yes
ConstrainDevices=no
AllowedKmemSpace=104857600
MaxKmemPercent=100
MinKmemSpace=30

[root@ohpc137pbsop-sms ~]#
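
As a next test, I plan to comment out AllowedKmemSpace on the compute node and restart slurmd, on the (unconfirmed) assumption that an explicit AllowedKmemSpace may take precedence over the MaxKmemPercent calculation:

# Assumption (not confirmed): AllowedKmemSpace may override the MaxKmemPercent
# calculation when both are set, so disable it and re-check the limit
sed -i 's/^AllowedKmemSpace/#AllowedKmemSpace/' /etc/slurm/cgroup.conf
systemctl restart slurmd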

[test@ohpc137pbsop-sms ~]$ cat yes.sh
#!/bin/bash

#SBATCH -J yes          # Job name
#SBATCH -o job.%j.out         # Name of stdout output file (%j expands to jobId)

srun yes > /dev/null

[test@ohpc137pbsop-sms ~]$ sbatch yes.sh
Submitted batch job 160
[test@ohpc137pbsop-sms ~]$
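
To find which compute node to inspect, I look up the job's node list first (one way to do it):

# Show job ID, state and allocated node(s) for the submitted job
squeue -j 160 -o "%.10i %.8T %N"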

[root@ohpc137pbsop-c001 ~]# lscgroup | grep slurm
pids:/system.slice/slurmd.service
cpu,cpuacct:/system.slice/slurmd.service
freezer:/slurm
freezer:/slurm/uid_1001
freezer:/slurm/uid_1001/job_159
freezer:/slurm/uid_1001/job_159/step_0
freezer:/slurm/uid_1001/job_159/step_batch
blkio:/system.slice/slurmd.service
devices:/system.slice/slurmd.service
cpuset:/slurm
cpuset:/slurm/uid_1001
cpuset:/slurm/uid_1001/job_159
cpuset:/slurm/uid_1001/job_159/step_0
cpuset:/slurm/uid_1001/job_159/step_batch
memory:/slurm
memory:/slurm/uid_1001
memory:/slurm/uid_1001/job_159
memory:/slurm/uid_1001/job_159/step_0
memory:/slurm/uid_1001/job_159/step_batch
[root@ohpc137pbsop-c001 ~]#

[root@ohpc137pbsop-c001 step_0]# cat /cgroup/memory/slurm/uid_1001/job_159/step_0/memory.kmem.limit_in_bytes
104857600
[root@ohpc137pbsop-c001 step_0]#
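
For completeness, this is how I compare the kernel-memory limit with the RAM limit of the same step cgroup (the paths follow CgroupMountpoint=/cgroup and the uid/job IDs shown above):

# Print both the kernel-memory and RAM limits applied to the job step
for f in memory.kmem.limit_in_bytes memory.limit_in_bytes; do
    echo -n "$f: "
    cat /cgroup/memory/slurm/uid_1001/job_159/step_0/$f
done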
   

