[slurm-users] Running job using our serial queue

Juergen Salk juergen.salk at uni-ulm.de
Mon Nov 4 21:30:38 UTC 2019


* David Baker <D.J.Baker at soton.ac.uk> [191104 15:14]:

> It looks like the downside of the serial queue is that jobs from
> different users can interact quite badly. 

Hi David,

what exactly do you mean by "jobs from different users can interact 
quite badly"?

> [...] On the other hand I wonder if our cgroups setup is optimal 
> for the serial queue.
> Our cgroup.conf contains...
> 
> CgroupAutomount=yes
> CgroupReleaseAgentDir="/etc/slurm/cgroup"
> 
> ConstrainCores=yes
> ConstrainRAMSpace=yes
> ConstrainDevices=yes
> TaskAffinity=no
> 
> CgroupMountpoint=/sys/fs/cgroup
> 
> The relevant cgroup configuration in the slurm.conf is...
> ProctrackType=proctrack/cgroup
> TaskPlugin=affinity,cgroup

This should be:

TaskPlugin=task/affinity,task/cgroup

> Could someone please advise us on the required/recommended cgroup setup for the 
> above scenario? For example, should we really set "TaskAffinity=yes"? 

To the best of my knowledge, it is recommended to set TaskAffinity=no 
in cgroup.conf when stacking task/affinity and task/cgroup. 

See the NOTE for TaskPlugin in slurm.conf man page:

 It is recommended to stack task/affinity,task/cgroup together when
 configuring TaskPlugin, and setting TaskAffinity=no and
 ConstrainCores=yes in cgroup.conf. This setup uses the task/affinity
 plugin for setting the affinity of the tasks (which is better and
 different than task/cgroup) and uses the task/cgroup plugin to fence
 tasks into the specified resources, thus combining the best of both
 pieces.
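
Putting that together with the settings you posted, a minimal setup along
the lines of the man page note might look like this (just a sketch of the
relevant lines, not a complete configuration - keep your other options
as they are):

 # slurm.conf
 ProctrackType=proctrack/cgroup
 TaskPlugin=task/affinity,task/cgroup

 # cgroup.conf
 CgroupAutomount=yes
 CgroupMountpoint=/sys/fs/cgroup
 ConstrainCores=yes
 ConstrainRAMSpace=yes
 ConstrainDevices=yes
 TaskAffinity=no

With ConstrainCores=yes and ConstrainRAMSpace=yes, a serial job that
tries to use more cores or memory than it requested is confined to its
own allocation, which should limit how badly jobs on a shared node can
interfere with each other.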

Best regards
Jürgen

-- 
Jürgen Salk
Scientific Software & Compute Services (SSCS)
Kommunikations- und Informationszentrum (kiz)
Universität Ulm
Telefon: +49 (0)731 50-22478
Telefax: +49 (0)731 50-22471


