[slurm-users] Use all cores with HT node

Sidiney Crescencio sidiney.crescencio at clustervision.com
Fri Dec 7 04:12:49 MST 2018


Hello All,

I'm facing some issues using HT (hyper-threading) on my compute nodes. I'm
running Slurm 17.02.7.

SelectTypeParameters    = CR_CORE_MEMORY
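
For completeness, the scheduling side of slurm.conf looks roughly like this
(the SelectType and TaskPlugin lines are quoted from memory and may not be
exact; only the SelectTypeParameters line above is verbatim):

# slurm.conf (approximate excerpt)
SelectType=select/cons_res            # CR_CORE_MEMORY is a cons_res parameter
SelectTypeParameters=CR_CORE_MEMORY   # cores and memory are the consumable resources
TaskPlugin=task/cgroup                # cgroup.conf below constrains the allocated cores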

cgroup.conf

CgroupAutomount=yes
CgroupReleaseAgentDir="/etc/slurm/cgroup"

# cpuset subsystem
ConstrainCores=yes
TaskAffinity=no

# memory subsystem
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes

# device subsystem
ConstrainDevices=yes

If I try to allocate all 80 CPUs it does not work, and I couldn't find out
why. Do you have any idea what could be causing this? I've been playing with
several different parameters in the node definition, and also with
--threads-per-core, etc., but as far as I can tell I should be able to
allocate all 80 CPUs.

Thanks in advance.

srun --reservation=test_ht -p defq -n 80 sleep 100
srun: error: Unable to allocate resources: Requested node configuration is
not available
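
For reference, these are roughly the kind of variants I've been playing with
(illustrative, I don't have the exact command history), and as far as I can
tell they all end in the same error:

# the node has 2 sockets x 20 cores x 2 threads = 80 logical CPUs
srun --reservation=test_ht -p defq -n 80 --threads-per-core=2 sleep 100
srun --reservation=test_ht -p defq -n 80 --hint=multithread sleep 100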

--------------

[root@csk007 ~]# slurmd -C
NodeName=csk007 CPUs=80 Boards=1 SocketsPerBoard=2 CoresPerSocket=20
ThreadsPerCore=2 RealMemory=385630 TmpDisk=217043
UpTime=84-00:36:44
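
The node definition in slurm.conf is currently along these lines (a sketch
based on the slurmd -C output above, with the RealMemory value that scontrol
reports):

# slurm.conf node definition (approximate)
NodeName=csk007 Sockets=2 CoresPerSocket=20 ThreadsPerCore=2 CPUs=80 RealMemory=380000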
[root@csk007 ~]# scontrol show node csk007
NodeName=csk007 Arch=x86_64 CoresPerSocket=20
   CPUAlloc=0 CPUErr=0 CPUTot=80 CPULoad=4.03
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=(null)
   NodeAddr=csk007 NodeHostName=csk007 Version=17.02
   OS=Linux RealMemory=380000 AllocMem=0 FreeMem=338487 Sockets=2 Boards=1
   State=RESERVED ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A
MCS_label=N/A
   Partitions=defq
   BootTime=2018-09-14T12:31:05 SlurmdStartTime=2018-11-29T15:25:03
   CfgTRES=cpu=80,mem=380000M
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s

-----------------------

-- 
Best Regards,
Sidiney