<div dir="ltr"><div>Running Slurm 20.02 on CentOS 7.7 on Bright Cluster 8.2. slurm.conf is on the head node. I don't see these errors on the other two nodes. After restarting slurmd on node003, I see this:</div><div><br></div><div>slurmd[400766]: error: Node configuration differs from hardware: CPUs=24:48(hw) Boards=1:1(hw) SocketsPerBoard=2:2(hw) CoresPerSocket=12:12(hw) ThreadsPerCore=1:2(hw)<br>Apr 23 10:05:49 node003 slurmd[400766]: Message aggregation disabled<br>Apr 23 10:05:49 node003 slurmd[400766]: CPU frequency setting not configured for this node<br>Apr 23 10:05:49 node003 slurmd[400770]: CPUs=24 Boards=1 Sockets=2 Cores=12 Threads=1 Memory=191880 TmpDisk=2038 Uptime=2488268 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)<br><br>From slurm.conf:<br># Nodes<br>NodeName=node[001-003]  CoresPerSocket=12 RealMemory=191800 Sockets=2 Gres=gpu:v100:1<br># Partitions<br>$O Hidden=NO OverSubscribe=FORCE:12 GraceTime=0 PreemptMode=OFF ReqResv=NO AllowAccounts=ALL AllowQos=ALL LLN=NO ExclusiveUser=N$<br>PartitionName=gpuq Default=NO MinNodes=1 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 DisableRootJobs=NO RootOnly=NO Hidde$<br># Generic resources types<br>GresTypes=gpu,mic<br>SelectType=select/cons_tres<br>SelectTypeParameters=CR_CPU<br>SchedulerTimeSlice=60<br>EnforcePartLimits=YES<br><br>lscpu<br>Architecture:          x86_64<br>CPU op-mode(s):        32-bit, 64-bit<br>Byte Order:            Little Endian<br>CPU(s):                48<br>On-line CPU(s) list:   0-47<br>Thread(s) per core:    2<br>Core(s) per socket:    12<br>Socket(s):             2<br>NUMA node(s):          2<br>Vendor ID:             GenuineIntel<br>CPU family:            6<br>Model:                 85<br>Model name:            Intel(R) Xeon(R) Gold 6126 CPU @ 2.60GHz<br>Stepping:              4<br>CPU MHz:               2600.000<br>BogoMIPS:              5200.00<br>Virtualization:        VT-x<br>L1d cache:             32K<br>L1i cache:             32K<br>L2 cache:              
1024K<br>L3 cache:              19712K<br>NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46<br>NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47<br><br>cat /etc/slurm/cgroup.conf| grep -v '#'<br>CgroupMountpoint="/sys/fs/cgroup"<br>CgroupAutomount=no<br>AllowedDevicesFile="/etc/slurm/cgroup_allowed_devices_file.conf"<br>TaskAffinity=no<br>ConstrainCores=no<br>ConstrainRAMSpace=no<br>ConstrainSwapSpace=no<br>ConstrainDevices=no<br>ConstrainKmemSpace=yes<br>AllowedRamSpace=100<br>AllowedSwapSpace=0<br>MinKmemSpace=30<br>MaxKmemPercent=100<br>MaxRAMPercent=100<br>MaxSwapPercent=100<br>MinRAMSpace=30<br><br></div><div>What else can I check?</div></div>
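<div dir="ltr"><div>For what it's worth, the mismatch in the error looks like ThreadsPerCore: with no ThreadsPerCore in the node line, the config implies 1 thread per core (hence CPUs=24), while the hardware reports 2 (hence CPUs=48). If I understand the docs, running slurmd -C on the node prints a NodeName line describing the hardware it detects. A node definition matching what slurmd reports above would presumably look like this (just a sketch based on the log output, not a tested config; keeping ThreadsPerCore=1 may also be intentional if the goal is to schedule only physical cores):</div><div><br></div><div>NodeName=node[001-003] Sockets=2 CoresPerSocket=12 ThreadsPerCore=2 RealMemory=191800 Gres=gpu:v100:1</div></div>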