[slurm-users] CPUAlloc incorrect value
Stéphane Larose
Stephane.Larose at ibis.ulaval.ca
Tue Feb 20 08:46:55 MST 2018
Hi all,
I have a single server with 64 cores. The first 4 cores are reserved for the Slurm daemons (CPUSpecList=0-3) and for ssh connections, so Slurm has 60 cores to assign to jobs.
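For context, a minimal sketch of the slurm.conf node definition that would match this setup; the Sockets/ThreadsPerCore layout and the select-plugin lines are assumptions on my part, only CPUs=64, CoresPerSocket=8 and CPUSpecList=0-3 come from the figures above:

# slurm.conf (sketch; Sockets/ThreadsPerCore assumed from "64 cores", CoresPerSocket=8)
NodeName=katak CPUs=64 Sockets=8 CoresPerSocket=8 ThreadsPerCore=1 CPUSpecList=0-3
# core specialization is normally paired with core-level consumable resources (assumption):
SelectType=select/cons_res
SelectTypeParameters=CR_Core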
I do not know why, but at some point the CPUAlloc value becomes incorrect. For example, right now there are 68 cores assigned by Slurm:
squeue -o "%.18i %.8j %.2t %C"
JOBID     NAME ST CPUS
30975    map2R  R 4
30976    map2R  R 4
31016 bwa_mem_  R 8
31091  wgstest  R 10
31225  wgstest  R 10
31226  wgstest  R 10
31227  wgstest  R 10
31232 complete  R 1
31265 repeatMM  R 10
31295 LR2016Se  R 1
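Summing the CPUS column above does give 68. A quick way to cross-check against CPUAlloc (a sketch, using standard squeue options and awk):

# sum the CPUs of all running jobs; this should match the node's CPUAlloc
squeue -h -t R -o "%C" | awk '{sum += $1} END {print sum}'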
But Slurm thinks only 57 are allocated:
katak:~ # scontrol show node
NodeName=katak Arch=x86_64 CoresPerSocket=8
   CPUAlloc=57 CPUErr=0 CPUTot=64 CPULoad=62.18
If I start a 10-core task, it starts, and CPUAlloc increases by only 1 (to 58).
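To see exactly which CPU IDs Slurm believes a job holds, and where its bookkeeping diverges, the detailed job and node records can be dumped; a sketch using one of the job IDs above:

# detailed view of a running job, including the CPU_IDs allocated on each node
scontrol -d show job 31091
# full node record (the excerpt above is truncated)
scontrol show node katak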
I think (but am not sure) this wrong behavior starts when users forget to submit their jobs through Slurm, so those jobs run on the 4 cores reserved for the Slurm daemons. This makes Slurm and ssh connections unresponsive until I kill those jobs.
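A quick way to spot processes started outside Slurm on the reserved cores (a sketch; the psr field is only the CPU a process last ran on, so it is indicative rather than exact):

# list processes whose last-run CPU is one of the reserved cores 0-3
ps -eo pid,psr,user,comm --sort=psr | awk 'NR == 1 || $2 <= 3'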
Any help or advice appreciated,
Thank you!
---
Stéphane Larose
IT Analyst
Institut de Biologie Intégrative et des Systèmes (IBIS)
Pavillon Charles-Eugène-Marchand
Université Laval