I have a Slurm setup with 2 hosts providing 6 + 4 = 10 CPUs in total.

I am submitting jobs with sbatch -n <CPU slots> <job script>.
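
For example, my test job is roughly the following (the script name, the sleep duration and the -n values below are just placeholders):

#!/bin/bash
# testjob.sh - minimal test job that simply holds its allocated CPUs for a while
sleep 600

sbatch -n 6 testjob.sh
sbatch -n 4 testjob.sh
sbatch -n 4 testjob.sh    # only 10 CPUs in total, so I expect this one to stay pending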

However, even when the running jobs have exhausted all 10 CPU slots, Slurm still allows subsequent jobs to start running!

The CPU availability is also shown as fully allocated on both hosts, yet no job is left pending.
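
(For reference, this is roughly how I check the allocation; the exact commands and flags are just what I happen to use:)

sinfo -N -l                   # node-oriented view: CPUs and state per host
squeue                        # running/pending jobs: nothing shows up as PD (pending)
scontrol show node host1      # CPUAlloc vs. CPUTot for each host
scontrol show node host2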

What could be the problem?

My slurm.conf looks like this (host names changed to generic ones):

ClusterName=MyCluster
ControlMachine=host1
ControlAddr=<some address>
SlurmUser=slurmsa
#AuthType=auth/munge
StateSaveLocation=/var/spool/slurmd
SlurmdSpoolDir=/var/spool/slurmd
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=3
SlurmctldDebug=6
SlurmdLogFile=/var/log/slurm/slurmd.log
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=host1
#AccountingStoragePass=medslurmpass
#AccountingStoragePass=/var/run/munge/munge.socket.2
AccountingStorageUser=slurmsa
#TaskPlugin=task/cgroup
NodeName=host1 CPUs=6 SocketsPerBoard=3 CoresPerSocket=2 ThreadsPerCore=1 State=UNKNOWN
NodeName=host2 CPUs=4 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=host1,host2 Default=YES MaxTime=INFINITE State=UP
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30

SelectType=select/cons_tres
SelectTypeParameters=CR_CPU
TaskPlugin=task/affinity
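
In case it is relevant, this is roughly how I check what the running controller has actually loaded (the grep is only there to filter the output):

scontrol show config | grep -i select    # SelectType / SelectTypeParameters as loaded
scontrol show partition debug            # partition settings, including OverSubscribe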

Thanks in advance for any help!

Regards,
Bhaskar.