Hi,
Not sure whether I misunderstand the scheduling algorithm or whether this is a bug. It seems similar to the one described and fixed here: https://github.com/SchedMD/slurm/commit/6a2c99edbf96e50463cc2f16d8e5eb955c82...
Given the node definition:
NodeName=cluster01 CPUs=8 Boards=1 SocketsPerBoard=1 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=64283
NodeName=cluster02 CPUs=8 Boards=1 SocketsPerBoard=1 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=128814
PartitionName=DEFAULT MaxTime=2-0 MaxNodes=2
PartitionName=mini Nodes=cluster01,cluster02 State=UP
and the batch script (mini.slurm):
#!/bin/bash
for i in {1..6}
do
    srun --exclusive --mem=100 --threads-per-core=1 -N1 -c1 -n1 \
        bash -c 'echo $(hostname) $(date); sleep 10 ; echo done $(hostname) $(date)' &
done
wait
When I run it with:
time sbatch --wait --partition=mini --ntasks=10 --cpus-per-task=1 --mem=1000 mini.slurm
I get the following log after 2m 18s:
cluster01 Sat Jan 27 02:05:12 PM CET 2024
cluster01 Sat Jan 27 02:05:12 PM CET 2024
cluster01 Sat Jan 27 02:05:13 PM CET 2024
cluster01 Sat Jan 27 02:05:13 PM CET 2024
cluster02 Sat Jan 27 02:05:13 PM CET 2024
done cluster01 Sat Jan 27 02:05:22 PM CET 2024
done cluster01 Sat Jan 27 02:05:22 PM CET 2024
done cluster01 Sat Jan 27 02:05:23 PM CET 2024
done cluster01 Sat Jan 27 02:05:23 PM CET 2024
done cluster02 Sat Jan 27 02:05:23 PM CET 2024
srun: Job 650 step creation temporarily disabled, retrying (Requested nodes are busy)
srun: Step created for StepId=650.6
cluster01 Sat Jan 27 02:07:14 PM CET 2024
done cluster01 Sat Jan 27 02:07:24 PM CET 2024
So apparently I get 4 sruns on cluster01 (why not 6, when the node has 8 CPUs?), but only 1 is scheduled on cluster02. After that the allocation appears exhausted, and the remaining step waits a surprisingly long time (almost 2 minutes) before it is scheduled.
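If I try to reconstruct the accounting myself, the numbers do line up on cluster01. This is only my rough model, assuming that each --threads-per-core=1 step pins a whole physical core while only 1 CPU is counted as used; I have not verified this against the Slurm source:

```python
# Hypothetical model of the step accounting on cluster01
# (my assumption from the log, not Slurm's actual algorithm).
cores_per_socket = 4                      # CoresPerSocket=4
threads_per_core = 2                      # ThreadsPerCore=2 -> CPUs=8
cpus = cores_per_socket * threads_per_core

# Each 1-CPU step with --threads-per-core=1 seems to reserve a whole
# physical core even though only 1 CPU is counted as used.
running_steps = 4
busy_cores = running_steps                # one full core per step
used_cpus = running_steps * 1             # what the log reports

print(f"cores busy: {busy_cores}/{cores_per_socket}")  # 4/4 -> node "full"
print(f"CPUs counted: {used_cpus}/{cpus}")             # matches "used 4 of 8 CPUs"
```

That would explain why only 4 (not 6) steps fit on cluster01: all 4 cores are busy even though the CPU counter says 4 of 8.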
The step log tells me:
slurmctld: debug: laying out the 1 tasks on 1 hosts cluster01 dist 1
slurmctld: STEPS: _step_alloc_lps: JobId=650 StepId=4 node 0 (cluster01) gres_cpus_alloc=0 tasks=1 cpus_per_task=1
slurmctld: STEPS: _pick_step_cores: step JobId=650 StepId=4 requires 1 cores on node 0 with cpus_per_core=1, available cpus from job: 8
slurmctld: STEPS: _pick_step_core: alloc Node:0 Socket:0 Core:3
slurmctld: STEPS: step alloc on job node 0 (cluster01) used 4 of 8 CPUs
slurmctld: STEPS: _slurm_rpc_job_step_create: JobId=650 StepId=4 cluster01 usec=5502
slurmctld: STEPS: _pick_step_nodes: JobId=650 Currently running steps use 4 of allocated 8 CPUs on node cluster01
slurmctld: STEPS: _pick_step_nodes: JobId=650 Currently running steps use 1 of allocated 2 CPUs on node cluster02
slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=0 has nodes cluster01
slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=1 has nodes cluster02
slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=2 has nodes cluster01
slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=3 has nodes cluster01
slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=4 has nodes cluster01
slurmctld: STEPS: _pick_step_nodes: step pick 1-1 nodes, avail:cluster[01-02] idle: picked:NONE
slurmctld: STEPS: _pick_step_nodes: step picked 0 of 1 nodes
slurmctld: STEPS: Picked nodes cluster01 when accumulating from cluster01
slurmctld: debug: laying out the 1 tasks on 1 hosts cluster01 dist 1
slurmctld: STEPS: _step_alloc_lps: JobId=650 StepId=5 node 0 (cluster01) gres_cpus_alloc=0 tasks=1 cpus_per_task=1
slurmctld: STEPS: _pick_step_cores: step JobId=650 StepId=5 requires 1 cores on node 0 with cpus_per_core=1, available cpus from job: 8
slurmctld: STEPS: unable to pick step cores for job node 0 (cluster01): Requested nodes are busy
slurmctld: STEPS: Deallocating 100MB of memory on node 0 (cluster01) now used: 400 of 1000
slurmctld: STEPS: step dealloc on job node 0 (cluster01) used: 4 of 8 CPUs
slurmctld: STEPS: _slurm_rpc_job_step_create for JobId=650: Requested nodes are busy
So why is it unable to allocate the step on cluster02? And why does it need to, when it is only using 4 of 8 CPUs on cluster01 anyway?
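The only explanation I can come up with for cluster02 (again just my reading of the "1 of allocated 2 CPUs" line, not anything I verified): the job allocation itself holds only 2 CPUs, i.e. one physical core, on that node, so a single --threads-per-core=1 step already exhausts it:

```python
# Hypothetical: job 650 was allocated 8 CPUs on cluster01 but only
# 2 CPUs on cluster02 (per the _pick_step_nodes log lines above).
job_alloc_cpus = {"cluster01": 8, "cluster02": 2}
threads_per_core = 2   # ThreadsPerCore=2 on both nodes

for node, alloc_cpus in job_alloc_cpus.items():
    # If each --threads-per-core=1 step occupies a full core, the number
    # of concurrent steps is allocated cores, not allocated CPUs.
    max_steps = alloc_cpus // threads_per_core
    print(f"{node}: {alloc_cpus} CPUs allocated -> room for {max_steps} step(s)")
```

That would give 4 + 1 = 5 concurrent steps, which is exactly what I observe before the sixth srun starts retrying.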
Now if I change to --ntasks=8, the scheduler doesn't place anything on cluster02 at all:
cluster01 Sat Jan 27 02:15:30 PM CET 2024
cluster01 Sat Jan 27 02:15:30 PM CET 2024
cluster01 Sat Jan 27 02:15:30 PM CET 2024
cluster01 Sat Jan 27 02:15:30 PM CET 2024
done cluster01 Sat Jan 27 02:15:40 PM CET 2024
done cluster01 Sat Jan 27 02:15:40 PM CET 2024
done cluster01 Sat Jan 27 02:15:40 PM CET 2024
done cluster01 Sat Jan 27 02:15:40 PM CET 2024
srun: Job 651 step creation temporarily disabled, retrying (Requested nodes are busy)
srun: Step created for StepId=651.6
cluster01 Sat Jan 27 02:17:35 PM CET 2024
srun: Job 651 step creation temporarily disabled, retrying (Requested nodes are busy)
srun: Step created for StepId=651.7
cluster01 Sat Jan 27 02:17:36 PM CET 2024
done cluster01 Sat Jan 27 02:17:45 PM CET 2024
done cluster01 Sat Jan 27 02:17:46 PM CET 2024
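My suspicion for the --ntasks=8 case (purely a guess; I have not checked how select/cons_tres actually packs this): 8 tasks x 1 CPU fits entirely on cluster01, so cluster02 never becomes part of the job allocation and no step can ever run there:

```python
# Hypothetical block-style packing of the job allocation for --ntasks=8
# (my assumption of the default distribution, not Slurm's actual code).
ntasks, cpus_per_task = 8, 1
node_cpus = {"cluster01": 8, "cluster02": 8}   # CPUs=8 on both nodes

needed = ntasks * cpus_per_task
alloc = {}
for node, avail in node_cpus.items():          # fill the first node first
    take = min(needed, avail)
    if take > 0:
        alloc[node] = take
        needed -= take

print(alloc)   # {'cluster01': 8} -> cluster02 gets no allocation at all
```

If that is what happens, all 6 steps are forced onto cluster01's 4 cores, which would match the log above.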
Thanks for your help. Best /rike