Hi,

Not sure whether I am misunderstanding the scheduling algorithm or whether this is a bug. It seems similar to the one described and fixed here: https://github.com/SchedMD/slurm/commit/6a2c99edbf96e50463cc2f16d8e5eb955c82a8ab#diff-0e12d64dc32e4174fe827d104245d2d690e4c929a6cfd95a2d52f65683a6dca5

Given the node and partition definitions:

    NodeName=cluster01 CPUs=8 Boards=1 SocketsPerBoard=1 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=64283
    NodeName=cluster02 CPUs=8 Boards=1 SocketsPerBoard=1 CoresPerSocket=4 ThreadsPerCore=2 RealMemory=128814
    PartitionName=DEFAULT MaxTime=2-0 MaxNodes=2
    PartitionName=mini Nodes=cluster01,cluster02 State=UP

and the batch script mini.slurm:

    #!/bin/bash
    for i in {1..6}
    do
        srun --exclusive --mem=100 --threads-per-core=1 -N1 -c1 -n1 bash -c 'echo $(hostname) $(date); sleep 10 ; echo done $(hostname) $(date)' &
    done
    wait

When run with:

    time sbatch --wait --partition=mini --ntasks=10 --cpus-per-task=1 --mem=1000 mini.slurm

I get the following output after 2m 18s:

    cluster01 Sat Jan 27 02:05:12 PM CET 2024
    cluster01 Sat Jan 27 02:05:12 PM CET 2024
    cluster01 Sat Jan 27 02:05:13 PM CET 2024
    cluster01 Sat Jan 27 02:05:13 PM CET 2024
    cluster02 Sat Jan 27 02:05:13 PM CET 2024
    done cluster01 Sat Jan 27 02:05:22 PM CET 2024
    done cluster01 Sat Jan 27 02:05:22 PM CET 2024
    done cluster01 Sat Jan 27 02:05:23 PM CET 2024
    done cluster01 Sat Jan 27 02:05:23 PM CET 2024
    done cluster02 Sat Jan 27 02:05:23 PM CET 2024
    srun: Job 650 step creation temporarily disabled, retrying (Requested nodes are busy)
    srun: Step created for StepId=650.6
    cluster01 Sat Jan 27 02:07:14 PM CET 2024
    done cluster01 Sat Jan 27 02:07:24 PM CET 2024

So apparently I am getting four sruns on cluster01 (why not six, when it has 8 CPUs?), but only one is scheduled on cluster02. Then the allocation appears exhausted and the remaining step waits very long (almost 2 minutes?) before it is scheduled.
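In case it helps to reproduce this, the slurmctld step log quoted below should be obtainable with something along these lines (assuming the Steps debug flag is what emits the "STEPS:" messages; adjust the log level to your setup):

    # enable step-level debug messages in slurmctld at runtime
    scontrol setdebugflags +Steps
    scontrol setdebug debug
    # or persistently via DebugFlags=Steps in slurm.conf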
The step log tells me:

    slurmctld: debug: laying out the 1 tasks on 1 hosts cluster01 dist 1
    slurmctld: STEPS: _step_alloc_lps: JobId=650 StepId=4 node 0 (cluster01) gres_cpus_alloc=0 tasks=1 cpus_per_task=1
    slurmctld: STEPS: _pick_step_cores: step JobId=650 StepId=4 requires 1 cores on node 0 with cpus_per_core=1, available cpus from job: 8
    slurmctld: STEPS: _pick_step_core: alloc Node:0 Socket:0 Core:3
    slurmctld: STEPS: step alloc on job node 0 (cluster01) used 4 of 8 CPUs
    slurmctld: STEPS: _slurm_rpc_job_step_create: JobId=650 StepId=4 cluster01 usec=5502
    slurmctld: STEPS: _pick_step_nodes: JobId=650 Currently running steps use 4 of allocated 8 CPUs on node cluster01
    slurmctld: STEPS: _pick_step_nodes: JobId=650 Currently running steps use 1 of allocated 2 CPUs on node cluster02
    slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=0 has nodes cluster01
    slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=1 has nodes cluster02
    slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=2 has nodes cluster01
    slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=3 has nodes cluster01
    slurmctld: STEPS: _mark_busy_nodes: JobId=650 StepId=4 has nodes cluster01
    slurmctld: STEPS: _pick_step_nodes: step pick 1-1 nodes, avail:cluster[01-02] idle: picked:NONE
    slurmctld: STEPS: _pick_step_nodes: step picked 0 of 1 nodes
    slurmctld: STEPS: Picked nodes cluster01 when accumulating from cluster01
    slurmctld: debug: laying out the 1 tasks on 1 hosts cluster01 dist 1
    slurmctld: STEPS: _step_alloc_lps: JobId=650 StepId=5 node 0 (cluster01) gres_cpus_alloc=0 tasks=1 cpus_per_task=1
    slurmctld: STEPS: _pick_step_cores: step JobId=650 StepId=5 requires 1 cores on node 0 with cpus_per_core=1, available cpus from job: 8
    slurmctld: STEPS: unable to pick step cores for job node 0 (cluster01): Requested nodes are busy
    slurmctld: STEPS: Deallocating 100MB of memory on node 0 (cluster01) now used: 400 of 1000
    slurmctld: STEPS: step dealloc on job node 0 (cluster01) used: 4 of 8 CPUs
    slurmctld: STEPS: _slurm_rpc_job_step_create for JobId=650: Requested nodes are busy

So why is it unable to allocate on cluster02? And why does it need to, when it is only using 4 of the 8 CPUs on cluster01 anyway?
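For what it is worth, this is how I would watch the step placement and the per-node CPU allocation of the job while it runs (job id 650 in this example):

    # list the running job steps and the nodes they occupy
    squeue --steps
    # show the job with per-node detail (CPU_IDs, memory per node)
    scontrol -d show job 650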
Now if I change to --ntasks=8, the algorithm does not schedule anything on cluster02 at all:

    cluster01 Sat Jan 27 02:15:30 PM CET 2024
    cluster01 Sat Jan 27 02:15:30 PM CET 2024
    cluster01 Sat Jan 27 02:15:30 PM CET 2024
    cluster01 Sat Jan 27 02:15:30 PM CET 2024
    done cluster01 Sat Jan 27 02:15:40 PM CET 2024
    done cluster01 Sat Jan 27 02:15:40 PM CET 2024
    done cluster01 Sat Jan 27 02:15:40 PM CET 2024
    done cluster01 Sat Jan 27 02:15:40 PM CET 2024
    srun: Job 651 step creation temporarily disabled, retrying (Requested nodes are busy)
    srun: Step created for StepId=651.6
    cluster01 Sat Jan 27 02:17:35 PM CET 2024
    srun: Job 651 step creation temporarily disabled, retrying (Requested nodes are busy)
    srun: Step created for StepId=651.7
    cluster01 Sat Jan 27 02:17:36 PM CET 2024
    done cluster01 Sat Jan 27 02:17:45 PM CET 2024
    done cluster01 Sat Jan 27 02:17:46 PM CET 2024

Thanks for your help. Best
/rike