Following up with a bit more specific color on what I'm seeing, as well as a solution that I'm ashamed I didn't come back to sooner.

If there is exclusively tier3 work queued up, gang scheduling never comes into play.
If there is tier3+tier1 work queued up, tier1 gets requeued and tier3 preempts as expected.
If enough work is queued in tier3 that it then triggers a suspend preemption in tier2, that's when things fall over and gang scheduling starts happening inside the tier3 queue.

So the issue seems to have stemmed from my use of OverSubscribe=FORCE:1 in my tier3 partition (separate from the tier1/2 partition).
This was set in anticipation of increasing the forced oversubscription limit in the future, while keeping oversubscription effectively "off" for now.
However, setting OverSubscribe=NO on the tier3 partition, and leaving OverSubscribe=FORCE:1 on the tier1/2 partition, makes the gang scheduling inside tier3 go away. The relevant partition lines are sketched below.
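For reference, this is roughly what the preemption and partition pieces of my slurm.conf look like now. The node and partition names here are placeholders rather than the real config, and everything unrelated is trimmed:

    PreemptType=preempt/qos
    PreemptMode=SUSPEND,GANG

    # tier1/tier2 partition: FORCE:1 so tier3 jobs can still requeue/suspend work here
    PartitionName=tier12 Nodes=node[01-10] OverSubscribe=FORCE:1
    # tier3 partition on the same nodes: OverSubscribe=NO keeps tier3 from gang scheduling against itself
    PartitionName=tier3  Nodes=node[01-10] OverSubscribe=NO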
So, this gets me to where I wanted to be in the first place, which is tier3 not gang scheduling, while still allowing tier1/tier2 to be requeued/suspended.
So I answered my own question, and hopefully someone else will benefit from this.

Reed

On Aug 8, 2022, at 11:27 AM, Reed Dier <reed.dier@focusvq.com> wrote:

> I've got essentially 3 "tiers" of jobs.
>
> tier1 are stateless and can be requeued
> tier2 are stateful and can be suspended
> tier3 are "high priority" and can preempt tier1 and tier2 with the requisite preemption modes.
>
>     $ sacctmgr show qos format=name%10,priority%10,preempt%12,preemptmode%10
>           Name   Priority      Preempt PreemptMod
>     ---------- ---------- ------------ ----------
>         normal          0                 cluster
>          tier1         10                 requeue
>          tier2         10                 suspend
>          tier3        100  tier1,tier2    cluster
>
> I also have a separate partition for the same hardware nodes to allow tier3 to cross partitions to suspend tier2 (if it's possible to have this all work in a single partition, please let me know).
>
> tier1 and tier2 get preempted by tier3 perfectly, but the problem is that tier3 gets gang scheduled when there are big queues in tier3, and I never want gang scheduling anywhere, but especially not in tier3.
>
>     PreemptType=preempt/qos
>     PreemptMode=SUSPEND,GANG
>
> This is what is in my slurm.conf, because if I try to set PreemptMode=SUSPEND alone, the ctld won't start due to:
>
>     slurmctld: error: PreemptMode=SUSPEND requires GANG too
>
> I have also tried setting PreemptMode=OFF on the (tier3) partition, but this has had no effect on gang scheduling that I can see.
>
> Right now, my hit-it-with-a-hammer solution is increasing SchedulerTimeSlice to 65535, which should effectively prevent jobs from gang scheduling.
> While this effectively gets me to the goal I'm looking for, it's inelegant, and if I end up with jobs that run past ~18 hours, it is not going to work the way I want/hope/expect.
>
> So I'm hoping there is a better solution that addresses the root issue and keeps the tier3 qos/partition from preempting itself.
>
> Hopefully I've described this well enough that someone can offer some pointers on how to have suspendable jobs in tier2 without incidental gang-suspension in tier3.
>
> This is 21.08.8-2 in the production cluster, and I'm testing 22.05.2 in my testing cluster, which behaves the same way.
>
> Reed
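For anyone wanting to recreate the QoS layout from the quoted table from scratch, sacctmgr commands along these lines should produce it. This is a rough sketch only, and the exact option spelling may differ slightly between Slurm versions:

    sacctmgr add qos tier1
    sacctmgr add qos tier2
    sacctmgr add qos tier3
    sacctmgr modify qos tier1 set Priority=10 PreemptMode=requeue
    sacctmgr modify qos tier2 set Priority=10 PreemptMode=suspend
    sacctmgr modify qos tier3 set Priority=100 Preempt=tier1,tier2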