> On Oct 20, 2018, at 3:06 AM, Chris Samuel <chris@csamuel.org> wrote:
>
> On Saturday, 20 October 2018 9:57:16 AM AEDT Noam Bernstein wrote:
>
>> If not, is there another way to do this?
>
> You can use --exclusive for jobs that want whole nodes.
>
> You will likely also want to use:
>
>     SelectTypeParameters=CR_Core_Memory,CR_ONE_TASK_PER_CORE
>
> to ensure jobs are given one core (with all its associated threads) per task.
>
> Also set DefMemPerCPU so that jobs get allocated a default amount of RAM per
> core if they forget to ask for it.

Thanks for the suggestions. I've tried this now, and even when I don't set
--exclusive, jobs still refuse to run on nodes that already have jobs on them.
I'm using

    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory

The partitions don't have OverSubscribe set explicitly, but my understanding is
that with CR_Core_Memory nodes should still be shared, just not cores or
memory. Nevertheless, when I submit N+1 jobs (I have N nodes), each requesting
half as many tasks as a node has cores, the (N+1)st job remains pending with
reason Resources. Turning on OverSubscribe and adding --oversubscribe to the
sbatch options doesn't change anything either.

Is there any way to explicitly find out which resources are the limiting ones?

>> And however we achieve this, how does Slurm decide what order to assign
>> nodes to jobs in the presence of jobs that don't take entire nodes? If we
>> have two 16-core nodes and two 8-task jobs, are they going to be packed
>> into a single node, or each put on its own node (leaving no free node for
>> another 16-task job that requires an entire node)?
>
> As long as you don't use CR_LLN (least loaded node) as your select parameter
> and you don't use pack_serial_at_end in SchedulerParameters then Slurm (I
> believe) is meant to use a best-fit algorithm.

Hmm - that's not consistent with what I'm seeing either. Each of the jobs I
described above, which asks for half as many tasks as there are cores, ends up
on a separate node. I'm not using CR_LLN or pack_serial_at_end. I guess that's
obvious, given that my jobs refuse to share nodes.

Any ideas as to what might be happening?

	thanks,
	Noam
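
P.S. In case more context helps, here's a sketch of the setup I'm testing
with. The node names, core counts, and memory figures below are placeholders
rather than my exact values; the Select* lines are the ones quoted above.

    # slurm.conf (relevant lines only; node/partition details are illustrative)
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory
    DefMemPerCPU=4000                     # default MB per allocated core
    NodeName=node[01-04] CPUs=16 RealMemory=64000 State=UNKNOWN
    PartitionName=main Nodes=node[01-04] OverSubscribe=YES Default=YES State=UP

The test jobs are submitted with something like

    # 8 tasks = half the cores of a 16-core node
    sbatch --ntasks=8 my_test_job.sh

and the "Resources" reason I mentioned is what shows up in squeue's REASON
column and in scontrol:

    squeue -j <jobid> -o "%i %T %R"
    scontrol show job <jobid>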