<div dir="ltr"><div><div><div><div>We have:<br>* A high-priority QOS for short jobs. The QOS is set at submission time by the user or by the Lua submit script.</div><div>* Partitions for jobs of a certain length or with other requirements; sometimes several partitions overlap.<br></div></div></div></div><div>* A script that adjusts priorities according to our policies every 5 minutes.<br></div><div><div><br></div><div>By combining these three methods, we've managed to strike a good balance for our needs.</div><div><br></div></div><div>Best regards,<br></div><div>Jessica Nettelblad, UPPMAX, Sweden<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 22, 2017 at 5:53 PM, Satrajit Ghosh <span dir="ltr"><<a href="mailto:satra@mit.edu" target="_blank">satra@mit.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Slurm has a way of giving larger jobs more priority. Is it possible to do the reverse?</div><div><br></div>I.e., is there a way to configure priority to give smaller jobs (those that use fewer resources) higher priority than bigger ones?<div><br clear="all"><div><div class="m_572041031682416625gmail_signature"><div dir="ltr">cheers,<br><br>satra<br><br></div><div>Resources could be a weighted combination depending on system resources available:</div><div><br></div><div>w1*core + w2*memory + w3*time + w4*gpu<br></div><div><br></div><div>where core, memory, time, and gpu are those requested by the job, and w1-w4 are determined by system resources/group allocations.</div></div></div>
</div></div>
</blockquote></div><br></div>
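The weighted-size formula in the quoted question can be sketched in Python. This is only an illustration, not Slurm code: the job fields, the weights w1-w4, and the `max_prio` offset are all assumptions, and in practice a periodic adjustment script (like the one mentioned above) would push the computed values back with something like `scontrol update JobId=... Priority=...`.

```python
# Hypothetical sketch: rank pending jobs so that smaller requests get a
# higher priority value, using the weighted sum from the message above.
# The job dict fields and the weights w1-w4 are illustrative, not Slurm
# API names; pick weights to reflect your own resource scarcity.

def job_size(job, w1=1.0, w2=0.5, w3=0.1, w4=50.0):
    """Weighted size: w1*cores + w2*memory(GB) + w3*time(min) + w4*gpus."""
    return (w1 * job["cores"]
            + w2 * job["mem_gb"]
            + w3 * job["time_min"]
            + w4 * job["gpus"])

def priorities(jobs, max_prio=100000):
    """Map job id -> priority that decreases as the weighted size grows,
    so the smallest jobs sort first."""
    return {j["id"]: max(0, max_prio - round(job_size(j))) for j in jobs}

jobs = [
    {"id": 1, "cores": 1,   "mem_gb": 4,    "time_min": 30,   "gpus": 0},
    {"id": 2, "cores": 128, "mem_gb": 1024, "time_min": 2880, "gpus": 4},
]
print(priorities(jobs))
```

With these example weights, the one-core half-hour job ends up well above the 128-core multi-day GPU job, which is the inversion the question asks for.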