<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
span.EmailStyle19
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style>
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal">We have overlapping partitions for GPU work and some kinds of non-GPU work (both large-memory and regular-memory jobs).<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">For 28-core nodes with 2 GPUs, we have:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">PartitionName=gpu MaxCPUsPerNode=16 … Nodes=gpunode[001-004]<o:p></o:p></p>
<p class="MsoNormal">PartitionName=any-interactive MaxCPUsPerNode=12 … Nodes=node[001-040],gpunode[001-004]<o:p></o:p></p>
<p class="MsoNormal">PartitionName=bigmem MaxCPUsPerNode=12 … Nodes=gpunode[001-003]<o:p></o:p></p>
<p class="MsoNormal">PartitionName=hugemem MaxCPUsPerNode=12 … Nodes=gpunode004<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Worst case, non-GPU jobs could reserve up to 24 of the 28 cores on a GPU node, but only for a limited time (our any-interactive partition has a 2-hour time limit). In practice, this has let us use a lot of otherwise idle CPU capacity on the
GPU nodes for short test runs.<o:p></o:p></p>
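<p class="MsoNormal">A quick sketch of the worst-case arithmetic above (the partition names and limits are taken from our config; the per-node totals are just an illustration, not Slurm's actual accounting):<o:p></o:p></p>

```python
# Worst-case non-GPU core reservation on a 28-core, 2-GPU node
# (e.g. gpunode001, which is in both any-interactive and bigmem).
NODE_CORES = 28

# MaxCPUsPerNode for each partition that can place non-GPU jobs
# on this GPU node.
non_gpu_limits = {"any-interactive": 12, "bigmem": 12}

worst_case_non_gpu = sum(non_gpu_limits.values())
cores_left_for_gpu_partition = NODE_CORES - worst_case_non_gpu

# 24 of 28 cores can be tied up by non-GPU work, leaving 4 cores
# guaranteed for the gpu partition (which itself is capped at 16).
print(worst_case_non_gpu, cores_left_for_gpu_partition)  # 24 4
```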
<p class="MsoNormal"><o:p> </o:p></p>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span style="font-size:12.0pt;color:black">From:
</span></b><span style="font-size:12.0pt;color:black">slurm-users &lt;slurm-users-bounces@lists.schedmd.com&gt;<br>
<b>Date: </b>Wednesday, December 16, 2020 at 1:04 PM<br>
<b>To: </b>Slurm User Community List &lt;slurm-users@lists.schedmd.com&gt;<br>
<b>Subject: </b>[slurm-users] using resources effectively?<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">
Hi,<br>
<br>
Say if I have a Slurm node with 1 x GPU and 112 x CPU cores, and:<br>
<br>
1) there is a job running on the node using the GPU and 20 x CPU cores<br>
<br>
2) there is a job waiting in the queue asking for 1 x GPU and 20 x<br>
CPU cores<br>
<br>
Is it possible to a) let a new job asking for 0 x GPU and 20 x CPU cores<br>
(safe for the queued GPU job) start immediately; and b) let a new job<br>
asking for 0 x GPU and 100 x CPU cores (not safe for the queued GPU job)<br>
wait in the queue? Or c) is it doable to put the node into two Slurm<br>
partitions, 56 CPU cores to a "cpu" partition, and 56 CPU cores to a<br>
"gpu" partition, for example?<br>
<br>
Thank you in advance for any suggestions / tips.<br>
<br>
Best,<br>
<br>
Weijun<br>
<br>
===========<br>
Weijun Gao<br>
Computational Research Support Specialist<br>
Department of Psychology, University of Toronto Scarborough<br>
1265 Military Trail, Room SW416<br>
Toronto, ON M1C 1M2<br>
E-mail: weijun.gao@utoronto.ca<br>
<br>
<o:p></o:p></p>
</div>
</div>
</body>
</html>