<div dir="ltr">Thanks for the replies.<div><br>I didn't specify earlier, but we're using Intel MPI, and setting the following environment variable, I_MPI_JOB_RESPECT_PROCESS_PLACEMENT, fixed my issue.<br><br>#SBATCH --ntasks=980<br>#SBATCH --ntasks-per-node=16<br>#SBATCH --exclusive<br><br>export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=off<br>mpirun -np $SLURM_NTASKS -perhost $SLURM_NTASKS_PER_NODE /path/to/MPI/app<br><br>Thanks,<br><br>- Chansup<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jul 31, 2019 at 2:01 AM Daniel Letai <<a href="mailto:dani@letai.org.il">dani@letai.org.il</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div style="direction:ltr" bgcolor="#FFFFFF">
<br>
<div class="gmail-m_3759498918133458892moz-cite-prefix">On 7/30/19 6:03 PM, Brian Andrus wrote:<br>
</div>
<blockquote type="cite">
<p>I think this is more about how you are calling mpirun and how the
processes are mapped.</p>
<p>With the "--exclusive" option, the processes are given access
to all the cores on each box, so mpirun has a choice. IIRC, the
default is to pack them by slot, so it fills one node, then moves to
the next, whereas you want to map by node (one process per node,
cycling by node).</p>
<p>From the man for mpirun (openmpi):<br>
</p>
<dl style="color:rgb(0,0,0);font-family:verdana,arial,helvetica;font-size:12px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration-style:initial;text-decoration-color:initial">
<dt><b>--map-by <foo></b></dt>
<dd>Map to the specified object, defaults to<span> </span><i>socket</i>.
Supported options include slot, hwthread, core, L1cache,
L2cache, L3cache, socket, numa, board, node, sequential,
distance, and ppr. Any object can include modifiers by adding
a : and any combination of PE=n (bind n processing elements to
each proc), SPAN (load balance the processes across the
allocation), OVERSUBSCRIBE (allow more processes on a node
than processing elements), and NOOVERSUBSCRIBE. This includes
PPR, where the pattern would be terminated by another colon to
separate it from the modifiers.</dd>
</dl>
<div class="gmail-m_3759498918133458892moz-cite-prefix"><br>
</div>
<div class="gmail-m_3759498918133458892moz-cite-prefix">so adding "--map-by node" would give
you what you are looking for.</div>
<div class="gmail-m_3759498918133458892moz-cite-prefix">Of course, this syntax is for
Open MPI's mpirun command, so YMMV.<br>
</div>
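<p>As a concrete sketch of the suggestion above (the application path is a placeholder, and the task counts are taken from the job described later in this thread):</p>

```shell
#!/bin/bash
#SBATCH --ntasks=980
#SBATCH --ntasks-per-node=16
#SBATCH --exclusive

# "--map-by node" asks Open MPI's mpirun to place ranks round-robin
# across the allocated nodes instead of packing each node full of
# slots before moving on to the next one.
mpirun -np $SLURM_NTASKS --map-by node /path/to/MPI/app
```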
</blockquote>
<p>If using srun (as recommended) instead of invoking mpirun
directly, you can still achieve the same functionality using
exported environment variables as per the mpirun man page, like
this:</p>
<p>OMPI_MCA_rmaps_base_mapping_policy=node srun --export
OMPI_MCA_rmaps_base_mapping_policy ...</p>
<p>in your sbatch script.<br>
</p>
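<p>A minimal sbatch sketch of the above (the application path is a placeholder; whether the MCA parameter takes effect under a direct srun launch depends on how Open MPI was built against Slurm):</p>

```shell
#!/bin/bash
#SBATCH --ntasks=980
#SBATCH --ntasks-per-node=16
#SBATCH --exclusive

# Environment-variable equivalent of "mpirun --map-by node"; srun
# exports the job environment to the tasks by default.
export OMPI_MCA_rmaps_base_mapping_policy=node
srun /path/to/MPI/app
```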
<blockquote type="cite">
<div class="gmail-m_3759498918133458892moz-cite-prefix"> </div>
<div class="gmail-m_3759498918133458892moz-cite-prefix">Brian Andrus<br>
</div>
<div class="gmail-m_3759498918133458892moz-cite-prefix"><br>
</div>
<div class="gmail-m_3759498918133458892moz-cite-prefix"><br>
</div>
<div class="gmail-m_3759498918133458892moz-cite-prefix">On 7/30/2019 5:14 AM, CB wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Everyone,
<div><br>
</div>
<div>I've recently discovered that when an MPI job is
submitted with the --exclusive flag, Slurm fills up each
node even if the --ntasks-per-node flag is used to set how
many MPI processes are scheduled on each node. Without the
--exclusive flag, Slurm works as expected.</div>
<div><br>
</div>
<div>Our system is running with Slurm 17.11.7.</div>
<div><br>
</div>
<div>The following options work as expected: each node gets 16 MPI
processes until all 980 MPI processes are scheduled, using a
total of 62 compute nodes. Each of the first 61 nodes runs 16 MPI
processes and the last one runs 4, which is 980
MPI processes in total.</div>
<div>#SBATCH -n 980 <br>
#SBATCH --ntasks-per-node=16<br>
</div>
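<p>The expected distribution follows from simple arithmetic (a quick shell check, not part of the original message):</p>

```shell
ntasks=980
per_node=16
# ceiling division: nodes needed at 16 tasks per node
nodes=$(( (ntasks + per_node - 1) / per_node ))
# tasks left for the last node after filling the first (nodes - 1) nodes
last=$(( ntasks - (nodes - 1) * per_node ))
echo "$nodes nodes, last node gets $last tasks"   # 62 nodes, last node gets 4 tasks
```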
<div><br>
</div>
<div>However, if the --exclusive option is added, Slurm fills
up each node with 28 MPI processes (each compute node has 28
cores). Interestingly, Slurm still allocates 62 compute
nodes although only 35 of them are actually used to
distribute the 980 MPI processes.</div>
<div><br>
</div>
<div>
<div>#SBATCH -n 980 <br>
#SBATCH --ntasks-per-node=16<br>
</div>
<div>#SBATCH --exclusive<br>
</div>
</div>
<div><br>
</div>
<div>Has anyone seen this behavior?</div>
<div><br>
</div>
<div>Thanks,</div>
<div>- Chansup</div>
</div>
</blockquote>
</blockquote>
</div>
</blockquote></div>