<div class="moz-cite-prefix">On 01/17/2019 07:55 PM, Fulcomer,
Samuel wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAOORAuGGL8uqomfbH7cCevi+oF1LYZ+=LZwr7eSkhr8vdaiNrg@mail.gmail.com">
<div dir="ltr">We use topology.conf to segregate architectures
(Sandy->Skylake), and also to isolate individual nodes with
1Gb/s Ethernet rather than IB (older GPU nodes with deprecated
IB cards). In the latter case, topology.conf had a switch entry
for each node. <br>
</div>
</blockquote>
So Slurm thinks each node has its own switch that is not shared with
any other node?
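
If I'm reading that right, the topology.conf entries would look
something like this (switch and node names here are made up purely for
illustration):

    # Real leaf switch shared by the IB-connected nodes
    SwitchName=ibleaf1 Nodes=node[001-016]
    # One dummy "switch" per 1Gb/s Ethernet node, so that no two of
    # these nodes ever appear under a common leaf switch
    SwitchName=eth01 Nodes=oldgpu01
    SwitchName=eth02 Nodes=oldgpu02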
<blockquote type="cite"
cite="mid:CAOORAuGGL8uqomfbH7cCevi+oF1LYZ+=LZwr7eSkhr8vdaiNrg@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>It used to be the case that SLURM was unhappy with nodes
defined in slurm.conf not appearing in topology.conf. This may
have changed....</div>
</div>
> On Thu, Jan 17, 2019 at 6:37 PM Ryan Novosielski
> <novosirj@rutgers.edu> wrote:
>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I
don’t actually know the answer to this one, but we have it
provisioned to all nodes.<br>
<br>
Note that if you care about node weights (eg.
NodeName=whatever001 Weight=2, etc. in slurm.conf), using the
topology function will disable it. I believe I was promised a
warning about that in the future in a conversation with
SchedMD.<br>
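
For reference, a weight is just a per-node attribute in slurm.conf; a
minimal sketch (node names and CPU counts are hypothetical) of the kind
of setting that would be ignored once topology scheduling is active:

    # Lower Weight = preferred for allocation when otherwise equivalent
    NodeName=whatever[001-010] CPUs=32 Weight=1
    NodeName=whatever[011-020] CPUs=32 Weight=2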

>>> On Jan 17, 2019, at 4:52 PM, Prentice Bisbal <pbisbal@pppl.gov> wrote:
>>>
>>> And a follow-up question: Does topology.conf need to be on all the
>>> nodes, or just the slurm controller? It's not clear from that web
>>> page. I would assume only the controller needs it.
>>>
>>> Prentice
>>>
>>> On 1/17/19 4:49 PM, Prentice Bisbal wrote:
>>>> From https://slurm.schedmd.com/topology.html:
>>>>
>>>>> Note that compute nodes on switches that lack a common parent
>>>>> switch can be used, but no job will span leaf switches without a
>>>>> common parent (unless the TopologyParam=TopoOptional option is
>>>>> used). For example, it is legal to remove the line "SwitchName=s4
>>>>> Switches=s[0-3]" from the above topology.conf file. In that case,
>>>>> no job will span more than four compute nodes on any single leaf
>>>>> switch. This configuration can be useful if one wants to schedule
>>>>> multiple physical clusters as a single logical cluster under the
>>>>> control of a single slurmctld daemon.
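
(The "above topology.conf file" the docs refer to is, roughly, four
4-node leaf switches joined by one top-level switch; reconstructed here
from the documentation's example, so treat the names as illustrative:

    SwitchName=s0 Nodes=tux[0-3]
    SwitchName=s1 Nodes=tux[4-7]
    SwitchName=s2 Nodes=tux[8-11]
    SwitchName=s3 Nodes=tux[12-15]
    SwitchName=s4 Switches=s[0-3]

Dropping the s4 line leaves four disjoint leaf switches, so no job can
span more than one of them.)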
>>>>
>>>> My current environment falls into the category of multiple
>>>> physical clusters being treated as a single logical cluster under
>>>> the control of a single slurmctld daemon. At least, that's my goal.
>>>>
>>>> In my environment, I have 2 "clusters" connected by their own
>>>> separate IB fabrics, and one "cluster" connected with 10 GbE. I
>>>> have a fourth cluster connected with only 1 GbE. For this 4th
>>>> cluster, we don't want jobs to span nodes, due to the slow
>>>> performance of 1 GbE. (This cluster is intended for serial and
>>>> low-core-count parallel jobs.) If I just leave those nodes out of
>>>> the topology.conf file, will that have the desired effect of not
>>>> allocating multi-node jobs to those nodes, or will it result in an
>>>> error of some sort?
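
Two ways I can see to force single-node jobs on that 1 GbE cluster, if
leaving the nodes out of topology.conf turns out not to work (names
below are hypothetical):

    # topology.conf: per Samuel's approach, one dummy switch per node,
    # so no two of these nodes ever share a leaf switch
    SwitchName=serial01 Nodes=serial001
    SwitchName=serial02 Nodes=serial002

    # or slurm.conf: cap jobs on that cluster's partition at one node
    PartitionName=serial Nodes=serial[001-032] MaxNodes=1 State=UP

The MaxNodes route needs no topology entries at all, but it applies per
partition rather than per node.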