[slurm-users] Topology configuration questions:

Fulcomer, Samuel samuel_fulcomer at brown.edu
Fri Jan 18 01:12:25 UTC 2019


Yes, well, the trivial cat-skinning method is to use topology.conf to
describe multiple switch topologies, confining each architecture to its own
meta-fabric. We use GPFS as a parallel filesystem, and all nodes are
connected, but topology.conf keeps jobs on uniform-architecture collectives.
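
For example, a topology.conf along these lines (switch and node names
hypothetical) keeps the two architectures apart, since no parent switch
spans both trees:

  # topology.conf -- two independent trees, no common parent switch
  SwitchName=sandy-leaf1   Nodes=sandy[001-032]
  SwitchName=sandy-leaf2   Nodes=sandy[033-064]
  SwitchName=sandy-core    Switches=sandy-leaf[1-2]
  SwitchName=skylake-leaf1 Nodes=skylake[001-032]
  SwitchName=skylake-leaf2 Nodes=skylake[033-064]
  SwitchName=skylake-core  Switches=skylake-leaf[1-2]

Jobs can span nodes within either core switch, but never across the two.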

On Thu, Jan 17, 2019 at 8:05 PM Nicholas McCollum <nmccollum at asc.edu> wrote:

> I recommend putting heterogeneous node types each into their own partition
> to keep jobs from spanning multiple node types. You can also set QoS's for
> different partitions and make a job in that QoS only able to be scheduled
> on a single node. You could also accomplish this with a partition config in
> your slurm.conf... or use the job_submit.lua plugin to capture jobs
> submitted to that partition and set their max nodes to 1.
> There are a lot of easy ways to skin that cat.
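>
> As a rough sketch of those two routes (partition, node, and feature names
> hypothetical), the partition limit is one line in slurm.conf:
>
>   PartitionName=serial Nodes=serial[001-016] MaxNodes=1 State=UP
>
> and a job_submit.lua that caps jobs sent to that partition might look
> something like:
>
>   function slurm_job_submit(job_desc, part_list, submit_uid)
>      -- force single-node jobs in the hypothetical "serial" partition
>      if job_desc.partition == "serial" then
>         job_desc.max_nodes = 1
>      end
>      return slurm.SUCCESS
>   end
>
>   function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
>      return slurm.SUCCESS
>   end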
>
> I personally like using the job_submit plugin that sends all jobs to all
> partitions and having users constrain to specific node types using the
> --constraint=whatever flag.
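>
> If that's the stock all_partitions job_submit plugin, the combination
> looks roughly like this in slurm.conf (CPU counts and feature names
> hypothetical):
>
>   JobSubmitPlugins=all_partitions
>   NodeName=sandy[001-064]   CPUs=16 Feature=sandybridge
>   NodeName=skylake[001-064] CPUs=32 Feature=skylake
>
> and users then pick a node type with, e.g.:
>
>   sbatch --constraint=skylake job.sh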
>
>
> Nicholas McCollum
> Alabama Supercomputer Authority
> ------------------------------
> *From:* "Fulcomer, Samuel" <samuel_fulcomer at brown.edu>
> *Sent:* Thursday, January 17, 2019 5:58 PM
> *To:* Slurm User Community List
> *Subject:* Re: [slurm-users] Topology configuration questions:
>
> We use topology.conf to segregate architectures (Sandy->Skylake), and also
> to isolate individual nodes with 1Gb/s Ethernet rather than IB (older GPU
> nodes with deprecated IB cards). In the latter case, topology.conf had a
> switch entry for each node.
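>
> A sketch of that per-node approach (switch and node names hypothetical):
>
>   # topology.conf -- one leaf "switch" per 1Gb/s node, so no job spans them
>   SwitchName=eth-gpu001 Nodes=gpu001
>   SwitchName=eth-gpu002 Nodes=gpu002
>   SwitchName=eth-gpu003 Nodes=gpu003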
>
> It used to be the case that SLURM was unhappy with nodes defined in
> slurm.conf not appearing in topology.conf. This may have changed....
>
> On Thu, Jan 17, 2019 at 6:37 PM Ryan Novosielski <novosirj at rutgers.edu>
> wrote:
>
>> I don’t actually know the answer to this one, but we have it provisioned
>> to all nodes.
>>
>> Note that if you care about node weights (e.g., NodeName=whatever001
>> Weight=2, etc. in slurm.conf), using the topology function will disable
>> them. In a conversation with SchedMD, I believe I was promised a warning
>> about that in a future release.
>>
>> > On Jan 17, 2019, at 4:52 PM, Prentice Bisbal <pbisbal at pppl.gov> wrote:
>> >
>> > And a follow-up question: Does topology.conf need to be on all the
>> nodes, or just the slurm controller? It's not clear from that web page. I
>> would assume only the controller needs it.
>> >
>> > Prentice
>> >
>> > On 1/17/19 4:49 PM, Prentice Bisbal wrote:
>> >> From https://slurm.schedmd.com/topology.html:
>> >>
>> >>> Note that compute nodes on switches that lack a common parent switch
>> can be used, but no job will span leaf switches without a common parent
>> (unless the TopologyParam=TopoOptional option is used). For example, it is
>> legal to remove the line "SwitchName=s4 Switches=s[0-3]" from the above
>> topology.conf file. In that case, no job will span more than four compute
>> nodes on any single leaf switch. This configuration can be useful if one
>> wants to schedule multiple physical clusters as a single logical cluster
>> under the control of a single slurmctld daemon.
>> >>
>> >> My current environment falls into the category of multiple physical
>> clusters being treated as a single logical cluster under the control of a
>> single slurmctld daemon. At least, that's my goal.
>> >>
>> >> In my environment, I have 2 "clusters" connected by their own separate
>> IB fabrics, and one "cluster" connected with 10 GbE. I have a fourth
>> cluster connected with only 1GbE. For this 4th cluster, we don't want jobs
>> to span nodes, due to the slow performance of 1 GbE. (This cluster is
>> intended for serial and low-core-count parallel jobs.) If I just leave those
>> nodes out of the topology.conf file, will that have the desired effect of
>> not allocating multi-node jobs to those nodes, or will it result in an
>> error of some sort?
>> >>
>> >
>>
>>