[slurm-users] Configuring slurm.conf and using subpartitions

Rémi Palancher remi at rackslab.io
Wed Oct 4 08:44:56 UTC 2023


On Wednesday, October 4, 2023 at 06:03, Kratz, Zach <ZKratz at clarku.edu> wrote:

> We use an interactive node that will randomly select from our list of computing nodes to complete the job. We would like to find a way to select from our list of old nodes first, before using the newer ones. We tried using weight and assigned each of the old nodes a lower weight than the new nodes, but in testing the new nodes were still assigned, even if the old nodes were available.

Unless it is confidential, can you share the Node and Partition configuration lines you have tested unsuccessfully?

> Is there any way to configure this in the line that configures the interactive node in slurm.conf, for example: 
> 
> PartitionName=interactive-cpu   Nodes=node[1-17] weight =10 node[18-24] weight=50

Mind that Weight is a *Node* parameter, to be defined on Node configuration lines[1], not on the Partition line.
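
A minimal sketch of what that could look like in slurm.conf (the CPU count and node state are placeholders to adjust for your hardware):

  NodeName=node[1-17]  CPUs=1 State=UNKNOWN Weight=10
  NodeName=node[18-24] CPUs=1 State=UNKNOWN Weight=50
  PartitionName=interactive-cpu Nodes=node[1-24] Default=YES State=UP

All else being equal, Slurm preferentially allocates the nodes with the lowest Weight, so the old nodes node[1-17] would be selected before the new nodes node[18-24].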

A less optimal alternative is to define a default partition containing only the old nodes, plus another overlapping partition that also includes the new nodes, which users would have to specify explicitly at job submission to access the new nodes. See the sketch below.
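
Roughly, with a hypothetical name interactive-cpu-all for the overlapping partition:

  PartitionName=interactive-cpu     Nodes=node[1-17] Default=YES State=UP
  PartitionName=interactive-cpu-all Nodes=node[1-24] Default=NO  State=UP

Jobs submitted without a partition would then land on the old nodes only, and users could reach the new nodes with e.g. srun --partition=interactive-cpu-all.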

[1] https://slurm.schedmd.com/slurm.conf.html#SECTION_NODE-CONFIGURATION
--
Rémi Palancher
Rackslab: Open Source Solutions for HPC Operations
https://rackslab.io
