Hi,
I seem to remember that in the past, if a node was configured to be in two partitions, the actual partition of the node was determined by the partition associated with the jobs running on it. Moreover, at any given time when the node was running one or more jobs, the node could actually only be in a single partition.
Was this indeed the case, and is it still the case with Slurm version 23.02.7?
Cheers,
Loris
That certainly isn't the case in our configuration. We have multiple overlapping partitions, and our nodes run a mix of jobs from all of them. The default behavior is that the mix of partitions on a node is governed by the PriorityTier of each partition: jobs from the highest priority tier always go first, but jobs from lower tiers can fill in the gaps on a node.
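A minimal sketch of what such overlapping partitions might look like in slurm.conf (the partition names, node names and limits below are just placeholders, not our actual config):

    # Both partitions contain the same nodes; jobs from the partition with the
    # higher PriorityTier are considered first, lower tiers backfill the gaps.
    PartitionName=high Nodes=node[01-16] PriorityTier=10 MaxTime=1-00:00:00
    PartitionName=low  Nodes=node[01-16] PriorityTier=1  MaxTime=7-00:00:00 Default=YES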
Having multiple partitions but letting only one of them own a node whenever it happens to have a job running isn't a standard option to my knowledge. You can accomplish something similar with MCS, which I know can lock down nodes to specific users and groups. But what you describe sounds more like locking down based on partition rather than on user or group, which I'm not sure how to accomplish in the current version of Slurm.
That doesn't mean it's not possible; I just don't know how, unless it's some obscure option.
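If you do want to experiment with MCS, the relevant slurm.conf settings look roughly like this (untested sketch; the group names are placeholders, and you'd want to read the MCS documentation before enabling it):

    # Label jobs and nodes by the users' Unix group, so a node that is running
    # jobs from one group is not shared with jobs from other groups.
    MCSPlugin=mcs/group
    MCSParameters=enforced,select,privatedata|grp_alpha:grp_beta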
-Paul Edmon-