You could add features (constraints) to project_A_* nodes in slurm.conf:


NodeName=DEFAULT ...

NodeName=project_A_[01-80] Features=project_A


and then, in your batch script template for project_A, add something like:


#SBATCH --prefer=project_A


Of course, this requires the project_A accounts to actually use your template, since Slurm doesn't have a built-in mechanism to force accounts to use constraints.
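
If you want to enforce it rather than rely on the template, a
job_submit.lua hook could inject the preference for those accounts.
This is only an untested sketch: the "project_a" account prefix is a
placeholder for however your project_A accounts are actually named,
and it assumes your Slurm release exposes the prefer field to the lua
plugin (it arrived together with --prefer; check the job_submit.lua
field list for your version):


-- Untested sketch: automatically add a soft preference for the
-- project_A nodes when a project_A account submits a job.
-- Assumes project_A account names start with "project_a" (placeholder)
-- and that job_desc.prefer is settable in this Slurm release.
function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.account ~= nil and
       string.find(string.lower(job_desc.account), "^project_a") then
        -- Don't override a preference the user set themselves.
        if job_desc.prefer == nil or job_desc.prefer == "" then
            job_desc.prefer = "project_A"
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end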


References:

https://slurm.schedmd.com/slurm.conf.html#OPT_Features

https://slurm.schedmd.com/sbatch.html#OPT_prefer


I haven't tested this solution; I'm just basing it on the documentation.


This should overcome the issue with listing multiple partitions in sbatch -p, namely:

If a project_A job requires 2 nodes but only 1 node is currently free in the project_A_part partition, the job will be allocated both nodes from general_part, and there is no guarantee the free project_A_* node will be used.
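
For comparison, using the partition names from this thread (the -N 2
and job.sh bits are just illustrative), the two submission styles would
look roughly like:


# Multi-partition submission: the whole allocation comes from one of
# the listed partitions, so a 2-node job can fall back entirely to
# general_part even if one project_A node is free:
sbatch -N 2 -p project_A_part,general_part job.sh

# Feature approach: a single partition; the scheduler first tries to
# satisfy the preference and only then falls back to the other nodes:
sbatch -N 2 -p general_part --prefer=project_A job.sh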


HTH,

--Dani_L.



On 22/05/2025 9:14, Bjørn-Helge Mevik via slurm-users wrote:
"thomas.hartmann--- via slurm-users" <slurm-users@lists.schedmd.com>
writes:

I have three sets of accounts (each can have child accounts):
1. "General" accounts: These are allowed to use all physical nodes.
2. "ForProfit" accounts: These absolutely must not use the project_A_* nodes
3. "Project_A" accounts: Their jobs should run first and foremost on the project_A_* nodes but they are also allowed to run on the node* nodes.

My first idea would be to create two partitions, one with all nodes in
there and a second one with only those nodes that the "ForProfit" are
allowed to use. Using "AllowAccounts" or "DenyAccounts" would
implement the restriction I need.

That will work, but will not ensure that Project_A jobs first land on
the project_A_* nodes (unless you give these nodes a lower weight - but
then the General jobs will also start there first).

You could add a third partition with the project_A_* nodes, and only
give Project_A access to it.  You can then give this partition a higher
PriorityTier than the other partitions, meaning that jobs asking for
that partition will be scheduled first.  Users of project_A should then
submit jobs with -p project_A_part,General_part, or you could add some
logic in your job_submit.lua that makes sure this happens.