Thank you for your answer.

To test it I tried:
sacctmgr update qos normal set maxtresperuser=cpu=2
# Then in slurm.conf
PartitionName=test […] qos=normal

But then, if I submit several 1-CPU jobs, only two start and the others stay pending, even though several nodes are available. So it seems that MaxTRESPerUser is a QoS-wide limit: it doesn't limit TRES per user and per node, but per user and per QoS (or rather per partition, since I applied the QoS to the partition). Did I miss something?
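For reference, the pending reason should confirm which limit is being hit; something like:
squeue -u $USER --state=PENDING -o "%.10i %.8T %.30r"
If the per-user CPU cap from the QoS is what is holding the jobs, the Reason column should show something like QOSMaxCpuPerUserLimit.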

Thanks again,
Guillaume


De: "Groner, Rob" <rug262@psu.edu>
À: slurm-users@lists.schedmd.com, "Guillaume COCHARD" <guillaume.cochard@cc.in2p3.fr>
Envoyé: Mardi 24 Septembre 2024 15:45:08
Objet: Re: Max TRES per user and node

You have the right idea.

On that same page, you'll find MaxTRESPerUser, as a QOS parameter.

You can create a QOS with the restrictions you'd like, and then assign that QOS to the partition in its definition. The QOS will then apply its restrictions to any jobs that use that partition.
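For example, a minimal sketch (the QoS name and the limit here are only illustrative):

sacctmgr add qos capped
sacctmgr modify qos capped set MaxTRESPerUser=mem=200G
# Then in slurm.conf
PartitionName=test […] QOS=capped

Also, as far as I know, QoS limits are only enforced when AccountingStorageEnforce in slurm.conf includes limits.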

Rob

From: Guillaume COCHARD via slurm-users <slurm-users@lists.schedmd.com>
Sent: Tuesday, September 24, 2024 9:30 AM
To: slurm-users@lists.schedmd.com <slurm-users@lists.schedmd.com>
Subject: [slurm-users] Max TRES per user and node
 
Hello,

We are looking for a method to limit the TRES used by each user on a per-node basis. For example, we would like to limit the total memory allocation of jobs from a user to 200G per node.

There is MaxTRESPerNode (https://slurm.schedmd.com/sacctmgr.html#OPT_MaxTRESPerNode), but unfortunately, this is a per-job limit, not per user.
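For illustration, that per-job limit would be set with something like (QoS name is just an example):

sacctmgr modify qos normal set MaxTRESPerNode=mem=200G

This caps how much memory a single job can allocate on any one node, but a user can still run many such jobs on the same node, so it doesn't bound the user's total memory per node.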

Ideally, we would like to apply this limit on partitions and/or QoS. Does anyone know if this is possible and how to achieve it?

Thank you,

--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-leave@lists.schedmd.com