[slurm-users] Allow certain users to run over partition limit

Matthew BETTINGER matthew.bettinger at external.total.com
Wed Jul 8 17:53:11 UTC 2020

Ok, I see the resource limit hierarchy:

Partition QOS limit
Job QOS limit
User association
Account association(s), ascending the hierarchy
Root/Cluster association
Partition limit
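
For anyone checking where their own limits are coming from, each level in this hierarchy can be inspected from the command line. A quick sketch (the commands are standard Slurm tools; the format fields shown are just a useful subset):

```shell
# Partition-level limits (e.g. MaxTime) as configured in slurm.conf
scontrol show partition

# QOS limits, covering both Partition QOS and Job QOS entries
sacctmgr show qos format=Name,Priority,MaxWall,MaxTRESPerUser

# User and account association limits, ascending the hierarchy
sacctmgr show assoc format=Cluster,Account,User,MaxWall,QOS
```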

Where in this list do reservations fall?  Do reservations override all of these if they are set to exceed the limits imposed by the partition configuration?  Thanks!

On 7/7/20, 6:02 PM, "slurm-users on behalf of Sebastian T Smith" <slurm-users-bounces at lists.schedmd.com on behalf of stsmith at unr.edu> wrote:


    We use Job QOS and Resource Reservations for this purpose.  QOS is a good option for a "permanent" change to a user's resource limits.  We use reservations, similar to how you're currently using partitions, to "temporarily" provide a resource boost without the complexities of re-partitioning or mucking with associations.
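
    For the "temporary boost" case, a reservation can be created on the fly without touching slurm.conf.  A sketch (the reservation name, user names, node list, and times below are all placeholders):

    ```shell
    # Reserve specific nodes for the listed users for a window of time
    scontrol create reservation reservationname=boost_res \
        starttime=2020-07-10T00:00:00 duration=7-00:00:00 \
        users=alice,bob nodes=node[01-04]

    # The users then target the reservation at submit time
    sbatch --reservation=boost_res job.sh
    ```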

    Precedence in resource limits: https://slurm.schedmd.com/resource_limits.html

    From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Matthew BETTINGER <matthew.bettinger at external.total.com>
    Sent: Tuesday, July 7, 2020 9:40 AM
    To: slurm-users at lists.schedmd.com <slurm-users at lists.schedmd.com>
    Subject: [slurm-users] Allow certain users to run over partition limit  


    We have a Slurm system with partitions set to a max runtime of 24 hours.  What would be the proper way to allow a certain set of users to run jobs on the current partitions beyond the partition limits?  In the past we would isolate some nodes based on the job requirements, make a new partition and a reservation for those users, and push out the new configuration.  This works but is pretty unwieldy, and the isolated nodes are basically wasted whenever the special users aren't running on them, since they're unavailable to everyone else.

    Is there some way to let certain users occasionally run past the partition time limit that's easier than manually modifying slurm.conf?  Possibly with a QOS?
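
    In case it helps a future reader, the QOS route suggested in the reply above can be sketched roughly like this.  The QOS name "longrun" and the user names are made up; my understanding is that the PartitionTimeLimit flag is what lets jobs in the QOS override the partition's time limit:

    ```shell
    # Create a QOS with a wall-time limit beyond the partition's 24h cap;
    # PartitionTimeLimit lets jobs in this QOS override the partition limit
    sacctmgr add qos longrun set MaxWall=7-00:00:00 flags=PartitionTimeLimit

    # Grant the QOS to the chosen users' associations
    sacctmgr modify user where user=alice set qos+=longrun
    sacctmgr modify user where user=bob set qos+=longrun

    # Those users then submit with the QOS to exceed the partition limit
    sbatch --qos=longrun --time=3-00:00:00 job.sh
    ```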

