[slurm-users] Allow certain users to run over partition limit

Sebastian T Smith stsmith at unr.edu
Tue Jul 7 22:59:50 UTC 2020


Hi,

We use Job QOS and Resource Reservations for this purpose.  QOS is a good option for a "permanent" change to a user's resource limits.  We use reservations similarly to how you're currently using partitions: to "temporarily" provide a resource boost without the complexity of re-partitioning or mucking with associations.
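
As a rough sketch of the QOS route (the QOS name, user name, and limits below are made up, and your AccountingStorageEnforce setting needs to include "qos" for the limits to be enforced), something like this lets a user opt in to a longer wall time:

    # a QOS whose time limit can override the partition's TimeLimit
    sacctmgr add qos longrun
    sacctmgr modify qos longrun set MaxWall=7-00:00:00 Flags=PartitionTimeLimit

    # grant the QOS to the user's association
    sacctmgr modify user someuser set qos+=longrun

    # the user then requests it explicitly at submit time
    sbatch --qos=longrun --time=3-00:00:00 job.sh

Only jobs that explicitly request the QOS get the longer limit, so the existing partitions and everyone else's limits stay untouched.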

Precedence of resource limits: https://slurm.schedmd.com/resource_limits.html
QOS: https://slurm.schedmd.com/qos.html
Resource reservations: https://slurm.schedmd.com/reservations.html
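
Reservations can also be created and removed on the fly with scontrol, with no slurm.conf changes or reconfiguration push.  A minimal sketch (reservation name, users, node count, and duration are illustrative):

    # reserve a few nodes for specific users for one week
    scontrol create reservation ReservationName=boost StartTime=now \
        Duration=7-00:00:00 Users=alice,bob NodeCnt=4

    # jobs target the reservation at submit time
    sbatch --reservation=boost job.sh

    # clean up when finished
    scontrol delete ReservationName=boost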

Best Regards,

Sebastian

--

University of Nevada, Reno <http://www.unr.edu/>
Sebastian Smith
High-Performance Computing Engineer
Office of Information Technology
1664 North Virginia Street
MS 0291

work-phone: 775-682-5050
email: stsmith at unr.edu
website: http://rc.unr.edu

________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Matthew BETTINGER <matthew.bettinger at external.total.com>
Sent: Tuesday, July 7, 2020 9:40 AM
To: slurm-users at lists.schedmd.com <slurm-users at lists.schedmd.com>
Subject: [slurm-users] Allow certain users to run over partition limit

Hello,

We have a Slurm system with partitions set to a max runtime of 24 hours.  What would be the proper way to allow a certain set of users to run jobs on the current partitions beyond the partition limits?  In the past we would isolate some nodes based on job requirements, make a new partition and a reservation for those users, and push out the new configuration.  This works but is pretty unwieldy, and the nodes are basically wasted whenever the special users aren't using them, since they're unavailable to everyone else.

Is there some way to allow certain users, on occasion, to run over the partition run time limit more easily than by manually modifying slurm.conf?  Possibly with QOS?

Thanks.
