[slurm-users] Priority Access to GPU?

Fulcomer, Samuel samuel_fulcomer at brown.edu
Mon Jul 12 21:32:14 UTC 2021


Jason,

I've just been working through a similar scenario to handle access to our
3090 nodes that have been purchased by researchers.

I suggest putting the node into an additional partition, and then adding a
QOS for the lab group that has GrpTRES=gres/gpu=1,cpu=M,mem=N (where cpu and
mem are whatever values are reasonable). Only create associations for the
lab group members for that partition. The lab group members can then submit
to both partitions using "--partition=special,regular", where "special" is
the additional partition and "regular" is the original partition. If the QOS
or partition has a high priority assigned to it, then a lab group member's
job should always run next on the same GPU that had previously been
allocated. That way, only one job should need to be preempted to allow
multiple successive lab group jobs to run.
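
For concreteness, here is a rough sketch of what that could look like. The
node name, partition names, QOS name, account, user, and TRES limits below
are all placeholders, and whether community jobs actually get preempted
depends on your PreemptType/PreemptMode settings:

    # slurm.conf: the purchased node appears in both partitions; the
    # lab-only partition gets a higher PriorityTier and is restricted
    # to the lab QOS
    PartitionName=regular Nodes=gpu001 Default=YES PriorityTier=1
    PartitionName=special Nodes=gpu001 AllowQos=lab_qos PriorityTier=10

    # QOS that caps the lab group at one GPU (plus matching CPU/memory,
    # memory in MB)
    sacctmgr add qos lab_qos
    sacctmgr modify qos lab_qos set GrpTRES=gres/gpu=1,cpu=16,mem=65536

    # Associations for lab members only, limited to the special partition
    sacctmgr add user alice account=lab_acct partition=special qos=lab_qos

    # Lab members submit to both partitions; Slurm runs the job in
    # whichever partition can start it first
    sbatch --partition=special,regular --gres=gpu:1 job.sh

Adjust the GrpTRES numbers to match the CPU and memory share that goes with
the purchased GPU, and check that your preemption configuration (e.g.
preempt/partition_prio or preempt/qos) does what you expect before rolling
it out.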

Regards,
Sam

On Mon, Jul 12, 2021 at 3:38 PM Jason Simms <simmsj at lafayette.edu> wrote:

> Dear all,
>
> I feel like I've attempted to track this down before but have never fully
> understood how to accomplish this.
>
> I have a GPU node with three GPU cards, one of which was purchased by a
> user. I want to provide priority access for that user to the card, while
> still allowing it to be used by the community when not in use by that
> particular user. Well, more specifically, I'd like a specific account
> within Slurm to have priority access; the account contains multiple
> users that are part of the faculty's lab group.
>
> I have such access properly configured for the actual nodes (priority
> preempt), but the GPU (which is configured as a GRES) seems like a beast of
> a different color.
>
> Warmest regards,
> Jason
>
> --
> *Jason L. Simms, Ph.D., M.P.H.*
> Manager of Research and High-Performance Computing
> XSEDE Campus Champion
> Lafayette College
> Information Technology Services
> 710 Sullivan Rd | Easton, PA 18042
> Office: 112 Skillman Library
> p: (610) 330-5632
>