[slurm-users] TRES sreport per association

David drhey at umich.edu
Thu Nov 16 14:57:11 UTC 2023


Hello,

Perhaps `scontrol show assoc` might be able to help you here, in part? Or
even sshare. Those show the raw usage numbers, if I remember correctly.
They aren't analogous to what sreport would show, but they might give you
some insight into usage per association. As a note: `scontrol show assoc`
produces very lengthy output.
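
For instance, something along these lines (a rough sketch from memory, so
please double-check the exact flag and field names against the sshare and
scontrol man pages for your Slurm version):

   # Raw TRES usage vs. the GrpTRESMins limit, per association, for one user
   sshare -u kmwil --format=Account,User,GrpTRESMins%30,GrpTRESRaw%30

   # Association records straight from slurmctld (very long output)
   scontrol show assoc_mgr users=kmwil flags=assoc

If memory serves, GrpTRESRaw is the accumulated usage counted against each
association's GrpTRESMins limit, which is roughly the "minutes left per
association" number you're after, just not presented the way sreport would
present it.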

HTH,

David

On Sun, Nov 12, 2023 at 6:03 PM Kamil Wilczek <kmwil at mimuw.edu.pl> wrote:

> Dear All,
>
> Is it possible to report GPU minutes per association? Suppose
> I have two associations like this:
>
>    sacctmgr show assoc where user=$(whoami)
> format=account%10,user%16,partition%12,qos%12,grptresmins%20
>
>     Account             User    Partition          QOS          GrpTRESMins
> ---------- ---------------- ------------ ------------ --------------------
>       staff            kmwil      gpu_adv       1gpu1d       gres/gpu=10000
>       staff            kmwil       common       4gpu4d         gres/gpu=100
>
> When I run "sreport" I get (I think) the cumulative report. There
> is no "association" option for the "--format" flag for "sreport".
>
> In my setup I divide the cluster by GPU generation. Older
> cards, like the TITAN V, are accessible to all users (a common
> partition), but, for example, a partition with A100 nodes
> is accessible only to selected users.
>
> Each user gets a QoS ("4gpu4d" means that a user can allocate
> at most 4 GPUs and the time limit for a single job is 4 days). Each
> user is also limited to a number of GPU minutes for each
> association, and it would be nice to know how many minutes
> are left per association.
>
> Kind regards
> --
> Kamil Wilczek [https://keys.openpgp.org/]
> [6C4BE20A90A1DBFB3CBE2947A832BF5A491F9F2A]
>


-- 
David Rhey
---------------
Advanced Research Computing
University of Michigan