[slurm-users] Per-user TRES summary?

Jeffrey R. Lang JRLang at uwyo.edu
Tue Nov 29 00:17:48 UTC 2022


You might try the slurmuserjobs command, part of the Slurm_tools package found here: https://github.com/OleHolmNielsen/Slurm_tools
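
In case it helps, there's no build step involved — the tools are plain shell scripts wrapping squeue/sacct/scontrol, so getting started is just the following (exact script names and locations are in the repo's READMEs, which I won't try to reproduce from memory):

% git clone https://github.com/OleHolmNielsen/Slurm_tools
% cd Slurm_tools
% ls    # each tool lives in its own subdirectory with its own README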



From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of Djamil Lakhdar-Hamina
Sent: Monday, November 28, 2022 5:49 PM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Per-user TRES summary?


On Mon, Nov 28, 2022 at 10:17 AM Pacey, Mike <m.pacey at lancaster.ac.uk> wrote:
Hi folks,

Does anyone have suggestions as to how to produce a summary of a user’s TRES resources for running jobs? I’d like to be able to see how each user is faring against their qos resource limits. (I’m looking for something functionally equivalent to Grid Engine’s qquota command.) The info must be in the scheduler somewhere in order for it to enforce qos TRES limits, but as a Slurm novice I’ve not found any way to get at it.

To summarise TRES qos limits I can do this:

% sacctmgr list qos format=Name,MaxTRESPerUser%50
      Name                                          MaxTRESPU
---------- --------------------------------------------------
    normal                                    cpu=80,mem=320G

But to see what a user is currently consuming in running jobs, the nearest I can get is:

% sacct -X -s R --units=G -o User,ReqTRES%50
     User                                            ReqTRES
--------- --------------------------------------------------
    pacey                   billing=1,cpu=1,mem=0.49G,node=1
    pacey                   billing=1,cpu=1,mem=0.49G,node=1

With a little scripting I can sum those up (rough attempt below), but there might be a neater way to do this?
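
Something like the following does the summing for cpu and mem — an untested sketch that assumes parsable (-P, pipe-delimited) output and that --units=G leaves mem values with a trailing G, which awk’s numeric conversion silently drops:

% sacct -X -s R -n -P --units=G -o User,ReqTRES | awk -F'|' '
{
    user = $1
    n = split($2, kv, ",")               # e.g. "billing=1,cpu=1,mem=0.49G,node=1"
    for (i = 1; i <= n; i++) {
        split(kv[i], pair, "=")
        if (pair[1] == "cpu") cpu[user] += pair[2]
        if (pair[1] == "mem") mem[user] += pair[2]   # "0.49G" converts to 0.49
    }
}
END {
    for (u in cpu)
        printf "%-10s cpu=%d mem=%.2fG\n", u, cpu[u], mem[u]
}'

That only covers the usage side, of course — comparing it against the MaxTRESPU values from sacctmgr would still need another step.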

Regards,
Mike


