[slurm-users] [External] Re: Per-user TRES summary?
Pacey, Mike
m.pacey at lancaster.ac.uk
Tue Nov 29 15:55:37 UTC 2022
Hi Ole,
On my system, that doesn't show me any MaxTRESPU info, which is how I've implemented user limits. E.g.:
% showuserlimits -q normal
scontrol -o show assoc_mgr users=pacey account=local qos=normal flags=QOS
Slurm share information:
Account User RawShares NormShares RawUsage EffectvUsage FairShare
-------------------- ---------- ---------- ----------- ----------- ------------- ----------
local pacey 1 0 0.000000 0.000000
However, using the one-line (-o) version of the scontrol output quoted below, it was only a few minutes' work to write a small script to extract the info I needed - something along the lines of the rough sketch that follows.
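For anyone after something similar, a rough sketch of that kind of parsing (untested, and the exact layout of the flattened -o output can differ between Slurm versions, so treat the field handling as an assumption; adjust the user/qos names to suit):

#!/bin/bash
# Pull the MaxTRESPU field out of the one-line (-o) assoc_mgr output and
# print each TRES as "name  limit  used".
user=${1:-$USER}
qos=${2:-normal}

scontrol -o show assoc_mgr users="$user" qos="$qos" flags=QOS \
  | tr ' ' '\n' \
  | grep '^MaxTRESPU=' \
  | sed 's/^MaxTRESPU=//' \
  | tr ',' '\n' \
  | awk -F'[=()]' '{ printf "%-10s limit=%-10s used=%s\n", $1, $2, $3 }'

Against the sample output quoted further down, that prints lines like "cpu  limit=80  used=2" and "mem  limit=327680  used=1000".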
Regards,
Mike
-----Original Message-----
From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of Ole Holm Nielsen
Sent: 29 November 2022 15:25
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] [External] Re: Per-user TRES summary?
Hi Mike,
That sounds great! It seems to me that "showuserlimits -q <qos>" would also print the QOS information, but maybe this is not what you are after? Have you tried this -q option, or should the script perhaps be generalized to cover your needs?
/Ole
On 29-11-2022 14:39, Pacey, Mike wrote:
> Hi Ole (and Jeffrey),
>
> Thanks for the pointer - those are some very useful scripts. I couldn't get showslurmlimits or showslurmjobs to show quite what I was after (they weren't showing me memory usage). However, they pointed me in the right direction - the scontrol command. I can run the following:
>
> scontrol show assoc_mgr flags=qos
>
> and part of the output reads:
>
> User Limits
> [myuid]
> MaxJobsPU=N(2) MaxJobsAccruePU=N(0) MaxSubmitJobsPU=N(2)
>
> MaxTRESPU=cpu=80(2),mem=327680(1000),energy=N(0),node=N(1),billing=N(2),fs/disk=N(0),vmem=N(0),pages=N(0)
>
> Which is exactly what I'm looking for. The values outside the brackets are the qos limits, and the values inside are the current usage.
>
> Regards,
> Mike
>
> -----Original Message-----
> From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of
> Ole Holm Nielsen
> Sent: 28 November 2022 18:58
> To: slurm-users at lists.schedmd.com
> Subject: [External] Re: [slurm-users] Per-user TRES summary?
>
> Hi Mike,
>
> Would the "showuserlimits" tool give you the desired information?
> Check out
> https://github.com/OleHolmNielsen/Slurm_tools/tree/master/showuserlimits
>
> /Ole
>
>
> On 28-11-2022 16:16, Pacey, Mike wrote:
>> Does anyone have suggestions as to how to produce a summary of a
>> user's TRES resources for running jobs? I'd like to able to see how
>> each user is faring against their qos resource limits. (I'm looking
>> for something functionally equivalent to Grid Engine's qquota
>> command). The info must be in the scheduler somewhere in order for it
>> to enforce qos TRES limits, but as a SLURM novice I've not found any way to do this.
>>
>> To summarise TRES qos limits I can do this:
>>
>> % sacctmgr list qos format=Name,MaxTRESPerUser%50
>>
>> Name MaxTRESPU
>>
>> ---------- --------------------------------------------------
>>
>> normal cpu=80,mem=320G
>>
>> But to work out what a user is currently consuming in running jobs, the
>> nearest I've found is:
>>
>> % sacct -X -s R --units=G -o User,ReqTRES%50
>>
>> User ReqTRES
>>
>> --------- --------------------------------------------------
>>
>> pacey billing=1,cpu=1,mem=0.49G,node=1
>>
>> pacey billing=1,cpu=1,mem=0.49G,node=1
>>
>> With a little scripting I can sum those up, but there might be a
>> neater way to do this?
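>>
>> For illustration, a rough sketch of that summing (untested; it assumes the
>> two-column User,ReqTRES output above, with memory already in GB via
>> --units=G):
>>
>> sacct -nX -s R --units=G -o User,ReqTRES%50 | awk '{
>>     # split the ReqTRES field into name=value pairs and total per user
>>     n = split($2, tres, ",")
>>     for (i = 1; i <= n; i++) {
>>         split(tres[i], kv, "=")
>>         val = kv[2]; sub(/G$/, "", val)
>>         total[$1 "/" kv[1]] += val
>>     }
>> }
>> END {
>>     for (k in total) print k, total[k]
>> }'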
>