[slurm-users] Question on billing tres information from sacct, sshare, and scontrol
David Rhey
drhey at umich.edu
Thu Feb 21 22:24:09 UTC 2019
Hello,
I have a small Vagrant setup that I use for prototyping and testing various
things. Right now it's running Slurm 18.08.4. I am noticing some differences
in the billing TRES values reported by various commands (notably sacct,
sshare, and scontrol show assoc).
On a freshly built cluster (so with no prior usage data), I run a basic job
to generate some usage data, then query it:
[vagrant at head vagrant]$ sshare -n -P -A drhey1 -o GrpTRESRaw
cpu=3,mem=1199,energy=0,node=3,billing=59,fs/disk=0,vmem=0,pages=0
cpu=3,mem=1199,energy=0,node=3,billing=59,fs/disk=0,vmem=0,pages=0
[vagrant at head vagrant]$ sshare -n -P -A drhey1 -o RawUsage
3611
3611
When I look at the same info within sacct I see:
[vagrant at head vagrant]$ sacct -X --format=User,JobID,Account,AllocTRES%50,AllocGRES,ReqGRES,Elapsed,ExitCode
     User        JobID    Account                                          AllocTRES    AllocGRES      ReqGRES    Elapsed ExitCode
--------- ------------ ---------- -------------------------------------------------- ------------ ------------ ---------- --------
  vagrant            2     drhey1                   billing=30,cpu=2,mem=600M,node=2                             00:02:00      0:0
Of note is that the billing TRES shows as 30 in sacct, but 59 in sshare.
Something similar happens in scontrol show assoc:
...
GrpTRESMins=cpu=N(3),mem=N(1199),energy=N(0),node=N(3),billing=N(59),fs/disk=N(0),vmem=N(0),pages=N(0)
...
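One of my theories (purely a guess on my part, not something I have confirmed
in the source yet) is that sacct's AllocTRES shows the billing value allocated
to the job, while sshare and scontrol accumulate billing TRES-minutes, i.e.
that value multiplied by the job's elapsed minutes and then subject to decay
and truncation. The numbers above do line up suspiciously well:

    billing from sacct AllocTRES       = 30
    elapsed                            = 00:02:00 = 2 minutes
    30 billing * 2 minutes             = 60 billing TRES-minutes
    GrpTRESRaw / GrpTRESMins billing   = 59   (close to 60)
    likewise mem: 600M * 2 minutes     = 1200, vs. mem=1199 reported

But I would like to confirm whether that is actually what is going on.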
Can anyone explain the difference in the billing TRES values reported by
these commands? The above is one of a couple of theories I have, and I have
been looking through the source code to try to understand it better. For
context, I am trying to work out what a job costs, and what an account's
usage over a span of, say, a month costs.
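In case it helps frame what I am after, the rough approach I have been
experimenting with for the monthly question is to pull accumulated billing
usage per account out of sreport, along these lines (the account and dates
are just examples, and I am not certain these are the right options, or that
sreport's billing numbers are the same quantity sshare accumulates):

    sreport -t Minutes --tres=billing cluster AccountUtilizationByUser \
        Accounts=drhey1 Start=2019-02-01 End=2019-03-01

Part of what I am trying to pin down is which of these billing figures is the
one to treat as the "cost" of a job or an account.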
Any insight is most appreciated!
--
David Rhey
---------------
Advanced Research Computing - Technology Services
University of Michigan