[slurm-users] slurm accounting shows more MaxRSS than physically available memory
Ohlerich, Martin
Martin.Ohlerich at lrz.de
Wed Nov 2 14:09:56 UTC 2022
Dear Jürgen,
Many thanks for your reply and your thoughts!
What you say makes a lot of sense to me ;) It is only confusing with respect to the SchedMD documentation, as I pointed out. (And what is MaxRSSTask then supposed to distinguish?)
So, if you are right, this really includes everything, including the OS. And all dynamic libraries are double-counted? Hm. Not a good metric, then.
I then wonder why Slurm does not simply use the /proc/meminfo information. That should be much more precise under these circumstances ... (at least I hope the Linux kernel people do a much better job here ;) )
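For a quick node-wide sanity check, this is roughly what I would naively expect Slurm to consult (just an illustration; of course /proc/meminfo is per node rather than per job, which is presumably exactly why it cannot be used to attribute memory to a single job):

grep -E '^(MemTotal|MemAvailable):' /proc/meminfo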
About PSS, I didn't know ... until now. Maybe it's worth some tests ... if I can convince our admins to cooperate ;)
https://bugs.schedmd.com/show_bug.cgi?id=9010 is very interesting. It seems to match my observation exactly, and there cgroups are considered a possible cause. We also have cgroups active, afaik, so I wonder how this might influence the memory accounting.
It is maybe not a very good idea in general, but for the moment I use "\time -v" to scrutinize the rank-wise MaxRSS. I'm unsure about its thread-safety, but the values I usually observe seem reasonable. And here there is no doubt that it is indeed per MPI rank (of the code I actually want to investigate ... I hope so ...). But I have never summed up all these values for a node. Maybe this is a good occasion to try. If that sum is also larger than 100% ... ;)
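Roughly what I have in mind (the task count, ./my_app and the output file names are placeholders, of course):

srun --ntasks=48 bash -c '/usr/bin/time -v ./my_app 2> time.rank_${SLURM_PROCID}'
# then sum the per-rank maxima (restrict to the ranks of one node
# before comparing against that node's physical RAM):
grep -h 'Maximum resident set size' time.rank_* | awk '{sum += $NF} END {print sum, "kB summed over ranks"}'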
Kind regards,
Martin
________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Juergen Salk <juergen.salk at uni-ulm.de>
Sent: Wednesday, November 2, 2022 14:25
To: Slurm User Community List
Subject: Re: [slurm-users] slurm accounting shows more MaxRSS than physically available memory
Hi Martin,
to the best of my knowledge, MaxRSS reports the aggregated memory
consumption of all tasks, but including all the shared libraries that
the individual processes use, even though a shared library is loaded
into memory only once, regardless of how many processes use it.

So shared libraries count multiple times (once for every individual
process) towards MaxRSS when summed up. This can even result in sacct
reporting a higher MaxRSS value than the total amount of memory that
is physically available.
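You can observe this double-counting directly on a node by comparing
the summed Rss and Pss of a job's processes. A minimal sketch (the
PIDs are placeholders for the tasks of one job; /proc/<pid>/smaps_rollup
requires kernel 4.14 or newer):

for pid in 12345 12346 12347; do   # placeholder PIDs
    grep -E '^(Rss|Pss):' /proc/$pid/smaps_rollup
done | awk '/^Rss:/ {rss+=$2} /^Pss:/ {pss+=$2} END {print "sum Rss:", rss, "kB   sum Pss:", pss, "kB"}'

The Rss sum counts every shared page once per process, while the Pss
sum divides each shared page among its users, so the difference
between the two sums is essentially the over-counted shared memory.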
For exactly that reason we decided to set
JobAcctGatherParams=UsePss in slurm.conf, as we think proportional
set size (PSS) is more useful than RSS: when the PSS values of all
processes are summed up, the result seems to be a better
representation of the "true" total memory consumption of the job.
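For reference, the relevant slurm.conf lines on our cluster look like
this (note that UsePss is, as far as I know, only honored by the
jobacct_gather/linux plugin):

JobAcctGatherType=jobacct_gather/linux
JobAcctGatherParams=UsePss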
Also see:
https://en.wikipedia.org/wiki/Proportional_set_size
https://bugs.schedmd.com/show_bug.cgi?id=9010
I would also be interested in what others think.
Best regards
Jürgen
* Ohlerich, Martin <Martin.Ohlerich at lrz.de> [221102 11:53]:
> Dear "Commiserates".
>
> I wonder a bit about the meaning of MaxRSS. The documentation says:
> "Maximum resident set size of all tasks in job."
>
> What does "maximum" refer to here? The maximum over the job's
> runtime, I hopefully understand correctly. But it does not seem to
> be the size of all tasks summed up, so to speak; rather, it is the
> RSS of the task that had the largest RSS of all tasks during the
> job's runtime. Right?
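> (sacct can at least tell which task and node held that maximum, e.g.
>
> sacct -j <jobid> -o JobID,MaxRSS,MaxRSSTask,MaxRSSNode,AveRSS
>
> but that still leaves open whether the value is a sum or a per-task maximum.)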
>
> In any case, I observed something like this:
>
> login08:~> sacct -j 2408392 -o 'maxrss,maxrssnode%20'
> MaxRSS MaxRSSNode
> ---------- --------------------
> 102124920K i02r09c03s02
>
> ... so about 102 GB, if I counted the digits correctly. On the other hand, this specific node actually only has
>
> i02r09c03s02:~> cat /proc/meminfo
> MemTotal: 98436736 kB
>
> i.e. 98 GB RAM ...
>
> Does anybody know whether there is a reasonable explanation for how this can be? The situation is even worse if MaxRSS is the maximum RSS of only one task (rank) on that node: what about the other tasks, which certainly also consume memory? And on top of that, the OS is quite large on these disk-less compute nodes.
>
> It would be nice if you could share any thoughts on my finding. Thank you!
> Kind regards,
> Martin
>