[slurm-users] Using free memory available when allocating a node to a job
Loris Bennett
loris.bennett at fu-berlin.de
Tue May 29 06:06:38 MDT 2018
John Hearns <hearnsj at googlemail.com> writes:
> Alexandre, you have made a very good point here. "Oftentimes users only input 1G as they really have no idea of the memory requirements,"
> At my last job we introduced cgroups (this was in PBSPro). We had to enforce a minimum request for memory.
> Users then asked us how much memory their jobs used - so that they could request an amount of memory next time which would let the job run to completion.
> We were giving users information manually regarding how much memory their jobs used.
>
> I realise that the tools are there for users to get the information on memory usage after a job, but I really do not expect users to have to figure this out.
> What do other sites do in this case?
We tack information from sacct regarding memory usage onto the end of
the output file via the epilog. We also periodically send an email
summary of memory usage with a heuristic verdict per job as to whether
the amount of memory requested was OK or too large.
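For what it's worth, a minimal sketch of how such an epilog step might
look (taking the stdout path from 'scontrol show job', the sacct field
list, and the use of a small Python wrapper are just illustrative
assumptions here, not our exact setup):

  #!/usr/bin/env python3
  # Illustrative epilog helper: append memory figures from sacct to the
  # end of the finishing job's stdout file.

  import os
  import re
  import subprocess

  def job_stdout_path(job_id):
      """Parse the StdOut= field from 'scontrol show job'."""
      info = subprocess.run(["scontrol", "show", "job", job_id],
                            capture_output=True, text=True,
                            check=True).stdout
      match = re.search(r"StdOut=(\S+)", info)
      return match.group(1) if match else None

  def memory_report(job_id):
      """Requested vs. used memory per step, as reported by sacct."""
      return subprocess.run(
          ["sacct", "-j", job_id, "--noconvert",
           "--format=JobID,ReqMem,MaxRSS,Elapsed,State"],
          capture_output=True, text=True, check=True).stdout

  def main():
      job_id = os.environ.get("SLURM_JOB_ID")
      if not job_id:
          return
      out_path = job_stdout_path(job_id)
      if not out_path or not os.path.isfile(out_path):
          return
      with open(out_path, "a") as f:
          f.write("\n=== Memory usage (sacct) ===\n")
          f.write(memory_report(job_id))

  if __name__ == "__main__":
      main()

The periodic email summary works along the same lines, just aggregated
over a user's recent jobs rather than appended per job.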
That doesn't of course prevent us from still having a few users who
insist on chronically overestimating their memory requirements, so we
have to write to them individually. Pointing out that memory
overestimation hampers not only other users' jobs but also their own
usually helps.
Cheers,
Loris
--
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin
Email: loris.bennett at fu-berlin.de