[slurm-users] job_desc.pn_min_memory in Lua job_submit plugin

Davide DelVento davide.quantum at gmail.com
Fri Nov 17 13:48:35 UTC 2023


I don't have an answer for you, but I found your message in my spam folder.
I brought it out and I'm replying to it in the hope that it gets some
visibility in people's mailboxes.

Note that in the US it's SC week and many people are or have been busy with
it and will be travelling in the coming days; then next week is Thanksgiving
and some people take the week off, so you may still have to wait a week
or so for an answer -- unless you get one from somewhere else in the world,
that is ;-)

On Thu, Nov 16, 2023 at 12:54 AM Marx, Wolfgang <
wolfgang.marx at tu-darmstadt.de> wrote:

> Hello,
>
>
>
> we are using Slurm version 23.02.3 and are working on a job_submit plugin
> written in Lua.
>
>
>
> During the development of the script we found that the value we give for
> --mem appears in the job_submit plugin in the variable
> job_desc.pn_min_memory, and the value for --mem-per-cpu appears in the
> variable job_desc.min_mem_per_cpu.
>
>
>
> During our tests we now see a strange behavior:
>
> When we start a job without --mem or --mem-per-cpu,
> job_desc.pn_min_memory shows up with the value 1.844674407371e+19.
>
> When we start a job without --mem but with --mem-per-cpu set,
> job_desc.pn_min_memory shows up with the value 9.2233720368548e+18.
>
>
>
> Why does the NO_VAL value for --mem differ depending on whether
> --mem-per-cpu is set or not?
>
>
>
> In the documentation I could not find a proper explanation.
>
>
>
> Kind regards
>
>     Wolfgang
>
>
>
> Wolfgang Marx, Basisdienste, Gruppe Hochleistungrechnen
>
> Technische Universität Darmstadt, Hochschulrechenzentrum
>
> Alexanderstraße 2, 64283 Darmstadt
>
> Tel.: +496151/16-71158
>
> E-Mail: wolfgang.marx at tu-darmstadt.de
>
> Web: www.hrz.tu-darmstadt.de
>
>
>
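For anyone poking at this in their own job_submit.lua, below is a minimal,
illustrative sketch of how these fields could be inspected. It is not an
authoritative answer: it assumes that the Lua plugin exports a
slurm.NO_VAL64 constant, and that a --mem-per-cpu request is stored in
pn_min_memory with the MEM_PER_CPU high bit (0x8000000000000000 in slurm.h)
set -- which would at least be consistent with the roughly 2^64 vs. 2^63
values reported above. Please check it against the 23.02.3 source before
relying on it.

    -- Sketch only: assumes slurm.NO_VAL64 is exported to Lua and that
    -- --mem-per-cpu sets the MEM_PER_CPU high bit in pn_min_memory
    -- (both are assumptions, not verified against 23.02.3).
    local MEM_PER_CPU = 2^63  -- 0x8000000000000000 as a Lua number

    function slurm_job_submit(job_desc, part_list, submit_uid)
        local mem = job_desc.pn_min_memory
        if mem == slurm.NO_VAL64 then
            -- Neither --mem nor --mem-per-cpu was given.
            slurm.log_info("job_submit: no memory request given")
        elseif mem >= MEM_PER_CPU then
            -- Presumably --mem-per-cpu; the per-CPU value (MB) is also
            -- visible as job_desc.min_mem_per_cpu.
            slurm.log_info("job_submit: per-CPU memory request = " ..
                           tostring(job_desc.min_mem_per_cpu) .. " MB")
        else
            -- --mem was given; pn_min_memory is the per-node value in MB.
            slurm.log_info("job_submit: per-node memory request = " ..
                           tostring(mem) .. " MB")
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end

If the logged values match what you see for your test jobs, then comparing
against slurm.NO_VAL64 (rather than a hard-coded float) is probably the
safer way to detect "no memory option given", since these fields arrive in
Lua as double-precision numbers.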