[slurm-users] Increasing MaxArraySize
stolarek.marcin at gmail.com
Tue Nov 28 04:54:19 MST 2017
I think it's more related to your configuration than to general Slurm
capabilities. For example, if you have quite long prolog/epilog scripts, it
may be a good idea to discourage users from submitting huge job arrays (with
very short tasks?).
In my case it's quite common to see users submitting arrays of 150-200k
jobs; slurmctld, slurmdbd and mysql all run on the same 32 GB server, and we
have never had issues with running out of free memory.
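For reference, a minimal sketch of raising the limit in slurm.conf (the
values below are illustrative for a site allowing ~200k-task arrays, not a
recommendation):

```
# slurm.conf -- illustrative values, adjust for your site
MaxArraySize=200001    # highest usable array index is MaxArraySize-1,
                       # so this permits e.g. sbatch --array=0-200000
MaxJobCount=400000     # must be large enough to hold all expanded
                       # array tasks plus other pending/running jobs
```

You can check the current value with `scontrol show config | grep
MaxArraySize`; changing it requires restarting slurmctld (the setting is not
picked up by `scontrol reconfigure` in older releases, so check your
version's documentation).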
2017-11-22 15:42 GMT+01:00 Loris Bennett <loris.bennett at fu-berlin.de>:
> In the documentation on job arrays
> it says
> Be mindful about the value of MaxArraySize as job arrays offer an easy
> way for users to submit large numbers of jobs very quickly.
> How much do I have to worry about this, if I am using fairshare
> scheduling, since at some point the user's shares will have been
> consumed and new jobs will only start running after a certain period has
> elapsed? Or is it referring to the amount of memory the scheduler might
> need in order to manage an enormous queue? For our standard QOS we
> currently use neither MaxJobs nor MaxSubmitJobs.
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin Email loris.bennett at fu-berlin.de