[slurm-users] job priority keeping resources from being used?

c b breedthoughts.www at gmail.com
Tue Nov 5 22:07:11 UTC 2019


On Sun, Nov 3, 2019 at 7:18 AM Juergen Salk <juergen.salk at uni-ulm.de> wrote:

>
> Hi,
>
> maybe I missed it, but what does squeue say in the reason field for
> your pending jobs that you expect to slip in?
>
>
The reason shown for these jobs is just "Priority".
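For anyone hitting a similar situation, the pending reason and priority can be listed per job with squeue's format options (a sketch; this requires a live Slurm cluster, and the column widths are arbitrary choices):

```
# List pending jobs with job id, partition, priority, and pending reason
# (%Q = priority, %r = reason; widths here are arbitrary)
squeue --state=PENDING --format="%.10i %.9P %.10Q %.20r"
```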




> Is your partition maybe configured for exclusive node access, e.g. by
> setting `OverSubscribe=EXCLUSIVE`?
>
>
We don't have that setting enabled, and I believe we are not otherwise
configured for exclusive node access. When my small jobs, each requiring
one core, are running, we get as many jobs running simultaneously on each
machine as there are cores.
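For context, a single-core job of the kind described would be submitted with an explicit CPU and memory request so the scheduler can pack many of them onto one node (the script name and the memory value below are illustrative, not taken from the thread):

```
#!/bin/bash
# Hypothetical single-core job: request CPU and memory explicitly
# so Slurm can co-schedule many such jobs on one node.
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=2G          # illustrative value; size to the actual job
srun ./small_job
```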

Thanks.


> Best regards
> Jürgen
>
> --
> Jürgen Salk
> Scientific Software & Compute Services (SSCS)
> Kommunikations- und Informationszentrum (kiz)
> Universität Ulm
> Telefon: +49 (0)731 50-22478
> Telefax: +49 (0)731 50-22471
>
>
> * c b <breedthoughts.www at gmail.com> [191101 14:44]:
> > I see - yes, to clarify, we are specifying memory for each of these jobs,
> > and there is enough memory on the nodes for both types of jobs to be
> > running simultaneously.
> >
> > On Fri, Nov 1, 2019 at 1:59 PM Brian Andrus <toomuchit at gmail.com> wrote:
> >
> > > I ask if you are specifying it, because if not, slurm will assume a job
> > > will use all the memory available.
> > >
> > > So without specifying, your big job gets allocated 100% of the memory
> so
> > > nothing could be sent to the node. Same if you don't specify for the
> little
> > > jobs. It would want 100%, but if anything is running there, 100% is not
> > > available as far as slurm is concerned.
> > >
> > > Brian
> > > On 11/1/2019 10:52 AM, c b wrote:
> > >
> > > yes, there is enough memory for each of these jobs, and there is enough
> > > memory to run the high resource and low resource jobs at the same time.
> > >
> > > On Fri, Nov 1, 2019 at 1:37 PM Brian Andrus <toomuchit at gmail.com>
> wrote:
> > >
> > >> Are you specifying memory for each of the jobs?
> > >>
> > >> Can't run a small job if there isn't enough memory available for it.
> > >>
> > >> Brian Andrus
> > >> On 11/1/2019 7:42 AM, c b wrote:
> > >>
> > >> I have:
> > >> SelectType=select/cons_res
> > >> SelectTypeParameters=CR_CPU_Memory
> > >>
> > >> On Fri, Nov 1, 2019 at 10:39 AM Mark Hahn <hahn at mcmaster.ca> wrote:
> > >>
> > >>> > In theory, these small jobs could slip in and run alongside the
> large
> > >>> jobs,
> > >>>
> > >>> what are your SelectType and SelectTypeParameters settings?
> > >>> ExclusiveUser=YES on partitions?
> > >>>
> > >>> regards, mark hahn.
> > >>>
> > >>>
>
>
>
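Related to Brian's point above: when jobs omit a memory request, a slurm.conf default can keep them from being treated as needing a whole node's memory. A minimal sketch (illustrative value, not the poster's actual config):

```
# slurm.conf fragment (illustrative): give jobs that omit --mem a
# per-CPU default instead of letting them claim all node memory
DefMemPerCPU=2048        # in MB; choose a value that fits your nodes
```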

