[slurm-users] MinTRESPerJob on partitions?

Steven Dick kg4ydw at gmail.com
Wed Oct 14 13:28:40 UTC 2020


You can set MinTRESPerJob in a QOS and then only allow that QOS in
that partition, or give that partition a set of QOSes that all have it
set. I'm not sure if a partition QOS would help here, but it could,
since it basically forces that QOS on all jobs in the partition.
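
For example (the QOS name, partition, nodes, and limit below are all
made up, and untested):

  # create a QOS that carries the minimum, then force it on the partition
  sacctmgr add qos quarternode
  sacctmgr modify qos where name=quarternode set MinTRESPerJob=cpu=16

  # slurm.conf: QOS= applies the QOS's limits to every job in the
  # partition, AllowQos= keeps jobs with other QOSes out
  PartitionName=big Nodes=node[01-16] QOS=quarternode AllowQos=quarternode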

For things like this, I've found that slurm.user_msg makes debugging
lua job submit plugins easier:

  -- inside slurm_job_submit(): only message one test user, and only
  -- on interactive jobs (batch jobs have job_desc.script set)
  if submit_uid == 1000 and not job_desc.script then
        slurm.user_msg("Have fun debugging!")         -- shown on the user's terminal
        slurm.log_info("Save this in the slurm log")  -- written to the slurmctld log
  end

I've considered multiple options for a job submit plugin that
auto-selects a QOS and/or partition, as well as one that defaults to
multiple partitions as you suggest, but I haven't settled on a
solution yet.
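
For the multi-partition default piece, a minimal sketch (the partition
names are invented):

  -- job_submit.lua: if the user didn't name a partition, hand slurm a
  -- prioritized list; slurm then runs the job in whichever listed
  -- partition can start it first
  function slurm_job_submit(job_desc, part_list, submit_uid)
     if job_desc.partition == nil then
        job_desc.partition = "small,medium,big"
     end
     return slurm.SUCCESS
  end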

It would be nice if slurm could run through a prioritized list of QOSes
or partitions and select the first fit, the way SGE did.
Or if there were a way in the job submit plugin to test whether the
current job's TRES fits a particular QOS/partition...
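
The closest workaround I can see (untested, and the names and limits
are invented) is to mirror each QOS's MinTRESPerJob in a prioritized
table inside the plugin and pick the first fit by hand:

  -- the plugin can't query the real QOS limits, so this table has to
  -- be kept in sync with MinTRESPerJob on each QOS by hand
  local qos_order = {
     { name = "whole",   min_cpus = 64 },
     { name = "quarter", min_cpus = 16 },
     { name = "small",   min_cpus = 1  },
  }

  function slurm_job_submit(job_desc, part_list, submit_uid)
     local cpus = job_desc.min_cpus or 1       -- nil if not requested
     for _, q in ipairs(qos_order) do
        if cpus >= q.min_cpus then
           job_desc.qos = q.name               -- first QOS the job fits
           break
        end
     end
     return slurm.SUCCESS
  end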

On Wed, Oct 14, 2020 at 6:40 AM Diego Zuccato <diego.zuccato at unibo.it> wrote:
>
> Hello all.
>
> IIUC, it's not possible to set MinTRESPerJob at the partition level,
> only at the QoS level.
> Is there a (possibly simple) way to emulate it?
>
> Extended rationale: I'm going to restructure the organization of the
> cluster, splitting it into a set of homogeneous-node partitions (the
> default for jobs not requesting a particular partition will come from
> SBATCH_PARTITION, set in /etc/environment to include a list of all the
> partitions). But some nodes require a minimum allocation of 1/o or 1/4
> of the cores.
> I tried with a lua script at submit time (apropos, is there some simple
> way to debug it?), but since the job is not scheduled yet, I can't know
> where it will run...
>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
>


