[slurm-users] Limiting srun to a specific partition

Ewan Roche ewan.roche at unil.ch
Tue Feb 15 08:08:11 UTC 2022


Hi Peter,
as Rémi said, the way to do this in Slurm is via a job submit plugin. For example, in our job_submit.lua we have:

if (job_desc.partition == "cpu" or job_desc.partition == "gpu") and job_desc.qos ~= "admin" then
    if job_desc.script == nil or job_desc.script == '' then
        -- No batch script was supplied, so this is an interactive job.
        slurm.log_info("slurm_job_submit: jobscript is missing, assuming interactive job")
        slurm.log_info("slurm_job_submit: CPU/GPU partition for interactive job, abort")
        slurm.log_user("submit_job: ERROR: interactive jobs are not allowed in the CPU or GPU partitions. Use the interactive partition")
        -- Reject the submission (-1 is equivalent to slurm.ERROR).
        return -1
    end
end


This checks whether a job script is present; the absence of one is, more or less, the definition of an interactive job.
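For anyone new to job submit plugins: the fragment above sits inside the slurm_job_submit function of job_submit.lua, which slurmctld loads when slurm.conf contains JobSubmitPlugins=lua. A minimal skeleton around it (a sketch, not our production file) would look like:

-- job_submit.lua, loaded by slurmctld when slurm.conf has JobSubmitPlugins=lua

function slurm_job_submit(job_desc, part_list, submit_uid)
    if (job_desc.partition == "cpu" or job_desc.partition == "gpu") and job_desc.qos ~= "admin" then
        if job_desc.script == nil or job_desc.script == '' then
            -- No batch script: treat as interactive and reject.
            slurm.log_user("submit_job: ERROR: interactive jobs are not allowed in the CPU or GPU partitions. Use the interactive partition")
            return slurm.ERROR
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    -- Nothing to enforce when an existing job is modified.
    return slurm.SUCCESS
end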



Ewan Roche

Division Calcul et Soutien à la Recherche
UNIL | Université de Lausanne


> On 15 Feb 2022, at 08:47, Rémi Palancher <remi at rackslab.io> wrote:
> 
> Hi Peter,
> 
> On Monday 14 February 2022 at 18:37, Peter Schmidt <pschmidt at rivosinc.com> wrote:
> 
>> Slurm newbie here, converting from PBS Pro. In PBS Pro there is the capability to limit interactive jobs (i.e. srun) to a specific queue (i.e. partition).
> 
> Note that in Slurm, srun and interactive jobs are not the same thing. The srun command creates job steps (interactive or not), optionally creating a job allocation first if one does not already exist.
> 
> You can run interactive jobs with salloc and even attach your PTY to a running batch job to interact with it. Conversely, batch jobs can create steps using the srun command.
> 
> I don't know of any native Slurm feature to restrict interactive jobs (to a specific partition or otherwise). However, using the job_submit Lua plugin and a custom Lua script, you might be able to accomplish what you want. It has been discussed here:
> 
> https://bugs.schedmd.com/show_bug.cgi?id=3094
> 
> Best,
> --
> Rémi Palancher
> Rackslab: Open Source Solutions for HPC Operations
> https://rackslab.io
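For completeness: rather than rejecting, the same script-presence check can reroute interactive jobs to the interactive partition automatically. An untested sketch (assuming the partition is literally named "interactive"):

    -- Inside slurm_job_submit: reroute instead of reject.
    if job_desc.script == nil or job_desc.script == '' then
        slurm.log_user("interactive job rerouted to the interactive partition")
        job_desc.partition = "interactive"
    end
    return slurm.SUCCESS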


