[slurm-users] Can slurm be configured to only run one job at a time?

Renfro, Michael Renfro at tntech.edu
Mon Mar 23 15:36:40 UTC 2020


Rather than configure Slurm to run only one job at a time, you can use job dependencies to ensure that only one job of a particular type runs at a time. A singleton dependency [1, 2] should work for this. From [1]:

  #SBATCH --dependency=singleton --job-name=big-youtube-upload

in any job script would ensure that only one job with that job name runs at a time.

[1] https://slurm.schedmd.com/sbatch.html
[2] https://hpc.nih.gov/docs/job_dependencies.html
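
For example, a minimal upload job script along these lines should serialize the uploads (the script name, time limit, and upload command are placeholders, not something I've tested):

  #!/bin/bash
  #SBATCH --job-name=youtube-upload      # every upload job shares this name
  #SBATCH --dependency=singleton         # only one job with this name runs at a time
  #SBATCH --ntasks=1
  #SBATCH --time=30:00

  # Hypothetical upload command; replace with whatever tool you actually use.
  ./upload_to_youtube.sh /path/to/local_video.mp4

Each Pi would submit its own copy of this script every hour (for example, from cron with sbatch). Since all of the submissions share the same job name and user, Slurm keeps the later ones pending until the running one finishes.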

-- 
Mike Renfro, PhD / HPC Systems Administrator, Information Technology Services
931 372-3601     / Tennessee Tech University

> On Mar 23, 2020, at 10:00 AM, Faraz Hussain <faraz_hussain at yahoo.com> wrote:
> 
> I have a five-node cluster of Raspberry Pis. Every hour they all have to upload a local 1 GB file to YouTube. I want it so only one Pi can upload at a time so that the network doesn't get bogged down.
> 
> Can Slurm be configured to run only one job at a time? Or is there some other way to accomplish what I want?
> 
> Thanks!
> 
