[slurm-users] one job at a time - how to set?

Baer, Troy troy at osc.edu
Wed Apr 29 21:04:16 UTC 2020

I don’t think there’s a way to do that in Slurm using just the node declaration, other than the previously mentioned way of configuring it to show up as having only 1 core.  However, you could put the node in a partition that has OverSubscribe=EXCLUSIVE set, and have that partition be the only way to get to it:

# in slurm.conf
NodeName=singlejobnode […settings…]
PartitionName=onejobatatime  Nodes=singlejobnode OverSubscribe=EXCLUSIVE
# singlejobnode isn’t in any other partitions.

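For what it’s worth, here’s a sketch of what that looks like from the submit side (the script names are just placeholders):

# submit two jobs to the exclusive partition
sbatch --partition=onejobatatime job1.sh
sbatch --partition=onejobatatime job2.sh

# the first job runs alone on the node; the second should sit
# pending with reason (Resources) until the node frees up
squeue --partition=onejobatatime

And the single-core alternative mentioned above would look roughly like this (untested; note that jobs on the node would then only be allocated one CPU):

# in slurm.conf, alternative: advertise only one core
NodeName=singlejobnode CPUs=1 […other settings…]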

From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Rutger Vos <rutger.vos at naturalis.nl>
Reply-To: Slurm User Community List <slurm-users at lists.schedmd.com>
Date: Wednesday, April 29, 2020 at 2:55 PM
To: "slurm-users at lists.schedmd.com" <slurm-users at lists.schedmd.com>
Subject: [slurm-users] one job at a time - how to set?


For a smallish machine that has been having degraded performance, we want to implement a policy where only one job (submitted with sbatch) is allowed to run at a time, and any jobs submitted after it wait in line.

I assumed this would be straightforward, but I can't seem to figure it out. Can I set that up in slurm.conf or in some other way? Thank you very much for your help. BTW, we are running Slurm 15.08.7, if that is at all relevant.

Best wishes,

Dr. Rutger A. Vos
Researcher / Bioinformatician

+31717519600 - +31627085806
rutger.vos at naturalis.nl - www.naturalis.nl
Darwinweg 2, 2333 CR Leiden
Postbus 9517, 2300 RA Leiden

