[slurm-users] One time override to force run job

Henkel, Andreas henkel at uni-mainz.de
Sat Sep 7 14:51:28 UTC 2019


Hi Tina,
We have an additional partition with a partition QOS that raises the limits and allows short jobs to run over the limits when nodes are idle. On submission to the standard partitions, we automatically add the extra partition via a job_submit plugin.
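
Roughly, the pieces look like this (a sketch only; partition and QOS names are placeholders, not our actual config):

   # slurm.conf: an extra partition carrying a partition QOS
   PartitionName=overflow Nodes=node[001-100] QOS=overflow State=UP

   # create the partition QOS; a short MaxWall keeps jobs on it short
   sacctmgr add qos overflow
   sacctmgr modify qos overflow set MaxWall=02:00:00

and a job_submit.lua along these lines:

   -- sketch of a job_submit plugin that appends the extra partition;
   -- "standard" and "overflow" are placeholder partition names
   function slurm_job_submit(job_desc, part_list, submit_uid)
      -- job_desc.partition is nil when the user did not request one
      if job_desc.partition == "standard" then
         job_desc.partition = job_desc.partition .. ",overflow"
      end
      return slurm.SUCCESS
   end

   function slurm_job_modify(job_desc, job_rec, part_list, submit_uid)
      return slurm.SUCCESS
   end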

Best,
Andreas 

> On 04.09.2019 at 18:55, Christopher Benjamin Coffey <Chris.Coffey at nau.edu> wrote:
> 
> Hi Tina,
> 
> I think you could just have a QOS called "override" that has no limits, or maybe just very high limits. Then modify the job's QOS to "override" with scontrol. Depending on your setup, you may also have to move the job to an "override" account with no limits.
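> 
> For example (a sketch; the QOS/account names and job id are placeholders):
> 
>    # one-time: create a QOS with no limits attached
>    sacctmgr add qos override
> 
>    # move the stuck job onto it
>    scontrol update jobid=12345 qos=override
> 
>    # if account limits also bite, move the job to an unlimited account
>    scontrol update jobid=12345 account=override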
> 
> We do this from time to time.
> 
> Best,
> Chris
> 
> Christopher Coffey
> High-Performance Computing
> Northern Arizona University
> 928-523-1167
> 
> 
> On 9/2/19, 12:47 PM, "slurm-users on behalf of Tina Fora" <slurm-users-bounces at lists.schedmd.com on behalf of tfora at riseup.net> wrote:
> 
>    Hello,
> 
>    Is there a way to force a job to run that is being held back with reason
>    QOSGrpCpuLimit? This comes from a QOS that we have in place. For the
>    most part it works great, but every once in a while nodes are sitting
>    idle and I'd like to force the job to run.
> 
>    Tina
> 

