[slurm-users] Quick hold on all partitions, all jobs

Stradling, Alden Reid (ars9ac) ars9ac at virginia.edu
Wed Nov 8 17:16:21 MST 2017


We use something like this:

scontrol create reservation starttime=2017-11-08T06:00:00 duration=1440 user=root flags=maint,ignore_jobs nodes=ALL
Reservation created: root_2


Then confirm:

scontrol show reservation

ReservationName=root_2 StartTime=2017-11-08T06:00:00 EndTime=2017-11-09T06:00:00 Duration=1-00:00:00
   Nodes=[trimmed] Flags=MAINT,IGNORE_JOBS,SPEC_NODES
   Users=root Accounts=(null) Licenses=(null) State=INACTIVE
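Once the maintenance window is over, the reservation can be removed so the nodes return to normal scheduling. A minimal sketch using the standard scontrol interface (the name root_2 matches the output above; not run here):

```shell
# Delete the maintenance reservation created earlier,
# releasing the nodes back to the scheduler.
scontrol delete ReservationName=root_2

# Confirm it is gone (should print "No reservations in the system").
scontrol show reservation
```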

Cheers,

Alden

On Nov 8, 2017, at 7:01 PM, Lachlan Musicman <datakid at gmail.com> wrote:

The IT team sent an email saying "complete network-wide outage tomorrow night from 10pm across the whole institute".

Our plan is to put all queued jobs on hold, suspend all running jobs, and turn off the login node.
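That plan can be sketched with standard squeue/scontrol commands. This is untested here and assumes the default job-ID output format and GNU xargs; on a large queue you may prefer to pass comma-separated job lists instead of one scontrol call per job:

```shell
#!/bin/sh
# Hold every pending job so nothing new starts during the outage.
squeue -h -t PENDING -o '%i' | xargs -r -n1 scontrol hold

# Suspend every running job (processes are stopped, not killed).
squeue -h -t RUNNING -o '%i' | xargs -r -n1 scontrol suspend

# After the outage, reverse the steps:
#   squeue -h -t SUSPENDED -o '%i' | xargs -r -n1 scontrol resume
#   squeue -h -t PENDING   -o '%i' | xargs -r -n1 scontrol release
```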

I've just discovered that the partitions have a state, and it can be set to UP, DOWN, DRAIN or INACTIVE.

In this situation - most likely a 4-hour outage with nothing else affected - would you mark your partitions DOWN or INACTIVE?
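For the partition-state approach, the state is changed with scontrol update. Roughly, DOWN stops new jobs from being scheduled on the partition while jobs already running continue, and INACTIVE additionally rejects new submissions; check your slurm.conf and the scontrol man page for your version's exact semantics. A sketch (the partition name "batch" is a placeholder):

```shell
# Stop scheduling new jobs on the partition before the outage.
scontrol update PartitionName=batch State=DOWN

# Restore normal operation afterwards.
scontrol update PartitionName=batch State=UP
```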

Ostensibly all users should be off the systems (because no network), but there's always one that sets an at or cron job or finds that corner case.

Cheers
L.


------
"The antidote to apocalypticism is apocalyptic civics. Apocalyptic civics is the insistence that we cannot ignore the truth, nor should we panic about it. It is a shared consciousness that our institutions have failed and our ecosystem is collapsing, yet we are still here — and we are creative agents who can shape our destinies. Apocalyptic civics is the conviction that the only way out is through, and the only way through is together. "

Greg Bloom @greggish https://twitter.com/greggish/status/873177525903609857