[slurm-users] Automatically cancel jobs not utilizing their GPUs

Steven Dick kg4ydw at gmail.com
Fri Jul 3 23:58:58 UTC 2020


I have collectd running on my GPU nodes with the collectd_nvidianvml
plugin from pip.
I have a collectd frontend that displays that data along with Slurm
data for the whole cluster for users to see.
Some of my users watch it carefully and tune their jobs to maximize
utilization.
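
If anyone wants to see the kind of numbers that plugin reports without
setting up collectd first, they come straight from NVML.  A rough,
untested sketch in Python (pynvml from pip; the poll interval and
output format are just placeholders, this is not the actual plugin
code):

import time
import pynvml

pynvml.nvmlInit()
try:
    ngpus = pynvml.nvmlDeviceGetCount()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(ngpus)]
    while True:
        for i, h in enumerate(handles):
            # .gpu and .memory are percentages over the last sample period
            util = pynvml.nvmlDeviceGetUtilizationRates(h)
            mem = pynvml.nvmlDeviceGetMemoryInfo(h)
            print("gpu%d: util=%d%% mem=%dMiB" % (i, util.gpu, mem.used // 2**20))
        time.sleep(30)  # collectd uses its own configured interval instead
finally:
    pynvml.nvmlShutdown()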

When I spot jobs that are either not using their GPUs effectively or
don't have them open at all, I email the users.
Most are appreciative, as they didn't know their job wasn't working
correctly.  Unapologetic repeat offenders find their jobs converted to
preemptible jobs with a job_submit plugin and a change of QOS.
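
The fiddly part is tying a GPU back to a job and user so you know who
to email.  Roughly, you can ask NVML which PIDs have the device open
and read the job id out of /proc/<pid>/cgroup -- that assumes cgroup
task confinement and a path containing job_<id>, so treat the regex as
site-specific:

import re
import pynvml

def job_of_pid(pid):
    # Returns the Slurm job id owning this PID, or None if it is not in
    # a job cgroup.  Assumes the cgroup path contains "job_<id>".
    try:
        with open("/proc/%d/cgroup" % pid) as f:
            m = re.search(r"job_(\d+)", f.read())
        return int(m.group(1)) if m else None
    except OSError:
        return None

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(h):
            print("gpu%d: pid=%d job=%s" % (i, proc.pid, job_of_pid(proc.pid)))
finally:
    pynvml.nvmlShutdown()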

I considered writing something to kill jobs outright when they didn't
use the GPU resources they requested, but through the above approach
I've found it unnecessary.
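
For anyone who does want to automate it, the loop Stephan describes
below might look something like this untested sketch.  The threshold,
grace period and poll interval are placeholders, and note that a job
holding a GPU it never opened leaves no PID to trace, so that case
would still need a GRES-to-job lookup (e.g. via scontrol), which is
left out here:

import re
import subprocess
import time
import pynvml

THRESHOLD = 5      # mean utilization (%) below which a GPU counts as idle
GRACE = 30 * 60    # seconds a GPU may stay idle before its job is cancelled
INTERVAL = 60      # poll interval in seconds

def job_of_pid(pid):
    # Same cgroup-based lookup as in the previous sketch.
    try:
        with open("/proc/%d/cgroup" % pid) as f:
            m = re.search(r"job_(\d+)", f.read())
        return int(m.group(1)) if m else None
    except OSError:
        return None

pynvml.nvmlInit()
means = {}        # gpu index -> smoothed utilization
idle_since = {}   # gpu index -> time it first dropped below THRESHOLD

while True:
    now = time.time()
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(h).gpu
        means[i] = 0.9 * means.get(i, util) + 0.1 * util
        if means[i] >= THRESHOLD:
            idle_since.pop(i, None)
            continue
        idle_since.setdefault(i, now)
        if now - idle_since[i] < GRACE:
            continue
        # Idle past the grace period: cancel whichever jobs have the GPU open.
        jobs = {job_of_pid(p.pid)
                for p in pynvml.nvmlDeviceGetComputeRunningProcesses(h)}
        for job in filter(None, jobs):
            subprocess.run(["scancel", str(job)], check=False)
        idle_since.pop(i, None)
    time.sleep(INTERVAL)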

On Thu, Jul 2, 2020 at 3:00 AM Stephan Roth <stephan.roth at ee.ethz.ch> wrote:
>
> Hi all,
>
> Does anyone have ideas or suggestions on how to automatically cancel
> jobs which don't utilize the GPUs allocated to them?
>
> The Slurm version in use is 19.05.
>
> I'm thinking about collecting GPU utilization per process on all nodes
> with NVML/nvidia-smi, update a mean value of the collected utilization
> per GPU and cancel a job if the mean value is below a to-be-defined
> threshold after a to-be-defined number of minutes.
>
> Thank you for any input,
>
> Cheers,
> Stephan
>


