[slurm-users] Prevent users from updating their jobs

Fulcomer, Samuel samuel_fulcomer at brown.edu
Thu Dec 16 21:04:29 UTC 2021


There's no clear answer to this. It depends a bit on how you've segregated
your resources.

In our environment, GPU and bigmem nodes are in their own partitions.
There's nothing to prevent a user from specifying a list of potential
partitions in the job submission, so there would be no need for them to do
a post-submission "scontrol update jobid" to push a job into a partition
that violated the spirit of the service.
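For example, something along these lines already works at submit time (the
partition names and script name are just placeholders), and the job starts in
whichever listed partition frees up first:

    sbatch --partition=batch,bigmem --mem=4G -t 1:00:00 job.sh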

Our practice has been to periodically look at running jobs to see whether they
are using (or, in the case of bigmem, have used) less than their requested
resources, and to send the owners a nastygram telling them to stop doing that.
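As a rough illustration, that spot check can be as simple as the following
(the user name, start date, and job ID are placeholders; the fields are
standard sacct/sstat output columns):

    # Recent jobs: requested memory/CPUs vs. what the steps actually used
    sacct -u user01 -S 2021-12-01 --units=G \
        -o JobID,Partition,ReqMem,ReqCPUS,MaxRSS,TotalCPU,Elapsed,State

    # A job that is still running
    sstat --allsteps -j 791 -o JobID,MaxRSS,AveCPU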

Creating a Lua job_submit filter that, e.g., blocks jobs in the gpu partition
that don't request GPUs only helps to weed out the naive users. A subversive
user could request a GPU and then only use the allocated cores and memory.
There's no way to deal with that other than monitoring running jobs and
sending nastygrams, with removal of access after repeated offenses.
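For reference, that kind of filter is only a few lines of job_submit.lua. This
is just a sketch, not something to drop in as-is: the partition name, the
fields checked, and the error message are assumptions that depend on your
Slurm version and layout. The slurm_job_modify hook in the same plugin is also
the closest thing I know of to intercepting "scontrol update job" itself.

    -- job_submit.lua (sketch): reject jobs aimed at a "gpu" partition
    -- that don't request any GPUs.
    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- job_desc.partition may be nil (default partition) or a
        -- comma-separated list of partitions.
        if job_desc.partition ~= nil and string.find(job_desc.partition, "gpu") then
            -- Newer Slurm exposes GPU requests via tres_per_node/tres_per_job,
            -- older releases via gres; check whichever is populated.
            local gres = job_desc.tres_per_node or job_desc.tres_per_job or job_desc.gres
            if gres == nil or string.find(gres, "gpu") == nil then
                slurm.log_user("gpu partition jobs must request a GPU, e.g. --gres=gpu:1")
                return slurm.ERROR
            end
        end
        return slurm.SUCCESS
    end

    -- The same test could be repeated here to catch post-submission edits.
    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end

It gets loaded with JobSubmitPlugins=lua in slurm.conf, but again, it only
catches the naive case described above.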

On Thu, Dec 16, 2021 at 3:36 PM Jordi Blasco <jbllistes at gmail.com> wrote:

> Hi everyone,
>
> I was wondering if there is a way to prevent users from updating their
> jobs with "scontrol update job".
>
> Here is the justification.
>
> A hypothetical user submits a job requesting a regular node, but then
> realises that the large-memory or GPU nodes are idle. Using that command,
> the user can move the job onto one of those resources to avoid waiting,
> without any real need for them.
>
> Any suggestions to prevent that?
>
> Cheers,
>
> Jordi
>
> sbatch --mem=1G -t 0:10:00 --wrap="srun -n 1 sleep 360"
> scontrol update job 791 Features=smp
>
> [user01@slurm-simulator ~]$ sacct -j 791 -o "jobid,nodelist,user"
>        JobID        NodeList      User
> ------------ --------------- ---------
> 791                    smp-1    user01
>