Using more cores/CPUs than requested with sbatch
Hello, I would like to know if there is any mechanism to prevent a user from "cheating" when submitting a job. For example, a user submits an OpenMP job requesting 1 node (-N 1) and 2 tasks (-n 2). However, inside his script (or compiled into his binary), the user uses more than 2 tasks, for example, all the free cores. As far as SLURM knows, the job is only using 2, but in reality the node is 100% occupied by that job. Can SLURM control that "abuse" in some way? Cgroups? Thanks.
Hello Gestió, Yes, Slurm can restrict the resources that are available to the job using cgroups. I accidentally sent my first reply as a separate email on this mailing list, which you can find here: https://lists.schedmd.com/mailman3/hyperkitty/list/slurm-users@lists.schedmd... Sorry about that, --Megan
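For reference, a minimal sketch of the settings involved, assuming a standard cgroup setup (parameter names are from the stock slurm.conf and cgroup.conf; adapt paths and values to your cluster):

```ini
# slurm.conf -- enable cgroup-based tracking and task confinement
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup

# cgroup.conf -- what the cgroup plugin actually enforces
ConstrainCores=yes      # pin job tasks to the allocated cores only
ConstrainRAMSpace=yes   # enforce the requested memory limit
ConstrainDevices=yes    # restrict access to GRES devices (e.g. GPUs)
```

With ConstrainCores=yes, an OpenMP binary that spawns more threads than requested can still only be scheduled on the cores Slurm allocated, so the extra threads just time-share those cores instead of taking over the node.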
Also, in the cgroup.conf file, you can add constraints on memory, devices (like GPUs), etc. Best, Feng

On Tue, Mar 25, 2025 at 3:20 AM megan4slurm--- via slurm-users <slurm-users@lists.schedmd.com> wrote:
> Hello Gestió,
> Yes, Slurm can restrict the resources that are available to the job using cgroups. I accidentally sent my first reply as a separate email on this mailing list, which you can find here:
> https://lists.schedmd.com/mailman3/hyperkitty/list/slurm-users@lists.schedmd...
> Sorry about that, --Megan
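A quick way to confirm the confinement actually works is to inspect the CPU affinity from inside a job step (e.g. run this under srun inside an allocation); the check itself is generic Linux, not Slurm-specific:

```shell
# Show the CPUs this process is actually allowed to run on.
# Under task/cgroup with ConstrainCores=yes, a 2-task job should
# list only the cores Slurm allocated, not the whole node.
grep Cpus_allowed_list /proc/self/status

# nproc also respects the affinity mask, so it reports the
# allocated core count rather than the node's total.
nproc
```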
participants (3):
- Feng Zhang
- Gestió Servidors
- megan4slurm@gmail.com