[slurm-users] Limiting the number of CPU

Henkel, Andreas henkel at uni-mainz.de
Thu Nov 14 15:09:53 UTC 2019


Hi again,

I’m pretty sure that’s not valid, since your "scontrol show job" output shows a MinMemoryNode much bigger than 1G.

Best
Andreas

On 14.11.2019, at 14:37, Nguyen Dai Quy <quy at vnoss.org> wrote:


On Thu, Nov 14, 2019 at 1:59 PM Sukman <sukman at pusat.itb.ac.id> wrote:
Hi Brian,

thank you for the suggestion.

It appears that my node was in a drain state.
After I rebooted the node, everything went back to normal.

However, the QOS limits are still not applied properly.
Do you have any opinion on this issue?


$ sacctmgr show qos where Name=normal_compute format=Name,Priority,MaxWal,MaxTRESPU
      Name   Priority     MaxWall     MaxTRESPU
---------- ---------- ----------- -------------
normal_co+         10    00:01:00  cpu=2,mem=1G
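For illustration, here is a minimal sketch (the helper names are mine, not a Slurm API) of how a per-user request is compared against the MaxTRESPU limit shown above (cpu=2, mem=1G):

```python
def parse_tres(spec):
    """Parse a TRES string like "cpu=2,mem=1G" into {"cpu": 2, "mem": <bytes>}."""
    units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
    out = {}
    for item in spec.split(","):
        key, val = item.split("=")
        if val[-1] in units:
            out[key] = int(val[:-1]) * units[val[-1]]
        else:
            out[key] = int(val)
    return out

def fits(request, limit):
    """True if every requested TRES stays within the per-user limit."""
    return all(request.get(k, 0) <= v for k, v in limit.items())

limit = parse_tres("cpu=2,mem=1G")
print(fits(parse_tres("cpu=1,mem=1M"), limit))   # True: within the QOS limit
print(fits(parse_tres("cpu=1,mem=64G"), limit))  # False: would pend as QOSMaxMemoryPerUser
```

So a job only pends with QOSMaxMemoryPerUser when the memory Slurm actually accounts for the job exceeds 1G, which is the crux of the discussion below.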


When I run the following script:

#!/bin/bash
#SBATCH --job-name=hostname
#sbatch --time=00:50   # NOTE: lowercase "#sbatch" is ignored by Slurm; directives must be "#SBATCH"
#sbatch --mem=1M       # NOTE: lowercase "#sbatch" is ignored by Slurm; directives must be "#SBATCH"
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --nodelist=cn110

srun hostname
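A quick way to see why the 1M memory request never took effect: Slurm's batch-script parsing is case-sensitive and only honors lines beginning with the literal "#SBATCH". The sketch below (my own toy filter, not Slurm's actual parser) treats the script the same way:

```python
# Toy model of case-sensitive directive parsing: only "#SBATCH" lines count;
# lowercase "#sbatch" lines are plain shell comments and are silently ignored.
script = """\
#!/bin/bash
#SBATCH --job-name=hostname
#sbatch --time=00:50
#sbatch --mem=1M
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --nodelist=cn110

srun hostname
"""

def sbatch_directives(text):
    """Return the option part of every line starting with the literal "#SBATCH"."""
    return [line[len("#SBATCH"):].strip()
            for line in text.splitlines()
            if line.startswith("#SBATCH")]

honored = sbatch_directives(script)
print(honored)
# --time and --mem are absent from the honored list: with no explicit --mem,
# the job falls back to the partition's default memory, which can exceed the
# 1G per-user QOS cap and trigger QOSMaxMemoryPerUser.
```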


It turns out that the job pends with reason QOSMaxMemoryPerUser:

$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                88      defq hostname   sukman PD       0:00      1 (QOSMaxMemoryPerUser)



Check QOS defined for user 'sukman':

sacctmgr show user sukman -s

Then try submitting a job with just "-n 4"?

