[slurm-users] slurm-users Digest, Vol 60, Issue 19
Hemanta Sahu
hemantaku.sahu at gmail.com
Tue Oct 18 14:09:39 UTC 2022
Hi Ole,
I confirm that the Slurm database has been configured and the
"AccountingStorageEnforce" parameters have been set:
>>
[admin2 at login01 ~]$ scontrol show config | grep AccountingStorageEnforce
AccountingStorageEnforce = associations,limits,qos,safe
>>
My question: if I have multiple users under one Slurm account and I
want to limit user xxx to a maximum of 1000 CPU core-minutes and user
yyy to a maximum of 2000 CPU core-minutes across all past, present and
future jobs, what would be the best way to achieve this?
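
For reference, here is a minimal sketch of the per-user association
limits I have in mind, following the same sacctmgr pattern I used for
the cpu=0 test (the account name "myaccount" and the users xxx and yyy
are placeholders):

# limit user xxx to 1000 CPU core-minutes in total for this association
sacctmgr modify user name=xxx account=myaccount set GrpTRESMins=cpu=1000

# limit user yyy to 2000 CPU core-minutes in total
sacctmgr modify user name=yyy account=myaccount set GrpTRESMins=cpu=2000

# verify the limits on the associations
sacctmgr show assoc where account=myaccount format=Account%15,User%15,GrpTRESMins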
Thanks
Hemanta
On Tue, Oct 18, 2022 at 5:31 PM <slurm-users-request at lists.schedmd.com>
wrote:
> Send slurm-users mailing list submissions to
> slurm-users at lists.schedmd.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.schedmd.com/cgi-bin/mailman/listinfo/slurm-users
> or, via email, send a message with subject or body 'help' to
> slurm-users-request at lists.schedmd.com
>
> You can reach the person managing the list at
> slurm-users-owner at lists.schedmd.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of slurm-users digest..."
>
>
> Today's Topics:
>
>    1. How to implement resource restriction for different slurm
>       users under same slurm account (Hemanta Sahu)
>    2. Re: How to implement resource restriction for different slurm
>       users under same slurm account (Ole Holm Nielsen)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 17 Oct 2022 20:21:59 +0530
> From: Hemanta Sahu <hemantaku.sahu at gmail.com>
> To: slurm-users at lists.schedmd.com
> Subject: [slurm-users] How to implement resource restriction for
>         different slurm users under same slurm account
> Message-ID: <CAH5HmweLsQ7uUkH=D=6XqbzrxLXsGDvPftSBt6T5snqbqQckXQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear All,
>
> I want to implement resource restrictions for different slurm users
> under the same slurm account by setting the "GrpTRESMins" limit. For
> testing purposes I set "GrpTRESMins=cpu=0" and submitted a job.
>
> I expected the job submission to fail, but it is not happening. The
> jobs still go to the queue and run.
>
> Kindly help me if I am missing anything obvious. Command outputs given
> below for reference.
>
>
> >>
> [testfac3 at login04 export_bin]$ sacctmgr modify user name=testuser100
> Account=testfac3_imf set GrpTRESMins=cpu=0
> Modified user associations...
> C = param-shakti A = testfac3_imf U = testuser100
> Would you like to commit changes? (You have 30 seconds to decide)
> (N/y): y
>
> [testuser100 at login04 ~]$ sacctmgr show assoc where Account=testfac3_imf user=testuser100 format=Account%15,User%15,GrpTRESMins,QOS%30
>         Account            User   GrpTRESMins                            QOS
> --------------- --------------- ------------- ------------------------------
>    testfac3_imf     testuser100         cpu=0                   testfac3_imf
>
> [testuser100 at login04 testuser100]$ sacctmgr show qos testfac3_imf format=Name%20,MaxWall,Flags%20,GrpTRESMins%20,MaxSubmitJobsPerUser,MaxSubmitJobsPeraccount,GrpTRESRunMin,Priority
>                 Name     MaxWall                Flags          GrpTRESMins MaxSubmitPU MaxSubmitPA GrpTRESRunMin   Priority
> -------------------- ----------- -------------------- -------------------- ----------- ----------- ------------- ----------
>         testfac3_imf  3-00:00:00  DenyOnLimit,NoDecay        cpu=210000000         100         500                    10000
> [testuser100 at login04 testuser100]$
>
> [testuser100 at login04 testuser100]$ scontrol show job 949622|grep JobState
> JobState=COMPLETED Reason=None Dependency=(null)
> [testuser100 at login04 testuser100]$
>
>
> [testuser100 at login04 testuser100]$ cat testjob.sh
> #!/bin/bash
> #SBATCH -J testjob # name of the job
> #SBATCH -p standard          # name of the partition: available options "standard,standard-low,gpu,gpu-low,hm"
> #SBATCH -n 2 # no of processes
> #SBATCH -q testfac3_imf
> #SBATCH -A testfac3_imf
> #SBATCH -t 01:00:00          # walltime in HH:MM:SS, max value 72:00:00
> #list of modules you want to use, for example
> module load compiler/intel-mpi/mpi-2020-v4 compiler/intel/2020.4.304
>
> #name of the executable
> exe="uname -n"
>
> #run the application
> mpirun -n $SLURM_NTASKS $exe
>
> [testuser100 at login04 testuser100]$ sbatch testjob.sh
> Submitted batch job 949622
>
> [testuser100 at login04 testuser100]$ squeue
>              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
>             949622  standard testjob- testuser  R       0:04      2 cn[304-305]
> >>
>
> Thanks in advance
>
> Best Regards
> Hemanta
>
> ------------------------------
>
> Message: 2
> Date: Tue, 18 Oct 2022 07:41:29 +0200
> From: Ole Holm Nielsen <Ole.H.Nielsen at fysik.dtu.dk>
> To: <slurm-users at lists.schedmd.com>
> Subject: Re: [slurm-users] How to implement resource restriction for
>         different slurm users under same slurm account
> Message-ID: <e780481e-9314-9a3e-790b-40c08e3f66bd at fysik.dtu.dk>
> Content-Type: text/plain; charset="UTF-8"; format=flowed
>
> On 10/17/22 16:51, Hemanta Sahu wrote:
> > I want to implement resource restrictions for different slurm users
> > under the same slurm account by setting the "GrpTRESMins" limit. For
> > testing purposes I set "GrpTRESMins=cpu=0" and submitted a job.
> >
> > I expected the job submission to fail, but it is not happening. The
> > jobs still go to the queue and run.
> >
> > Kindly help me if I am missing anything obvious. Command outputs given
> > below for reference.
>
> Job submission should not fail due to resource limits.
>
> Read the slurm.conf manual page to make sure you have set this parameter
> correctly, for example:
>
> $ scontrol show config | grep AccountingStorageEnforce
> AccountingStorageEnforce = associations,limits,qos,safe
>
> You should also read this documentation:
> https://slurm.schedmd.com/resource_limits.html
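>
> Once limits and enforcement are in place, a job that exceeds a
> GrpTRESMins cpu limit is normally left pending rather than rejected at
> submission. A quick way to check (a sketch; <jobid> is a placeholder):
>
> $ squeue -j <jobid> -o "%i %t %r"
>
> The reason column should then show an association limit, for example
> AssocGrpCPUMinutesLimit, instead of the job starting to run.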
>
> I assume that you have configured a Slurm database?
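>
> A quick sanity check (a sketch): "sacctmgr show cluster" should list
> your cluster if the slurmdbd accounting storage is reachable.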
>
> /Ole
>
>
>
> End of slurm-users Digest, Vol 60, Issue 19
> *******************************************
>