[slurm-users] Problem with assigning user to partition

Loris Bennett loris.bennett at fu-berlin.de
Thu Apr 19 23:53:31 MDT 2018


Hi Mahmood,

Mahmood Naderan <mahmood.nt at gmail.com> writes:

> Hi,
> I have assigned a user/group to a partition, and I have also set
> --partition correctly in the sbatch script. However, the job remains
> pending with the reason AccountNotAllowed. Any idea about that?
>
>
>
> [mahmood at rocks7 g]$ scontrol show partitions
> ....
> PartitionName=MONTHLY1
>    AllowGroups=mahmood AllowAccounts=mahmood AllowQos=ALL
>    AllocNodes=rocks7 Default=NO QoS=N/A
>    DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
>    MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED
>    Nodes=compute-0-0
>    PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
>    OverTimeLimit=NONE PreemptMode=OFF
>    State=UP TotalCPUs=32 TotalNodes=1 SelectTypeParameters=NONE
>    DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
>
> [mahmood at rocks7 g]$ groups
> mahmood google-otp
> [mahmood at rocks7 g]$ sbatch slurm_script.sh
> Submitted batch job 71
> [mahmood at rocks7 g]$ squeue
>              JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
>                 71  MONTHLY1      g-8  mahmood PD       0:00      1 (AccountNotAllowed)
> [mahmood at rocks7 g]$ cat slurm_script.sh
> #!/bin/bash
> #SBATCH --output=test.out
> #SBATCH --job-name=g-8
> #SBATCH --ntasks=8
> #SBATCH --mem=8GB
> #SBATCH --time=99:00:00
> #SBATCH --partition=MONTHLY1
> g09 test.gjf

I think you are confusing Unix groups with the groups used by Slurm
accounting.  The problem is that the Slurm documentation often uses
'group' to refer to 'association', i.e. the combination of user and
account.  In your partition, AllowGroups=mahmood is checked against
Unix groups (which your 'groups' output satisfies), but
AllowAccounts=mahmood requires a Slurm *account* named 'mahmood',
which only exists once it has been created in the Slurm database and
your user has been associated with it.
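You can check which associations currently exist for your user with
sacctmgr (this is standard sacctmgr usage; the cluster and account
names in the output depend on your local setup):

  $ sacctmgr show associations user=mahmood format=cluster,account,user

If no account called 'mahmood' is listed there, that would explain the
AccountNotAllowed reason.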

You probably need to read

  https://slurm.schedmd.com/accounting.html

and then set up accounts and associations as required.
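As a rough sketch, assuming you simply want an account named 'mahmood'
containing only your own user (the names here just mirror your
partition settings, and the Description/Organization values are
placeholders), something like this should work:

  # create the account in the Slurm database
  $ sacctmgr add account mahmood Description="mahmood" Organization=yourorg
  # associate the user 'mahmood' with the account 'mahmood'
  $ sacctmgr add user mahmood Account=mahmood

You may then also need to submit the job under that account explicitly,
e.g. by adding

  #SBATCH --account=mahmood

to slurm_script.sh (if it is your user's default account, Slurm should
pick it up automatically).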

Regards

Loris

-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de


