[slurm-users] Interactive job with srun
Mahmood Naderan
mahmood.nt at gmail.com
Mon May 14 09:05:07 MDT 2018
OK, thanks to Robert, the issue has been solved in another thread, the
one about salloc.
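For reference, an interactive session via salloc looks something like
this (using the same partition and account names as below; the exact
command that worked is in that thread):

    salloc -p IACTIVE -A em1
    srun --pty bash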
Regards,
Mahmood
On Mon, May 14, 2018 at 7:08 PM, Mahmood Naderan <mahmood.nt at gmail.com> wrote:
> This seems odd: there is no active job, but the command is still
> waiting for resources. Why?
> Regards,
> Mahmood
>
>
>
>
> On Sun, May 13, 2018 at 4:25 PM, Mahmood Naderan <mahmood.nt at gmail.com> wrote:
>> Hi,
>> Although there is no active job in the queue, srun doesn't grant a node.
>>
>> [mahmood at rocks7 ~]$ date
>> Sun May 13 16:20:13 +0430 2018
>> [mahmood at rocks7 ~]$ srun -p IACTIVE -A em1 --pty bash
>> srun: job 194 queued and waiting for resources
>> ^Csrun: Job allocation 194 has been revoked
>> srun: Force Terminated job 194
>> [mahmood at rocks7 ~]$ squeue
>> JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
>> [mahmood at rocks7 ~]$ date
>> Sun May 13 16:20:26 +0430 2018
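>>
>> While the srun is still pending, the pending reason and the node
>> states can be checked from another terminal with the standard
>> commands, e.g.
>>
>> squeue -u mahmood -o "%i %T %r"
>> sinfo -p IACTIVE -N -l
>>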
>> [mahmood at rocks7 ~]$ scontrol show partition IACTIVE
>> PartitionName=IACTIVE
>> AllowGroups=ALL AllowAccounts=em1 AllowQos=ALL
>> AllocNodes=rocks7 Default=NO QoS=N/A
>> DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
>> MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=1 LLN=NO MaxCPUsPerNode=UNLIMITED
>> Nodes=compute-0-[4-6]
>> PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
>> OverTimeLimit=NONE PreemptMode=OFF
>> State=UP TotalCPUs=144 TotalNodes=3 SelectTypeParameters=NONE
>> DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
>> [mahmood at rocks7 ~]$ sacctmgr list association format=account,user,partition | grep mahmood
>> em1 mahmood iactive
>> em1 mahmood plan1
>> monthly mahmood plan2
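>>
>> The same association can also be listed with the QOS column included,
>> in case a QOS limit is holding the job back:
>>
>> sacctmgr show assoc where user=mahmood format=account,user,partition,qos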
>>
>>
>>
>> Any idea?
>>
>>
>>
>> Regards,
>> Mahmood