[slurm-users] only 1 job running
Chandler
admin at genome.arizona.edu
Thu Jan 28 06:18:20 UTC 2021
Made a little bit of progress by running sinfo:
PARTITION  AVAIL  TIMELIMIT  NODES  STATE  NODELIST
defq*      up     infinite       3  drain  n[011-013]
defq*      up     infinite       1  alloc  n010
Not sure why n[011-013] are in a drain state; that needs to be fixed.
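The drain reason should be recoverable before clearing the state; as far as I know both sinfo -R and scontrol show node report it, so something like the following should show what happened (just a sketch of what I would try):

  sinfo -R                                    # lists the Reason recorded for each down/drained node
  scontrol show node n011 | grep -i reason    # per-node detail, including Reason and State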
After some searching, I ran:
scontrol update nodename=n[011-013] state=idle
and now 1 additional job has started on each of n[011-013], so 4 jobs are running, but the rest are still queued; they should all be running. After some more searching, I guess resource sharing needs to be turned on? Can anyone help with doing that? I have also attached the slurm.conf.
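From what I have read, the likely cause is that with no SelectType set, Slurm defaults to select/linear, which allocates whole nodes, so only one job lands on each node; switching to consumable resources lets jobs share a node. A sketch of what I think needs adding to slurm.conf, assuming per-core scheduling is what we want (select/cons_tres needs a reasonably recent Slurm; older releases use select/cons_res):

  SelectType=select/cons_tres        # schedule individual cores rather than whole nodes
  SelectTypeParameters=CR_Core       # treat cores as the consumable resource

As far as I know a SelectType change needs slurmctld and the slurmd daemons restarted; scontrol reconfigure alone is not enough.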
Thanks
-------------- next part --------------
#
# See the slurm.conf man page for more information.
#
SlurmUser=slurm
#SlurmdUser=root
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
SlurmdSpoolDir=/cm/local/apps/slurm/var/spool
SwitchType=switch/none
MpiDefault=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
#ProctrackType=proctrack/pgid
ProctrackType=proctrack/cgroup
#PluginDir=
CacheGroups=0
#FirstJobId=
ReturnToService=2
#MaxJobCount=
#PlugStackConfig=
#PropagatePrioProcess=
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#SrunProlog=
#SrunEpilog=
#TaskProlog=
#TaskEpilog=
TaskPlugin=task/cgroup
#TrackWCKey=no
#TreeWidth=50
#TmpFs=
#UsePAM=
#
# TIMERS
SlurmctldTimeout=300
SlurmdTimeout=300
InactiveLimit=0
MinJobAge=300
KillWait=30
Waittime=0
#
# SCHEDULING
#SchedulerAuth=
#SchedulerPort=
#SchedulerRootFilter=
#PriorityType=priority/multifactor
#PriorityDecayHalfLife=14-0
#PriorityUsageResetPeriod=14-0
#PriorityWeightFairshare=100000
#PriorityWeightAge=1000
#PriorityWeightPartition=10000
#PriorityWeightJobSize=1000
#PriorityMaxAge=1-0
#
# LOGGING
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurmctld
SlurmdDebug=3
SlurmdLogFile=/var/log/slurmd
#JobCompType=jobcomp/filetxt
#JobCompLoc=/cm/local/apps/slurm/var/spool/job_comp.log
#
# ACCOUNTING
JobAcctGatherType=jobacct_gather/linux
#JobAcctGatherType=jobacct_gather/cgroup
#JobAcctGatherFrequency=30
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageUser=slurm
# AccountingStorageLoc=slurm_acct_db
# AccountingStoragePass=SLURMDBD_USERPASS
# This section of this file was automatically generated by cmd. Do not edit manually!
# BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE
# Server nodes
SlurmctldHost=EagI
AccountingStorageHost=master
# Nodes
NodeName=n[010-013] Procs=256 CoresPerSocket=64 Sockets=2 ThreadsPerCore=2
# Partitions
PartitionName=defq Default=YES MinNodes=1 DefaultTime=UNLIMITED MaxTime=UNLIMITED AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 OverSubscribe=NO PreemptMode=OFF AllowAccounts=ALL AllowQos=ALL Nodes=n[010-013]
ClusterName=slurm
# Statesave
StateSaveLocation=/cm/shared/apps/slurm/var/cm/statesave/slurm
# Generic resources types
GresTypes=gpu
# Epilog/Prolog section
PrologSlurmctld=/cm/local/apps/cmd/scripts/prolog-prejob
Prolog=/cm/local/apps/cmd/scripts/prolog
Epilog=/cm/local/apps/cmd/scripts/epilog
FastSchedule=0
# Power saving section (disabled)
# END AUTOGENERATED SECTION -- DO NOT REMOVE