<div dir="ltr">Thank you for your help, Sam! The rest of the slurm.conf, excluding the node and partition configuration from the earlier email is below. I've also included scontrol output for a 1 GPU job that runs successfully on node01.<div><br></div><div>Best,</div><div>Andrey<br><div><br></div><div><p class="MsoNormal"><b>Slurm.conf</b></p><p class="MsoNormal">#<u></u><u></u></p><p class="MsoNormal"># See the slurm.conf man page for more information.<u></u><u></u></p><p class="MsoNormal">#</p><p class="MsoNormal">SlurmUser=slurm<u></u><u></u></p><p class="MsoNormal">#SlurmdUser=root<u></u><u></u></p><p class="MsoNormal">SlurmctldPort=6817<u></u><u></u></p><p class="MsoNormal">SlurmdPort=6818<u></u><u></u></p><p class="MsoNormal">AuthType=auth/munge<u></u><u></u></p><p class="MsoNormal">#JobCredentialPrivateKey=<u></u><u></u></p><p class="MsoNormal">#JobCredentialPublicCertificate=<u></u><u></u></p><p class="MsoNormal">SlurmdSpoolDir=/cm/local/apps/slurm/var/spool<u></u><u></u></p><p class="MsoNormal">SwitchType=switch/none<u></u><u></u></p><p class="MsoNormal">MpiDefault=none<u></u><u></u></p><p class="MsoNormal">SlurmctldPidFile=/var/run/slurmctld.pid<u></u><u></u></p><p class="MsoNormal">SlurmdPidFile=/var/run/slurmd.pid<u></u><u></u></p><p class="MsoNormal">#ProctrackType=proctrack/pgid<u></u><u></u></p><p class="MsoNormal">ProctrackType=proctrack/cgroup<u></u><u></u></p><p class="MsoNormal">#PluginDir=<u></u><u></u></p><p class="MsoNormal">CacheGroups=0<u></u><u></u></p><p class="MsoNormal">#FirstJobId=<u></u><u></u></p><p class="MsoNormal">ReturnToService=2<u></u><u></u></p><p class="MsoNormal">#MaxJobCount=<u></u><u></u></p><p class="MsoNormal">#PlugStackConfig=<u></u><u></u></p><p class="MsoNormal">#PropagatePrioProcess=<u></u><u></u></p><p class="MsoNormal">#PropagateResourceLimits=<u></u><u></u></p><p class="MsoNormal">#PropagateResourceLimitsExcept=<u></u><u></u></p><p class="MsoNormal">#SrunProlog=<u></u><u></u></p><p class="MsoNormal">#SrunEpilog=<u></u><u></u></p><p class="MsoNormal">#TaskProlog=<u></u><u></u></p><p class="MsoNormal">#TaskEpilog=<u></u><u></u></p><p class="MsoNormal">TaskPlugin=task/cgroup<u></u><u></u></p><p class="MsoNormal">#TrackWCKey=no<u></u><u></u></p><p class="MsoNormal">#TreeWidth=50<u></u><u></u></p><p class="MsoNormal">#TmpFs=<u></u><u></u></p><p class="MsoNormal">#UsePAM=<u></u><u></u></p><p class="MsoNormal">#<u></u><u></u></p><p class="MsoNormal"># TIMERS<u></u><u></u></p><p class="MsoNormal">SlurmctldTimeout=300<u></u><u></u></p><p class="MsoNormal">SlurmdTimeout=300<u></u><u></u></p><p class="MsoNormal">InactiveLimit=0<u></u><u></u></p><p class="MsoNormal">MinJobAge=300<u></u><u></u></p><p class="MsoNormal">KillWait=30<u></u><u></u></p><p class="MsoNormal">Waittime=0<u></u><u></u></p><p class="MsoNormal">#<u></u><u></u></p><p class="MsoNormal"># SCHEDULING<u></u><u></u></p><p class="MsoNormal">#SchedulerAuth=<u></u><u></u></p><p class="MsoNormal">#SchedulerPort=<u></u><u></u></p><p class="MsoNormal">#SchedulerRootFilter=<u></u><u></u></p><p class="MsoNormal">#PriorityType=priority/multifactor<u></u><u></u></p><p class="MsoNormal">#PriorityDecayHalfLife=14-0<u></u><u></u></p><p class="MsoNormal">#PriorityUsageResetPeriod=14-0<u></u><u></u></p><p class="MsoNormal">#PriorityWeightFairshare=100000<u></u><u></u></p><p class="MsoNormal">#PriorityWeightAge=1000<u></u><u></u></p><p class="MsoNormal">#PriorityWeightPartition=10000<u></u><u></u></p><p class="MsoNormal">#PriorityWeightJobSize=1000<u></u><u></u></p><p class="MsoNormal">#PriorityMaxAge=1-0<u></u><u></u></p><p class="MsoNormal">#<u></u><u></u></p><p class="MsoNormal"># LOGGING<u></u><u></u></p><p class="MsoNormal">SlurmctldDebug=3<u></u><u></u></p><p class="MsoNormal">SlurmctldLogFile=/var/log/slurmctld<u></u><u></u></p><p class="MsoNormal">SlurmdDebug=3<u></u><u></u></p><p class="MsoNormal">SlurmdLogFile=/var/log/slurmd<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">#JobCompType=jobcomp/filetxt<u></u><u></u></p><p class="MsoNormal">#JobCompLoc=/cm/local/apps/slurm/var/spool/job_comp.log<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">#<u></u><u></u></p><p class="MsoNormal"># ACCOUNTING<u></u><u></u></p><p class="MsoNormal">JobAcctGatherType=jobacct_gather/linux<u></u><u></u></p><p class="MsoNormal">#JobAcctGatherType=jobacct_gather/cgroup<u></u><u></u></p><p class="MsoNormal">#JobAcctGatherFrequency=30<u></u><u></u></p><p class="MsoNormal">AccountingStorageType=accounting_storage/slurmdbd<u></u><u></u></p><p class="MsoNormal">AccountingStorageUser=slurm<u></u><u></u></p><p class="MsoNormal"># AccountingStorageLoc=slurm_acct_db<u></u><u></u></p><p class="MsoNormal"># AccountingStoragePass=SLURMDBD_USERPASS<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"># Scheduler<u></u><u></u></p><p class="MsoNormal">SchedulerType=sched/backfill<u></u><u></u></p><p class="MsoNormal"># Statesave<u></u><u></u></p><p class="MsoNormal">StateSaveLocation=/cm/shared/apps/slurm/var/cm/statesave/slurm<u></u><u></u></p><p class="MsoNormal"># Generic resources types<u></u><u></u></p><p class="MsoNormal">GresTypes=gpu<u></u><u></u></p><p class="MsoNormal"># Epilog/Prolog section<u></u><u></u></p><p class="MsoNormal">PrologSlurmctld=/cm/local/apps/cmd/scripts/prolog-prejob<u></u><u></u></p><p class="MsoNormal">Prolog=/cm/local/apps/cmd/scripts/prolog<u></u><u></u></p><p class="MsoNormal">Epilog=/cm/local/apps/cmd/scripts/epilog<u></u><u></u></p><p class="MsoNormal"># Power saving section (disabled)<u></u><u></u></p><p class="MsoNormal"># GPU related plugins<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">#SelectType=select/cons_tres<u></u><u></u></p><p class="MsoNormal">#SelectTypeParameters=CR_Core<u></u><u></u></p><p class="MsoNormal">#AccountingStorageTRES=gres/gpu<u></u><u></u></p><p class="MsoNormal"># END AUTOGENERATED SECTION -- DO NOT REMOVE<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><b>Scontrol for working 1GPU job on node01<u></u><u></u></b></p><p class="MsoNormal">JobId=285 JobName=cryosparc_P2_J232<u></u><u></u></p><p class="MsoNormal"> UserId=cryosparc(1003) GroupId=cryosparc(1003) MCS_label=N/A<u></u><u></u></p><p class="MsoNormal"> Priority=4294901570 Nice=0 Account=(null) QOS=normal<u></u><u></u></p><p class="MsoNormal"> JobState=RUNNING Reason=None Dependency=(null)<u></u><u></u></p><p class="MsoNormal"> Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0<u></u><u></u></p><p class="MsoNormal"> RunTime=00:00:51 TimeLimit=UNLIMITED TimeMin=N/A<u></u><u></u></p><p class="MsoNormal"> <span lang="FR">SubmitTime=2021-08-21T00:05:30 EligibleTime=2021-08-21T00:05:30<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"> AccrueTime=2021-08-21T00:05:30<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"> </span>StartTime=2021-08-21T00:05:30 EndTime=Unknown Deadline=N/A<u></u><u></u></p><p class="MsoNormal"> SuspendTime=None SecsPreSuspend=0 LastSchedEval=2021-08-21T00:05:30<u></u><u></u></p><p class="MsoNormal"> Partition=CSLive AllocNode:Sid=headnode:108964<u></u><u></u></p><p class="MsoNormal"> ReqNodeList=(null) ExcNodeList=(null)<u></u><u></u></p><p class="MsoNormal"> NodeList=node01<u></u><u></u></p><p class="MsoNormal"> BatchHost=node01<u></u><u></u></p><p class="MsoNormal"> NumNodes=1 NumCPUs=64 NumTasks=2 CPUs/Task=1 ReqB:S:C:T=0:0:*:*<u></u><u></u></p><p class="MsoNormal"> TRES=cpu=64,node=1,billing=64<u></u><u></u></p><p class="MsoNormal"> Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*<u></u><u></u></p><p class="MsoNormal"> MinCPUsNode=1 MinMemoryNode=24000M MinTmpDiskNode=0<u></u><u></u></p><p class="MsoNormal"> Features=(null) DelayBoot=00:00:00<u></u><u></u></p><p class="MsoNormal"> OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)<u></u><u></u></p><p class="MsoNormal"> Command=/data/backups/takeda2/data/cryosparc_projects/P8/J232/queue_sub_script.sh<u></u><u></u></p><p class="MsoNormal"> WorkDir=/ssd/CryoSparc/cryosparc_master<u></u><u></u></p><p class="MsoNormal"> StdErr=/data/backups/takeda2/data/cryosparc_projects/P8/J232/job.log<u></u><u></u></p><p class="MsoNormal"> StdIn=/dev/null<u></u><u></u></p><p class="MsoNormal"> StdOut=/data/backups/takeda2/data/cryosparc_projects/P8/J232/job.log<u></u><u></u></p><p class="MsoNormal"> Power=<u></u><u></u></p><p class="MsoNormal"> TresPerNode=gpu:1<u></u><u></u></p><p class="MsoNormal"> MailUser=cryosparc MailType=NONE<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><b>Cgroup</b><u></u><u></u></p><p class="MsoNormal"># This section of this file was automatically generated by cmd. Do not edit manually!<u></u><u></u></p><p class="MsoNormal"># BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE<u></u><u></u></p><p class="MsoNormal">CgroupMountpoint="/sys/fs/cgroup"<u></u><u></u></p><p class="MsoNormal">CgroupAutomount=no<u></u><u></u></p><p class="MsoNormal"><span lang="PT-BR">TaskAffinity=no<u></u><u></u></span></p><p class="MsoNormal"><span lang="PT-BR">ConstrainCores=no<u></u><u></u></span></p><p class="MsoNormal"><span lang="PT-BR">ConstrainRAMSpace=no<u></u><u></u></span></p><p class="MsoNormal"><span lang="PT-BR">ConstrainSwapSpace=no<u></u><u></u></span></p><p class="MsoNormal"><span lang="PT-BR">ConstrainDevices=no<u></u><u></u></span></p><p class="MsoNormal">ConstrainKmemSpace=yes<u></u><u></u></p><p class="MsoNormal">AllowedRamSpace=100.00<u></u><u></u></p><p class="MsoNormal">AllowedSwapSpace=0.00<u></u><u></u></p><p class="MsoNormal">MinKmemSpace=30<u></u><u></u></p><p class="MsoNormal">MaxKmemPercent=100.00<u></u><u></u></p><p class="MsoNormal">MaxRAMPercent=100.00<u></u><u></u></p><p class="MsoNormal">MaxSwapPercent=100.00<u></u><u></u></p><p class="MsoNormal">MinRAMSpace=30</p></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Aug 20, 2021 at 3:12 PM Fulcomer, Samuel <<a href="mailto:samuel_fulcomer@brown.edu" target="_blank">samuel_fulcomer@brown.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">...and I'm not sure what "AutoDetect=NVML" is supposed to do in the gres.conf file. We've always used "nvidia-smi topo -m" to confirm that we've got a single-root or dual-root node and have entered the correct info in gres.conf to map connections to the CPU sockets...., e.g.:</div><div dir="ltr"><div><br></div># 8-gpu A6000 nodes - dual-root<br>NodeName=gpu[1504-1506] Name=gpu Type=a6000 File=/dev/nvidia[0-3] CPUs=0-23<br>NodeName=gpu[1504-1506] Name=gpu Type=a6000 File=/dev/nvidia[4-7] CPUs=24-47<div></div><br></div><div dir="ltr"><br></div><div dir="ltr"><br></div><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Aug 20, 2021 at 6:01 PM Fulcomer, Samuel <<a href="mailto:samuel_fulcomer@brown.edu" target="_blank">samuel_fulcomer@brown.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Well... you've got lots of weirdness, as the scontrol show job command isn't listing any GPU TRES requests, and the scontrol show node command isn't listing any configured GPU TRES resources.<div><br></div><div>If you send me your entire slurm.conf I'll have a quick look-over.</div><div><br></div><div>You also should be using cgroup.conf to fence off the GPU devices so that a job only sees the GPUs that it's been allocated. The lines in the batch file to figure it out aren't necessary. I forgot to ask you about cgroup.conf.</div><div><br></div><div>regards,</div><div>Sam</div><div><br></div></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Aug 20, 2021 at 5:46 PM Andrey Malyutin <<a href="mailto:malyutinag@gmail.com" target="_blank">malyutinag@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="auto"><div>Thank you Samuel,</div><div dir="auto"><br></div><div dir="auto">Slurm version is 20.02.6. I'm not entirely sure about the platform, RTX6000 nodes are about 2 years old, and 3090 node is very recent. Technically we have 4 nodes (hence references to node04 in info below), but one of the nodes is down and out of the system at the moment. As you see, the job really wants to run on the downed node instead of going to node02 or node03. </div><div dir="auto"><br></div><div>Thank you again,</div><div>Andrey</div><div dir="auto"><p class="MsoNormal"> <u></u></p><p class="MsoNormal"><b>scontrol info:</b></p><p class="MsoNormal">JobId=283 JobName=cryosparc_P2_J214<u></u><u></u></p><p class="MsoNormal"> UserId=cryosparc(1003) GroupId=cryosparc(1003) MCS_label=N/A<u></u><u></u></p><p class="MsoNormal"> Priority=4294901572 Nice=0 Account=(null) QOS=normal<u></u><u></u></p><p class="MsoNormal"> JobState=PENDING Reason=ReqNodeNotAvail,_UnavailableNodes:node04 Dependency=(null)<u></u><u></u></p><p class="MsoNormal"> Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0<u></u><u></u></p><p class="MsoNormal"> RunTime=00:00:00 TimeLimit=UNLIMITED TimeMin=N/A<u></u><u></u></p><p class="MsoNormal"> SubmitTime=2021-08-20T20:55:00 EligibleTime=2021-08-20T20:55:00<u></u><u></u></p><p class="MsoNormal"> AccrueTime=2021-08-20T20:55:00<u></u><u></u></p><p class="MsoNormal"> StartTime=Unknown EndTime=Unknown Deadline=N/A<u></u><u></u></p><p class="MsoNormal"> SuspendTime=None SecsPreSuspend=0 LastSchedEval=2021-08-20T23:36:14<u></u><u></u></p><p class="MsoNormal"> Partition=CSCluster AllocNode:Sid=headnode:108964<u></u><u></u></p><p class="MsoNormal"> ReqNodeList=(null) ExcNodeList=(null)<u></u><u></u></p><p class="MsoNormal"> NodeList=(null)<u></u><u></u></p><p class="MsoNormal"> NumNodes=1 NumCPUs=4 NumTasks=4 CPUs/Task=1 ReqB:S:C:T=0:0:*:*<u></u><u></u></p><p class="MsoNormal"> TRES=cpu=4,mem=24000M,node=1,billing=4<u></u><u></u></p><p class="MsoNormal"> Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*<u></u><u></u></p><p class="MsoNormal"> MinCPUsNode=1 MinMemoryNode=24000M MinTmpDiskNode=0<u></u><u></u></p><p class="MsoNormal"> Features=(null) DelayBoot=00:00:00<u></u><u></u></p><p class="MsoNormal"> OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)<u></u><u></u></p><p class="MsoNormal"> Command=/data/backups/takeda2/data/cryosparc_projects/P8/J214/queue_sub_script.sh<u></u><u></u></p><p class="MsoNormal"> WorkDir=/ssd/CryoSparc/cryosparc_master<u></u><u></u></p><p class="MsoNormal"> StdErr=/data/backups/takeda2/data/cryosparc_projects/P8/J214/job.log<u></u><u></u></p><p class="MsoNormal"> StdIn=/dev/null<u></u><u></u></p><p class="MsoNormal"> StdOut=/data/backups/takeda2/data/cryosparc_projects/P8/J214/job.log<u></u><u></u></p><p class="MsoNormal"> Power=<u></u><u></u></p><p class="MsoNormal"> TresPerNode=gpu:1<u></u><u></u></p><p class="MsoNormal"> MailUser=cryosparc MailType=NONE</p><p class="MsoNormal"><br></p><p class="MsoNormal"><b>Script:</b><u></u><u></u></p><p class="MsoNormal">#SBATCH --job-name cryosparc_P2_J214<u></u><u></u></p><p class="MsoNormal">#SBATCH -n 4<u></u><u></u></p><p class="MsoNormal">#SBATCH --gres=gpu:1<u></u><u></u></p><p class="MsoNormal">#SBATCH -p CSCluster<u></u><u></u></p><p class="MsoNormal">#SBATCH --mem=24000MB <u></u><u></u></p><p class="MsoNormal">#SBATCH --output=/data/backups/takeda2/data/cryosparc_projects/P8/J214/job.log<u></u><u></u></p><p class="MsoNormal">#SBATCH --error=/data/backups/takeda2/data/cryosparc_projects/P8/J214/job.log<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal">available_devs=""<u></u><u></u></p><p class="MsoNormal">for devidx in $(seq 0 15);<u></u><u></u></p><p class="MsoNormal">do<u></u><u></u></p><p class="MsoNormal"> if [[ -z $(nvidia-smi -i $devidx --query-compute-apps=pid --format=csv,noheader) ]] ; then<u></u><u></u></p><p class="MsoNormal"> if [[ -z "$available_devs" ]] ; then<u></u><u></u></p><p class="MsoNormal"> available_devs=$devidx<u></u><u></u></p><p class="MsoNormal"> else<u></u><u></u></p><p class="MsoNormal"> <span lang="FR">available_devs=$available_devs,$devidx<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"> fi<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"> fi<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR">done<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR">export CUDA_VISIBLE_DEVICES=$available_devs<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"><u></u> <u></u></span></p><p class="MsoNormal"><span lang="FR">/ssd/CryoSparc/cryosparc_worker/bin/cryosparcw run --project P2 --job J214 --master_hostname headnode.cm.cluster --master_command_core_port 39002 > /data/backups/takeda2/data/cryosparc_projects/P8/J214/job.log 2>&1<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"><u></u> <u></u></span></p><p class="MsoNormal"><span lang="FR"><u></u> <u></u></span></p><p class="MsoNormal"><span lang="FR"><u></u> <u></u></span></p><p class="MsoNormal"><b>Slurm.conf<u></u><u></u></b></p><p class="MsoNormal"># This section of this file was automatically generated by cmd. Do not edit manually!<u></u><u></u></p><p class="MsoNormal"># BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE<u></u><u></u></p><p class="MsoNormal"># Server nodes<u></u><u></u></p><p class="MsoNormal">SlurmctldHost=headnode<u></u><u></u></p><p class="MsoNormal">AccountingStorageHost=master<u></u><u></u></p><p class="MsoNormal">#############################################################################################<u></u><u></u></p><p class="MsoNormal">#GPU Nodes<u></u><u></u></p><p class="MsoNormal">#############################################################################################<u></u><u></u></p><p class="MsoNormal">NodeName=node[02-04] Procs=64 CoresPerSocket=16 RealMemory=257024 Sockets=2 ThreadsPerCore=2 Feature=RTX6000 Gres=gpu:4<u></u><u></u></p><p class="MsoNormal">NodeName=node01 Procs=64 CoresPerSocket=16 RealMemory=386048 Sockets=2 ThreadsPerCore=2 Feature=RTX3090 Gres=gpu:4<u></u><u></u></p><p class="MsoNormal"><span lang="FR">#NodeName=node[05-08] Procs=8 Gres=gpu:4<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR">#<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR">#############################################################################################<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"># Partitions<u></u><u></u></span></p><p class="MsoNormal">#############################################################################################<u></u><u></u></p><p class="MsoNormal">PartitionName=defq Default=YES MinNodes=1 DefaultTime=UNLIMITED MaxTime=UNLIMITED AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 OverSubscribe=NO PreemptMode=OFF AllowAccounts=ALL AllowQos=ALL Nodes=node[01-04]<u></u><u></u></p><p class="MsoNormal">PartitionName=CSLive MinNodes=1 DefaultTime=UNLIMITED MaxTime=UNLIMITED AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 OverSubscribe=NO PreemptMode=OFF AllowAccounts=ALL AllowQos=ALL Nodes=node01<u></u><u></u></p><p class="MsoNormal">PartitionName=CSCluster MinNodes=1 DefaultTime=UNLIMITED MaxTime=UNLIMITED AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 OverSubscribe=NO PreemptMode=OFF AllowAccounts=ALL AllowQos=ALL Nodes=node[02-04]<u></u><u></u></p><p class="MsoNormal">ClusterName=slurm<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><b>Gres.conf<u></u><u></u></b></p><p class="MsoNormal"># This section of this file was automatically generated by cmd. Do not edit manually!<u></u><u></u></p><p class="MsoNormal"># BEGIN AUTOGENERATED SECTION -- DO NOT REMOVE<u></u><u></u></p><p class="MsoNormal">AutoDetect=NVML<u></u><u></u></p><p class="MsoNormal"># END AUTOGENERATED SECTION -- DO NOT REMOVE<u></u><u></u></p><p class="MsoNormal">#Name=gpu File=/dev/nvidia[0-3] Count=4<u></u><u></u></p><p class="MsoNormal">#Name=mic Count=0<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><b>Sinfo:<u></u><u></u></b></p><p class="MsoNormal">PARTITION AVAIL TIMELIMIT NODES STATE NODELIST<u></u><u></u></p><p class="MsoNormal">defq* up infinite 1 down* node04<u></u><u></u></p><p class="MsoNormal">defq* up infinite 3 idle node[01-03]<u></u><u></u></p><p class="MsoNormal">CSLive up infinite 1 idle node01<u></u><u></u></p><p class="MsoNormal">CSCluster up infinite 1 down* node04<u></u><u></u></p><p class="MsoNormal">CSCluster up infinite 2 idle node[02-03]<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><b>Node1:<u></u><u></u></b></p><p class="MsoNormal">NodeName=node01 Arch=x86_64 CoresPerSocket=16<u></u><u></u></p><p class="MsoNormal"> CPUAlloc=0 CPUTot=64 CPULoad=0.04<u></u><u></u></p><p class="MsoNormal"> AvailableFeatures=RTX3090<u></u><u></u></p><p class="MsoNormal"> ActiveFeatures=RTX3090<u></u><u></u></p><p class="MsoNormal"> Gres=gpu:4<u></u><u></u></p><p class="MsoNormal"> NodeAddr=node01 NodeHostName=node01 Version=20.02.6<u></u><u></u></p><p class="MsoNormal"> OS=Linux 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020<u></u><u></u></p><p class="MsoNormal"> RealMemory=386048 AllocMem=0 FreeMem=16665 Sockets=2 Boards=1<u></u><u></u></p><p class="MsoNormal"> State=IDLE ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A<u></u><u></u></p><p class="MsoNormal"> Partitions=defq,CSLive<u></u><u></u></p><p class="MsoNormal"> BootTime=2021-08-04T13:59:08 SlurmdStartTime=2021-08-10T09:32:43<u></u><u></u></p><p class="MsoNormal"> CfgTRES=cpu=64,mem=377G,billing=64<u></u><u></u></p><p class="MsoNormal"> AllocTRES=<u></u><u></u></p><p class="MsoNormal"> CapWatts=n/a<u></u><u></u></p><p class="MsoNormal"> CurrentWatts=0 AveWatts=0<u></u><u></u></p><p class="MsoNormal"> ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s<u></u><u></u></p><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><b>Node2-3<u></u><u></u></b></p><p class="MsoNormal">NodeName=node02 Arch=x86_64 CoresPerSocket=16<u></u><u></u></p><p class="MsoNormal"> CPUAlloc=0 CPUTot=64 CPULoad=0.48<u></u><u></u></p><p class="MsoNormal"> AvailableFeatures=RTX6000<u></u><u></u></p><p class="MsoNormal"> ActiveFeatures=RTX6000<u></u><u></u></p><p class="MsoNormal"> Gres=gpu:4(S:0-1)<u></u><u></u></p><p class="MsoNormal"> NodeAddr=node02 NodeHostName=node02 Version=20.02.6<u></u><u></u></p><p class="MsoNormal"> OS=Linux 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020<u></u><u></u></p><p class="MsoNormal"> RealMemory=257024 AllocMem=0 FreeMem=2259 Sockets=2 Boards=1<u></u><u></u></p><p class="MsoNormal"> State=IDLE ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A<u></u><u></u></p><p class="MsoNormal"> <span lang="FR">Partitions=defq,CSCluster<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"> BootTime=2021-07-29T20:47:32 SlurmdStartTime=2021-08-10T09:32:55<u></u><u></u></span></p><p class="MsoNormal"><span lang="FR"> </span>CfgTRES=cpu=64,mem=251G,billing=64<u></u><u></u></p><p class="MsoNormal"> AllocTRES=<u></u><u></u></p><p class="MsoNormal"> CapWatts=n/a<u></u><u></u></p><p class="MsoNormal"> CurrentWatts=0 AveWatts=0<u></u><u></u></p><p class="MsoNormal"> ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s</p><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Thu, Aug 19, 2021, 6:07 PM Fulcomer, Samuel <<a href="mailto:samuel_fulcomer@brown.edu" target="_blank">samuel_fulcomer@brown.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">What SLURM version are you running?<div><br></div><div>What are the #SLURM directives in the batch script? (or the sbatch arguments)</div><div><br></div><div>When the single GPU jobs are pending, what's the output of 'scontrol show job JOBID'?<br><div><br></div><div>What are the node definitions in slurm.conf, and the lines in gres.conf?</div><div><br></div><div>Are the nodes all the same host platform (motherboard)?</div><div><br></div><div>We have P100s, TitanVs, Titan RTXs, Quadro RTX 6000s, 3090s, V100s, DGX 1s, A6000s, and A40s, with a mix of single and dual-root platforms, and haven't seen this problem with SLURM 20.02.6 or earlier versions.</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Aug 19, 2021 at 8:38 PM Andrey Malyutin <<a href="mailto:malyutinag@gmail.com" rel="noreferrer" target="_blank">malyutinag@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hello, <div><br></div><div>We are in the process of finishing up the setup of a cluster with 3 nodes, 4 GPUs each. One node has RTX3090s and the other 2 have RTX6000s.Any job asking for 1 GPU in the submission script will wait to run on the 3090 node, no matter resource availability. Same job requesting 2 or more GPUs will run on any node. I don't even know where to begin troubleshooting this issue; entries for the 3 nodes are effectively identical in slurm.conf. Any help would be appreciated. (If helpful - this cluster is used for structural biology, with cryosparc and relion packages). </div><div><br></div><div>Thank you,</div><div>Andrey</div></div>
</blockquote></div>
</blockquote></div></div></div>
</div>
</blockquote></div></div>
</blockquote></div></div>
</blockquote></div>