We run Bright 8.1 and Slurm 17.11. We are trying to allow multiple concurrent jobs to run on our small 4-node cluster.

Based on
https://community.brightcomputing.com/question/5d6614ba08e8e81e885f1991?action=artikel&cat=14&id=410&artlang=en&highlight=slurm+%2526%252334%253Bgang+scheduling%2526%252334%253B
and
https://slurm.schedmd.com/cons_res_share.html

here are the relevant settings in /etc/slurm/slurm.conf:

SchedulerType=sched/backfill
# Nodes
NodeName=node[001-003] CoresPerSocket=12 RealMemory=191800 Sockets=2 Gres=gpu:1
# Partitions
PartitionName=defq Default=YES MinNodes=1 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 DisableRootJobs=NO RootOnly=NO Hidden=NO Shared=NO GraceTime=0 PreemptMode=OFF ReqResv=NO AllowAccounts=ALL AllowQos=ALL LLN=NO ExclusiveUser=NO OverSubscribe=FORCE:12 OverTimeLimit=0 State=UP Nodes=node[001-003]
PartitionName=gpuq Default=NO MinNodes=1 AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 DisableRootJobs=NO RootOnly=NO Hidden=NO Shared=NO GraceTime=0 PreemptMode=OFF ReqResv=NO AllowAccounts=ALL AllowQos=ALL LLN=NO ExclusiveUser=NO OverSubscribe=FORCE:12 OverTimeLimit=0 State=UP
# Generic resources types
GresTypes=gpu,mic
# Epilog/Prolog parameters
PrologSlurmctld=/cm/local/apps/cmd/scripts/prolog-prejob
Prolog=/cm/local/apps/cmd/scripts/prolog
Epilog=/cm/local/apps/cmd/scripts/epilog
# Fast Schedule option
FastSchedule=1
# Power Saving
SuspendTime=-1 # this disables power saving
SuspendTimeout=30
ResumeTimeout=60
SuspendProgram=/cm/local/apps/cluster-tools/wlm/scripts/slurmpoweroff
ResumeProgram=/cm/local/apps/cluster-tools/wlm/scripts/slurmpoweron
# END AUTOGENERATED SECTION -- DO NOT REMOVE
# http://kb.brightcomputing.com/faq/index.php?action=artikel&cat=14&id=410&artlang=en&highlight=slurm+%26%2334%3Bgang+scheduling%26%2334%3B
SelectType=select/cons_res
SelectTypeParameters=CR_CPU
SchedulerTimeSlice=60
EnforcePartLimits=YES

But it appears that each job takes one of the three compute nodes to itself, and every other job is held back in the queue. Do we have an incorrect option set?
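For context, a typical job here is submitted with a batch script roughly like the sketch below (a minimal sketch only; the job name, program, and CPU count are illustrative, not taken from our actual scripts):

#!/bin/bash
#SBATCH --job-name=example_train   # illustrative name
#SBATCH --partition=defq
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4          # well below the 24 cores per node
python train.py                    # placeholder for the real workload

With select/cons_res and CR_CPU we expected several jobs of this size to be packed onto the same node, but the queue looks like this: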

squeue -a
JOBID PARTITION     NAME   USER ST       TIME NODES NODELIST(REASON)
 1937      defq   PaNet5  user1 PD       0:00     1 (Resources)
 1938      defq   PoNet5  user1 PD       0:00     1 (Priority)
 1964      defq   SENet5  user1 PD       0:00     1 (Priority)
 1979      defq   IcNet5  user1 PD       0:00     1 (Priority)
 1980      defq runtrain  user2 PD       0:00     1 (Priority)
 1981      defq   InRes5  user1 PD       0:00     1 (Priority)
 1983      defq run_LSTM  user3 PD       0:00     1 (Priority)
 1984      defq run_hui.  user4 PD       0:00     1 (Priority)
 1936      defq   SeRes5  user1  R   10:02:39     1 node003
 1950      defq sequenti  user5  R 1-02:03:00     1 node001
 1978      defq run_hui. user16  R   13:48:21     1 node002

Am I misunderstanding some of the settings?
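If it helps, we can also post what each running job actually requested and was allocated, e.g. the output of:

scontrol show job 1936
squeue -o "%.7i %.9P %.8u %.2t %.11M %.5C %.8m %R"

(the squeue format string is just one example; %C shows the job's CPU count and %m the requested memory).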