<div dir="ltr"><div>Thank you Tina, I hadn't realised that would show as "n/a" not "down" in that case (which IMO would have been confusing). For anyone else hitting this I think the minimum you can do is something like:</div><div><br></div><div>PartitionName=compute Default=YES <options> State=UP Nodes=nosuch<br>NodeName=nosuch<br></div><div><br></div><div>The documented approach would have been easier in my case given the constraints of the template logic generating this but at least it's a workaround.</div><div><br></div><div>thanks</div><div>Steve</div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><a href="http://stackhpc.com/" target="_blank">http://stackhpc.com/</a></div><div>Please note I work Tuesday to Friday.</div></div></div></div></div><div><br></div><div>On 18/12/2020 12:45:26, Tina Friedrich wrote:</div><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote">Yeah, I had that problem as well (trying to set up a partition that <br>didn't have any nodes - they're not here yet).
thanks
Steve

http://stackhpc.com/
Please note I work Tuesday to Friday.

On 18/12/2020 12:45:26, Tina Friedrich wrote:
> Yeah, I had that problem as well (trying to set up a partition that
> didn't have any nodes - they're not here yet).
>
> I figured that one can have partitions with nodes that don't exist,
> though. As in, not even in DNS.
>
> I currently have this:
>
> [arc-slurm ~]$ sinfo
> PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
> short     up     12:00:00       1  down* arc-c023
> short     up     12:00:00       1  alloc arc-c001
> short     up     12:00:00      43   idle arc-c[002-022,024-045]
> medium    up   2-00:00:00       0    n/a
> long*     up     infinite       0    n/a
>
> with the medium & long partitions containing nodes 'arc-c[046-297]':
>
> PartitionName=medium
>    AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
>    AllocNodes=ALL Default=NO QoS=N/A
>    DefaultTime=12:00:00 DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
>    MaxNodes=UNLIMITED MaxTime=2-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED
>    Nodes=arc-c[046-297]...
>
> which don't exist as of today:
>
> [arc-slurm ~]$ host arc-c046
> Host arc-c046 not found: 3(NXDOMAIN)
>
> which - as you can see - simply ends up with SLURM showing the
> partition with no nodes.
>
> So you could just put a dummy nodename in the slurm.conf file?
>
> Tina

On Fri, 18 Dec 2020 at 11:13, Steve Brasier <steveb@stackhpc.com> wrote:
> Having tried just not defining any partitions at all, you hit this check
> (https://github.com/SchedMD/slurm/blob/master/src/common/node_conf.c#L383),
> which seems to ensure you can't create a cluster with no nodes. Is it
> possible to create a control node without any compute nodes, e.g. as part
> of a staged deployment?
>
> http://stackhpc.com/
> Please note I work Tuesday to Friday.
>
> On Fri, 18 Dec 2020 at 10:56, Steve Brasier <steveb@stackhpc.com> wrote:
>> Hi all,
>>
>> According to the relevant manpage
>> (https://slurm.schedmd.com/archive/slurm-20.02.5/slurm.conf.html) it's
>> possible to define an empty partition using "Nodes= ".
>>
>> However this doesn't seem to work (Slurm 20.02.5):
>>
>> [centos@testohpc-login-0 ~]$ grep -n Partition /etc/slurm/slurm.conf
>> 72:PriorityWeightPartition=1000
>> 105:PartitionName=compute Default=YES MaxTime=86400 State=UP Nodes= 
>>
>> (note there is a space after that final "=", but I've tried both with
>> and without)
>>
>> [centos@testohpc-login-0 ~]$ sinfo
>> sinfo: error: Parse error in file /etc/slurm/slurm.conf line 105: " Nodes= "
>> sinfo: fatal: Unable to process configuration file
>>
>> Is this a bug, or am I doing it wrong?
>>
>> thanks for any suggestions
>>
>> Steve
>>
>> http://stackhpc.com/
>> Please note I work Tuesday to Friday.