<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body dir="auto">
Make sure you don’t have a firewall blocking connections back to the login node from the cluster. We had that problem at Rutgers before.<br>
<br>
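One rough way to check (a sketch, assuming firewalld on the login node and that nc is installed; adjust for whatever you actually run): list the firewall rules on ranger, and from the compute node try to connect back to the port salloc reports. Your -vvvvv output showed "port from net_stream_listen is 43881"; a fresh run will pick a different ephemeral port unless you pin ports down with SrunPortRange in slurm.conf and open just that range.<br>
<pre># On the login node (ranger), assuming firewalld is in use:
sudo firewall-cmd --list-all

# From the compute node, while the allocation is still active
# (replace 43881 with whatever the current salloc -vvvvv run reports):
nc -zv ranger 43881
</pre>
<br>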
<div dir="ltr">Sent from my iPhone</div>
<div dir="ltr"><br>
<blockquote type="cite">On May 19, 2023, at 13:13, Prentice Bisbal <pbisbal@pppl.gov> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p>Brian, <br>
</p>
<p>Thanks for the reply. I was hoping that would be the fix, but that doesn't seem to be the case. I'm using 22.05.8, which isn't that old. I double-checked the documentation archives for version 22.05.8, and setting
<br>
</p>
<pre><font size="4">LaunchParameters=use_interactive_step
</font></pre>
<p>should be valid here. From <a class="moz-txt-link-freetext" href="https://slurm.schedmd.com/archive/slurm-22.05.8/slurm.conf.html">
https://slurm.schedmd.com/archive/slurm-22.05.8/slurm.conf.html</a>:</p>
<p></p>
<blockquote type="cite">
<dl compact="compact"><dt><b>use_interactive_step</b> </dt><dd>Have salloc use the Interactive Step to launch a shell on an allocated compute node rather than locally to wherever salloc was invoked. This is accomplished by launching the srun command with InteractiveStepOptions as options.
<p>This does not affect salloc called with a command as an argument. These jobs will continue to be executed as the calling user on the calling host.
</p>
</dd></dl>
</blockquote>
<p></p>
<p>and <br>
</p>
<p></p>
<blockquote type="cite">
<dl compact="compact"><dt><b>InteractiveStepOptions</b> </dt><dd>When LaunchParameters=use_interactive_step is enabled, launching salloc will automatically start an srun process with InteractiveStepOptions to launch a terminal on a node in the job allocation. The default value is "--interactive --preserve-env --pty $SHELL".
The "--interactive" option is intentionally not documented in the srun man page. It is meant only to be used in
<b>InteractiveStepOptions</b> in order to create an "interactive step" that will not consume resources so that other steps may run in parallel with the interactive step.
</dd></dl>
</blockquote>
<p></p>
<p>According to that, setting LaunchParameters=use_interactive_step should be enough, since "--interactive --preserve-env --pty $SHELL" is the default.
<br>
</p>
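<p>For what it's worth, one way to confirm the running controller actually picked up the value (assuming slurmctld was restarted or "scontrol reconfigure" was run after editing slurm.conf) is to check the live configuration; it should list both LaunchParameters and InteractiveStepOptions:</p>
<pre><font size="4">$ scontrol show config | grep -i -E 'launchparameters|interactivestepoptions'
</font></pre>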
<p>A colleague pointed out that my slurm.conf was setting LaunchParameters to "user_interactive_step" when it should be "use_interactive_step", but correcting that didn't fix the problem; it just changed the symptom. Now when I try to start an interactive shell, it hangs
and eventually returns an error: <br>
</p>
<pre><font size="4">[pbisbal@ranger ~]$ salloc -n 1 -t 00:10:00 --mem=1G
salloc: Granted job allocation 29
salloc: Waiting for resource configuration
salloc: Nodes ranger-s22-07 are ready for job
srun: error: timeout waiting for task launch, started 0 of 1 tasks
srun: launch/slurm: launch_p_step_launch: StepId=29.interactive aborted before step completely launched.
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: Timed out waiting for job step to complete
salloc: Relinquishing job allocation 29
[pbisbal@ranger ~]$
</font></pre>
<div class="moz-cite-prefix">On 5/19/23 11:28 AM, Brian Andrus wrote:<br>
</div>
<blockquote type="cite" cite="mid:dc9c3457-560b-a801-3ca8-e9889bd96eac@gmail.com">
<p>Defaulting to a shell for salloc is a newer feature.</p>
<p>For your version, you should:</p>
<p> srun -n 1 -t 00:10:00 --mem=1G --pty bash</p>
<p>Brian Andrus<br>
</p>
<div class="moz-cite-prefix">On 5/19/2023 8:24 AM, Ryan Novosielski wrote:<br>
</div>
<blockquote type="cite" cite="mid:F5B30A35-AB12-41A8-A4C8-9D3DE17C9547@rutgers.edu">
I’m not at a computer, and we still run an older version of Slurm, so I can’t say with 100% confidence whether this has changed and I can’t be too specific, but I know that this is the behavior you should expect from that command. I believe that there are configuration
options to make it behave differently.
<div><br>
</div>
<div>Otherwise, you can use srun to run commands on the assigned node.</div>
<div><br>
</div>
<div>I think if you search this list for “interactive,” or search the Slurm bugs database, you will see some other conversations about this.<br>
<br>
<div dir="ltr">Sent from my iPhone</div>
<div dir="ltr"><br>
<blockquote type="cite">On May 19, 2023, at 10:35, Prentice Bisbal <a class="moz-txt-link-rfc2396E" href="mailto:pbisbal@pppl.gov" moz-do-not-send="true">
<pbisbal@pppl.gov></a> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p>I'm setting up Slurm from scratch for the first time ever. I'm using 22.05.8 since I haven't had a chance to upgrade our DB server to 23.02 yet. When I try to use salloc to get a shell on a compute node (ranger-s22-07), I end up with a shell on the login node
(ranger): <br>
</p>
<pre><font size="4">[pbisbal@ranger ~]$ salloc -n 1 -t 00:10:00 --mem=1G
salloc: Granted job allocation 23
salloc: Waiting for resource configuration
salloc: Nodes ranger-s22-07 are ready for job
[pbisbal@ranger ~]$ </font>
</pre>
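<p>Incidentally, even though the salloc shell lands on the login node, srun run inside that shell should still execute on the allocated node, so a quick check of whether the allocation itself is usable is:</p>
<pre><font size="4">[pbisbal@ranger ~]$ srun hostname    # expected to print ranger-s22-07 if step launch is working
</font></pre>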
<p>Any ideas what's going wrong here? I have the following line in my slurm.conf:
<br>
</p>
<pre><font size="4">LaunchParameters=user_interactive_step
</font></pre>
<p>When I run salloc with -vvvvv, here's what I see: <br>
</p>
<pre>[pbisbal@ranger ~]$ salloc -vvvvv -n 1 -t 00:10:00 --mem=1G
salloc: defined options
salloc: -------------------- --------------------
salloc: mem : 1G
salloc: ntasks : 1
salloc: time : 00:10:00
salloc: verbose : 5
salloc: -------------------- --------------------
salloc: end of defined options
salloc: debug3: Trying to load plugin /usr/lib64/slurm/select_cons_res.so
salloc: debug3: plugin_load_from_file->_verify_syms: found Slurm plugin name:Consumable Resources (CR) Node Selection plugin type:select/cons_res version:0x160508
salloc: select/cons_res: common_init: select/cons_res loaded
salloc: debug3: Success.
salloc: debug3: Trying to load plugin /usr/lib64/slurm/select_cons_tres.so
salloc: debug3: plugin_load_from_file->_verify_syms: found Slurm plugin name:Trackable RESources (TRES) Selection plugin type:select/cons_tres version:0x160508
salloc: select/cons_tres: common_init: select/cons_tres loaded
salloc: debug3: Success.
salloc: debug3: Trying to load plugin /usr/lib64/slurm/select_cray_aries.so
salloc: debug3: plugin_load_from_file->_verify_syms: found Slurm plugin name:Cray/Aries node selection plugin type:select/cray_aries version:0x160508
salloc: select/cray_aries: init: Cray/Aries node selection plugin loaded
salloc: debug3: Success.
salloc: debug3: Trying to load plugin /usr/lib64/slurm/select_linear.so
salloc: debug3: plugin_load_from_file->_verify_syms: found Slurm plugin name:Linear node selection plugin type:select/linear version:0x160508
salloc: select/linear: init: Linear node selection plugin loaded with argument 20
salloc: debug3: Success.
salloc: debug: Entering slurm_allocation_msg_thr_create()
salloc: debug: port from net_stream_listen is 43881
salloc: debug: Entering _msg_thr_internal
salloc: debug4: eio: handling events for 1 objects
salloc: debug3: eio_message_socket_readable: shutdown 0 fd 6
salloc: debug3: Trying to load plugin /usr/lib64/slurm/auth_munge.so
salloc: debug3: plugin_load_from_file->_verify_syms: found Slurm plugin name:Munge authentication plugin type:auth/munge version:0x160508
salloc: debug: auth/munge: init: Munge authentication plugin loaded
salloc: debug3: Success.
salloc: debug3: Trying to load plugin /usr/lib64/slurm/hash_k12.so
salloc: debug3: plugin_load_from_file->_verify_syms: found Slurm plugin name:KangarooTwelve hash plugin type:hash/k12 version:0x160508
salloc: debug: hash/k12: init: init: KangarooTwelve hash plugin loaded
salloc: debug3: Success.
salloc: Granted job allocation 24
salloc: Waiting for resource configuration
salloc: Nodes ranger-s22-07 are ready for job
salloc: debug: laying out the 1 tasks on 1 hosts ranger-s22-07 dist 8192
[pbisbal@ranger ~]$ </pre>
<p>This is all I see in /var/log/slurm/slurmd.log on the compute node: <br>
</p>
<pre>[2023-05-19T10:21:36.898] [24.extern] task/cgroup: _memcg_initialize: job: alloc=1024MB mem.limit=1024MB memsw.limit=unlimited
[2023-05-19T10:21:36.899] [24.extern] task/cgroup: _memcg_initialize: step: alloc=1024MB mem.limit=1024MB memsw.limit=unlimited
</pre>
And this is all I see in /var/log/slurm/slurmctld.log on the controller: <br>
<br>
<pre>[2023-05-19T10:18:16.815] sched: _slurm_rpc_allocate_resources JobId=23 NodeList=ranger-s22-07 usec=1136
[2023-05-19T10:18:22.423] Time limit exhausted for JobId=22
[2023-05-19T10:21:36.861] sched: _slurm_rpc_allocate_resources JobId=24 NodeList=ranger-s22-07 usec=1039</pre>
Here's my slurm.conf file: <br>
<br>
<pre># grep -v ^# /etc/slurm/slurm.conf | grep -v ^$ </pre>
<pre>ClusterName=ranger
SlurmctldHost=ranger-master
EnforcePartLimits=ALL
JobSubmitPlugins=lua,require_timelimit
LaunchParameters=user_interactive_step
MaxStepCount=2500
MaxTasksPerNode=32
MpiDefault=none
ProctrackType=proctrack/cgroup
PrologFlags=contain
ReturnToService=0
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/affinity,task/cgroup
TopologyPlugin=topology/tree
CompleteWait=32
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
DefMemPerCPU=5000
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory
PriorityType=priority/multifactor
PriorityDecayHalfLife=15-0
PriorityCalcPeriod=15
PriorityFavorSmall=NO
PriorityMaxAge=180-0
PriorityWeightAge=5000
PriorityWeightFairshare=5000
PriorityWeightJobSize=5000
AccountingStorageEnforce=all
AccountingStorageHost=slurm.pppl.gov
AccountingStorageType=accounting_storage/slurmdbd
AccountingStoreFlags=job_script
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherParams=UsePss
JobAcctGatherType=jobacct_gather/cgroup
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurm/slurmd.log
NodeName=ranger-s22-07 CPUs=72 Boards=1 SocketsPerBoard=4 CoresPerSocket=18 ThreadsPerCore=1 RealMemory=384880 State=UNKNOWN
PartitionName=all Nodes=ALL Default=YES GraceTime=300 MaxTime=24:00:00 State=UP</pre>
<pre class="moz-signature" cols="72">--
Prentice </pre>
</div>
</blockquote>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</body>
</html>