<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>I'm not sure which specific item to look at, but this seems like
a race condition.<br>
Likely you need to add an override for your slurmd startup
(/etc/systemd/system/slurmd.service.d/override.conf) and put a
dependency there so slurmd won't start until the things it needs
are up.</p>
<p>I have mine wait for a few things:</p>
<p>[Unit]<br>
After=autofs.service getty.target sssd.service</p>
<p>That makes it wait for all of those before trying to start.</p>
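<p>A minimal sketch of how to put that in place (the unit names
above are from my own nodes; substitute whatever yours actually
depend on):</p>
<pre># Creates and opens /etc/systemd/system/slurmd.service.d/override.conf
systemctl edit slurmd

# In the editor, add:
[Unit]
After=autofs.service getty.target sssd.service

# systemctl edit reloads the unit definition for you; then:
systemctl restart slurmd
</pre>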
<p>Brian Andrus<br>
</p>
<div class="moz-cite-prefix">On 3/10/2023 7:41 AM, Tristan LEFEBVRE
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:fc5905f3-9bfb-ec9e-c9c9-a7ce5cdfde59@irt-jules-verne.fr">
<p>Hello to all,</p>
<p>I'm trying to do an installation of Slurm with cgroup v2
activated.</p>
<p>But I'm facing an odd thing: when slurmd is enabled, it crashes
at the next reboot and will never start again unless I disable it.
<br>
</p>
<p>Here is a full example of the situation:<br>
</p>
<pre><font size="4">[root@compute ~]# systemctl start slurmd
[root@compute ~]# systemctl status slurmd
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2023-03-10 15:57:00 CET; 967ms ago
Main PID: 8053 (slurmd)
Tasks: 1
Memory: 3.1M
CGroup: /system.slice/slurmd.service
└─8053 /opt/slurm_bin/sbin/slurmd -D --conf-server XXXXX:6817 -s
mars 10 15:57:00 compute.cluster.lab systemd[1]: Started Slurm node daemon.
mars 10 15:57:00 compute.cluster.lab slurmd[8053]: slurmd: slurmd version 23.02.0 started
mars 10 15:57:00 compute.cluster.lab slurmd[8053]: slurmd: slurmd started on Fri, 10 Mar 2023 15:57:00 +0100
mars 10 15:57:00 compute.cluster.lab slurmd[8053]: slurmd: CPUs=48 Boards=1 Sockets=2 Cores=24 Threads=1 Memory=385311 TmpDisk=19990 Uptime=12>
[root@compute ~]# systemctl enable slurmd
Created symlink /etc/systemd/system/multi-user.target.wants/slurmd.service → /usr/lib/systemd/system/slurmd.service.
[root@compute ~]# reboot now</font></pre>
<p>[ reboot of the node ]<br>
</p>
<pre><font size="4">[adm@compute ~]$ sudo systemctl status slurmd
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2023-03-10 16:00:33 CET; 1min 0s ago
Process: 2659 ExecStart=/opt/slurm_bin/sbin/slurmd -D --conf-server XXXX:6817 -s $SLURMD_OPTIONS (code=exited, status=1/FAILURE)
Main PID: 2659 (code=exited, status=1/FAILURE)
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: slurmd version 23.02.0 started
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: error: Controller cpuset is not enabled!
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: error: Controller cpu is not enabled!
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: error: cpu cgroup controller is not available.
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: error: There's an issue initializing memory or cpu controller
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: error: Couldn't load specified plugin name for jobacct_gather/cgroup: Plugin init()>
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: error: cannot create jobacct_gather context for jobacct_gather/cgroup
mars 10 16:00:33 compute.cluster.lab slurmd[2659]: slurmd: fatal: Unable to initialize jobacct_gather
mars 10 16:00:33 compute.cluster.lab systemd[1]: slurmd.service: Main process exited, code=exited, status=1/FAILURE
mars 10 16:00:33 compute.cluster.lab systemd[1]: slurmd.service: Failed with result 'exit-code'.
[adm@compute ~]$ sudo systemctl start slurmd
[adm@compute ~]$ sudo systemctl status slurmd
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2023-03-10 16:01:37 CET; 1s ago
Process: 3321 ExecStart=/opt/slurm_bin/sbin/slurmd -D --conf-server XXXX:6817 -s $SLURMD_OPTIONS (code=exited, status=1/FAILURE)
Main PID: 3321 (code=exited, status=1/FAILURE)
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: slurmd version 23.02.0 started
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: error: Controller cpuset is not enabled!
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: error: Controller cpu is not enabled!
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: error: cpu cgroup controller is not available.
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: error: There's an issue initializing memory or cpu controller
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: error: Couldn't load specified plugin name for jobacct_gather/cgroup: Plugin init()>
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: error: cannot create jobacct_gather context for jobacct_gather/cgroup
mars 10 16:01:37 compute.cluster.lab slurmd[3321]: slurmd: fatal: Unable to initialize jobacct_gather
mars 10 16:01:37 compute.cluster.lab systemd[1]: slurmd.service: Main process exited, code=exited, status=1/FAILURE
mars 10 16:01:37 compute.cluster.lab systemd[1]: slurmd.service: Failed with result 'exit-code'.
[adm@compute ~]$ sudo systemctl disable slurmd
Removed /etc/systemd/system/multi-user.target.wants/slurmd.service.
[adm@compute ~]$ sudo systemctl start slurmd
[adm@compute ~]$ sudo systemctl status slurmd
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2023-03-10 16:01:45 CET; 1s ago
Main PID: 3358 (slurmd)
Tasks: 1
Memory: 6.1M
CGroup: /system.slice/slurmd.service
└─3358 /opt/slurm_bin/sbin/slurmd -D --conf-server XXXX:6817 -s
mars 10 16:01:45 compute.cluster.lab systemd[1]: Started Slurm node daemon.
mars 10 16:01:45 compute.cluster.lab slurmd[3358]: slurmd: slurmd version 23.02.0 started
mars 10 16:01:45 compute.cluster.lab slurmd[3358]: slurmd: slurmd started on Fri, 10 Mar 2023 16:01:45 +0100
mars 10 16:01:45 compute.cluster.lab slurmd[3358]: slurmd: CPUs=48 Boards=1 Sockets=2 Cores=24 Threads=1 Memory=385311 TmpDisk=19990 Uptime=84></font>
</pre>
As you can see, after a reboot slurmd starts successfully only when
it is not enabled.<br>
<p>- I'm using Rocky Linux 8, and I've configured cgroup v2 with
grubby: <br>
</p>
<pre><font size="4">> grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1 systemd.legacy_systemd_cgroup_controller=0 cgroup_no_v1=all"</font></pre>
<p>- Slurm 23.02 is built with rpmbuild, and slurmd on the compute
node is installed with rpm</p>
<p>- Here is my cgroup.conf:<br>
</p>
<pre><font size="4">CgroupPlugin=cgroup/v2
ConstrainCores=yes
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes
ConstrainDevices=no
</font></pre>
<p>And my slurm.conf has:</p>
<pre><font size="4">ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup,task/affinity
JobAcctGatherType=jobacct_gather/cgroup</font></pre>
<p>- If I run "systemctl start slurmd" on a compute node, it
succeeds. <br>
</p>
<p>- If I run "systemctl enable slurmd" and then "systemctl restart
slurmd", it is still OK</p>
<p>- If I enable slurmd and reboot, it reports these errors:</p>
<pre><font size="4">slurmd: error: Controller cpuset is not enabled!
slurmd: error: Controller cpu is not enabled!
slurmd: error: cpu cgroup controller is not available.
slurmd: error: There's an issue initializing memory or cpu controller</font></pre>
<p>- I've done some research and read about
cgroup.subtree_control. This is what it contains on my node:</p>
<pre><font size="4">cat /sys/fs/cgroup/cgroup.subtree_control
memory pids</font></pre>
<p>So I tried to follow the Red Hat documentation and its example
(the Red Hat page is linked
<a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-cgroups-v2-to-control-distribution-of-cpu-time-for-applications_managing-monitoring-and-updating-the-kernel">here</a>):<br>
</p>
<pre><font size="4">echo "+cpu" >> /sys/fs/cgroup/cgroup.subtree_control
echo "+cpuset" >> /sys/fs/cgroup/cgroup.subtree_control
cat /sys/fs/cgroup/cgroup.subtree_control
cpuset cpu memory pids</font>
</pre>
<p>And indeed I can then restart slurmd.<br>
</p>
<p>But at the next boot it fails again, and
/sys/fs/cgroup/cgroup.subtree_control is back to "memory pids"
only.</p>
<p>And strangely, I found that if slurmd is enabled and I then
disable it, the value of /sys/fs/cgroup/cgroup.subtree_control
changes:<br>
</p>
<pre><font size="4">[root@compute ~]# cat /sys/fs/cgroup/cgroup.subtree_control
memory pids
[root@compute ~]# systemctl disable slurmd
Removed /etc/systemd/system/multi-user.target.wants/slurmd.service.
[root@compute ~]# cat /sys/fs/cgroup/cgroup.subtree_control
cpuset cpu io memory pids</font>
</pre>
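<p>(To see where delegation stops, assuming the packaged
slurmd.service sets Delegate=Yes, which would explain systemd
rewriting subtree_control around enable/disable, one can check:)</p>
<pre># Does the unit request controller delegation, and for which controllers?
systemctl show slurmd -p Delegate -p DelegateControllers
# What is actually delegated one level down the hierarchy?
cat /sys/fs/cgroup/system.slice/cgroup.subtree_control
</pre>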
<p>As a dirty fix, I've made a script that runs at launch time,
using ExecStartPre in slurmd.service:
<br>
</p>
<pre><font size="4">ExecStartPre=/opt/slurm_bin/dirty_fix_slurmd.sh</font></pre>
<p>with dirty_fix_slurmd.sh:<br>
</p>
<pre>#!/bin/bash
# Enable the cpu and cpuset controllers at the root of the hierarchy...
echo "+cpu" >> /sys/fs/cgroup/cgroup.subtree_control
echo "+cpuset" >> /sys/fs/cgroup/cgroup.subtree_control
# ...and in system.slice, so they reach slurmd.service's subtree
echo "+cpu" >> /sys/fs/cgroup/system.slice/cgroup.subtree_control
echo "+cpuset" >> /sys/fs/cgroup/system.slice/cgroup.subtree_control
</pre>
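<p>(An alternative I've considered is to keep the same writes but
put them in a standalone oneshot unit ordered before slurmd, so the
workaround doesn't live inside slurmd.service itself. The unit name
below is just an example:)</p>
<pre># /etc/systemd/system/enable-cgroup-controllers.service (example name)
[Unit]
Description=Enable cpu/cpuset controllers before slurmd (workaround)
Before=slurmd.service

[Service]
Type=oneshot
# cgroup v2 accepts several controllers in a single write
ExecStart=/bin/sh -c 'echo "+cpu +cpuset" > /sys/fs/cgroup/cgroup.subtree_control'
ExecStart=/bin/sh -c 'echo "+cpu +cpuset" > /sys/fs/cgroup/system.slice/cgroup.subtree_control'

[Install]
WantedBy=multi-user.target
</pre>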
<p>(And I'm not sure whether this is a good thing to do?)<br>
</p>
<p>If you have an idea how to correct this situation, please let me
know. <br>
</p>
<p>Have a nice day</p>
<p>Thank you <br>
</p>
<p class="MsoNormal">Tristan LEFEBVRE</p>
<div style="font-size:9pt; font-family: 'Calibri',sans-serif;">CONFIDENTIALITE
: ce courriel et les éventuelles pièces attachées sont la
propriété de l’IRT Jules Verne, sont confidentiels et sont
réservés à l’usage de la ou des personne(s) identifées(s) comme
destinataire(s). Si vous avez reçu ce courriel par erreur, toute
utilisation, divulgation, ou copie de ce courriel est interdite.
Dans ce cas, merci d’en informer immédiatement l'expéditeur et
de supprimer le courriel et ses pièces jointes.<br>
CONFIDENTIALITY : This e-mail and any attachments are IRT Jules
Verne’s property and are intended solely for the person or
entity to whom it is addressed, and may contain confidential or
privileged information. Should you have received this e-mail in
error, any use, disclosure, or copy of this email is prohibited.
In this case, please inform the sender immediately and delete
this email and its attachments.
</div>
</blockquote>
</body>
</html>