[slurm-users] Compute nodes cycling from idle to down on a regular basis ?
Jeremy.Fix at centralesupelec.fr
Wed Feb 2 05:56:43 UTC 2022
A follow-up. I thought some of the nodes were OK, but that's not the case.
This morning, another pool of consecutive compute nodes is idle* (why
consecutive, by the way? they always fail consecutively). And some of
the nodes which were drained came back to life as idle and have now
switched back to idle* again.
One thing I should mention is that the master is now handling a total of
148 nodes. It is the new pool of 100 nodes that has the cycling state;
the 48 nodes that were already handled by this master are fine.
I do not know if this should be considered a large system, but we tried
having a look at settings such as the ARP cache on the slurm master. I'm
not very familiar with that; as I understand it, it enlarges the
kernel's cache of the node name/IP table. This morning, the master has
125 entries in "arp -a" (before changing the settings in sysctl, it was
something like 20). Do you think these settings are also necessary on
the compute nodes?
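For reference, the ARP (neighbor) cache limits I changed are the
net.ipv4.neigh.default.gc_thresh* sysctls. The values below are only a
sketch of what we set; the exact numbers are an assumption and should be
tuned to the cluster size:

```
# Hypothetical file: /etc/sysctl.d/90-arp-cache.conf
# gc_thresh1: below this many entries, the garbage collector never runs
net.ipv4.neigh.default.gc_thresh1 = 1024
# gc_thresh2: soft maximum; entries above this are pruned more aggressively
net.ipv4.neigh.default.gc_thresh2 = 2048
# gc_thresh3: hard maximum number of neighbor-table entries
net.ipv4.neigh.default.gc_thresh3 = 4096
```

Applied with "sysctl --system" (or "sysctl -p <file>"); current values
can be checked with "sysctl net.ipv4.neigh.default".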