Thank you, Brian,

While ResumeRate might be able to keep the CPU usage within an acceptable margin, it is a workaround rather than a fix. I would prefer a solution that groups resume requests, so that there is a single Ansible playbook run per second instead of up to ResumeRate separate runs.
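To sketch what I have in mind (the paths, file names, and playbook below are placeholders, not our actual setup): the ResumeProgram would only record which nodes were requested, and a small collector process on the master would drain those requests once per second and start a single Ansible run for the whole batch.

    #!/usr/bin/env python3
    # ResumeProgram wrapper (sketch): only record the hostlist Slurm hands us.
    import fcntl, sys

    SPOOL = "/var/spool/slurm/resume_requests"  # assumed spool location

    with open(SPOOL, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # serialize concurrent resume calls
        f.write(sys.argv[1] + "\n")    # e.g. "worker[001-004]"

And the collector, running as a daemon on the master:

    #!/usr/bin/env python3
    # Collector (sketch): once per second, drain the spool file and start
    # one Ansible run for everything that arrived in that window.
    import fcntl, os, subprocess, time

    SPOOL = "/var/spool/slurm/resume_requests"
    PLAYBOOK = "/etc/ansible/start_worker.yml"  # assumed playbook

    while True:
        time.sleep(1)
        if not os.path.exists(SPOOL):
            continue
        with open(SPOOL, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            hostlists = f.read().split()
            f.truncate(0)  # this batch now owns the collected requests
        if not hostlists:
            continue
        # expand "worker[001-004]"-style expressions into plain hostnames
        hosts = subprocess.run(
            ["scontrol", "show", "hostnames", ",".join(hostlists)],
            capture_output=True, text=True, check=True).stdout.split()
        # one playbook run per batch instead of one per resume call
        subprocess.run(["ansible-playbook", PLAYBOOK,
                        "--limit", ",".join(hosts)], check=True)

ResumeTimeout would of course have to stay generous enough to cover the extra second of waiting plus the Ansible run itself.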

As we completely destroy our instances when powering down, we need to set them up from scratch using Ansible. Running Ansible on the worker nodes themselves would be possible, but it adds extra steps to collect all the log files on the master in case a startup fails and needs to be investigated. For now, I feel that using the master to set up the workers is the better structure.

Best regards,
Xaver

On 08.04.24 18:18, Brian Andrus via slurm-users wrote:

Xaver,

You may want to look at the ResumeRate option in slurm.conf:

ResumeRate
The rate at which nodes in power save mode are returned to normal operation by ResumeProgram. The value is a number of nodes per minute and it can be used to prevent power surges if a large number of nodes in power save mode are assigned work at the same time (e.g. a large job starts). A value of zero results in no limits being imposed. The default value is 300 nodes per minute.
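In slurm.conf that could look something like this (values here are only illustrative):

    # slurm.conf excerpt
    ResumeProgram=/opt/slurm/bin/resume.sh
    ResumeRate=20      # bring at most 20 nodes per minute out of power save
    SuspendTime=600    # power nodes down after 10 minutes of idle time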

I have all our nodes in the cloud and they power down/deallocate when idle for a bit. I do not use Ansible to start them; I use the CLI interface directly, so the only CPU usage is from that command. I do plan on having Ansible run from the node to apply any hot-fixes/updates on top of the base image. Running it from the node would alleviate any CPU spikes on the Slurm head node.
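Something like ansible-pull would do that, with each node fetching and applying the playbook itself at boot (the repo URL and playbook name below are just placeholders):

    # on the node, e.g. from cloud-init or a one-shot systemd unit
    ansible-pull -U https://git.example.com/cluster-config.git local.yml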

Just a possible path to look at.

Brian Andrus

On 4/8/2024 6:10 AM, Xaver Stiensmeier via slurm-users wrote:
Dear slurm user list,

we make use of elastic cloud computing, i.e. node instances are created
on demand and destroyed once they have been unused for a certain amount
of time. Created instances are set up via Ansible. If more than one
instance is requested at the exact same time, Slurm passes all of them
to the resume script together, and a single Ansible call handles all of
those instances.
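A minimal sketch of that mechanism (the playbook path is a placeholder): Slurm hands the resume script a single hostlist expression, which the script expands before running Ansible once for all hosts.

    #!/usr/bin/env python3
    # Resume script (simplified): Slurm passes one hostlist expression in
    # argv[1], e.g. "worker[001-004]"; expand it, then run Ansible once.
    import subprocess, sys

    hosts = subprocess.run(
        ["scontrol", "show", "hostnames", sys.argv[1]],
        capture_output=True, text=True, check=True).stdout.split()

    subprocess.run(["ansible-playbook", "/etc/ansible/start_worker.yml",
                    "--limit", ",".join(hosts)], check=True)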

However, more often than not, workflows request multiple instances
within the same second, but not at the exact same time. This triggers
multiple resume script calls and therefore multiple Ansible runs, which
means harder-to-read log files, higher CPU consumption from the
concurrently running Ansible processes, and so on.

What I am looking for is an option that forces Slurm to wait for a
certain amount of time and then perform a single resume call for all
instances requested within that time frame (let's say 1 second).

Is this somehow possible?

Best regards,
Xaver