[slurm-users] Steps to upgrade slurm for a patchlevel change?

Ryan Novosielski novosirj at rutgers.edu
Fri Sep 29 15:33:33 UTC 2023


I’ll just say that we haven’t done an online/jobs-running upgrade recently (in part because we know our database upgrade will take a long time, and we have some processes that rely on -M), but we have done it and it works fine. So the paranoia isn’t necessary unless you know that, like us, the DB upgrade time isn’t tenable. Ole’s wiki has some great suggestions for how to test that; they aren’t especially Slurm-specific, it’s just a dry run.
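A rough sketch of that kind of dry run (host name, paths, and the slurm_acct_db name here are only illustrative, not Ole’s exact recipe): load a copy of the accounting database into a scratch MySQL/MariaDB instance, point a copy of slurmdbd.conf built from the new Slurm version at it, and time the schema conversion:

    # copy the production accounting DB to a test instance (the empty DB must already exist there)
    mysqldump --single-transaction slurm_acct_db | mysql -h testdb-host slurm_acct_db
    # run the *new* slurmdbd against the test copy in the foreground, with verbosity, and time it
    time /path/to/new/slurm/sbin/slurmdbd -D -vvv

If the conversion finishes in a window you can live with, the real upgrade should take roughly as long.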

As far as the shared symlink goes, I think you’d be fine (depending on whether you have anything else stored in the shared software tree) changing the symlink and just not restarting the compute nodes’ slurmd until you’re ready. Then again, you can do this while jobs are running, so there’s not really a reason to wait, except in cases like ours where it’s just easier to reboot the node than to have one procedure for running nodes and another for rebooted ones, and to want to be sure that a rebooted compute node and a running, upgraded node will behave exactly the same.

On Sep 29, 2023, at 10:10, Paul Edmon <pedmon at cfa.harvard.edu> wrote:

This is one of the reasons we stick with using RPMs rather than the symlink process. It's just cleaner and avoids having the install on shared storage that may get overwhelmed with traffic or suffer outages. Also, the package manager automatically removes the previous version and installs everything locally. I've never been a fan of the symlink method, as it runs counter to the entire point and design of Linux and package managers, which are supposed to do this heavy lifting for you.

Rant aside :). Generally, for minor upgrades the process is less touchy. For our setup we follow the process below, which works well for us but does create an outage for the duration of the upgrade (rough commands for the partition/job handling steps follow the list).

1. Set all partitions to down: This makes sure no new jobs are scheduled.
2. Suspend all jobs: This makes sure jobs aren't running while we upgrade.
3. Stop slurmctld and slurmdbd.
4. Upgrade the slurmdbd, then restart slurmdbd.
5. Upgrade the slurmd and slurmctld across the cluster.
6. Restart slurmd and slurmctld simultaneously using choria.
7. Unsuspend all jobs.
8. Reopen all partitions.
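For reference, steps 1-2 and 7-8 look roughly like the following (a sketch only; partition handling and exact looping are site-specific, not our literal tooling):

    # 1. Down all partitions so nothing new gets scheduled
    for p in $(sinfo -h -o %R | sort -u); do scontrol update PartitionName=$p State=DOWN; done
    # 2. Suspend every running job
    squeue -h -t R -o %i | xargs -r -n 1 scontrol suspend
    # ... upgrade ...
    # 7. Resume the suspended jobs
    squeue -h -t S -o %i | xargs -r -n 1 scontrol resume
    # 8. Reopen the partitions
    for p in $(sinfo -h -o %R | sort -u); do scontrol update PartitionName=$p State=UP; done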

For major upgrades we always take a mysqldump and back up the slurmctld spool before upgrading, just in case something goes wrong. We've had this happen before, when a slurmdbd upgrade cut out early. (Note: always run the slurmdbd and slurmctld upgrades in -D mode rather than via systemctl, as systemctl can time out and kill the upgrade midway on large upgrades.)
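Concretely, the backup and foreground-upgrade part looks something like this (slurm_acct_db is the common default DB name and /var/spool/slurmctld a common StateSaveLocation; adjust to your site):

    # back up the accounting database and the slurmctld state save directory
    mysqldump --single-transaction slurm_acct_db > slurm_acct_db-$(date +%F).sql
    tar -czf statesave-$(date +%F).tar.gz /var/spool/slurmctld   # your StateSaveLocation
    # run the schema conversion in the foreground so nothing times out and kills it
    slurmdbd -D -vvv
    # once it reports it is running normally, Ctrl-C and start it via systemd as usual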

That said, I've also skipped steps 1, 2, 7, and 8 for minor upgrades and it works fine. The slurmd, slurmctld, and slurmdbd can all run on different versions so long as slurmdbd >= slurmctld >= slurmd. So if you want to do a live upgrade, you can. However, out of paranoia we generally stop everything. The entire process takes about an hour start to finish, with the longest part being the pausing of all the jobs.
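A quick way to sanity-check that version ordering on a live system (the daemons and client commands all accept -V / --version):

    slurmdbd -V
    slurmctld -V
    slurmd -V
    sinfo --version   # what the client commands are built against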

-Paul Edmon-

On 9/29/2023 9:48 AM, Groner, Rob wrote:
I did already see the upgrade section of Jason's talk, but it wasn't much about the mechanics of the actual upgrade process; it seemed more big-picture. It dealt a lot with running different parts of Slurm at different versions, which is something we don't have.

One little wrinkle here is that while, yes, we're using a symlink to point to whichever version of Slurm is the current one... it's all on a shared filesystem. So ALL nodes, slurmdbd, and slurmctld are using that same symlink. There is no way to upgrade one component at a time; to upgrade, EVERYTHING has to come down before it can come back up. Jason's slides seemed to indicate that, with separate symlinks, I could focus on just slurmdbd first and upgrade it... then focus on slurmctld and upgrade it, and then finally the nodes (take down their slurmd, update the link, bring slurmd back up). So maybe that's what I'm missing.
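For illustration (hypothetical paths), that per-component arrangement would look something like this instead of a single shared "current" link:

    /shared/slurm/23.02.4/
    /shared/slurm/23.02.5/
    /shared/slurm/current-slurmdbd  -> 23.02.5   # flip this first
    /shared/slurm/current-slurmctld -> 23.02.4   # then this
    /shared/slurm/current-slurmd    -> 23.02.4   # nodes last, as their slurmd is restarted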

Otherwise, I think what I'm saying is that I see references to a "rolling upgrade", but I don't see any guide to a rolling upgrade. I just see the 14 steps in https://slurm.schedmd.com/quickstart_admin.html#upgrade, and I guess I'd always thought of that as the full-octane, high-fat upgrade. I've only ever done upgrades during one of our many scheduled downtimes, because the upgrades were always to a new major version, and because I'm a scared little chicken, so I figured there was maybe some smaller subset of steps for a patchlevel-only upgrade. Smaller change, less risk, fewer precautionary steps...? I'm seeing now that's not the case.

Thank you all for the suggestions!

Rob


________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Ryan Novosielski <novosirj at rutgers.edu>
Sent: Friday, September 29, 2023 2:48 AM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Steps to upgrade slurm for a patchlevel change?



I started off writing that there’s really no particular process for these: just make your changes and start the new software (be mindful of any PATH entries that might point into your software tree, if you have that setup), and that you might need to watch the timeouts. But I figured I’d have a look at the upgrade guide to be sure.

There’s really nothing onerous in there. I’d personally back up my database and state save directories just because I’d rather be safe than sorry, and in case I have to go backwards and want to be sure. You can run slurmctld for a good while with no database (note that -M on the command line will be broken during that time); just be mindful of the RAM on the slurmctld machine and don’t restart it before the DB is back up. Backing up our fairly large database doesn’t take all that long. Whether or not step 5 is required mostly depends on how long you think it will take you to do steps 6-11 (which could really take you seconds if your process is as simple as stop, change symlink, start); step 12 you’re going to do no matter what, step 13 you don’t need if you skipped 5, and step 14 is up to you. So practically, that’s what you’re going to do anyway.
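In the simple symlink case, "stop, change symlink, start" is literally something like this (paths and the single shared "current" link are just an illustration of that setup):

    systemctl stop slurmdbd slurmctld
    ln -sfn /shared/slurm/23.02.5 /shared/slurm/current   # flip the shared symlink
    systemctl start slurmdbd
    systemctl start slurmctld
    # restart slurmd on the compute nodes whenever you're ready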

We just did an upgrade last week, and the only difference is that our compute nodes are stateless, so the compute node upgrades were a reboot (we could have upgraded them running, but we did it during a maintenance period anyway, so why bother?).

If you want to do this with running jobs, I’d definitely back up the state save directory, but as long as you watch the timeouts, it’s pretty uneventful. You won’t have that long database upgrade period, since no database modifications will be required, so it’s pretty much like upgrading anything else.
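The timeouts in question are easy to check ahead of time; values are site-specific:

    # shows SlurmctldTimeout, SlurmdTimeout, MessageTimeout, etc. from the running config
    scontrol show config | grep -i timeout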

--
#BlackLivesMatter
____
|| \\UTGERS,     |---------------------------*O*---------------------------
||_// the State  |         Ryan Novosielski - novosirj at rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\    of NJ  | Office of Advanced Research Computing - MSB A555B, Newark
     `'

On Sep 28, 2023, at 11:58, Groner, Rob <rug262 at psu.edu> wrote:


There are 14 steps to upgrading Slurm listed on the website, including shutting down and backing up the database. So far we've only upgraded Slurm during a downtime, and it's been a major version change, so we've taken all the steps indicated.

We now want to upgrade from 23.02.4 to 23.02.5.

Our Slurm builds end up in version-named directories, and we tell production which one to use via a symlink. Changing the symlink will automatically change it on our Slurm controller node and all slurmd nodes.

Is there an expedited, simple, slimmed-down upgrade path to follow if we're looking at just a patchlevel (".") upgrade?

Rob
