[slurm-users] Slurm versions 23.02.6 and 22.05.10 are now available (CVE-2023-41914)

Taras Shapovalov tshapovalov at nvidia.com
Thu Oct 12 06:37:24 UTC 2023

Are the older versions affected as well?

Best regards,

From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Tim Wickberg <tim at schedmd.com>
Sent: Thursday, October 12, 2023 00:01
To: slurm-announce at schedmd.com <slurm-announce at schedmd.com>; slurm-users at schedmd.com <slurm-users at schedmd.com>
Subject: [slurm-users] Slurm versions 23.02.6 and 22.05.10 are now available (CVE-2023-41914)


Slurm versions 23.02.6 and 22.05.10 are now available to address a
number of filesystem race conditions that could let an attacker take
control of an arbitrary file, or remove entire directories' contents.

SchedMD customers were informed on September 27th and provided a patch
on request; this process is documented in our security policy [1].


A number of race conditions have been identified within the
slurmd/slurmstepd processes that can lead to the user taking ownership
of an arbitrary file on the system. Related issues can lead to the user
overwriting an arbitrary file on the compute node (although with data
that is not directly under their control), or deleting all files and
sub-directories of an arbitrary target directory on the compute node.
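The underlying class of bug here is a time-of-check/time-of-use (TOCTOU)
filesystem race. The following is a minimal, hypothetical Python sketch of
the pattern, not Slurm's actual C code; the function names and the specific
operations are invented for illustration only:

```python
import os
import tempfile

# Hypothetical illustration of a check-then-use (TOCTOU) race of the kind
# described in CVE-2023-41914. Function names are invented; this is NOT
# Slurm's actual code.

def chown_file_unsafe(path, uid, gid):
    """UNSAFE: between the islink() check and the chown(), an attacker
    who controls the parent directory can swap `path` for a symlink to
    an arbitrary file, which chown() will then follow."""
    if not os.path.islink(path):   # check
        os.chown(path, uid, gid)   # use (re-resolves the path; follows symlinks)

def chown_file_safer(path, uid, gid):
    """Safer pattern: open with O_NOFOLLOW, then operate on the file
    descriptor, so the object acted on is exactly the one opened."""
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        os.fchown(fd, uid, gid)
    finally:
        os.close(fd)
```

The fixed releases close such windows in Slurm's own filesystem handling;
the sketch only illustrates why operating on an already-opened file
descriptor, rather than a re-resolved path, avoids the race.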

Thank you to François Diakhate (CEA) for reporting the original issue to
us. A number of related issues were found during an extensive audit of
Slurm's filesystem handling code in reaction to that report, and are
included here in this same disclosure.

SchedMD only issues security fixes for the supported releases (currently
23.02 and 22.05). Due to the complexity of these fixes, we do not
recommend attempting to backport the fixes to older releases, and
strongly encourage sites to upgrade to fixed versions immediately.

Downloads are available at https://www.schedmd.com/downloads.php.

Release notes follow below.

- Tim

[1] https://www.schedmd.com/security.php

Tim Wickberg
Chief Technology Officer, SchedMD LLC
Commercial Slurm Development and Support

> * Changes in Slurm 23.02.6
> ==========================
>  -- Fix CpusPerTres= not updatable with scontrol update.
>  -- Fix unintentional gres removal when validating the gres job state.
>  -- Fix --without-hpe-slingshot configure option.
>  -- Fix cgroup v2 memory calculations when transparent huge pages are used.
>  -- Fix parsing of sgather --timeout option.
>  -- Fix regression from 22.05.0 that caused srun --cpu-bind "=verbose" and "=v"
>     options to give different CPU bind masks.
>  -- Fix "_find_node_record: lookup failure for node" error message appearing
>     for all dynamic nodes during reconfigure.
>  -- Avoid segfault if loading serializer plugin fails.
>  -- slurmrestd - Correct OpenAPI format for 'GET /slurm/v0.0.39/licenses'.
>  -- slurmrestd - Correct OpenAPI format for 'GET /slurm/v0.0.39/job/{job_id}'.
>  -- slurmrestd - Change format of multiple fields in 'GET
>     /slurmdb/v0.0.39/associations' and 'GET /slurmdb/v0.0.39/qos' to handle
>     infinite and unset states.
>  -- When a node fails in a job with --no-kill, preserve the extern step on the
>     remaining nodes to avoid breaking features that rely on the extern step
>     such as pam_slurm_adopt, x11, and job_container/tmpfs.
>  -- auth/jwt - Ignore 'x5c' field in JWKS files.
>  -- auth/jwt - Treat 'alg' field as optional in JWKS files.
>  -- Allow job_desc.selinux_context to be read from the job_submit.lua script.
>  -- Skip check in slurmstepd that causes a large number of errors in the munge
>     log: "Unauthorized credential for client UID=0 GID=0".  This error will
>     still appear on slurmd/slurmctld/slurmdbd start up and is not a cause for
>     concern.
>  -- slurmctld - Allow startup with zero partitions.
>  -- Fix some MIG profile names in Slurm not matching NVIDIA MIG profiles.
>  -- Prevent slurmscriptd processing delays from blocking other threads in
>     slurmctld while trying to launch {Prolog|Epilog}Slurmctld.
>  -- Fix sacct printing ReqMem field when memory doesn't exist in requested TRES.
>  -- Fix how heterogeneous steps in an allocation with CR_PACK_NODE or -mpack are
>     created.
>  -- Fix slurmctld crash from race condition within job_submit_throttle plugin.
>  -- Fix --with-systemdsystemunitdir when requesting a default location.
>  -- Fix not being able to cancel an array task by the jobid (i.e. not
>     <jobid>_<taskid>) through scancel, job launch failure or prolog failure.
>  -- Fix cancelling the whole array job when the array task is the meta job and
>     it fails job or prolog launch and is not requeueable. Cancel only the
>     specific task instead.
>  -- Fix regression in 21.08.2 where MailProg did not run for mail-type=end for
>     jobs with non-zero exit codes.
>  -- Fix incorrect setting of memory.swap.max in cgroup/v2.
>  -- Fix jobacctgather/cgroup collection of disk/io, gpumem, gpuutil TRES values.
>  -- Fix -d singleton for heterogeneous jobs.
>  -- Downgrade info logs about a job meeting a "maximum node limit" in the
>     select plugin to DebugFlags=SelectType. These info logs could spam the
>     slurmctld log file under certain circumstances.
>  -- prep/script - Fix [Srun|Task]<Prolog|Epilog> missing SLURM_JOB_NODELIST.
>  -- gres - Rebuild GRES core bitmap for nodes at startup. This fixes error:
>     "Core bitmaps size mismatch on node [HOSTNAME]", which causes jobs to enter
>     state "Requested node configuration is not available".
>  -- slurmctld - Allow startup with zero nodes.
>  -- Fix filesystem handling race conditions that could lead to an attacker
>     taking control of an arbitrary file, or removing entire directories'
>     contents. CVE-2023-41914.

> * Changes in Slurm 22.05.10
> ===========================
>  -- Fix filesystem handling race conditions that could lead to an attacker
>     taking control of an arbitrary file, or removing entire directories'
>     contents. CVE-2023-41914.
