[slurm-announce] Slurm version 22.05.1 is now available
Tim Wickberg
tim at schedmd.com
Tue Jun 14 21:10:28 UTC 2022
We are pleased to announce the availability of Slurm version 22.05.1.
This includes one significant fix for a regression introduced in 22.05.0
that can lead to over-subscription of licenses. For sites running
22.05.0, enabling the new "bf_licenses" option in SchedulerParameters will
resolve this issue; otherwise, upgrading to this new maintenance release
is strongly encouraged.
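For 22.05.0 sites taking the "bf_licenses" route, a minimal sketch of the
slurm.conf change (append the option to any existing SchedulerParameters
values rather than replacing them; applying it via "scontrol reconfigure"
is an assumption here, not something stated in this announcement):

    # slurm.conf on the controller
    SchedulerParameters=bf_licenses
    # then, typically: scontrol reconfigure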
Slurm can be downloaded from https://www.schedmd.com/downloads.php .
- Tim
--
Tim Wickberg
Chief Technology Officer, SchedMD LLC
Commercial Slurm Development and Support
> * Changes in Slurm 22.05.1
> ==========================
> -- Flush the list of Include config files on SIGHUP.
> -- Fix and update Slurm completion script.
> -- jobacct_gather/cgroup - Add VMem support both for cgroup v1 and v2.
> -- Allow subset of node state transitions when node is in INVAL state.
> -- Remove INVAL state from cloud node after being powered down.
> -- When showing reason UID in scontrol show node, use the authenticated UID
> instead of the login UID.
> -- Fix calculation of reservation's NodeCnt when using dynamic nodes.
> -- Add SBATCH_{ERROR,INPUT,OUTPUT} input environment variables for --error,
>    --input and --output options respectively (a brief usage sketch follows
>    this list).
> -- Prevent oversubscription of licenses by the backfill scheduler when not
> using the new "bf_licenses" option.
> -- Jobs with multiple nodes in a heterogeneous cluster now have access to all
> the memory on each node by using --mem=0. Previously the memory limit was
> set by the node with the least amount of memory.
> -- Don't limit the size of TaskProlog output (previously TaskProlog output was
> limited to 4094 characters per line, which limited the size of exported
> environment variables or logging to the task).
> -- Fix usage of possibly uninitialized buffer in proctrack/cgroup.
> -- Fix memleak in proctrack/cgroup proctrack_p_wait.
> -- Fix cloud/remote het srun jobs.
> -- Fix a segfault that may happen on a GPU configured as no_consume.
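Regarding the new SBATCH_{ERROR,INPUT,OUTPUT} input environment variables
noted above, a minimal usage sketch (the script name and filename patterns
are illustrative only; explicit command-line options still take precedence):

    # set defaults for sbatch via the environment
    export SBATCH_OUTPUT=slurm-%j.out
    export SBATCH_ERROR=slurm-%j.err
    # now equivalent to: sbatch --output=slurm-%j.out --error=slurm-%j.err job.sh
    sbatch job.sh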