We are pleased to announce the availability of the Slurm 24.11 release.
To highlight some new features in 24.11:
- New gpu/nvidia plugin. This does not rely on any NVIDIA libraries, and
will build by default on all systems. It supports basic GPU detection
and management, but cannot currently identify GPU-to-GPU links, or
provide usage data, as these are not exposed by the kernel driver.
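Enabling the new plugin is a gres.conf change; a minimal sketch, assuming the plugin is selected through gres.conf's AutoDetect keyword as with the existing nvml/rsmi/oneapi plugins (check the 24.11 gres.conf(5) man page for the exact accepted value):

```
# gres.conf -- illustrative fragment; assumes "nvidia" is accepted
# as an AutoDetect value to select the new gpu/nvidia plugin.
AutoDetect=nvidia
```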
- Added autodetected GPUs to the output from "slurmd -C".
- Added new QOS-based reports to "sreport".
- Revamped network I/O with the "conmgr" thread-pool model.
- Added new "hostlist function" syntax for management commands and
configuration files.
- switch/hpe_slingshot - Added support for hardware collectives setup
through the fabric manager. (Requires SlurmctldParameters=enable_stepmgr)
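In slurm.conf this pairs the switch plugin with the stepmgr requirement noted above; a minimal sketch (placement of the two lines is illustrative, consult slurm.conf(5)):

```
# slurm.conf -- illustrative fragment
SwitchType=switch/hpe_slingshot
SlurmctldParameters=enable_stepmgr
```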
- Added SchedulerParameters=bf_allow_magnetic_slot configuration option to
allow backfill planning for magnetic reservations.
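The new option is a single SchedulerParameters flag; an illustrative slurm.conf fragment (other scheduler parameters a site already uses would be comma-separated on the same line):

```
# slurm.conf -- illustrative fragment
SchedulerParameters=bf_allow_magnetic_slot
```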
- Added new "scontrol listjobs" and "liststeps" commands to complement
"listpids", and provide --json/--yaml output for all three subcommands.
- Allow jobs to be submitted against multiple QOSes.
- Added new experimental "oracle" backfill scheduling support, which permits
jobs to be delayed if the oracle function determines the reduced
fragmentation of the network topology is sufficiently advantageous.
- Improved responsiveness of the controller when jobs are requeued by
replacing the "db_index" identifier with a slurmctld-generated unique
identifier ("SLUID").
- New options to job_container/tmpfs to permit site-specific scripts to
modify the namespace before user steps are launched, and to ensure all
steps are completely captured within that namespace.
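A job_container.conf sketch of how such a setup might look; the two option names below are hypothetical placeholders for the site-script and step-capture settings, since the announcement does not name them (see the 24.11 job_container.conf(5) man page for the real keywords):

```
# job_container.conf -- illustrative fragment; "InitScript" and
# "EntireStepInNS" are placeholder names for the new options.
AutoBasePath=true
BasePath=/var/nvme/storage
InitScript=/etc/slurm/container_init.sh
EntireStepInNS=true
```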
The Slurm documentation has also been updated to the 24.11 release.
(Older versions can be found in the archive, linked from the main
documentation page.)
Slurm can be downloaded from https://www.schedmd.com/download-slurm/ .
- Tim
--
Tim Wickberg
Chief Technology Officer, SchedMD LLC
Commercial Slurm Development and Support
We are pleased to announce the availability of Slurm release candidate
24.11.0rc1.
This is the first release candidate of the upcoming 24.11 release
series. It marks the end of development for this release and finalizes
the RPC and state file formats.
If any issues are identified with this release candidate, please report
them through https://bugs.schedmd.com against the 24.11.x version and we
will address them before the first production 24.11.0 release is made.
Please note that the release candidates are not intended for production use.
A preview of the updated documentation can be found at
https://slurm.schedmd.com/archive/slurm-master/ .
Slurm can be downloaded from https://www.schedmd.com/downloads.php .
--
Marshall Garey
Release Management, Support, and Development
SchedMD LLC - Commercial Slurm Development and Support