We are pleased to announce the availability of Slurm version 24.05.5.
This release fixes a few potential crashes, several stepmgr bugs,
sstat/sattach compatibility with steps running on newer-version slurmds,
and some other minor bugs.
Downloads are available at https://www.schedmd.com/downloads.php .
--
Marshall Garey
Release Management, Support, and Development
SchedMD LLC - Commercial Slurm Development and Support
> * Changes in Slurm 24.05.5
> ==========================
> -- Fix issue signaling cron jobs resulting in unintended requeues.
> -- Fix slurmctld memory leak in implementation of HealthCheckNodeState=CYCLE.
> -- job_container/tmpfs - Fix SLURM_CONF env variable not being properly set.
> -- sched/backfill - Fix job's time_limit being overwritten by time_min for job
> arrays in some situations.
> -- RoutePart - Fix segfault from incorrect memory allocation when node doesn't
> exist in any partition.
> -- slurmctld - Fix crash when a job is evaluated for a reservation after
> removal of a dynamic node.
> -- gpu/nvml - Attempt loading libnvidia-ml.so.1 as a fallback for failure in
> loading libnvidia-ml.so.
> -- slurmrestd - Fix populating non-required object fields of objects as '{}' in
> JSON/YAML instead of 'null' causing compiled OpenAPI clients to reject
> the response to 'GET /slurm/v0.0.40/jobs' due to validation failure of
> '.jobs[].job_resources'.
> -- Fix sstat/sattach protocol errors for steps on higher version slurmds
> (regressions since 20.11.0rc1 and 16.05.1rc1 respectively).
> -- slurmd - Avoid a crash when starting slurmd version 24.05 with
> SlurmdSpoolDir files that have been upgraded to a newer major version of
> Slurm. Log warnings instead.
> -- Fix race condition in stepmgr step completion handling.
> -- Fix slurmctld segfault with stepmgr and MpiParams when running a job array.
> -- Fix requeued jobs keeping their priority until the decay thread runs.
> -- slurmctld - Fix crash and possible split brain issue if the
> backup controller handles an scontrol reconfigure while in control
> before the primary resumes operation.
> -- Fix stepmgr not getting dynamic node addrs from the controller.
> -- stepmgr - Avoid "Unexpected missing socket" errors.
> -- Fix `scontrol show steps` with dynamic stepmgr.
> -- Support IPv6 in configless mode.
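As a brief illustration of that last item: in configless mode slurmd
fetches its configuration from the slurmctld at startup, and this fix
lets that exchange happen over IPv6 as well. A minimal sketch, assuming
the controller hostname resolves to an IPv6 address (the hostname and
port below are illustrative):

    # Fetch config from the controller instead of reading local files.
    slurmd --conf-server ctld.example.com:6817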
We are pleased to announce the availability of the Slurm 24.11 release.
To highlight some new features in 24.11:
- New gpu/nvidia plugin. This does not rely on any NVIDIA libraries, and
will build by default on all systems. It supports basic GPU detection
and management, but cannot currently identify GPU-to-GPU links, or
provide usage data as these are not exposed by the kernel driver.
- Add autodetected GPUs to the output from "slurmd -C".
- Added new QOS-based reports to "sreport".
- Revamped network I/O with the "conmgr" thread-pool model.
- Added new "hostlist function" syntax for management commands and
configuration files.
- switch/hpe_slingshot - Added support for hardware collectives setup
through the fabric manager. (Requires SlurmctldParameters=enable_stepmgr)
- Added SchedulerParameters=bf_allow_magnetic_slot configuration option to
allow backfill planning for magnetic reservations.
- Added new "scontrol listjobs" and "liststeps" commands to complement
"listpids", and provide --json/--yaml output for all three subcommands.
- Allow jobs to be submitted against multiple QOSes.
- Added new experimental "oracle" backfill scheduling support, which permits
jobs to be delayed if the oracle function determines the reduced
fragmentation of the network topology is sufficiently advantageous.
- Improved responsiveness of the controller when jobs are requeued by
replacing the "db_index" identifier with a slurmctld-generated unique
identifier. ("SLUID")
- New options to job_container/tmpfs to permit site-specific scripts to
modify the namespace before user steps are launched, and to ensure all
steps are completely captured within that namespace.
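A short usage sketch of the new listing subcommands (all three also
accept --json or --yaml for machine-readable output):

    # Enumerate jobs, steps, and step PIDs.
    scontrol listjobs
    scontrol liststeps
    scontrol listpids --json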
The Slurm documentation has also been updated to the 24.11 release.
(Older versions can be found in the archive, linked from the main
documentation page.)
Slurm can be downloaded from https://www.schedmd.com/download-slurm/ .
- Tim
--
Tim Wickberg
Chief Technology Officer, SchedMD LLC
Commercial Slurm Development and Support
We are pleased to announce the availability of Slurm release candidate
24.11.0rc1.
To highlight some new features coming in 24.11:
- New gpu/nvidia plugin. This does not rely on any NVIDIA libraries, and
will build by default on all systems. It supports basic GPU detection
and management, but cannot currently identify GPU-to-GPU links, or
provide usage data as these are not exposed by the kernel driver.
- Add autodetected GPUs to the output from "slurmd -C".
- Added new QOS-based reports to "sreport".
- Revamped network I/O with the "conmgr" thread-pool model.
- Added new "hostlist function" syntax for management commands and
configuration files.
- switch/hpe_slingshot - Added support for hardware collectives setup
through the fabric manager. (Requires SlurmctldParameters=enable_stepmgr)
- Added SchedulerParameters=bf_allow_magnetic_slot configuration option
to allow backfill planning for magnetic reservations (a slurm.conf
sketch follows this list).
- Added new "scontrol listjobs" and "liststeps" commands to complement
"listpids", and provide --json/--yaml output for all three subcommands.
- Allow jobs to be submitted against multiple QOSes.
- Added new experimental "oracle" backfill scheduling support, which
permits jobs to be delayed if the oracle function determines the reduced
fragmentation of the network topology is sufficiently advantageous.
- Improved responsiveness of the controller when jobs are requeued by
replacing the "db_index" identifier with a slurmctld-generated unique
identifier. ("SLUID")
- New options to job_container/tmpfs to permit site-specific scripts to
modify the namespace before user steps are launched, and to ensure all
steps are completely captured within that namespace.
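For the two configuration parameters named in the list above, a minimal
slurm.conf sketch might look as follows (shown only to illustrate
placement alongside existing settings):

    # Enable stepmgr globally; also required for the Slingshot hardware
    # collectives support mentioned above.
    SlurmctldParameters=enable_stepmgr
    # Let backfill plan jobs into magnetic reservations.
    SchedulerParameters=bf_allow_magnetic_slot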
This is the first release candidate of the upcoming 24.11 release
series. It represents the end of development for this release and the
finalization of the RPC and state file formats.
If any issues are identified with this release candidate, please report
them through https://bugs.schedmd.com against the 24.11.x version and we
will address them before the first production 24.11.0 release is made.
Please note that the release candidates are not intended for production use.
A preview of the updated documentation can be found at
https://slurm.schedmd.com/archive/slurm-master/ .
Slurm can be downloaded from https://www.schedmd.com/downloads.php .
--
Marshall Garey
Release Management, Support, and Development
SchedMD LLC - Commercial Slurm Development and Support
Slurm version 24.05.4 is now available and includes a fix for a recently
discovered security issue with the new stepmgr subsystem.
SchedMD customers were informed on October 9th and provided a patch on
request; this process is documented in our security policy. [1]
A mistake in authentication handling in stepmgr could permit an attacker
to execute processes under other users' jobs. This is limited to jobs
explicitly running with --stepmgr, or on systems that have globally
enabled stepmgr through "SlurmctldParameters=enable_stepmgr" in their
configuration. CVE-2024-48936.
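To gauge exposure, check whether stepmgr is enabled globally in the
running configuration; jobs submitted with --stepmgr are affected
regardless. A quick check (the grep pattern is only illustrative):

    # CVE-2024-48936 applies if enable_stepmgr is set here, or to jobs
    # explicitly run with --stepmgr.
    scontrol show config | grep -i SlurmctldParameters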
Downloads are available at https://www.schedmd.com/downloads.php .
Release notes follow below.
- Tim
[1] https://www.schedmd.com/security-policy/
--
Tim Wickberg
Chief Technology Officer, SchedMD LLC
Commercial Slurm Development and Support
> * Changes in Slurm 24.05.4
> ==========================
> -- Fix generic int sort functions.
> -- Fix user lookup using a possibly unrealized uid in the dbd.
> -- Fix FreeBSD compile issue with tls/none plugin.
> -- slurmrestd - Fix regressions that allowed slurmrestd to be run as SlurmUser
> when SlurmUser was not root.
> -- mpi/pmix - Fix race conditions with het jobs at step start/end which could
> make srun hang.
> -- Fix not showing some SelectTypeParameters in scontrol show config.
> -- Avoid assert when dumping certain removed fields in JSON/YAML.
> -- Improve how shards are scheduled with affinity in mind.
> -- Fix MaxJobsAccruePU not being respected when MaxJobsAccruePA is set
> in the same QOS.
> -- Prevent backfill from planning jobs that use overlapping resources for the
> same time slot if the job's time limit is less than bf_resolution.
> -- Fix memory leak when requesting typed gres and --[cpus|mem]-per-gpu.
> -- Prevent backfill from breaking out due to "system state changed" every 30
> seconds if reservations use REPLACE or REPLACE_DOWN flags.
> -- slurmrestd - Make sure that scheduler_unset parameter defaults to true even
> when the following flags are also set: show_duplicates, skip_steps,
> disable_truncate_usage_time, run_away_jobs, whole_hetjob,
> disable_whole_hetjob, disable_wait_for_result, usage_time_as_submit_time,
> show_batch_script, and/or show_job_environment. Additionally, always make
> sure show_duplicates and disable_truncate_usage_time default to true when
> the following flags are also set: scheduler_unset, scheduled_on_submit,
> scheduled_by_main, scheduled_by_backfill, and/or job_started. This affects
> the following endpoints:
> 'GET /slurmdb/v0.0.40/jobs'
> 'GET /slurmdb/v0.0.41/jobs'
> -- Ignore --json and --yaml options for scontrol show config to prevent mixing
> output types.
> -- Fix not considering nodes in reservations with Maintenance or Overlap flags
> when creating new reservations with nodecnt or when they replace down nodes.
> -- Fix suspending/resuming steps running under a 23.02 slurmstepd process.
> -- Fix options like sprio --me and squeue --me for users with a uid greater
> than 2147483647.
> -- fatal() if BlockSizes=0. This value is invalid and would otherwise cause the
> slurmctld to crash.
> -- sacctmgr - Fix issue where clearing out a preemption list using
> preempt='' would cause the given qos to no longer be preempt-able until set
> again.
> -- Fix stepmgr creating job steps concurrently.
> -- data_parser/v0.0.40 - Avoid dumping "Infinity" for NO_VAL tagged "number"
> fields.
> -- data_parser/v0.0.41 - Avoid dumping "Infinity" for NO_VAL tagged "number"
> fields.
> -- slurmctld - Fix a potential leak while updating a reservation.
> -- slurmctld - Fix state save with reservation flags when an update fails.
> -- Fix reservation update issues with parameters Accounts and Users, when
> using +/- signs.
> -- slurmrestd - Don't dump warning on empty wckeys in:
> 'GET /slurmdb/v0.0.40/config'
> 'GET /slurmdb/v0.0.41/config'
> -- Fix slurmd possibly leaving zombie processes on start up in configless when
> the initial attempt to fetch the config fails.
> -- Fix crash when trying to drain a non-existing node (possibly deleted
> before).
> -- slurmctld - Fix segfault when calculating limit decay for jobs with an
> invalid association.
> -- Fix IPMI energy gathering with multiple sensors.
> -- data_parser/v0.0.39 - Remove xassert requiring errors and warnings to have a
> source string.
> -- slurmrestd - Prevent potential segfault when there is an error parsing an
> array field which could lead to a double xfree. This applies to several
> endpoints in data_parser v0.0.39, v0.0.40 and v0.0.41.
> -- scancel - Fix a regression from 23.11.6 where using both the --ctld and
> --sibling options would cancel the federated job on all clusters instead of
> only the cluster(s) specified by --sibling.
> -- accounting_storage/mysql - Fix bug when removing an association
> specified with an empty partition.
> -- Fix setting multiple partition state restore on a job correctly.
> -- Fix difference in behavior when swapping partition order in job submission.
> -- Fix security issue in stepmgr that could permit an attacker to execute
> processes under other users' jobs. CVE-2024-48936.
Available presentations from this year's SLUG event are now online.
They can be found at https://www.schedmd.com/publications/
We thank all those who presented and attended for a great event!
--
Victoria Hobson
SchedMD LLC
Vice President of Marketing
We are pleased to announce the availability of Slurm versions 24.05.3
and 23.11.10.
Version 24.05.3 fixes a potential database problem when deleting a qos.
This bug only existed in 24.05.
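The affected operation is an ordinary QOS removal through sacctmgr; for
example (the QOS name here is hypothetical):

    # On 24.05.0 through 24.05.2 this could mangle the qos/delta_qos
    # fields of the association table; 24.05.3 fixes that.
    sacctmgr delete qos highmem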
Both versions have fixes for jobs potentially being stuck when using
cloud nodes when some nodes are powered down, a regression in 23.11.9
and 24.05.2 that caused sattach to crash, and some other minor issues.
Slurm can be downloaded from https://www.schedmd.com/downloads.php .
--
Marshall Garey
Release Management, Support, and Development
SchedMD LLC - Commercial Slurm Development and Support
> * Changes in Slurm 24.05.3
> ==========================
> -- data_parser/v0.0.40 - Added field descriptions.
> -- slurmrestd - Avoid creating new slurmdbd connection per request to
> '* /slurm/slurmctld/*/*' endpoints.
> -- Fix compilation issue with switch/hpe_slingshot plugin.
> -- Fix gres per task allocation with threads-per-core.
> -- data_parser/v0.0.41 - Added field descriptions.
> -- slurmrestd - Change back generated OpenAPI schema for
> `DELETE /slurm/v0.0.40/jobs/` to RequestBody instead of using parameters
> for request. slurmrestd will continue to accept endpoint requests via
> RequestBody or HTTP query.
> -- topology/tree - Fix issues with switch distance optimization.
> -- Fix potential segfault of secondary slurmctld when falling back to the
> primary when running with a JobComp plugin.
> -- Enable --json/--yaml=v0.0.39 options on client commands to dump data using
> data_parser/v0.0.39 instead of outputting nothing.
> -- switch/hpe_slingshot - Fix issue that could result in a 0 length state file.
> -- Fix unnecessary message protocol downgrade for unregistered nodes.
> -- Fix unnecessarily packing alias addrs when terminating jobs with a mix of
> non-cloud/dynamic nodes and powered down cloud/dynamic nodes.
> -- accounting_storage/mysql - Fix issue when deleting a qos that could remove
> too many commas from the qos and/or delta_qos fields of the assoc table.
> -- slurmctld - Fix memory leak when using RestrictedCoresPerGPU.
> -- Fix allowing access to reservations without MaxStartDelay set.
> -- Fix regression introduced in 24.05.0rc1 breaking srun --send-libs parsing.
> -- Fix slurmd vsize memory leak when using job submission/allocation commands
> that implicitly or explicitly use --get-user-env.
> -- slurmd - Fix node going into invalid state when using CPUSpecList and
> setting CPUs to the number of cores on a multithreaded node.
> -- Fix reboot asap nodes being considered in backfill after a restart.
> -- Fix --clusters/-M queries for clusters outside of a federation when
> fed_display is configured.
> -- Fix scontrol allowing updating job with bad cpus-per-task value.
> -- sattach - Fix regression from 24.05.2 security fix leading to crash.
> -- mpi/pmix - Fix assertion when built under --enable-debug.
> * Changes in Slurm 23.11.10
> ===========================
> -- switch/hpe_slingshot - Fix issue that could result in a 0 length state file.
> -- Fix unnecessary message protocol downgrade for unregistered nodes.
> -- Fix unnecessarily packing alias addrs when terminating jobs with a mix of
> non-cloud/dynamic nodes and powered down cloud/dynamic nodes.
> -- Fix allowing access to reservations without MaxStartDelay set.
> -- Fix scontrol allowing updating job with bad cpus-per-task value.
> -- sattach - Fix regression from 23.11.9 security fix leading to crash.
Slurm versions 24.05.2, 23.11.9, and 23.02.8 are now available and
include a fix for a recently discovered security issue with the switch
plugins.
SchedMD customers were informed on July 17th and provided a patch on
request; this process is documented in our security policy. [1]
For the switch/hpe_slingshot and switch/nvidia_imex plugins, a user
could override the isolation between Slingshot VNIs or IMEX channels.
If you do not have one of these switch plugins configured, then you are
not impacted by this issue.
It is unclear what, if any, information could be accessed with access to
an unauthorized channel. This disclosure is being made out of an
abundance of caution.
If you do have one of these plugins enabled, the slurmctld must be
restarted before the slurmd daemons to avoid disruption.
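On systemd-based installations that restart ordering might look like the
following (the node name is illustrative):

    # Restart the controller first...
    systemctl restart slurmctld
    # ...then the slurmd daemons on each compute node, e.g.:
    ssh node01 systemctl restart slurmd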
Downloads are available at https://www.schedmd.com/downloads.php .
Release notes follow below.
- Tim
[1] https://www.schedmd.com/security-policy/
--
Tim Wickberg
Chief Technology Officer, SchedMD LLC
Commercial Slurm Development and Support
> * Changes in Slurm 24.05.2
> ==========================
> -- Fix energy gathering rpc counter underflow in _rpc_acct_gather_energy when
> more than 10 threads try to get energy at the same time. This prevented
> any step from getting energy from slurmd until slurmd was restarted,
> losing energy accounting metrics on the node.
> -- accounting_storage/mysql - Fix issue where new user with wckey did not
> have a default wckey sent to the slurmctld.
> -- slurmrestd - Prevent slurmrestd segfault when handling the following
> endpoints when none of the optional parameters are specified:
> 'DELETE /slurm/v0.0.40/jobs'
> 'DELETE /slurm/v0.0.41/jobs'
> 'GET /slurm/v0.0.40/shares'
> 'GET /slurm/v0.0.41/shares'
> 'GET /slurmdb/v0.0.40/instance'
> 'GET /slurmdb/v0.0.41/instance'
> 'GET /slurmdb/v0.0.40/instances'
> 'GET /slurmdb/v0.0.41/instances'
> 'POST /slurm/v0.0.40/job/{job_id}'
> 'POST /slurm/v0.0.41/job/{job_id}'
> -- Fix IPMI energy gathering when no IPMIPowerSensors are specified in
> acct_gather.conf. This situation resulted in an accounted energy of 0
> for job steps.
> -- Fix a minor memory leak in slurmctld when updating a job dependency.
> -- scontrol,squeue - Fix regression that caused incorrect values for
> multisocket nodes at '.jobs[].job_resources.nodes.allocation' for
> 'scontrol show jobs --(json|yaml)' and 'squeue --(json|yaml)'.
> -- slurmrestd - Fix regression that caused incorrect values for
> multisocket nodes at '.jobs[].job_resources.nodes.allocation' to be dumped
> with endpoints:
> 'GET /slurm/v0.0.41/job/{job_id}'
> 'GET /slurm/v0.0.41/jobs'
> -- jobcomp/filetxt - Fix truncation of job record lines > 1024 characters.
> -- Fix regression that prevented compilation on FreeBSD hosts.
> -- switch/hpe_slingshot - Drain node on failure to delete CXI services.
> -- Fix a performance regression from 23.11.0 in cpu frequency handling when no
> CpuFreqDef is defined.
> -- Fix one-task-per-sharing not working across multiple nodes.
> -- Fix inconsistent number of cpus when creating a reservation using the
> TRESPerNode option.
> -- data_parser/v0.0.40+ - Fix job state parsing which could break filtering.
> -- Prevent cpus-per-task from being modified in jobs where a -c value has been
> explicitly specified and the requested memory constraints implicitly
> increase the number of CPUs to allocate.
> -- slurmrestd - Fix regression where args '-s v0.0.39,dbv0.0.39' and
> '-d v0.0.39' would result in 'GET /openapi/v3' not registering as a valid
> possible query resulting in 404 errors.
> -- slurmrestd - Fix memory leak for dbv0.0.39 jobs query which occurred if the
> query parameters specified account, association, cluster, constraints,
> format, groups, job_name, partition, qos, reason, reservation, state, users,
> or wckey. This affects the following endpoints:
> 'GET /slurmdb/v0.0.39/jobs'
> -- slurmrestd - In the case the slurmdbd does not respond to a persistent
> connection init message, prevent the closed fd from being used, and instead
> emit an error or warning depending on if the connection was required.
> -- Fix 24.05.0 regression that caused the slurmdbd not to send back an error
> message if there is an error initializing a persistent connection.
> -- Reduce latency of forwarded x11 packets.
> -- Add "curr_dependency" (representing the current dependency of the job)
> and "orig_dependency" (representing the original requested dependency of
> the job) fields to the job record in job_submit.lua (for job update) and
> jobcomp.lua.
> -- Fix potential segfault of slurmctld configured with
> SlurmctldParameters=enable_rpc_queue from happening on reconfigure.
> -- Fix potential segfault of slurmctld on its shutdown when rate limiting
> is enabled.
> -- slurmrestd - Fix missing job environment for SLURM_JOB_NAME,
> SLURM_OPEN_MODE, SLURM_JOB_DEPENDENCY, SLURM_PROFILE, SLURM_ACCTG_FREQ,
> SLURM_NETWORK and SLURM_CPU_FREQ_REQ to match sbatch.
> -- Add missing bash-completions dependency to slurm-smd-client debian package.
> -- Fix bash-completions installation in debian packages.
> -- Fix GRES environment variable indices being incorrect when only using a
> subset of all GPUs on a node and using the --gres-flags=allow-task-sharing
> option.
> -- Add missing mariadb/mysql client package dependency to debian package.
> -- Fail the debian package build early if mysql cannot be found.
> -- Prevent scontrol from segfaulting when requesting scontrol show reservation
> --json or --yaml if there is an error retrieving reservations from the
> slurmctld.
> -- switch/hpe_slingshot - Fix security issue around managing VNI access.
> -- switch/nvidia_imex - Fix security issue managing IMEX channel access.
> -- switch/nvidia_imex - Allow for compatibility with job_container/tmpfs.
> * Changes in Slurm 23.11.9
> ==========================
> -- Fix many commands possibly reporting an "Unexpected Message Received" when
> in reality the connection timed out.
> -- Fix heterogeneous job components not being signaled with scancel --ctld and
> 'DELETE slurm/v0.0.40/jobs' if the job ids are not explicitly given,
> the heterogeneous job components match the given filters, and the
> heterogeneous job leader does not match the given filters.
> -- Fix regression from 23.02 impeding job licenses from being cleared.
> -- Demote to a log_flag the _get_joules_task error that was logged to the
> user when too many rpcs were queued in slurmd for gathering energy.
> -- slurmrestd - Prevent a slurmrestd segfault when modifying an association
> without specifying max TRES limits in the request if those TRES
> limits are currently defined in the association. This affects the following
> fields of endpoint 'POST /slurmdb/v0.0.38/associations/':
> 'associations/max/tres/per/job'
> 'associations/max/tres/per/node'
> 'associations/max/tres/total'
> 'associations/max/tres/minutes/per/job'
> 'associations/max/tres/minutes/total'
> -- Fix power_save operation after recovering from a failed reconfigure.
> -- scrun - Delay shutdown until after start is requested. Previously scrun
> could hang forever, never starting or shutting down, when using --tty.
> -- Fix backup slurmctld potentially not running the agent when taking over as
> the primary controller.
> -- Fix primary controller not running the agent when a reconfigure of the
> slurmctld fails.
> -- jobcomp/{elasticsearch,kafka} - Avoid sending fields with invalid date/time.
> -- Fix energy gathering rpc counter underflow in _rpc_acct_gather_energy when
> more than 10 threads try to get energy at the same time. This prevented
> any step from getting energy from slurmd until slurmd was restarted,
> losing energy accounting metrics on the node.
> -- slurmrestd - Fix memory leak for dbv0.0.39 jobs query which occurred if the
> query parameters specified account, association, cluster, constraints,
> format, groups, job_name, partition, qos, reason, reservation, state, users,
> or wckey. This affects the following endpoints:
> 'GET /slurmdb/v0.0.39/jobs'
> -- switch/hpe_slingshot - Fix security issue around managing VNI access.
> * Changes in Slurm 23.02.8
> ==========================
> -- Fix rare deadlock when a dynamic node registers at the same time that a
> once per minute background task occurs.
> -- Fix assertion in developer mode on a failed message unpack.
> -- switch/hpe_slingshot - Fix security issue around managing VNI access.
Slurm User Group (SLUG) 2024 is set for September 12-13 at the
University of Oslo in Oslo, Norway.
Registration information, abstracts, and travel recommendations can be
found here: https://slug24.splashthat.com/
The last day to register with standard pricing ($900) is this Friday,
August 2nd. After this, final registration will run until August 30th
at a price of $1100.
SLUG is the best way to engage with the Slurm community and to interact
with the SchedMD Support & Training staff.
Don't forget to register. We can't wait to see you in Oslo!
--
Victoria Hobson
SchedMD LLC
Vice President of Marketing
We are pleased to announce the availability of Slurm version 24.05.1.
This release addresses a number of minor-to-moderate issues since the
24.05 release was first announced a month ago.
Slurm can be downloaded from https://www.schedmd.com/downloads.php .
- Tim
> * Changes in Slurm 24.05.1
> ==========================
> -- Fix slurmctld and slurmdbd potentially stopping instead of performing a
> logrotate when receiving SIGUSR2 when using auth/slurm.
> -- switch/hpe_slingshot - Fix slurmctld crash when upgrading from 23.02.
> -- Fix "Could not find group" errors from validate_group() when using
> AllowGroups with large /etc/group files.
> -- Prevent an assertion in debugging builds when triggering log rotation
> in a backup slurmctld.
> -- Add AccountingStoreFlags=no_stdio, which when set skips recording the
> stdio paths of the job.
> -- slurmrestd - Prevent a slurmrestd segfault when parsing the crontab field,
> which was never usable. Now it explicitly ignores the value and emits a
> warning if it is used for the following endpoints:
> 'POST /slurm/v0.0.39/job/{job_id}'
> 'POST /slurm/v0.0.39/job/submit'
> 'POST /slurm/v0.0.40/job/{job_id}'
> 'POST /slurm/v0.0.40/job/submit'
> 'POST /slurm/v0.0.41/job/{job_id}'
> 'POST /slurm/v0.0.41/job/submit'
> 'POST /slurm/v0.0.41/job/allocate'
> -- mpi/pmi2 - Fix communication issue leading to task launch failure with
> "invalid kvs seq from node".
> -- Fix getting user environment when using sbatch with "--get-user-env" or
> "--export=" when there is a user profile script that reads /proc.
> -- Prevent slurmd from crashing if acct_gather_energy/gpu is configured but
> GresTypes is not configured.
> -- Do not log the following errors when AcctGatherEnergyType plugins are used
> but a node does not have or cannot find sensors:
> "error: _get_joules_task: can't get info from slurmd"
> "error: slurm_get_node_energy: Zero Bytes were transmitted or received"
> However, the following error will continue to be logged:
> "error: Can't get energy data. No power sensors are available. Try later"
> -- sbatch, srun - Set SLURM_NETWORK environment variable if --network is set.
> -- Fix cloud nodes not being able to forward to nodes that restarted with new
> IP addresses.
> -- Fix cwd not being set correctly when running a SPANK plugin with a
> spank_user_init() hook and the new "contain_spank" option set.
> -- slurmctld - Avoid deadlock during shutdown when auth/slurm is active.
> -- Fix segfault in slurmctld with topology/block.
> -- sacct - Fix printing of job group for job steps.
> -- scrun - Log when an invalid environment variable causes the job submission
> to be rejected.
> -- accounting_storage/mysql - Fix problem where listing or modifying an
> association when specifying a qos list could hang or take a very long time.
> -- gpu/nvml - Fix gpuutil/gpumem only tracking last GPU in step. Now,
> gpuutil/gpumem will record sums of all GPUs in the step.
> -- Fix error in scrontab jobs when using slurm.conf:PropagatePrioProcess=1.
> -- Fix slurmctld crash on a batch job submission with "--nodes 0,...".
> -- Fix dynamic IP address fanout forwarding when using auth/slurm.
> -- Restrict listening sockets in the mpi/pmix plugin and sattach to the
> SrunPortRange.
> -- slurmrestd - Limit mime types returned from query to 'GET /openapi/v3' to
> only return one mime type per serializer plugin to fix issues with OpenAPI
> client generators that are unable to handle multiple mime type aliases.
> -- Fix many commands possibly reporting an "Unexpected Message Received" when
> in reality the connection timed out.
> -- Prevent slurmctld from starting if there is not a json serializer present
> and the extra_constraints feature is enabled.
> -- Fix heterogeneous job components not being signaled with scancel --ctld and
> 'DELETE slurm/v0.0.40/jobs' if the job ids are not explicitly given,
> the heterogeneous job components match the given filters, and the
> heterogeneous job leader does not match the given filters.
> -- Fix regression from 23.02 impeding job licenses from being cleared.
> -- Demote to a log_flag the _get_joules_task error that was logged to the
> user when too many rpcs were queued in slurmd for gathering energy.
> -- For scancel --ctld and the associated rest api endpoints:
> 'DELETE /slurm/v0.0.40/jobs'
> 'DELETE /slurm/v0.0.41/jobs'
> Fix canceling the final array task in a job array when the task is pending
> and all array tasks have been split into separate job records. Previously
> this task was not canceled.
> -- Fix power_save operation after recovering from a failed reconfigure.
> -- slurmctld - Skip removing the pidfile when running under systemd. In that
> situation it is never created in the first place.
> -- Fix issue where altering the flags on a Slurm account (UsersAreCoords)
> caused several limits on the account's association to be set to 0 in
> Slurm's internal cache.
> -- Fix memory leak in the controller when relaying stepmgr step accounting to
> the dbd.
> -- Fix segfault when submitting stepmgr jobs within an existing allocation.
> -- Added "disable_slurm_hydra_bootstrap" as a possible MpiParams parameter in
> slurm.conf. Using this will disable env variable injection to allocations
> for the following variables: I_MPI_HYDRA_BOOTSTRAP,
> I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS, HYDRA_BOOTSTRAP,
> HYDRA_LAUNCHER_EXTRA_ARGS.
> -- scrun - Delay shutdown until after start is requested. Previously scrun
> could hang forever, never starting or shutting down, when using --tty.
> -- Fix backup slurmctld potentially not running the agent when taking over as
> the primary controller.
> -- Fix primary controller not running the agent when a reconfigure of the
> slurmctld fails.
> -- slurmd - Fix premature timeout waiting for REQUEST_LAUNCH_PROLOG with large
> array jobs causing node to drain.
> -- jobcomp/{elasticsearch,kafka} - Avoid sending fields with invalid date/time.
> -- jobcomp/elasticsearch - Fix slurmctld memory leak from curl usage.
> -- acct_gather_profile/influxdb - Fix slurmstepd memory leak from curl usage.
> -- Fix 24.05.0 regression not deleting job hash dirs after MinJobAge.
> -- Fix filtering arguments being ignored when using squeue --json.
> -- switch/nvidia_imex - Move setup call after spank_init() to allow namespace
> manipulation within the SPANK plugin.
> -- switch/nvidia_imex - Skip plugin operation if nvidia-caps-imex-channels
> device is not present rather than preventing slurmd from starting.
> -- switch/nvidia_imex - Skip plugin operation if job_container/tmpfs
> is configured due to incompatibility.
> -- switch/nvidia_imex - Remove any pre-existing channels when slurmd starts.
> -- rpc_queue - Add support for an optional rpc_queue.yaml configuration file.
We are pleased to announce the availability of Slurm version 23.11.8.
The 23.11.8 release fixes some potential crashes in slurmctld,
slurmrestd, and slurmd when using less common features; two issues in
auth/slurm; and a few other minor bugs.
Slurm can be downloaded from https://www.schedmd.com/downloads.php .
- Marshall
> -- Fix slurmctld crash when reconfiguring with a PrologSlurmctld is running.
> -- Fix slurmctld crash after a job has been resized.
> -- Fix slurmctld and slurmdbd potentially stopping instead of performing a
> logrotate when receiving SIGUSR2 when using auth/slurm.
> -- Fix not having a disabled value for keepalive CommunicationParameters in
> slurm.conf when these parameters are not set. This could log an error when
> setting up a socket, for example during slurmdbd registration with the ctld.
> -- switch/hpe_slingshot - Fix slurmctld crash when upgrading from 23.02.
> -- Fix "Could not find group" errors from validate_group() when using
> AllowGroups with large /etc/group files.
> -- slurmrestd - Prevent a slurmrestd segfault when parsing the crontab field,
> which was never usable. Now it explicitly ignores the value and emits a
> warning if it is used for the following endpoints:
> 'POST /slurm/v0.0.39/job/{job_id}'
> 'POST /slurm/v0.0.39/job/submit'
> 'POST /slurm/v0.0.40/job/{job_id}'
> 'POST /slurm/v0.0.40/job/submit'
> -- Fix getting user environment when using sbatch with "--get-user-env" or
> "--export=" when there is a user profile script that reads /proc.
> -- Prevent slurmd from crashing if acct_gather_energy/gpu is configured but
> GresTypes is not configured.
> -- Do not log the following errors when AcctGatherEnergyType plugins are used
> but a node does not have or cannot find sensors:
> "error: _get_joules_task: can't get info from slurmd"
> "error: slurm_get_node_energy: Zero Bytes were transmitted or received"
> However, the following error will continue to be logged:
> "error: Can't get energy data. No power sensors are available. Try later"
> -- Fix cloud nodes not being able to forward to nodes that restarted with new
> IP addresses.
> -- sacct - Fix printing of job group for job steps.
> -- Fix error in scrontab jobs when using slurm.conf:PropagatePrioProcess=1.
> -- Fix slurmctld crash on a batch job submission with "--nodes 0,...".
> -- Fix dynamic IP address fanout forwarding when using auth/slurm.