We use Bright Cluster Manager with Slurm 23.02 on RHEL 9. I'm aware of pam_slurm_adopt (https://slurm.schedmd.com/pam_slurm_adopt.html), but it does not appear to ship with Bright's 'cm' Slurm package.

Currently ssh to a node gets:
Login not allowed: no running jobs and no WLM allocations

We have 8 GPUs per node, so when we drain a node (which can be running a job of up to 5 days), no new jobs can start on it. And since the nodes have 20+ TB (yes, TB) local drives, researchers have work and files on them that they need to retrieve.

Is there a way to use /etc/security/access.conf to work around this, at least temporarily until the reboot, after which we can revert?
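For context, something like the following is what I had in mind — just a sketch, with placeholder usernames, assuming pam_access can be added to the sshd account stack (whether it takes effect may depend on where Bright's own PAM module, which emits the "Login not allowed" message, sits in the stack):

```
# /etc/pam.d/sshd -- add pam_access to the account stack
# (exact placement relative to Bright's module may matter):
account  required  pam_access.so

# /etc/security/access.conf -- allow root and the affected researchers,
# deny everyone else ("alice" and "bob" are placeholders):
+ : root : ALL
+ : alice bob : ALL
- : ALL : ALL
```

Reverting after the reboot would then just be removing the access.conf entries (or the pam_access line).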

Thanks!

Rob