Hi,

We have blocked ssh to nodes where users do not have a running job. However, when a user does have a running job on a node, they can ssh to that node and use more than the Slurm resources allocated there. Am I missing something? I usually request an interactive session with 2 GB of memory. I then open another terminal and ssh to the node where the interactive session is running. If I allocate a 4 GB R object on that node, it does not error out. Additionally, if I end my interactive session, the ssh connection stays alive, which I believe should not happen.

Best,
*Fritz Ratnasamy*
Data Scientist, Information Technology
On 4/8/26 8:44 pm, Ratnasamy, Fritz via slurm-users wrote:
We have blocked ssh to nodes where users do not have a running job. However, when there is a running job, a user can ssh to that node and use more than the slurm resources allocated on that node. Am I missing something?
Are you using pam_slurm_adopt? Is your Slurm configured to use cgroups? [...]
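For context, the cgroup enforcement Chris is asking about usually looks something like the following. This is an illustrative sketch only (the parameter names are real Slurm options, but the exact values and placement are assumptions to verify against your own slurm.conf and cgroup.conf):

```
# slurm.conf (excerpt)
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup
PrologFlags=Contain     # creates the "extern" step that pam_slurm_adopt adopts ssh sessions into

# cgroup.conf
ConstrainCores=yes
ConstrainRAMSpace=yes
```

Without `ConstrainRAMSpace=yes` (and `PrologFlags=Contain` so adopted ssh processes land in the job's cgroup), an adopted session can still exceed the job's memory request, which would match the 4 GB R object symptom described above.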
Additionally, if I end my interactive session, the ssh connection still lives which I believe should not happen.
We don't see this problem with pam_slurm_adopt; the ssh session is killed when the job ends.

All the best,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Philadelphia, PA, USA
Hi Fritz, On 4/9/26 05:44, Ratnasamy, Fritz via slurm-users wrote:
We have blocked ssh to nodes where users do not have a running job. However, when there is a running job, a user can ssh to that node and use more than the slurm resources allocated on that node. Am I missing something?
Did you disable pam_systemd in your PAM configuration? It is incompatible with pam_slurm_adopt. Best, Martin -- Dr. habil. Martin Lambers Forschung und Wissenschaftliche Informationsversorgung IT.SERVICES Ruhr-Universität Bochum | 44780 Bochum | Germany https://www.it-services.rub.de/
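As an illustration of Martin's point (an assumed /etc/pam.d/sshd fragment, not a complete or distro-accurate stack), the idea is that pam_slurm_adopt sits in the account stack while pam_systemd is removed from the session stack:

```
# /etc/pam.d/sshd (illustrative fragment only)
account    required     pam_slurm_adopt.so

session    required     pam_limits.so
# session  optional     pam_systemd.so    # disabled: conflicts with pam_slurm_adopt
```

If pam_systemd stays active, it moves the ssh session into its own systemd user slice instead of leaving it in the job's cgroup, defeating the adoption.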
To add on to this, the way we handle "disable pam_systemd in your PAM configuration" on our cluster is to disable/mask the systemd-logind service on the compute nodes. Not sure if there's a more elegant way to handle that.

Keith
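For reference, masking systemd-logind on a compute node is typically a pair of systemctl commands (an administrative sketch; run as root on each compute node):

```
# Stop systemd-logind and prevent it from being re-activated via socket/D-Bus
systemctl stop systemd-logind
systemctl mask systemd-logind
```

Masking (rather than just disabling) matters because systemd-logind is D-Bus-activated and would otherwise restart on the next login.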
Hi Fritz, This Wiki page gives a discussion of how to configure pam_slurm_adopt correctly: https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_configuration/#pam-module-re... IHTH, Ole On 4/9/26 05:44, Ratnasamy, Fritz via slurm-users wrote:
Hi,
We have blocked ssh to nodes where users do not have a running job. However, when there is a running job, a user can ssh to that node and use more than the slurm resources allocated on that node. Am I missing something? I usually request an interactive session of 2GB. I then open another terminal and ssh to the node where the interactive session runs. I allocated a 4GB R variable on the node where I just ssh but this does not error out. Additionally, if I end my interactive session, the ssh connection still lives which I believe should not happen.
-- Ole Holm Nielsen PhD, Senior HPC Officer Department of Physics, Technical University of Denmark
Also check this guide on how to compile and install pam_slurm_adopt: https://github.com/prod-feng/compile-and-setup-pam_slurm_adopt.so-module

Stopping pam_systemd (or system-login) in the PAM module files on the compute nodes is also important, e.g.:

######
-session optional pam_systemd.so

Best,
Feng

On Thu, Apr 9, 2026 at 12:09 AM Ratnasamy, Fritz via slurm-users <slurm-users@lists.schedmd.com> wrote:
Hi,
We have blocked ssh to nodes where users do not have a running job. However, when there is a running job, a user can ssh to that node and use more than the slurm resources allocated on that node. Am I missing something? I usually request an interactive session of 2GB. I then open another terminal and ssh to the node where the interactive session runs. I allocated a 4GB R variable on the node where I just ssh but this does not error out. Additionally, if I end my interactive session, the ssh connection still lives which I believe should not happen. Best,
*Fritz Ratnasamy*
Data Scientist, Information Technology
-- slurm-users mailing list -- slurm-users@lists.schedmd.com To unsubscribe send an email to slurm-users-leave@lists.schedmd.com
You are getting some good insights here. To add to them, there is a VERY good chance that your pam.d setup needs tweaking. If a user is authenticated by something else before pam_slurm_adopt runs, that is the access they get, which means no adoption into the job's constraints.

Brian Andrus

On 4/8/2026 8:44 PM, Ratnasamy, Fritz via slurm-users wrote:
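To make the ordering issue concrete, here is an illustrative pam.d account-stack sketch (module names are examples, not your actual configuration; SchedMD's documentation recommends placing pam_slurm_adopt last in the account stack):

```
# Problematic: a "sufficient" module grants access before adoption is attempted
account    sufficient   pam_unix.so
account    required     pam_slurm_adopt.so    # skipped for users pam_unix accepts

# Closer to the documented layout: nothing "sufficient" before pam_slurm_adopt,
# and pam_slurm_adopt last in the account stack
account    required     pam_unix.so
account    required     pam_slurm_adopt.so
```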
Hi,
We have blocked ssh to nodes where users do not have a running job. However, when there is a running job, a user can ssh to that node and use more than the slurm resources allocated on that node. Am I missing something? I usually request an interactive session of 2GB. I then open another terminal and ssh to the node where the interactive session runs. I allocated a 4GB R variable on the node where I just ssh but this does not error out. Additionally, if I end my interactive session, the ssh connection still lives which I believe should not happen. Best,
*Fritz Ratnasamy*
Data Scientist, Information Technology
participants (7)
- Brian Andrus
- Christopher Samuel
- Feng Zhang
- Keith Lyle Ballou
- Lambers, Martin
- Ole Holm Nielsen
- Ratnasamy, Fritz