Yes we do. We run a cluster which uses NFSv4 exclusively to access the shared file system (on Dell PowerScale), so all users need the Kerberos tickets from Active Directory to even access their login directories. We use the RFC2307 attributes in AD to provide a consistent UID and GID for all our users.
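For reference, the key piece on the identity side is telling sssd to use the RFC2307 attributes stored in AD rather than its default algorithmic SID-to-UID mapping. A minimal sketch (assuming sssd with the AD provider; the domain name is a placeholder):

```ini
# /etc/sssd/sssd.conf (sketch -- domain name is a placeholder)
[sssd]
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ad
access_provider = ad
# Use the uidNumber/gidNumber (RFC2307) attributes in AD instead of
# sssd's algorithmic SID-to-UID mapping, so UIDs/GIDs are consistent
# across every node:
ldap_id_mapping = False
```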
It does have some implications.
We use AUKS to manage credential renewal and propagation to the compute nodes. We have documentation based on the original CERN documents and information on the SchedMD Slurm site.
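Roughly, AUKS hooks into Slurm as a SPANK plugin; the user pushes a credential into the AUKS repository at submit time and the plugin makes it available on the compute nodes. A sketch of the two pieces (option names and paths may differ per site/version, so check the AUKS docs):

```shell
# /etc/slurm/plugstack.conf -- enable the AUKS SPANK plugin (sketch)
#   optional auks.so default=enabled

# Typical user workflow:
kinit user@EXAMPLE.COM   # obtain a TGT from AD
auks -a                  # push the credential into the auks daemon
sbatch --auks=yes job.sh # the SPANK plugin fetches it on the node
```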
We do have to get the users to *disable* GSSAPI in the ssh client (we have instructions for PuTTY and MobaXterm) because the login node absolutely needs the username and password in order to get the Kerberos credential. The nodes are all AD-joined and have constrained delegation enabled so that they can find the SPNs for the NFS services and pass on the tickets.
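For OpenSSH clients the equivalent of our PuTTY/MobaXterm instructions is a couple of lines in the user's client config (hostname here is a placeholder):

```shell
# ~/.ssh/config on the user's workstation -- force a password login so
# the login node's PAM stack can obtain a Kerberos TGT for the user
Host cluster.example.com
    GSSAPIAuthentication no
    PreferredAuthentications keyboard-interactive,password
```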
It also means that users cannot use ssh keys, so things like using the cluster as a back-end for VScode are not going to work (or at least not seamlessly).
There are quite a few moving parts but it does all work. It took a while to get to that point. It helps a lot that I have access to <everything> myself, so I do not have to beg an AD team to make changes or tell me what is configured.
From: Burian, John via slurm-users slurm-users@lists.schedmd.com
Sent: 30 April 2025 15:39
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] Slurm and Kerberos/GSSAPI
Does anyone have any experience with using Kerberos/GSSAPI and Slurm? I'm specifically wondering if there is a known mechanism for providing proper Kerberos credentials to Slurm batch jobs, such that those processes would be able to access a filesystem that requires Kerberos credentials. Some quick searching returned nothing useful. Interactive jobs have a similar problem, but I'm hoping that SSH credential forwarding can be leveraged there.
I'm nothing like an expert in Kerberos, so forgive any apparent ignorance.
Thanks, John