[slurm-users] pam_slurm_adopt not working for all users

Loris Bennett loris.bennett at fu-berlin.de
Thu May 27 06:19:14 UTC 2021


Hi Michael,

Michael Jennings <mej at lanl.gov> writes:

> On Tuesday, 25 May 2021, at 14:09:54 (+0200),
> Loris Bennett wrote:
>
>> I think my main problem is that I expect logging in to a node with a job
>> to work with pam_slurm_adopt but without any SSH keys.  My assumption
>> was that MUNGE takes care of the authentication, since users' jobs start
>> on nodes without the need for keys.
>> 
>> Can someone confirm that this expectation is wrong and, if possible, why
>> the analogy with jobs is incorrect?
>
> Yes, that expectation is incorrect.  When Slurm launches jobs, even
> interactive ones, it is Slurm itself that handles connecting all the
> right sockets to all the right places, and MUNGE handles the
> authentication for that action.
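>
> (If you've never poked at MUNGE directly, the standard self-test is a
> quick way to see what it actually does -- create a credential encoding
> the UID/GID of the calling process, then validate it:
>
>     $ munge -n | unmunge    # encode a credential, then decode it locally
>
> Slurm's daemons pass credentials like that to authenticate their RPCs.)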
>
> SSHing into a cluster node isn't done through Slurm; thus, sshd handles
> the authentication piece by calling out to your PAM stack (by
> default).  And you should think of pam_slurm_adopt as adding a
> "required but not sufficient" step in your auth process for SSH; that
> is, if it fails, the user can't get in, but if it succeeds, PAM just
> moves on to the next module in the stack.
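>
> To make that concrete, a minimal sketch of the account stack in
> /etc/pam.d/sshd might look something like this (the allow-list file
> name is made up, and this is not a drop-in config):
>
>     # let admin users in whether or not they have a job on this node
>     account    sufficient   pam_listfile.so item=user sense=allow file=/etc/ssh/allowed_users onerr=fail
>     # everyone else needs a running job here; adopt the session into it
>     account    required     pam_slurm_adopt.so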
>
> (Technically speaking, it's PAM, so the above is only the default
> configuration.  It's theoretically possible to set up PAM in a
> different way...but that's very much a not-good idea.)
>
>> I have a vague memory that this used to work on our old cluster with an
>> older version of Slurm, but I could be thinking of a time before we set
>> up pam_slurm_adopt.
>
> Some cluster tools, such as Warewulf and PERCEUS, come with built-in
> scripts to create SSH key pairs (with unencrypted private keys) under
> special names for any (non-system) user who doesn't already have a
> pair.  Maybe the prior cluster was doing something like that?  Or
> could it have been using Host-based Auth?
>
>> I have discovered that the users whose /home directories were migrated
>> from our previous cluster all seem to have a pair of keys which were
>> created along with files like '~/.bash_profile'.  Users who have been
>> set up on the new cluster don't have these files.
>> 
>> Is there some /etc/skel-like mechanism which will create passwordless
>> SSH keys when a user logs into the system for the first time?  It looks
>> increasingly to me that such a mechanism must have existed on our old
>> cluster.
>
> That tends to point toward the "something was doing it for you before
> that is no longer present" theory.
>
> You do NOT want to use /etc/skel for this, though.  That would cause
> all your users to have the same unencrypted private key providing
> access to their user account, which means they'd be able to SSH around
> as each other.  That's...problematic. ;-)
>
>> I was just getting round to the idea that /etc/profile.d might be
>> the way to go, so your script looks like exactly the sort of thing I
>> need.
>
> You can definitely do it that way, and a lot of sites do.  But
> honestly, you're better served by setting up Host-based Auth for SSH.
> It uses the same public/private-keypair exchange to authenticate hosts
> to each other that is normally used for users, so as long as your hosts
> are secure, you can rely on the security of HostbasedAuthentication.
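>
> Roughly, that means "HostbasedAuthentication yes" in sshd_config on the
> compute nodes, the login node listed in /etc/ssh/shosts.equiv with its
> host key in /etc/ssh/ssh_known_hosts, and on the login node's
> ssh_config side something like:
>
>     HostbasedAuthentication yes
>     EnableSSHKeysign yes
>
> (That's only the shape of it; the cookbook below walks through the
> details.)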
>
> With unencrypted private keys (that's what "passphraseless" really
> means), you may well be opening the door to abuse.  If you want
> to go that route, you'd likely want to set up something that users
> couldn't abuse, e.g. via AuthorizedKeysCommand, rather than the
> traditional in-homedir key pairs.
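>
> (Concretely, AuthorizedKeysCommand is a couple of lines in sshd_config
> pointing at a script you control -- the script name here is
> hypothetical:
>
>     AuthorizedKeysCommand /usr/local/sbin/cluster-keys %u
>     AuthorizedKeysCommandUser nobody
>
> where cluster-keys prints the user's public keys from a source the
> users themselves can't edit.)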
>
> We use host-based for all of our clusters here at LANL, and it
> simplifies a *lot* for us.  If you want to give it a try, there's a
> good cookbook here:
>     https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Host-based_Authentication
>
> HTH,
> Michael

Thanks for the detailed explanations.  I was obviously completely
confused about what MUNGE does.  Would it be possible to say, in very
hand-waving terms, that MUNGE performs a similar role for the access of
processes to nodes as SSH does for the access of users to nodes?

Regarding keys vs. host-based SSH, I see that host-based would be more
elegant, but would involve more configuration.  What exactly are the
simplification gains you see?  I have just a single cluster, and naively I
would think dropping a script into /etc/profile.d on the login node
would be less work than re-configuring SSH for the login node and
multiple compute node images.
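
Concretely, I was imagining something along these lines (an untested
sketch, and the file name is made up):

    # /etc/profile.d/make_ssh_keys.sh
    # On first login, create a passphraseless key pair and authorise it.
    if [ ! -f "$HOME/.ssh/id_ed25519" ]; then
        mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
        ssh-keygen -q -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519"
        cat "$HOME/.ssh/id_ed25519.pub" >> "$HOME/.ssh/authorized_keys"
        chmod 600 "$HOME/.ssh/authorized_keys"
    fi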

Regarding AuthorizedKeysCommand, I don't think we can use that, because
users don't necessarily have existing SSH keys.  What abuse scenarios
were you thinking of in connection with in-homedir key pairs?

Cheers,

Loris
-- 
Dr. Loris Bennett (Hr./Mr.)
ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de


