[slurm-users] Documentation for creating a login node for a SLURM cluster

Lachlan Musicman datakid at gmail.com
Sun Oct 14 16:11:10 MDT 2018


There's one thing that no one seems to have mentioned - I think you will
need to list the login node in AllocNodes for each partition you want it
to be able to submit jobs to.

https://slurm.schedmd.com/slurm.conf.html#OPT_AllocNodes

E.g. in my conf we have one partition that looks like:

PartitionName=reserved Nodes=papr-res-compute[201-211] MaxTime=14-0 DefaultTime=0:40:0 State=UP AllocNodes=vmpr-res-head-node,vmpr-res-cluster1

You will see that:
vmpr-res-head-node  <-  our head node/cluster manager; only admins have
access. Doesn't run jobs.
vmpr-res-cluster1  <-  login node, for the users. Doesn't run jobs.
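
If you add a login node to AllocNodes on a running cluster, remember to
push the updated slurm.conf to all nodes and run "scontrol reconfigure".
A quick sanity check afterwards, using my partition above, is something
like:

    scontrol show partition reserved | grep AllocNodes
    # the AllocNodes field in the output should list both hosts, e.g.
    # ... AllocNodes=vmpr-res-head-node,vmpr-res-cluster1 ...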

Cheers
L.


On Sat, 13 Oct 2018 at 04:05, Christopher Benjamin Coffey <
Chris.Coffey at nau.edu> wrote:

> In addition, fwiw, this login node will of course have a second network
> connection for campus, with a firewall set up to allow only ssh (and other
> essential services) from campus. You may also consider developing a script
> to prevent folks from abusing the login node instead of using Slurm for
> their computations: we have a policy that allows user processes to consume
> only 30 min of CPU time before they are killed (with an exception list, of
> course). Also, our login node has lots of devel packages to allow folks to
> compile software, etc. You may want to have a motd on the login node to
> announce changes or offer tips. Finally, the login node doesn't need to
> run any Slurm daemon, but it does need the Slurm software, the current
> slurm.conf, and the munge key.
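>
> To illustrate the 30-minute CPU policy, here is a rough sketch of the
> kind of reaper that could run from cron on the login node (the
> 1800-second threshold and the bare root exemption are illustrative
> only; our actual script and exception list are more involved):
>
>     # kill any non-root process that has used more than 30 min of CPU time
>     ps -eo pid=,user=,cputimes= \
>       | awk '$2 != "root" && $3 > 1800 { print $1 }' \
>       | xargs -r kill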
>
> Just some things off the top of my head.
>
> Best,
> Chris
>
> Christopher Coffey
> High-Performance Computing
> Northern Arizona University
> 928-523-1167
>
>
> On 10/12/18, 6:33 AM, "slurm-users on behalf of Michael Gutteridge" <
> slurm-users-bounces at lists.schedmd.com on behalf of
> michael.gutteridge at gmail.com> wrote:
>
>     I'm unaware of specific docs, but I tend to think of these simply as
> daemon nodes that aren't listed in slurm.conf.  We use Ubuntu and the
> packages we install are munge, slurm-wlm, and slurm-client (which drags
> in libslurmXX and slurm-wlm-basic-plugins).
>
>
>     Then the setup is very similar to slurmd nodes: you need matching
> UIDs and the same munge key.  Access to the same file systems at the
> same locations as on the daemon nodes is also advisable.
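>
>     A minimal sketch of that setup on Ubuntu (the "controller" hostname
> and the config path are assumptions; the slurm.conf directory varies by
> release):
>
>     apt-get install -y munge slurm-wlm slurm-client
>     # reuse the cluster's munge key and slurm.conf; "controller" is a
>     # hypothetical hostname for the machine running slurmctld
>     scp controller:/etc/munge/munge.key /etc/munge/munge.key
>     chown munge:munge /etc/munge/munge.key
>     chmod 400 /etc/munge/munge.key
>     scp controller:/etc/slurm-llnl/slurm.conf /etc/slurm-llnl/slurm.conf
>     systemctl restart munge
>     # verify: munge credentials decode and the cluster responds
>     munge -n | unmunge
>     sinfo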
>
>     Hope this helps.
>
>     Michael
>
>     On Fri, Oct 12, 2018 at 3:38 AM Aravindh Sampathkumar <
> aravindh at fastmail.com> wrote:
>
>     Hello.
>
>     I built a simple 2-node Slurm cluster for the first time, and I'm able
> to run jobs on the lone compute node from the node that runs slurmctld.
>
>     However, I'd like to set up a "login node" which only allows users to
> submit jobs to the Slurm cluster and does not act as the Slurm controller
> or as a compute node. I'm struggling to find documentation about what
> needs to be installed and configured on this login node to be able to
> submit jobs to the cluster.
>
>     console (login node) ----> slurm controller (runs slurmctld and
> slurmdbd) ----> compute node (runs slurmd)
>
>     Can anybody point me to any relevant docs to configure the login node?
>
>     Thanks,
>
>     --
>       Aravindh Sampathkumar
>       aravindh at fastmail.com

-- 
------
'...postwork futures are dismissed with the claim that "it is not in our
nature to be idle", thereby demonstrating at once an essentialist view of
labor and an impoverished imagination of the possibilities of nonwork.'

Kathi Weeks, *The Problem with Work: Feminism, Marxism, Antiwork Politics
and Postwork Imaginaries*
<https://www.dukeupress.edu/The-Problem-with-Work/>