<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi Steve,</p>
<p>The requirements for a client node, as I tested them, are:</p>
<p>* a munge daemon for authentication</p>
<p>* a mechanism for the client to obtain the Slurm configuration</p>
<p>So yes, I believe you need munge working on the submitting
machine.</p>
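<p>A minimal sketch of the client-side munge setup, assuming a
Debian-style system ("cluster-head" is a placeholder for your
controller's hostname):</p>
<pre>
# Install the munge daemon and the Slurm client commands
# (package names are distro-specific)
apt-get install munge slurm-client

# The munge key must be identical on every machine that talks
# to slurmctld; copy it from the controller and lock it down
scp root@cluster-head:/etc/munge/munge.key /etc/munge/munge.key
chown munge:munge /etc/munge/munge.key
chmod 400 /etc/munge/munge.key

# Start munge now and on every boot
systemctl enable --now munge
</pre>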
<p>For the configuration, I used to keep a copy of the slurm config
in /etc/slurm on the client node via an NFS mount; now I use a
DNS SRV record with "configless" Slurm. That simplified my setup
by a lot, as I only need to install the Slurm client commands and
munge, then copy the munge key to the client machine. The client
can then find the slurmctld, fetch the configuration from it, and
submit jobs.</p>
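<p>As a rough sketch, the two configless pieces look like this
(the zone and hostnames are placeholders; 6817 is slurmctld's
default port):</p>
<pre>
; DNS zone snippet: an SRV record telling clients where slurmctld runs
_slurmctld._tcp.cluster.example.com. 3600 IN SRV 10 0 6817 head.cluster.example.com.
</pre>
<pre>
# slurm.conf on the controller: allow clients and nodes to fetch
# their configuration from slurmctld at startup
SlurmctldParameters=enable_configless
</pre>
<p>With the SRV record in place, a client machine with no slurm.conf
of its own looks up the record, contacts slurmctld, and pulls the
configuration before submitting jobs.</p>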
<p>Hope that helps</p>
<p>S. Zhang<br>
</p>
<div class="moz-cite-prefix">On 2023/08/28 7:10, Steven Swanson
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAMnVvkt4HQ7mDd_71MCZtE-uBhJaV1ogK5S9jC7TZpH-fA2u3w@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div class="gmail_quote">
<div dir="ltr">
<div dir="ltr">This makes sense and I have a unified
authorization scheme, but how do the slurm commands (e.g.,
salloc and squeue) know which slurm head node to talk to?
Do I have to run munge on the submitting machines?</div>
<div dir="ltr">
<div>
<div dir="ltr" class="gmail_signature">
<div dir="ltr">
<div><br>
</div>
-steve</div>
</div>
</div>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, Aug 27, 2023 at
5:00 AM <<a
href="mailto:slurm-users-request@lists.schedmd.com"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">slurm-users-request@lists.schedmd.com</a>>
wrote:</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
Date: Sun, 27 Aug 2023 11:10:18 +0100<br>
From: William Brown <<a
href="mailto:william@signalbox.org.uk" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">william@signalbox.org.uk</a>><br>
To: Slurm User Community List <<a
href="mailto:slurm-users@lists.schedmd.com"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">slurm-users@lists.schedmd.com</a>><br>
Subject: Re: [slurm-users] Submitting jobs from machines
outside the<br>
cluster<br>
<br>
The machine that runs the CLI usually isn't a cluster
node and doesn't run slurmd.<br>
<br>
The control node has to accept that the user is who they
claim to be, and AFAIK that is<br>
the job of munge. Your onboard and external
firewalls must also allow the<br>
requisite ports.<br>
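For example, with firewalld on the control node (6817/tcp and
6818/tcp are Slurm's default slurmctld and slurmd ports; adjust
if your slurm.conf overrides them):<br>
<pre>
# Open the default Slurm ports (slurmctld 6817, slurmd 6818)
firewall-cmd --permanent --add-port=6817/tcp --add-port=6818/tcp
firewall-cmd --reload
</pre>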
<br>
We used Galaxy (see <a href="http://galaxy.eu"
rel="noreferrer" target="_blank"
moz-do-not-send="true">galaxy.eu</a> if unfamiliar)
and could submit jobs to<br>
various job runners, including Slurm. The Galaxy node
definitely didn't run<br>
any slurm daemons.<br>
<br>
I think you do need a common authentication system
between the submitting<br>
node and the cluster, but that may just be what I'm used
to.<br>
<br>
<br>
<br>
William Brown<br>
<br>
On Sun, 27 Aug 2023, 07:20 Steven Swanson, <<a
href="mailto:sjswanson@ucsd.edu" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">sjswanson@ucsd.edu</a>>
wrote:<br>
<br>
> Can I submit jobs from a computer/docker container
that is not part of the<br>
> slurm cluster?<br>
><br>
> I'm trying to set up slurm as the backend for a
system with a Jupyter<br>
> Notebook-based front end.<br>
><br>
> The jupyter notebooks are running in containers
managed by Jupyter Hub,<br>
> which is a mostly turnkey system for providing
docker containers that users<br>
> can access via jupyter.<br>
><br>
> I would like the jupyter containers to be able to
submit jobs to slurm,<br>
> but making them part of the cluster doesn't seem to
make sense because:<br>
><br>
> 1. They are dynamically created and don't have
known hostnames.<br>
> 2. They aren't supposed to run jobs.<br>
><br>
> Is there a way to do this? I tried just running
slurmd in the jupyter<br>
> containers, but it complained about not being able
to figure out its name<br>
> (I think because the container's hostname is not
listed in slurm.conf).<br>
><br>
> My fallback solution is to use ssh to connect to
the slurm head node and<br>
> run jobs there, but that seems kludgy.<br>
><br>
> -steve<br>
><br>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
</body>
</html>