[slurm-users] Submitting jobs from machines outside the cluster
Shunran Zhang
szhang at ngs.gen-info.osaka-u.ac.jp
Mon Aug 28 01:06:37 UTC 2023
Hi Steve,
The requirements for a client node, as I tested, are:
* munge daemon for auth
* mechanism for client to obtain configuration
So yes I believe you would need munge working on the submitting machine.
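As a rough sketch, the munge side of that setup looks like the following (package names assume a Debian-style system, and "slurmctld-host" is a placeholder for your head node; paths are the munge defaults):

```shell
# On the submitting machine: install munge (package name assumed for
# Debian/Ubuntu; use your distro's equivalent otherwise).
apt-get install -y munge

# Every host in the munge realm must share the same key. Copy it from
# the head node (assumes root ssh access; adjust to your site's method
# of distributing secrets):
scp root@slurmctld-host:/etc/munge/munge.key /etc/munge/munge.key
chown munge:munge /etc/munge/munge.key
chmod 400 /etc/munge/munge.key
systemctl enable --now munge

# Verify that a credential minted here is accepted on the head node:
munge -n | ssh slurmctld-host unmunge
```

If the last command prints a decoded credential with STATUS: Success, authentication between the client and the cluster should work.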
For the configuration, I used to keep a copy of the slurm config in
/etc/slurm on the client node via an NFS mount; now I am using a DNS SRV
record with "configless" Slurm. That simplified my setup a lot: I only
need to install the Slurm client and munge, copy the munge key to the
client machine, and the client can then find the slurmctld, fetch the
configuration from it, and submit jobs.
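For reference, the configless setup hinges on a `_slurmctld._tcp` SRV record in your DNS zone and `enable_configless` being set on the controller. A minimal sketch (the zone name, host name, and TTL are placeholders; 6817 is the default slurmctld port):

```shell
# DNS zone entry, BIND syntax (example names are placeholders):
#   _slurmctld._tcp.example.com. 3600 IN SRV 0 0 6817 slurmctld.example.com.
#
# On the head node, slurm.conf must contain:
#   SlurmctldParameters=enable_configless

# On the client: install only the command-line tools and munge
# (Debian-style package names assumed):
apt-get install -y slurm-client munge

# With no /etc/slurm/slurm.conf present, the client commands resolve the
# SRV record, pull the config from slurmctld, and just work:
sbatch --wrap "hostname"
squeue
```

Since the client discovers the controller through DNS, dynamically created machines need no per-host configuration beyond the munge key.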
Hope that helps
S. Zhang
On 2023/08/28 7:10, Steven Swanson wrote:
> This makes sense and I have a unified authorization scheme, but how do
> the slurm commands (e.g., salloc and squeue) know which slurm head
> node to talk to? Do I have to run munge on the submitting machines?
>
> -steve
>
>
> On Sun, Aug 27, 2023 at 5:00 AM
> <slurm-users-request at lists.schedmd.com> wrote:
>
> Message: 3
> Date: Sun, 27 Aug 2023 11:10:18 +0100
> From: William Brown <william at signalbox.org.uk>
> To: Slurm User Community List <slurm-users at lists.schedmd.com>
> Subject: Re: [slurm-users] Submitting jobs from machines outside the
> cluster
>
> The machine that runs the CLI usually isn't a cluster node and doesn't
> run slurmd.
>
> The control node has to accept that the user is who they claim to be,
> and AFAIK that is the job of munge. And your onboard and external
> firewalls must allow the requisite ports.
>
> We used Galaxy (see galaxy.eu if unfamiliar) and could submit jobs to
> various job runners, including Slurm. The Galaxy node definitely didn't
> run any slurm daemons.
>
> I think you do need a common authentication system between the
> submitting node and the cluster, but that may just be what I'm used to.
>
>
>
> William Brown
>
> On Sun, 27 Aug 2023, 07:20 Steven Swanson, <sjswanson at ucsd.edu> wrote:
>
> > Can I submit jobs from a computer/docker container that is not part
> > of the slurm cluster?
> >
> > I'm trying to set up slurm as the backend for a system with Jupyter
> > Notebook-based front end.
> >
> > The jupyter notebooks are running in containers managed by
> > JupyterHub, which is a mostly turnkey system for providing docker
> > containers that users can access via jupyter.
> >
> > I would like the jupyter containers to be able to submit jobs to
> > slurm, but making them part of the cluster doesn't seem to make sense
> > because:
> >
> > 1. They are dynamically created and don't have known hostnames.
> > 2. They aren't supposed to run jobs.
> >
> > Is there a way to do this? I tried just running slurmd in the jupyter
> > containers, but it complained about not being able to figure out its
> > name (I think because the container's hostname is not listed in
> > slurm.conf).
> >
> > My fallback solution is to use ssh to connect to the slurm head node
> > and run jobs there, but that seems kludgy.
> >
> > -steve
> >
>
> End of slurm-users Digest, Vol 70, Issue 36
> *******************************************
>