[slurm-users] Submitting jobs from machines outside the cluster

Steven Swanson sjswanson at ucsd.edu
Sun Aug 27 22:10:24 UTC 2023


This makes sense, and I have a unified authorization scheme, but how do the
Slurm commands (e.g., salloc and squeue) know which Slurm head node to talk
to? Do I have to run munge on the submitting machines?
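
If it's just a matter of pointing the client tools at the controller, I'm
guessing a minimal slurm.conf on the submitting machine would look roughly
like this (the hostnames and cluster name are made up, and I haven't tested
it):

    # /etc/slurm/slurm.conf on the submitting machine (sketch)
    ClusterName=mycluster
    SlurmctldHost=slurm-head      # the cluster's head node
    SlurmctldPort=6817            # default controller port
    AuthType=auth/munge           # same auth plugin as the cluster

Is that the idea, or is there more to it?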

-steve


On Sun, Aug 27, 2023 at 5:00 AM <slurm-users-request at lists.schedmd.com>
wrote:

> Date: Sun, 27 Aug 2023 11:10:18 +0100
> From: William Brown <william at signalbox.org.uk>
> To: Slurm User Community List <slurm-users at lists.schedmd.com>
> Subject: Re: [slurm-users] Submitting jobs from machines outside the
>         cluster
>
> The machine that runs the CLI isn't usually a cluster node, and it doesn't
> run slurmd.
>
> The control node has to accept that the user is who they claim to be, and
> AFAIK that is the job of munge.  And your host-based and external firewalls
> must allow the requisite ports.
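>
> In practice that usually means giving the submitting machine the same munge
> key as the cluster and making sure it can reach slurmctld.  A rough sketch,
> assuming default paths/ports and firewalld (slurm-head is a placeholder for
> your controller; adjust for your site, not tested):
>
>     # on the submit host: install the cluster's munge key and start munged
>     scp slurm-head:/etc/munge/munge.key /etc/munge/munge.key
>     chown munge:munge /etc/munge/munge.key && chmod 400 /etc/munge/munge.key
>     systemctl enable --now munge
>
>     # on the head node: allow the submit host to reach slurmctld
>     # (6817/tcp is the default SlurmctldPort)
>     firewall-cmd --permanent --add-port=6817/tcp && firewall-cmd --reload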
>
> We used Galaxy (see galaxy.eu if you're unfamiliar with it) and could submit
> jobs to various job runners, including Slurm.  The Galaxy node definitely
> didn't run any Slurm daemons.
>
> I think you do need a common authentication system between the submitting
> node and the cluster, but that may just be what I'm used to.
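>
> A quick sanity check, assuming shared accounts: a user's numeric UID/GID
> should be the same on the submit host and on the cluster, e.g. (alice is
> just a placeholder username):
>
>     # run on both the submit host and a cluster node; the IDs should match
>     id alice
>     getent passwd alice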
>
>
>
> William Brown
>
> On Sun, 27 Aug 2023, 07:20 Steven Swanson, <sjswanson at ucsd.edu> wrote:
>
> > Can I submit jobs with a computer/docker container that is not part of
> > the slurm cluster?
> >
> > I'm trying to set up slurm as the backend for a system with a Jupyter
> > Notebook-based front end.
> >
> > The jupyter notebooks are running in containers managed by Jupyter Hub,
> > which is a mostly turnkey system for providing docker containers that
> > users can access via jupyter.
> >
> > I would like the jupyter containers to be able to submit jobs to slurm,
> > but making them part of the cluster doesn't seem to make sense because:
> >
> > 1.  They are dynamically created and don't have known hostnames.
> > 2.  They aren't supposed to run jobs.
> >
> > Is there a way to do this?  I tried just running slurmd in the jupyter
> > containers, but it complained about not being able to figure out its name
> > (I think because the container's hostname is not listed in slurm.conf).
> >
> > My fallback solution is to use ssh to connect to the slurm head node and
> > run jobs there, but that seems kludgy.
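> >
> > Concretely, that fallback would look something like this from inside the
> > container (slurm-head and job.sh are placeholders, and it assumes ssh keys
> > are already set up):
> >
> >     ssh slurm-head sbatch job.sh
> >     ssh slurm-head squeue -u $USER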
> >
> > -steve
> >

