[slurm-users] Submitting jobs from machines outside the cluster

William Brown william at signalbox.org.uk
Sun Aug 27 10:10:18 UTC 2023


The machine that runs the CLI isn't usually a cluster node, and it doesn't need to run slurmd.

The control node has to accept that the user is who they claim to be, and AFAIK that is
the job of munge.  And your host and external firewalls must allow the
requisite ports.
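
As a very rough sketch (the hostname here is made up and the port is just
the usual default, so check your own config), a submit-only host typically
needs the Slurm client commands, a copy of the cluster's slurm.conf and the
same munge key as the cluster, with munged running.  The relevant
slurm.conf lines look something like:

    SlurmctldHost=headnode.example.org   # hypothetical head node name
    SlurmctldPort=6817                   # default; the submit host must be able to reach it
    AuthType=auth/munge                  # /etc/munge/munge.key must match the cluster's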

We used Galaxy (see galaxy.eu if unfamiliar) and could submit jobs to
various job runners, including Slurm.  The Galaxy node definitely didn't run
any Slurm daemons.

I think you do need a common authentication system between the submitting
node and the cluster, but that may just be what I'm used to.
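
For your Jupyter case below, once a container has the client commands,
slurm.conf and the munge key, a notebook can just shell out to sbatch.  A
minimal Python sketch, with the script path invented for illustration:

    import subprocess

    # Assumes sbatch, slurm.conf and the munge key are already inside the
    # container (that setup is the real work, not this snippet).
    def submit_job(script_path):
        # --parsable makes sbatch print just "jobid[;cluster]"
        result = subprocess.run(
            ["sbatch", "--parsable", script_path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip().split(";")[0]

    # e.g. job_id = submit_job("/shared/jobs/myjob.sh")  # hypothetical path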



William Brown

On Sun, 27 Aug 2023, 07:20 Steven Swanson, <sjswanson at ucsd.edu> wrote:

> Can I submit jobs from a computer/Docker container that is not part of the
> Slurm cluster?
>
> I'm trying to set up Slurm as the backend for a system with a Jupyter
> Notebook-based front end.
>
> The Jupyter notebooks are running in containers managed by JupyterHub,
> which is a mostly turnkey system for providing Docker containers that users
> can access via Jupyter.
>
> I would like the jupyter containers to be able to submit jobs to slurm,
> but making them part of the cluster doesn't seem to make sense because:
>
> 1.  They are dynamically created and don't have known hostnames.
> 2.  They aren't supposed to run jobs.
>
> Is there a way to do this?  I tried just running slurmd in the Jupyter
> containers, but it complained about not being able to figure out its name
> (I think because the container's hostname is not listed in slurm.conf).
>
> My fallback solution is to use ssh to connect to the Slurm head node and
> run jobs there, but that seems kludgy.
>
> -steve
>