[slurm-users] About x11 support

John Hearns hearnsj at googlemail.com
Tue Nov 27 04:12:58 MST 2018


Going off topic, if you want an SSH client and an X server on a Windows
workstation or laptop, I highly recommend MobaXterm.
You can open a remote desktop easily.
Session types include SSH, VNC, RDP, Telnet (!), Mosh and anything else
you can think of, including a serial terminal for those times when you
just have to get into the data centre and plug in that emergency serial
console.

https://mobaxterm.mobatek.net/
Fantastic piece of software.

On Tue, 27 Nov 2018 at 09:58, Tina Friedrich <tina.friedrich at it.ox.ac.uk>
wrote:

> Why would you run a slurmctld on a *submit* host? You only need the
> controller daemon on, well, the controllers (what I would still call
> 'queue masters' :) ). Personally I'd make quite sure that no-one apart
> from admins has rights to log in to those, really!
>
> In fact, you don't need to run any daemon on a submit host; it just
> needs access to the binaries and the cluster config. We run CentOS, so I
> build the rpms; all I install on the login node(s) is 'slurm' - neither
> 'slurmd' nor 'slurmctld' is required. That is certainly something you
> could install on a bunch of workstations, especially if they are managed
> machines.
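>
> As a rough sketch, a submit host can be as little as this (the package
> name, config path and 'controller' hostname here are just placeholders
> for whatever your site uses):
>
>   # client commands only - no slurmd, no slurmctld
>   yum install slurm
>   # the commands just need to read the same slurm.conf as the cluster
>   scp controller:/etc/slurm/slurm.conf /etc/slurm/slurm.conf
>   # sanity check; these talk to slurmctld over the network
>   sinfo
>   sbatch --wrap="hostname"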
>
> Tina
>
> On 26/11/2018 23:23, Goetz, Patrick G wrote:
> > I'm a little confused about how this would work.  For example, where
> > does slurmctld run?  And if on each submit host, why aren't the control
> > daemons stepping all over each other?
> >
> > On 11/22/18 6:38 AM, Stu Midgley wrote:
> >> indeed.
> >>
> >> All our workstations are submit hosts and in the queue, so people can
> >> run jobs on their local host if they want.
> >>
> >> We have a GUI tightly integrated with our environment for our staff to
> >> submit and monitor their jobs from (they don't have to touch a single
> >> job script).
> >>
> >> On Thu, Nov 22, 2018 at 6:28 PM Tina Friedrich
> >> <tina.friedrich at it.ox.ac.uk> wrote:
> >>
> >>      I really don't want to start a flaming discussion on this - but I
> >>      don't think it's an unusual situation. Likewise, in roughly 15
> >>      years of doing this, I have never worked anywhere where people
> >>      didn't have a GUI to submit from. It's always been a case of
> >>      'Want to use the cluster? We'll make your workstation a submit
> >>      host.'
> >>
> >>      I think it's a pretty standard way of handling things if you are
> >>      an institute that runs its own (maybe small) cluster, especially
> >>      if the workstations are also managed machines.
> >>
> >>      Tina
> >>
> >>      On 21/11/2018 23:26, Christopher Samuel wrote:
> >>       > On 22/11/18 5:04 am, Mahmood Naderan wrote:
> >>       >
> >>       >> The idea is to have a job manager that finds the best node for
> >>       >> a newly submitted job. If the user has to manually ssh to a
> >>       >> node, why should one use slurm or any other thing?
> >>       >
> >>       > You are in a really really unusual situation - in 15 years I've
> >>       > not come across a situation before this where a user would have
> >>       > GUI access to a system that can submit jobs directly to a
> >>       > cluster like you can.
> >>       >
> >>       > I'm not sure why Slurm has this restriction, but it might be
> >>       > that you can start up an xterm, change your $DISPLAY to be
> >>       > localhost:0 and see if you can start an X11 application from
> >>       > that. It might be that you'll need to add an xauth cookie for
> >>       > localhost to get that going.
> >>       >
> >>       > If it does work then (hopefully) you can use that trick to fire
> >>       > up jobs with X11 display forwarding.
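> >>       >
> >>       > Something along these lines, as a rough sketch - the xauth step
> >>       > and the srun flag are guesses that depend on your X setup and on
> >>       > whether your Slurm build has X11 forwarding enabled:
> >>       >
> >>       >   # re-add the current display's cookie under a "localhost" name
> >>       >   COOKIE=$(xauth list "$DISPLAY" | awk '{print $3}' | head -1)
> >>       >   xauth add localhost:0 MIT-MAGIC-COOKIE-1 "$COOKIE"
> >>       >   export DISPLAY=localhost:0
> >>       >   xclock                  # does local X11 still work?
> >>       >   # if so, try an X11-forwarded job
> >>       >   srun --x11 --pty xterm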
> >>       >
> >>       > All the best,
> >>       > Chris
> >>
> >>
> >>
> >> --
> >> Dr Stuart Midgley
> >> sdm900 at gmail.com
>