[slurm-users] About x11 support

Brendan Moloney moloney.brendan at gmail.com
Mon Nov 26 11:05:43 MST 2018


I posted about the local display issue a while back ("Built in X11
forwarding in 17.11 won't work on local displays").

I agree that having some local managed workstations that can also act as
submit nodes is not so uncommon. However we also ran into this on our
official "login nodes" because we use X2Go (open source fork of NoMachine)
as our main access method, which gives you a "local" display.  There are a
number of advantages to using X2Go (or other NoMachine/FreeNX derivatives)
over plain SSH X11 forwarding. Over 90% of our users are using Windows
workstations and many aren't very familiar with Linux. They need something
more than PuTTY for GUI access anyway, and providing a remote desktop
solution helps smooth out the learning curve. The performance for many GUI
applications is much better, and we can integrate VirtualGL so that users
can easily run light visualization tasks utilizing the GPU on the login
nodes or request a GPU node with srun for larger visualization tasks. I
haven't tried yet, but using the "ssh -X localhost" workaround would likely
break my VirtualGL setup (or at least slow it down). If nothing else,
requiring "ssh -X localhost" (in each terminal they want run interactive
jobs from!) is confusing.
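To make the distinction concrete, here is a minimal shell sketch of the situation (the display values are examples; as discussed in the earlier thread, the built-in forwarding in 17.11 only handles the TCP-style form):

```shell
# A local (Unix-socket) display looks like ":104"; an SSH-forwarded
# one looks like "localhost:10.0".  This check mirrors the manual
# "ssh -X localhost" workaround users would otherwise need.
case "$DISPLAY" in
  :*)
    # Local display: Slurm's built-in X11 forwarding can't use it,
    # so the user has to re-login to get a TCP-style DISPLAY.
    echo "local display ($DISPLAY) -- re-login with: ssh -X localhost"
    ;;
  *)
    echo "forwarded display ($DISPLAY) -- built-in forwarding works"
    ;;
esac
```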

For now we compile our own packages with built-in X11 support disabled and
continue to use the spank plugin.
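For anyone wanting to replicate this, a rough sketch of the build and setup (the configure flag and plugin path are from memory, not from this thread; verify against your Slurm release's "./configure --help" and the spank-x11 plugin's own documentation):

```shell
# Rebuild Slurm with the built-in X11 support turned off
# (flag name may vary by release -- check ./configure --help):
./configure --disable-x11 --prefix=/opt/slurm
make && make install

# Then enable the external SPANK X11 plugin in plugstack.conf
# (plugin path is an example -- adjust for your install):
echo 'optional /usr/lib64/slurm/x11.so' >> /etc/slurm/plugstack.conf
```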

On Mon, Nov 26, 2018 at 1:41 AM Marcus Wagner <wagner at itc.rwth-aachen.de>
wrote:

> Hi Chris,
>
> I really think it is not that uncommon, but in a different way than Tina
> explained.
>
> We HAVE dedicated login nodes for the cluster; no institute can submit from
> their workstations, they have to log in to our login nodes.
> BUT they can do so not only via ssh, but also via FastX,
> which provides a full desktop to the user; in our case the user can
> choose between MATE and XFCE.
> The DISPLAY variable after login is set e.g. to ":104", since there are
> many users on the nodes. So, to use X11 forwarding with native Slurm,
> the user needs to log in to another host with "ssh -X", which is at least
> inconvenient, if not outright confusing for the user.
>
> This might be ok if you have a small group of users who can be
> trained before they use the cluster. But we have about 40,000 students
> who could, at least in theory, use the cluster. Over one year we see
> about 2,600 different users on the cluster, of whom only 400 or so are
> regular users.
>
> @Mahmood:
> Sorry if you misunderstood me. I did not mean logging in to one of the
> compute nodes. Slurm does not accept a Unix-socket display in the
> DISPLAY variable, which is why you have to log in somewhere else with
> X11 forwarding or, as Tina explained, at least to localhost again with
> X11 forwarding.
>
>
> Best
> Marcus
>
>
> On 11/22/2018 12:05 PM, Chris Samuel wrote:
> > On Thursday, 22 November 2018 9:24:50 PM AEDT Tina Friedrich wrote:
> >
> >> I really don't want to start a flaming discussion on this - but I don't
> >> think it's an unusual situation.
> > Oops sorry, I wasn't intending to imply it wasn't a valid way to do it,
> > it's just that across the many organisations I've helped with HPC
> > systems down here it's not something I'd come across before.  Even the
> > couple that had common authN/authZ configs between user workstations
> > and clusters had the management nodes firewalled off, so the only
> > access to the batch system was by ssh into the login nodes of the
> > cluster.
> >
> > I think it's good to hear from sites where this is the case because we
> > can easily get stuck in our own little bubbles until something comes
> > and trips us up like that.
> >
> > All the best!
> > Chris
>
> --
> Marcus Wagner, Dipl.-Inf.
>
> IT Center
> Abteilung: Systeme und Betrieb
> RWTH Aachen University
> Seffenter Weg 23
> 52074 Aachen
> Tel: +49 241 80-24383
> Fax: +49 241 80-624383
> wagner at itc.rwth-aachen.de
> www.itc.rwth-aachen.de
>
>
>