[slurm-users] x11 forwarding not available?

Mahmood Naderan mahmood.nt at gmail.com
Tue Oct 16 12:18:34 MDT 2018

My platform is Rocks with CentOS 7.0. It may not be exactly your case,
but it may give you some ideas on what to do. I used
https://github.com/hautreux/slurm-spank-x11  and here is the guide
that Ian Mortimer gave me:

There should be a binary slurm-spank-x11 and a library x11.so, which
have to be installed in the correct locations.  Two configuration files
also have to be installed.
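For context, Slurm loads SPANK plugins from plugstack.conf-style files,
one plugin per line.  A minimal sketch of the entry the plugin's config
file typically contains (the install path here is an assumption and may
differ on your system):

```
# /etc/slurm/plugstack.conf.d/x11.conf  (path is an assumption)
# Format per spank(8): required|optional <plugin path> [args]
optional /usr/lib64/slurm/x11.so
```

Using "optional" rather than "required" means jobs still run if the
plugin fails to load.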

These have to be installed on all compute nodes as well as the head
node or login node(s) so the simplest way is to build a package.
Fortunately a spec file is included in the spank-x11 distribution but I
had to make some changes to make it usable.

Move slurm-spank-x11-0.2.5.tar.gz to ~/rpmbuild/SOURCES.
My modified spec file is attached.  Copy that to ~/rpmbuild/SPECS and run:

   rpmbuild -bb --clean ~/rpmbuild/SPECS/slurm-spank-x11.spec

That should build a package:


Copy that package to /export/rocks/install/contrib/7.0/x86_64/RPMS/
and add the package to the package list in:


If you use login node(s) also add it to the package list in:


Then rebuild the distro with the new package added:

   cd /export/rocks/install; rocks create distro

Now you need to install it on the login node(s) (or front end) and all
the compute nodes.  You can use pdsh or 'rocks run host' for that.
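As an illustration, the install step might look like the following
(the node name pattern and the plain package name are assumptions for
this example; adjust them to your cluster):

```
# with pdsh (hypothetical node list):
pdsh -w compute-0-[0-9] 'yum -y install slurm-spank-x11'

# or with the Rocks tools:
rocks run host compute 'yum -y install slurm-spank-x11'
```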

The last step is to create the file /etc/slurm/plugstack.conf to enable
the plugin:

   echo 'include /etc/slurm/plugstack.conf.d/*' \
   >> /etc/slurm/plugstack.conf

Copy that file to the compute nodes and login node(s).  To ensure the
file is created when you reinstall your nodes or install new nodes, you
also need to add the command to the %post section of extend-compute.xml
(and extend-login.xml if you're using it).
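In Rocks node XML files, post-install commands go inside a <post>
section.  A minimal sketch for extend-compute.xml, assuming the same
include line as above:

```
<post>
echo 'include /etc/slurm/plugstack.conf.d/*' >> /etc/slurm/plugstack.conf
</post>
```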

When that's all done, restart Slurm everywhere with:

   rocks sync slurm

You should then be able to get an interactive login with:

   srun --x11 --pty bash


On Tue, Oct 16, 2018 at 5:04 PM Dave Botsch <botsch at cnf.cornell.edu> wrote:
> Hi.
> Reminder :)
