[slurm-users] Visualisation -- Slurm and (Turbo)VNC
John Hearns
hearnsj at googlemail.com
Thu Jan 3 04:27:10 MST 2019
Hi David. I set up DCV on a cluster of workstations at a facility not far
from you a few years ago (in Woking...).
I'm not sure of the relevance of having multiple GPUs - I thought the
DCV documentation dealt with that?
One thing you should do is introduce your users to MobaXterm if they are
Windows users:
https://mobaxterm.mobatek.net/
It has a built-in VNC client, and you can easily start a remote Linux desktop
session.
The guys at Greenwich set up MobaXterm with a login script which started VNC on
their HPC cluster.
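
Regarding a template script: something along these lines might be a useful
starting point. It is only a minimal sketch; the partition name, GRES syntax,
module names and TurboVNC install path are assumptions you would need to adapt
to your site.

#!/bin/bash
#SBATCH --job-name=turbovnc
#SBATCH --partition=gpu          # assumed partition name
#SBATCH --gres=gpu:1             # reserve one GPU card
#SBATCH --time=08:00:00
#SBATCH --output=vnc-%j.log

# Module names and the TurboVNC install path are site-specific assumptions.
module load turbovnc virtualgl

echo "Starting TurboVNC on $(hostname); point your VNC viewer at the display it reports."

# -fg keeps Xvnc in the foreground, so the job (and hence the GPU allocation)
# lasts until the VNC session is killed or the job is cancelled.
exec /opt/TurboVNC/bin/vncserver -fg -geometry 1920x1080

Users would then launch their OpenGL applications inside the VNC session with
vglrun so that rendering happens on the reserved GPU.
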
On Thu, 3 Jan 2019 at 10:37, Daniel Letai <dani at letai.org.il> wrote:
> I haven't done this in a long time, but this blog entry might be of some
> use (I believe I did something similar when it was required in the past):
>
>
> https://summerofhpc.prace-ri.eu/remote-accelerated-graphics-with-virtualgl-and-turbovnc/
>
>
> On 03/01/2019 12:14:52, Baker D.J. wrote:
>
> Hello,
>
>
> We have set up our NICE/DCV cluster and that is proving to be very
> popular. There are, however, users who would benefit from using the
> resources offered by our nodes with multiple GPU cards. This potentially
> means setting up TurboVNC, for example. I would, if possible, like to be
> able to make the process of starting a VNC server as painless as possible.
> I wondered if anyone had written a Slurm script that users could
> modify/submit to reserve resources and start the VNC server.
>
>
> If you have such a template script and/or any advice on using VNC via
> Slurm then I would be interested to hear from you. Many of our
> visualization users are not "expert users" and so, as I note above, it would
> be useful to make the process as painless as possible. If you would be happy
> to share your script with us, that would be much appreciated.
>
>
> Best regards,
>
> David
>
>
> --
> Regards,
>
> Daniel Letai
> +972 (0)505 870 456
>
>