[slurm-users] Running Slurm on a single host?
stolarek.marcin at gmail.com
Fri Apr 6 01:40:17 MDT 2018
Maybe the console user can simply use an interactive job? Another approach would be to reboot the machine into a different OS image, probably killing any running job. You could also consider reserving those hosts for working hours, to prevent long jobs from starting just before interactive use begins, whether that interactive use takes the form of a Slurm job or someone simply using the machine directly without a reboot.
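The working-hours reservation idea can be sketched with scontrol. A minimal, hedged example; the node name, user names, and times below are placeholders, not details from this thread:

```
# Recurring daily reservation covering 09:00-17:00 local time,
# so batch jobs cannot start on the node during console hours.
scontrol create reservation ReservationName=console_hours \
    StartTime=09:00:00 Duration=08:00:00 Flags=DAILY \
    Users=alice,bob Nodes=gpuhost
```

With Flags=DAILY the reservation recurs every day, and the backfill scheduler will refuse to start any job whose time limit would run into it, which is exactly the "no long jobs right before console time" behavior described above.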
-------- Original message --------
From: Patrick Goetz <pgoetz at math.utexas.edu>
Date: 06/04/2018 06:00 (GMT-11:00)
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: [slurm-users] Running Slurm on a single host?
I've been using Slurm on a traditional CPU compute cluster, but am now
looking at a somewhat different issue. We recently purchased a single
machine with 10 high end graphics cards to be used for CUDA calculations
and which will be shared among a couple of different user groups.
Does it make sense to use Slurm for scheduling in this case? We'll want
to do things like limit the number of GPUs any one user can use and
manage resource contention the same way one would for a cluster.
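One common way to cap per-user GPU usage is a QOS limit through Slurm's accounting layer. A hedged sketch; the QOS name and limit value are illustrative, and it assumes slurmdbd-based accounting is already set up:

```
# Requires AccountingStorageType=accounting_storage/slurmdbd
# and gres/gpu tracked in AccountingStorageTRES in slurm.conf.
sacctmgr add qos gpu_cap
sacctmgr modify qos gpu_cap set MaxTRESPerUser=gres/gpu=4
# Then attach the QOS to the partition in slurm.conf, e.g.:
#   PartitionName=gpu ... QOS=gpu_cap
```

MaxTRESPerUser counts GPUs across all of a user's running jobs, so no single user can hold more than the configured share of the 10 cards at once.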
Potentially this would mean running slurmctld and slurmd on the same host?
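Yes, running slurmctld and slurmd on the same machine is a supported configuration. A minimal sketch of the relevant pieces, assuming a hypothetical hostname `gpuhost`; the CPU/memory counts and device paths are placeholders:

```
# slurm.conf (excerpt) -- controller and compute node are the same host
ClusterName=gpubox
SlurmctldHost=gpuhost
SelectType=select/cons_res
SelectTypeParameters=CR_Core
GresTypes=gpu
NodeName=gpuhost Gres=gpu:10 CPUs=40 RealMemory=256000 State=UNKNOWN
PartitionName=gpu Nodes=gpuhost Default=YES MaxTime=INFINITE State=UP

# gres.conf on the same host -- maps the 10 GPU devices
NodeName=gpuhost Name=gpu File=/dev/nvidia[0-9]
```

Users would then request GPUs per job with `--gres=gpu:N`, and Slurm handles the contention exactly as it would on a multi-node cluster.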
Bonus question: these research groups (they do roughly the same kind of
work) also have a pool of GPU workstations they're going to share. It
would be super cool if we could somehow rope the workstations into the
resource pool in cases where no one is working at the console. Because
some of this stuff involves steps with interactive components, the
understanding would be that all resources go to a console user when
there is a console user.