[slurm-users] Regarding multiple slurm server on one machine

Valerio Bellizzomi valerio at selnet.org
Wed Jul 28 04:57:39 UTC 2021


If you use qemu-kvm, beware: qemu-kvm doesn't allow the virtual
machines to communicate with the host, so your slurm servers must all
be virtual machines.

On Wed, 2021-07-28 at 13:55 +1000, Sid Young wrote:
> Why not spin them up as Virtual machines... then you could build real
> (separate) clusters.
> 
> Sid Young
> W: https://off-grid-engineering.com
> W: (personal) https://sidyoung.com/
> W: (personal) https://z900collector.wordpress.com/
> 
> 
> On Wed, Jul 28, 2021 at 12:07 AM Brian Andrus <toomuchit at gmail.com>
> wrote:
> > You can run multiple slurmctld on one machine, but they have to be
> > on different ports.
> > 
> > What you are asking to do, however, would not work the way you may
> > think: the controllers do not talk to each other.
> > 
> > You can run squeue and point it at a different config file (one for
> > each master) and get the information on a single box.
> > 
> > You may also want to enable accounting and have each cluster report
> > to the same slurmdbd. Then your sacct/sreport queries can be
> > filtered by cluster or show everything.
> > 
> > Brian Andrus
> > 
> > On 7/27/2021 5:17 AM, pravin wrote:
> > 
> > > Hello all,
> > >
> > > I have a question regarding Slurm. Is it possible to show the
> > > information from multiple Slurm servers on one machine? I have
> > > three different machines (master, master1, master2), each running
> > > its own slurmctld and compute nodes, but I want to show all the
> > > Slurm information on master3, for example node info and squeue
> > > information. Please help me.
> > >
> > > Thanks and regards
> > > Pravin Pawar
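
For Brian's first point (several slurmctld daemons co-hosted on one
machine), a minimal sketch of what would have to differ between the
two slurm.conf files; the cluster names, host name and ports below are
illustrative, not taken from the thread:

    # clusterA.conf (illustrative)
    ClusterName=clusterA
    SlurmctldHost=ctlhost
    SlurmctldPort=6817

    # clusterB.conf (illustrative) -- same host, different port
    ClusterName=clusterB
    SlurmctldHost=ctlhost
    SlurmctldPort=6819

    # Each instance would also need its own StateSaveLocation and
    # SlurmctldPidFile so the daemons do not trample each other's state.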
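
For the per-config-file querying that Brian describes, a rough sketch
of what could be run on master3, assuming a copy of each cluster's
slurm.conf has been placed there (the paths are hypothetical); Slurm's
client commands honour the SLURM_CONF environment variable:

    # Loop over one slurm.conf copy per cluster and query each in turn.
    for conf in /etc/slurm/master.conf /etc/slurm/master1.conf \
                /etc/slurm/master2.conf; do
        echo "== $conf =="
        SLURM_CONF=$conf sinfo     # node information for that cluster
        SLURM_CONF=$conf squeue    # pending/running jobs for that cluster
    done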


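And for the accounting route, a hedged sketch of the client-side usage
once every cluster's slurm.conf points at the same slurmdbd
(AccountingStorageType=accounting_storage/slurmdbd plus an
AccountingStorageHost naming that daemon's machine) and each cluster
has been registered there with sacctmgr; cluster names are again
illustrative:

    sacct -M master1,master2                      # accounting for selected clusters
    sacct --allclusters                           # everything the shared slurmdbd knows
    sreport cluster utilization cluster=master1   # per-cluster reporting
    squeue -M all                                 # live queues of all registered clusters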