[slurm-users] Migration of slurm communication network / Steps / how to

Ryan Novosielski novosirj at rutgers.edu
Mon Apr 24 04:32:12 UTC 2023

I think it’s easier than all of this. Are you actually changing the names of all of these things, or just the IP addresses? If they all resolve to an IP now and you can bring everything down and change the hosts files or DNS, then as long as the names aren’t changing, that’s that. I know that “scontrol show cluster” will show the wrong IP address, but I believe that updates itself. 

The names of the servers are in slurm.conf, but again, if the names don’t change, that won’t matter. If you have IPs there, you will need to change them. 
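For example (hypothetical host names and entries), a slurm.conf that relies only on resolvable hostnames needs no edit after the move, while one that hard-codes addresses on the old subnet does:

```
# Needs no change if DNS / the hosts files are updated:
SlurmctldHost=ctl-node
NodeName=node[1-8] CPUs=16 State=UNKNOWN

# Would need editing after the move (old subnet hard-coded):
# SlurmctldHost=ctl-node(192.168.5.10)
# NodeName=node[1-8] NodeAddr=192.168.5.[11-18] CPUs=16 State=UNKNOWN
```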

Sent from my iPhone

> On Apr 23, 2023, at 14:01, Purvesh Parmar <purveshp0507 at gmail.com> wrote:
> Hello,
> We have Slurm 21.08 on Ubuntu 20. We have a cluster of 8 nodes. All Slurm communication happens over the 192.168.5.x network (LAN). However, as per requirement, we are now migrating the cluster to other premises, where we have a 172.16.1.x LAN. I have to migrate the entire setup, including slurmdbd (MariaDB), slurmctld, and slurmd. The cluster network is changing from 192.168.5.x to 172.16.1.x, and each node will be assigned an IP address from the 172.16.1.x network. 
> The cluster has been running for the last 3 months and it is required to maintain the old usage stats as well.
>  Is the procedure below correct?
> 1) Stop slurm
> 2) suspend all the queued jobs
> 3) backup slurm database
> 4) change the slurm & munge configuration i.e. munge conf, mariadb conf, slurmdbd.conf, slurmctld.conf, slurmd.conf (on compute nodes), gres.conf, service file 
> 5) Then update the Slurm database by executing the command below
> sacctmgr modify node where node=old_name set name=new_name
> for all the nodes.
> Also, I think the Slurm server name and the slurmdbd server name also need to be updated. I am still checking how to do that.
> 6) Finally, start slurmdbd, slurmctld on server and slurmd on compute nodes
> Please help and guide on the above.
> Regards,
> Purvesh Parmar
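
As a minimal sketch of the hosts-file approach described above (hypothetical node names and addresses; on a real node this would target /etc/hosts, with the Slurm daemons stopped first on every node):

```shell
# Demo hosts file standing in for /etc/hosts (hypothetical entries).
printf '192.168.5.10 ctl-node\n192.168.5.11 node1\n' > hosts.demo

# Rewrite the old LAN prefix to the new one, keeping a .bak backup.
sed -i.bak 's/^192\.168\.5\./172.16.1./' hosts.demo

cat hosts.demo   # now lists the 172.16.1.x addresses
```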
