[slurm-users] mpi on multiple nodes
Mahmood Naderan
mahmood.nt at gmail.com
Tue Mar 13 13:30:16 MDT 2018
Hi,
For a simple MPI "hello world" program, I have written this script in order to
receive one message from each of the compute nodes:
#!/bin/bash
#SBATCH --output=hello.out
#SBATCH --job-name=hello
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
mpirun mpihello
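(For reference, a variant of the script that explicitly asks for one task on
each of two nodes would look like the sketch below; note that with only
--ntasks=2, Slurm is free to place both tasks on a single node that has enough
CPUs. This is just an illustrative config fragment, not something I have tested
on this cluster.)

```shell
#!/bin/bash
#SBATCH --output=hello.out
#SBATCH --job-name=hello
#SBATCH --nodes=2              # request two distinct nodes
#SBATCH --ntasks-per-node=1    # one MPI rank on each node
#SBATCH --cpus-per-task=1
mpirun mpihello
```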
The node information shows:
[mahmood at rocks7 ~]$ rocks report slurmnodes
#### Auto Created ########
NodeName=compute-0-0 NodeAddr=10.1.1.254 CPUs=2 Weight=20481900
Feature=rack-0,2CPUs
NodeName=compute-0-1 NodeAddr=10.1.1.253 CPUs=4 Weight=20483899
Feature=rack-0,4CPUs
However, the output is:
[mahmood at rocks7 ~]$ cat hello.out
Hello world from processor compute-0-0.local, rank 1 out of 2 processors
Hello world from processor compute-0-0.local, rank 0 out of 2 processors
I expected to see one message from compute-0-0.local and one from
compute-0-1.local. Any idea about that?
Regards,
Mahmood