<div class="">Hi folks,<br class="">
<br class="">
I am trying (unsuccessfully) to use MPI Spawn with IntelMPI and Slurm over multiple nodes. A single node works OK using mpiexec.hydra:<br class="">
<br class="">
mpiexec.hydra -n 1 python ./script_here.py<br class="">
<br class="">
My MPI is a task farm essentially based on - <a href="https://github.com/jbornschein/mpi4py-examples/blob/master/10-task-pull-spawn.py" class="">https://github.com/jbornschein/mpi4py-examples/blob/master/10-task-pull-spawn.py</a>. The full code is at - https://github.com/gprMax/gprMax/blob/master/gprMax/gprMax.py#L323<br class="">
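In case the pattern is easier to see without digging through the gprMax source, the master side of the task farm is roughly the following (a minimal sketch along the lines of the linked 10-task-pull-spawn.py example; worker.py, the tag values and the task list are illustrative placeholders, not the actual gprMax code):

from mpi4py import MPI
import sys

nworkers = 47                      # number of workers to spawn (sketch value)
tasks = list(range(100))           # placeholder list of work items

# Spawn the worker processes from the single master rank started by
# mpiexec.hydra; this is the step that needs to reach the second node.
comm = MPI.COMM_WORLD.Spawn(sys.executable, args=['worker.py'],
                            maxprocs=nworkers)

status = MPI.Status()
closed = 0
while closed < nworkers:
    # Each worker sends a message when it is ready for work (or returns a
    # result); reply with either the next task or a shutdown signal.
    comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
    source = status.Get_source()
    if tasks:
        comm.send(tasks.pop(), dest=source, tag=1)   # tag 1 = here is a task
    else:
        comm.send(None, dest=source, tag=0)          # tag 0 = no more work, exit
        closed += 1

comm.Disconnect()

(The workers sit in the matching recv/send loop on the intercommunicator returned by MPI.Comm.Get_parent().)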
<br class="">
I have also tried using srun but it does not spawn the correct number of tasks. <br class="">
<br class="">
Any advice on routes to solve the problem would be most welcome!<br class="">
<br class="">
My full submit script:</div>
<div class=""><br class="">
</div>
<div class="">#!/bin/bash<br class="">
<br class="">
#SBATCH --account=****<br class="">
#SBATCH --nodes=2<br class="">
#SBATCH --ntasks=48<br class="">
#SBATCH --cpus-per-task=1<br class="">
#SBATCH --output=gprmax_mpi_cpu_2nodes-out.%j<br class="">
#SBATCH --error=gprmax_mpi_cpu_2nodes-err.%j<br class="">
#SBATCH --time=00:05:00<br class="">
#SBATCH --partition=devel<br class="">
<br class="">
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}<br class="">
module --force purge<br class="">
module use /usr/local/software/jureca/OtherStages<br class="">
module load Stages/2018b<br class="">
module load Intel IntelMPI<br class="">
<br class="">
cd /p/project/****/****/gprMax<br class="">
source activate gprMax<br class="">
<br class="">
mpiexec.hydra -n 1 python -m gprMax user_models/cylinder_Bscan_2D.in -n 47 -mpi 48</div>
<div class=""><br class="">
</div>
<div class="">Kind regards,</div>
<div class=""><br class="">
Craig<br class="">
<div class="">
<div id="sig" style="font-family: 'Myriad', Arial, Helvetica, Sans-Serif; font-size: 12px;" class="">
<p class=""><strong class="">Dr Craig Warren</strong><span style="color: rgb(128,128,128)" class=""> CEng MIMechE MIET FHEA</span><br class="">
<i class="">Senior Lecturer, Department of Mechanical & Construction Engineering</i>
</p>
<p class=""><img style="float: left; margin: -15px 0 0 -15px; padding: 0 10px 0 0;" src="http://www.gprmax.com/images/logo_NU.svg" alt="Northumbria University" width="217" height="95" class="">
</p>
<p style="padding: 5px 0 0 0;" class="">T: +44 (0)191 227 3633<br class="">
E: <a href="mailto:craig.warren@northumbria.ac.uk" class="">craig.warren@northumbria.ac.uk</a><br class="">
W: <a href="http://www.northumbria.ac.uk" class="">northumbria.ac.uk</a><br class="">
Twitter: <a href="http://www.twitter.com/DrCraigWarren" class="">@DrCraigWarren</a>,
<a href="http://www.twitter.com/enginerdsUK" class="">@enginerdsUK</a> </p>
<p style="clear: left;" class=""><span style="color: rgb(128,128,128)" class="">Room 117, Wynne Jones Building, Northumbria University, Newcastle upon Tyne, NE1 8ST, United Kingdom</span>
<br class="">
</p>
<p style="clear: left;" class=""></p>
</div>
</div>
<br class="">
</div>