<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<div>Hi there all,</div>
<div>We're using Slurm 21.08 on a Red Hat 7.9 HPC cluster with OpenMPI
4.0.3 and GCC 8.5.0.</div>
<div>When we run the commands below to call SU2, we get the following
error message:</div>
<div><br>
</div>
<div><i>$ srun -p defq --nodes=1 --ntasks-per-node=1 --time=01:00:00
--pty bash -i</i></div>
<div><i>$ module load su2/7.5.1</i></div>
<div><i>$ SU2_CFD config.cfg</i><br>
</div>
<div><br>
</div>
<div><i>*** An error occurred in MPI_Init_thread</i></div>
<div><i>*** on a NULL communicator</i></div>
<div><i>*** MPI_ERRORS_ARE_FATAL (processes in this communicator
will now abort,</i></div>
<div><i>*** and potentially your MPI job)</i></div>
<div><i>[cnode003.hpc:17534] Local abort before MPI_INIT completed
completed successfully, but am not able to aggregate error
messages, and not able to guarantee that all other processes
were killed!</i></div>
<pre class="moz-signature" cols="72">--
Best regards,
Aziz Öğütlü
Eduline Bilişim Sanayi ve Ticaret Ltd. Şti. <a class="moz-txt-link-abbreviated" href="http://www.eduline.com.tr">www.eduline.com.tr</a>
Merkez Mah. Ayazma Cad. No:37 Papirus Plaza
Kat:6 Ofis No:118 Kağıthane - İstanbul - Türkiye 34406
Tel : +90 212 324 60 61 Cep: +90 541 350 40 72</pre>
</body>
</html>