[slurm-users] HELP: error between compilation and execution on gpu cluster

Saksham Pande 5-Year IDD Physics saksham.pande.phy20 at itbhu.ac.in
Fri May 19 02:13:49 UTC 2023


Hi everyone,
I am trying to run a simulation package on Slurm using openmpi-4.1.1 and
cuda/11.1. When I execute the following, I get the error shown below:

srun --mpi=pmi2 --nodes=1 --ntasks-per-node=5 --partition=gpu --gres=gpu:1
--time=02:00:00 --pty bash -i
./<executable>


```
._____________________________________________________________________________________
|
| Initial checks...
| All good.
|_____________________________________________________________________________________
[gpu008:162305] OPAL ERROR: Not initialized in file pmix3x_client.c at line
112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[gpu008:162305] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not able to
guarantee that all other processes were killed!
```
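
If I read this right, it is saying the openmpi/4.1.1 module itself was built
without Slurm's PMI support. Is checking the build with something like the
following the right way to confirm that? (ompi_info ships with the same
module; the grep patterns and what to look for are just my guess.)

```
module load gcc/10.2 openmpi/4.1.1 cuda/11.1

# Show the flags Open MPI was configured with
# (looking for --with-pmi / --with-pmix here)
ompi_info | grep -i "configure command"

# List any PMI/PMIx-related components that were built
ompi_info | grep -i pmi
```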


I am using the following modules: gcc/10.2, openmpi/4.1.1, cuda/11.1.
Running which mpic++, which mpirun, or which nvcc returns the corresponding
module paths, which look correct. I also adjusted $PATH and $LD_LIBRARY_PATH
based on the output of ldd <executable>, but I still get the same error.
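
Concretely, the checks I have been doing look roughly like this (exact paths
omitted, since they depend on the module install):

```
module load gcc/10.2 openmpi/4.1.1 cuda/11.1

# All of these resolve to the module install trees
which mpic++ mpirun nvcc

# Confirm the binary links against the module's MPI and CUDA libraries
# rather than some other copy on the system
ldd ./<executable> | grep -Ei 'mpi|cuda'
```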

[sakshamp.phy20.itbhu at login2 menura]$ srun --mpi=list
srun: MPI types are...
srun: cray_shasta
srun: none
srun: pmi2
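
Since pmi2 is listed there, would launching through mpirun inside a batch
allocation, instead of direct-launching with srun, sidestep the PMI
requirement? This is only a sketch of what I mean (partition, gres and task
counts copied from my srun line above):

```
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=5
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00

module load gcc/10.2 openmpi/4.1.1 cuda/11.1

# Let Open MPI's own launcher start the ranks instead of srun's PMI
mpirun -np 5 ./<executable>
```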

What should I do from here? I have been stuck on this error for 6 days now.
If the problem is a build difference, I will have to ask the sysadmin to fix
it. Since there is already a pairing problem between OpenMPI and Slurm, are
there other errors I should expect between CUDA and OpenMPI?
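
And if the build really is missing PMI support, is something along these
lines what I should ask the sysadmin for? (The prefixes below are
placeholders; I do not know the actual Slurm and CUDA install paths on this
cluster.)

```
# Rebuild Open MPI 4.1.1 with Slurm PMI and CUDA support
# (prefixes are placeholders)
./configure --prefix=/path/to/openmpi-4.1.1 \
            --with-slurm \
            --with-pmi=/path/to/slurm \
            --with-cuda=/path/to/cuda-11.1
make -j all && make install
```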

Thanks