Cross-posting to the Slurm, PMIx and UCX lists.

Trying to execute a simple Open MPI (4.0.1) mpi-hello-world via Slurm (19.05.0), built with both PMIx (3.1.2) and UCX (1.5.0) support, results in the following:

[root@n1 ~]# SLURM_PMIX_DIRECT_CONN_UCX=true SLURM_PMIX_DIRECT_CONN=true OMPI_MCA_pml=true OMPI_MCA_btl='^vader,tcp,openib' UCX_NET_DEVICES='mlx4_0:1' SLURM_PMIX_DIRECT_CONN_EARLY=false UCX_TLS=rc,shm srun --export SLURM_PMIX_DIRECT_CONN_UCX,SLURM_PMIX_DIRECT_CONN,OMPI_MCA_pml,OMPI_MCA_btl,UCX_NET_DEVICES,SLURM_PMIX_DIRECT_CONN_EARLY,UCX_TLS --mpi=pmix -N 2 -n 2 /data/mpihello/mpihello

slurmstepd: error: n1 [0] pmixp_dconn_ucx.c:668 [_ucx_connect] mpi/pmix: ERROR: ucp_ep_create failed: Input/output error
slurmstepd: error: n1 [0] pmixp_dconn.h:243 [pmixp_dconn_connect] mpi/pmix: ERROR: Cannot establish direct connection to n2 (1)
slurmstepd: error: n1 [0] pmixp_server.c:731 [_process_extended_hdr] mpi/pmix: ERROR: Unable to connect to 1
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
slurmstepd: error: n2 [1] pmixp_dconn_ucx.c:668 [_ucx_connect] mpi/pmix: ERROR: ucp_ep_create failed: Input/output error
slurmstepd: error: n2 [1] pmixp_dconn.h:243 [pmixp_dconn_connect] mpi/pmix: ERROR: Cannot establish direct connection to n1 (0)
slurmstepd: error: *** STEP 7202.0 ON n1 CANCELLED AT 2019-07-01T13:20:36 ***
slurmstepd: error: n2 [1] pmixp_server.c:731 [_process_extended_hdr] mpi/pmix: ERROR: Unable to connect to 0
srun: error: n2: task 1: Killed
srun: error: n1: task 0: Killed

However, the same run works when SLURM_PMIX_DIRECT_CONN_UCX=false, regardless of SLURM_PMIX_DIRECT_CONN_EARLY:

[root@n1 ~]# SLURM_PMIX_DIRECT_CONN_UCX=false SLURM_PMIX_DIRECT_CONN=true OMPI_MCA_pml=true OMPI_MCA_btl='^vader,tcp,openib' UCX_NET_DEVICES='mlx4_0:1' SLURM_PMIX_DIRECT_CONN_EARLY=false UCX_TLS=rc,shm srun --export SLURM_PMIX_DIRECT_CONN_UCX,SLURM_PMIX_DIRECT_CONN,OMPI_MCA_pml,OMPI_MCA_btl,UCX_NET_DEVICES,SLURM_PMIX_DIRECT_CONN_EARLY,UCX_TLS --mpi=pmix -N 2 -n 2 /data/mpihello/mpihello

n2: Process 1 out of 2
n1: Process 0 out of 2

[root@n1 ~]# SLURM_PMIX_DIRECT_CONN_UCX=false SLURM_PMIX_DIRECT_CONN=true OMPI_MCA_pml=true OMPI_MCA_btl='^vader,tcp,openib' UCX_NET_DEVICES='mlx4_0:1' SLURM_PMIX_DIRECT_CONN_EARLY=true UCX_TLS=rc,shm srun --export SLURM_PMIX_DIRECT_CONN_UCX,SLURM_PMIX_DIRECT_CONN,OMPI_MCA_pml,OMPI_MCA_btl,UCX_NET_DEVICES,SLURM_PMIX_DIRECT_CONN_EARLY,UCX_TLS --mpi=pmix -N 2 -n 2 /data/mpihello/mpihello

n2: Process 1 out of 2
n1: Process 0 out of 2

Executing mpirun directly (with the same environment variables, minus the SLURM_PMIX_* ones) works, so UCX itself appears to be functioning correctly.

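Roughly like the following (a sketch from memory, not a verbatim copy of the command; the host list and the explicit -x forwarding of the UCX_* variables are my assumptions):

# sketch only: -H n1,n2 and the -x forwarding are assumptions, not the exact command used
OMPI_MCA_pml=true OMPI_MCA_btl='^vader,tcp,openib' UCX_NET_DEVICES='mlx4_0:1' UCX_TLS=rc,shm \
    mpirun -np 2 -H n1,n2 -x UCX_NET_DEVICES -x UCX_TLS /data/mpihello/mpihello
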
If both SLURM_PMIX_DIRECT_CONN_EARLY=true and SLURM_PMIX_DIRECT_CONN_UCX=true are set, I instead get collective timeout errors from mellanox/hcoll, followed by glibc reporting memory corruption in the application ("/data/mpihello/mpihello: malloc(): memory corruption (fast)").

Can anyone help with getting the PMIx direct connection over UCX working in Slurm?

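I can gather more diagnostics if that would help — for example, a plain UCX-level point-to-point test between the two nodes with the same transport settings, along these lines (a sketch of what I would run, not output I already have):

# UCX-level sanity check between n1 and n2 (sketch; same UCX settings as the srun case)
# on n2 (server side):
UCX_NET_DEVICES=mlx4_0:1 UCX_TLS=rc,shm ucx_perftest
# on n1 (client side):
UCX_NET_DEVICES=mlx4_0:1 UCX_TLS=rc,shm ucx_perftest n2 -t tag_lat
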
Some info about my setup:

UCX version:

[root@n1 ~]# ucx_info -v
# UCT version=1.5.0 revision 02078b9
# configured with: --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --disable-optimizations --disable-logging --disable-debug --disable-assertions --enable-mt --disable-params-check

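I can also post the full device/transport listing from the nodes if that is useful, e.g.:

# lists the memory domains and transports (rc, shm, ...) that UCX sees on this node
ucx_info -d
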
Mellanox OFED version:

[root@n1 ~]# ofed_info -s
OFED-internal-4.5-1.0.1:

Slurm:

Slurm was built with:
rpmbuild -ta slurm-19.05.0.tar.bz2 --without debug --with ucx --define '_with_pmix --with-pmix=/usr'

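If it is relevant, I can also check what the Slurm pmix plugin is actually linked against, e.g. (the plugin path and file name below are assumptions for this libdir=/usr/lib64 build):

# shows which libpmix / libucp the pmix MPI plugin links against (path/name assumed)
ldd /usr/lib64/slurm/mpi_pmix.so | grep -E 'pmix|ucp'
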
PMIx:

[root@n1 ~]# pmix_info -c --parsable
config:user:root
config:timestamp:"Mon Mar 25 09:51:04 IST 2019"
config:host:slurm-test
config:cli: '--host=x86_64-redhat-linux-gnu' '--build=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info'

Thanks,
--Dani_L.