[slurm-users] SLURM heterogeneous jobs, a little help needed plz

Frava fravadona at gmail.com
Tue Mar 19 21:03:27 UTC 2019


Hi all,

I'm struggling to get a heterogeneous job to run...
The SLURM version installed on the cluster is 17.11.12.

Here are the #SBATCH parameters of the job's batch file:
================================================================================
#!/bin/bash
#SBATCH --threads-per-core=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=8G
#
#SBATCH packjob
#
#SBATCH --threads-per-core=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2G
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --gres-flags=enforce-binding
================================================================================
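(For what it's worth, if I read the heterogeneous jobs documentation correctly, the same two-component request could also be passed directly on the sbatch command line, with ":" separating the components; the flags below are just a rough sketch of my batch file above and "myjob.sbatch" is a placeholder name:

sbatch --ntasks=1 --cpus-per-task=1 --mem-per-cpu=8G : \
       --nodes=2 --ntasks-per-node=2 --cpus-per-task=4 --mem-per-cpu=2G --gres=gpu:2 myjob.sbatch
)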

Now, here are the srun commands that I tried in the SBATCH file:

1) srun --mpi=pmix myapp
=> The app gets allocated only 1 MPI rank

2) srun --mpi=pmix --pack-group=0 --ntasks=1 : --pack-group=1 --ntasks=4 myapp
=> srun: fatal: Job steps that span multiple components of a heterogeneous job are not currently supported

3) srun --mpi=pmix --pack-group=0 --ntasks=1 myapp : --pack-group=1 --ntasks=4 myapp
=> srun: fatal: Job steps that span multiple components of a heterogeneous job are not currently supported
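For completeness, the only other form I can think of (if I understand that limitation correctly) would be to launch each pack group as its own step in the background and wait for both, but that would presumably give two independent MPI jobs rather than one job spanning both components, which is not what I want:

srun --mpi=pmix --pack-group=0 --ntasks=1 myapp &
srun --mpi=pmix --pack-group=1 --ntasks=4 myapp &
wait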

So my question is: how do we run a heterogeneous job?

Thanks for your tips,
Rafael N.