[slurm-users] siesta jobs with slurm, an issue

Mahmood Naderan mahmood.nt at gmail.com
Sun Jul 22 10:07:12 MDT 2018


Hi,
I don't know why my siesta jobs are aborted by Slurm.

[mahmood at rocks7 sie]$ cat slurm_script.sh
#!/bin/bash
#SBATCH --output=siesta.out
#SBATCH --job-name=siesta
#SBATCH --ntasks=8
#SBATCH --mem=4G
#SBATCH --account=z3
#SBATCH --partition=EMERALD
mpirun /share/apps/chem/siesta-4.0.2/spar/siesta prime.fdf prime.out
[mahmood at rocks7 sie]$ sbatch slurm_script.sh
Submitted batch job 783
[mahmood at rocks7 sie]$ squeue --job 783
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
[mahmood at rocks7 sie]$ cat siesta.out
Siesta Version  : v4.0.2
Architecture    : x86_64-unknown-linux-gnu--unknown
Compiler version: GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
Compiler flags  : mpifort -g -O2
PP flags        : -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
PARALLEL version

* Running on    8 nodes in parallel
>> Start of run:  22-JUL-2018  20:33:36

                           ***********************
                           *  WELCOME TO SIESTA  *
                           ***********************

reinit: Reading from standard input
************************** Dump of input data file ****************************
************************** End of input data file *****************************

reinit:
-----------------------------------------------------------------------
reinit: System Name:
reinit:
-----------------------------------------------------------------------
reinit: System Label: siesta
reinit:
-----------------------------------------------------------------------
No species found!!!
Stopping Program from Node:    0

initatom: Reading input for the pseudopotentials and atomic orbitals
----------
No species found!!!
Stopping Program from Node:    0
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[mahmood at rocks7 sie]$



However, I am able to run that command with "-np 4" on the head node, so I
don't know whether the problem is with the compute nodes or something else.
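
One thing I noticed, though I am not sure it is the cause: the output says
"reinit: Reading from standard input" and the dump of the input data file is
empty, so perhaps siesta expects prime.fdf on standard input rather than as a
command-line argument. A minimal sketch of the batch script with shell
redirection, assuming that is what is going wrong, would be:

#!/bin/bash
#SBATCH --output=siesta.out
#SBATCH --job-name=siesta
#SBATCH --ntasks=8
#SBATCH --mem=4G
#SBATCH --account=z3
#SBATCH --partition=EMERALD
# Assumption: siesta reads its FDF input from stdin, so redirect the
# input file in and capture the program output with ">".
mpirun /share/apps/chem/siesta-4.0.2/spar/siesta < prime.fdf > prime.out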

Any idea?

Regards,
Mahmood


More information about the slurm-users mailing list