<div dir="ltr"><div class="gmail_default" style="font-family:tahoma,sans-serif">Hi,</div><div class="gmail_default" style="font-family:tahoma,sans-serif">I don't know why siesta jobs are aborted by the slurm.</div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">[mahmood@rocks7 sie]$ cat slurm_script.sh<br>#!/bin/bash<br>#SBATCH --output=siesta.out<br>#SBATCH --job-name=siesta<br>#SBATCH --ntasks=8<br>#SBATCH --mem=4G<br>#SBATCH --account=z3<br>#SBATCH --partition=EMERALD<br>mpirun /share/apps/chem/siesta-4.0.2/spar/siesta prime.fdf prime.out<br>[mahmood@rocks7 sie]$ sbatch slurm_script.sh<br>Submitted batch job 783<br>[mahmood@rocks7 sie]$ squeue --job 783<br> JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)<br>[mahmood@rocks7 sie]$ cat siesta.out<br>Siesta Version : v4.0.2<br>Architecture : x86_64-unknown-linux-gnu--unknown<br>Compiler version: GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)<br>Compiler flags : mpifort -g -O2<br>PP flags : -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT<br>PARALLEL version<br><br>* Running on 8 nodes in parallel<br>>> Start of run: 22-JUL-2018 20:33:36<br><br> ***********************<br> * WELCOME TO SIESTA *<br> ***********************<br><br>reinit: Reading from standard input<br>************************** Dump of input data file ****************************<br>************************** End of input data file *****************************<br><br>reinit: -----------------------------------------------------------------------<br>reinit: System Name:<br>reinit: -----------------------------------------------------------------------<br>reinit: System Label: siesta<br>reinit: -----------------------------------------------------------------------<br>No species found!!!<br>Stopping Program from Node: 0<br><br>initatom: Reading input for the pseudopotentials and atomic orbitals ----------<br>No species found!!!<br>Stopping Program from Node: 0<br>--------------------------------------------------------------------------<br>MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD<br>with errorcode 1.<br><br>NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.<br>You may or may not see output from other processes, depending on<br>exactly when Open MPI kills them.<br>--------------------------------------------------------------------------<br>[mahmood@rocks7 sie]$<br></div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">However, I am able to run that command with "-np 4" on the head node. So, I don't know is there any problem with the compute node or something else.</div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br></div><div class="gmail_default" style="font-family:tahoma,sans-serif">Any idea?</div><div class="gmail_default" style="font-family:tahoma,sans-serif"><br clear="all"></div><div><div class="gmail_signature"><div dir="ltr"><font face="tahoma,sans-serif">Regards,<br>Mahmood</font><br><br><br></div></div></div>