Your --nodes line is incorrect:

    #SBATCH --nodes=2,4

It looks like Slurm ignored it and fell back to --ntasks with one task per node, giving you three nodes. Check your logs and check your slurm.conf to see what your defaults are.
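If what you want is the three tasks on exactly two nodes, a minimal sketch of the batch header would be something like the below. (The min-max range form --nodes=2-4 also works if you only want to bound the count; the comma-separated list form --nodes=2,4 is only understood by newer Slurm, 23.02 or later as far as I know, which would explain it being ignored on an older release.)

    #SBATCH --ntasks=3
    #SBATCH --nodes=2                  # exactly two nodes; Slurm packs the 3 tasks as 2+1
    ##SBATCH --nodes=2-4               # alternative: min-max range, accepted by old and new Slurm
    #SBATCH --constraint="[intel|amd]"

With --ntasks=3 and an explicit --nodes=2, Slurm distributes the three tasks across the two nodes instead of defaulting to one task per node.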
Brian Andrus
Hello,

I have a cluster with four Intel nodes (node[01-04], Feature=intel) and four AMD nodes (node[05-08], Feature=amd).

# job file
#SBATCH --ntasks=3
#SBATCH --nodes=2,4
#SBATCH --constraint="[intel|amd]"
env | grep SLURM

# slurm.conf
PartitionName=DEFAULT MinNodes=1 MaxNodes=UNLIMITED

# log
SLURM_JOB_USER=software
SLURM_TASKS_PER_NODE=1(x3)
SLURM_JOB_UID=1002
SLURM_TASK_PID=49987
SLURM_LOCALID=0
SLURM_SUBMIT_DIR=/home/software
SLURMD_NODENAME=node01
SLURM_JOB_START_TIME=1724932865
SLURM_CLUSTER_NAME=cluster
SLURM_JOB_END_TIME=1724933465
SLURM_CPUS_ON_NODE=1
SLURM_JOB_CPUS_PER_NODE=1(x3)
SLURM_GTIDS=0
SLURM_JOB_PARTITION=nodes
SLURM_JOB_NUM_NODES=3
SLURM_JOBID=26
SLURM_JOB_QOS=lprio
SLURM_PROCID=0
SLURM_NTASKS=3
SLURM_TOPOLOGY_ADDR=node01
SLURM_TOPOLOGY_ADDR_PATTERN=node
SLURM_MEM_PER_CPU=0
SLURM_NODELIST=node[01-03]
SLURM_JOB_ACCOUNT=dalco
SLURM_PRIO_PROCESS=0
SLURM_NPROCS=3
SLURM_NNODES=3
SLURM_SUBMIT_HOST=master
SLURM_JOB_ID=26
SLURM_NODEID=0
SLURM_CONF=/etc/slurm/slurm.conf
SLURM_JOB_NAME=mpijob
SLURM_JOB_GID=1002
SLURM_JOB_NODELIST=node[01-03]   <<<=== why three nodes? Shouldn't this still be two nodes?

Thank you.