[slurm-users] Odd behavior/bug with --array env SLURM_ARRAY_TASK_MAX
E V
eliventer at gmail.com
Tue Feb 20 06:28:23 MST 2018
Well, isn't the next job index 10? Counting by 5 from 0 with a
specified maximum index of 9, the only indices that exist are 0 and 5,
so 5 is the largest index that will ever run for your specification.
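
To make the arithmetic concrete, here is a minimal sketch (plain
Python rather than anything from Slurm itself; expand_array_spec is a
hypothetical helper, Slurm's real parser lives in its C sources):

# Expand a Slurm-style "start-end:step" array range the way sbatch does.
def expand_array_spec(start, end, step):
    return list(range(start, end + 1, step))

ids = expand_array_spec(0, 9, 5)   # --array=0-9:5
print(ids)        # [0, 5]
print(max(ids))   # 5, the largest SLURM_ARRAY_TASK_ID that will run
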
On Mon, Feb 19, 2018 at 4:06 PM, Robert Anderson <rea at sr.unh.edu> wrote:
> While working on an example Python Slurm job script, I found that the
> environment variable SLURM_ARRAY_TASK_MAX was not set to the expected
> value when a step is defined.
>
> Below is a minimal test of a 10-element array job with a step value of
> 5. When a step value is defined, SLURM_ARRAY_TASK_MAX is set to the
> maximum value that Slurm will actually provide as a
> SLURM_ARRAY_TASK_ID, not to the expected "array max value" of the
> submitted range.
>
> The current behavior loses the only hook to the real "array max"
> value. I can think of no reason why the current behavior would be
> preferred over the value I expected.
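>
> (In the meantime, a workaround sketch, assuming the requested maximum
> is known at submission time: sbatch propagates the submitting shell's
> environment by default, so the real max can be passed through a
> custom variable. REAL_ARRAY_MAX is a hypothetical name.)
>
> bash-4.2$ REAL_ARRAY_MAX=9 sbatch slurm_test.py
>
> with the script reading it back via os.environ['REAL_ARRAY_MAX'].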
>
> Am I missing something?
>
>
> We are using "slurm 17.02.2" on SL7.3, kernel 3.10.0-229.20.1.el7.x86_64.
> Searches reveal very little information on SLURM_ARRAY_TASK_MAX, so I
> do not believe this has been previously reported and/or fixed.
>
>
> Thanks in advance for any clarification and/or bug acknowledgment. If
> it really is a bug, how do I officially report it? Is this the correct
> place?
>
> ----- slurm_test.py -----
>
> #!/usr/bin/python
> #SBATCH --array=0-9:5%2
> import os
> print('''
> ID={e[SLURM_ARRAY_TASK_ID]}
> MIN={e[SLURM_ARRAY_TASK_MIN]}
> MAX={e[SLURM_ARRAY_TASK_MAX]}
> '''.format(e=os.environ))
>
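> For a quick look at everything Slurm actually sets, a variant of the
> same script could dump all of the SLURM_ARRAY_* variables (a sketch
> along the same lines, Python 2 and 3 compatible):
>
> #!/usr/bin/python
> #SBATCH --array=0-9:5%2
> import os
> # Print every SLURM_ARRAY_* variable this array task sees.
> for k in sorted(os.environ):
>     if k.startswith('SLURM_ARRAY'):
>         print('{0}={1}'.format(k, os.environ[k]))
>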
> ----- shell run -----
>
> bash-4.2$ sbatch --version
> slurm 17.02.2
> bash-4.2$ sbatch slurm_test.py
> Submitted batch job 20610
> bash-4.2$ ls slurm-20610_*
> slurm-20610_0.out slurm-20610_5.out
> bash-4.2$ cat slurm-20610_*
>
> ID=0
> MIN=0
> MAX=5
>
>
> ID=5
> MIN=0
> MAX=5
>
> --
> --------------------------------------------------
> Robert E. Anderson -- (603) 862-3489
> Associate Director -- UNH Research Computing Center
> http://www.unh.edu/research/rcc
> --------------------------------------------------
>