[slurm-users] Submitting to multiple partitions problem with gres specified
Bas van der Vlies
bas.vandervlies at surf.nl
Mon Mar 8 18:49:55 UTC 2021
Same problem with 20.11.4:
```
[2021-03-08T19:46:09.378] _pick_best_nodes: JobId=1861 never runnable in
partition cpu_e5_2650_v2
[2021-03-08T19:46:09.378] debug2: job_allocate: setting JobId=1861 to
"BadConstraints" due to a flaw in the job request (Requested node
configuration is not available)
[2021-03-08T19:46:09.378] _slurm_rpc_allocate_resources: Requested node
configuration is not available
```
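For reference, on this setup a job submitted without a partition ends up
targeting both CPU partitions, so the failing request should be roughly
equivalent to the explicit form sketched below (the comma-separated
partition list is standard srun syntax; partition and gres names are taken
from the quoted message, the rest is only an illustration):
```
# Request a specific cpu_type gres while targeting both CPU partitions;
# only cpu_e5_2650_v1 has nodes that define the e5_2650_v1 gres.
srun --exclusive \
     --partition=cpu_e5_2650_v1,cpu_e5_2650_v2 \
     --gres=cpu_type:e5_2650_v1 \
     --pty /bin/bash
```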
On 08/03/2021 17:29, Bas van der Vlies wrote:
> Hi,
>
> On this cluster I have version 20.02.6 installed. We have different
> partitions for CPU and GPU types. We want to make it easy for users who
> do not care where their job runs, while experienced users can specify
> the gres type: cpu_type or gpu.
>
> I have defined 2 cpu partitions:
> * cpu_e5_2650_v1
> * cpu_e5_2650_v2
>
> and 2 cpu_type gres values (see the slurm.conf sketch below):
> * e5_2650_v1
> * e5_2650_v2
>
>
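> A minimal slurm.conf sketch of this layout could look roughly as follows
> (node ranges and CPU counts are illustrative assumptions; only r16n18 and
> the partition/gres names above are real):
>
>     GresTypes=cpu_type,gpu
>
>     # v1 nodes carry the e5_2650_v1 gres, v2 nodes the e5_2650_v2 gres
>     NodeName=r16n[18-32] CPUs=16 Gres=cpu_type:e5_2650_v1:1
>     NodeName=r16n[01-17] CPUs=16 Gres=cpu_type:e5_2650_v2:1
>
>     # Partition layout (how jobs without -p end up in both partitions,
>     # e.g. via a job_submit plugin, is not shown here)
>     PartitionName=cpu_e5_2650_v1 Nodes=r16n[18-32]
>     PartitionName=cpu_e5_2650_v2 Nodes=r16n[01-17]
>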
> When no partition is specified, the job is submitted to both partitions:
> * srun --exclusive --gres=cpu_type:e5_2650_v1 --pty /bin/bash -->
>    runs on r16n18, which has this gres defined and is in partition cpu_e5_2650_v1
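>
> For completeness, the gres actually defined on that node can be verified
> with the standard scontrol node query, e.g.:
>
>     scontrol show node r16n18 | grep -i gres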
>
> Now I submit another job at the same time:
> * srun --exclusive --gres=cpu_type:e5_2650_v1 --pty /bin/bash
>
> This fails with: `srun: error: Unable to allocate resources: Requested
> node configuration is not available`
>
> I would expect it gets queued in the partition `cpu_e5_2650_v1`.
>
>
> When I specify the partition on the command line:
> * srun --exclusive -p cpu_e5_2650_v1_shared
> --gres=cpu_type:e5_2650_v1 --pty /bin/bash
>
> srun: job 1856 queued and waiting for resources
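>
> What the pending job actually requested can be checked with the standard
> scontrol query, for example (field names differ slightly between Slurm
> versions; 1856 is the job id from the output above):
>
>     scontrol show job 1856 | grep -E 'Partition|Tres'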
>
>
> So the question is: can Slurm handle submitting to multiple partitions
> when gres attributes are specified?
>
> Regards
>
>
--
Bas van der Vlies
| HPCV Supercomputing | Internal Services | SURF | https://userinfo.surfsara.nl |
| Science Park 140 | 1098 XG Amsterdam | Phone: +31208001300 |
| bas.vandervlies at surf.nl