[slurm-users] Weirdness with partitions

Diego Zuccato diego.zuccato at unibo.it
Thu Sep 21 13:15:21 UTC 2023


Uh? It's not a problem if other users can see that there are jobs in the 
partition (IIUC, that's what 'hidden' is for), even if they can't use it.

The problem is that including it in --partition prevents the job from 
being queued at all!
Nothing in the documentation about --partition suggested that being 
denied access to one of the listed partitions would make the whole job 
unqueueable...
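
The workaround would be to filter the list myself in job_submit.lua, 
which is exactly what I'd like to avoid. Just to illustrate, a minimal 
sketch (assuming the stock job_submit/lua plugin API; the partition 
name "b4", the account name and the hard-coded check below are only 
placeholders) would be something like:

    -- job_submit.lua sketch: drop a restricted partition from the
    -- requested list instead of letting the whole submission fail.
    -- NOTE: "b4" and "restricted_acct" are placeholders.
    local RESTRICTED_PART = "b4"
    local ALLOWED_ACCOUNT = "restricted_acct"

    function slurm_job_submit(job_desc, part_list, submit_uid)
       -- job_desc.partition is the comma-separated --partition list (or nil)
       if job_desc.partition == nil then
          return slurm.SUCCESS
       end
       local kept = {}
       for part in string.gmatch(job_desc.partition, "[^,]+") do
          if part ~= RESTRICTED_PART or job_desc.account == ALLOWED_ACCOUNT then
             table.insert(kept, part)
          end
       end
       if #kept == 0 then
          slurm.log_user("none of the requested partitions is accessible")
          return slurm.ERROR
       end
       job_desc.partition = table.concat(kept, ",")
       return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
       return slurm.SUCCESS
    end

And even this is incomplete: job_desc.account can be nil when the user 
relies on the default account, so the script would have to look that up 
too, duplicating exactly the logic the scheduler already has.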

Diego

On 21/09/2023 14:41, David wrote:
> I would think that Slurm would only filter it out, potentially, if the 
> partition in question (b4) were marked as "hidden" and only accessible 
> to the correct account.
> 
> On Thu, Sep 21, 2023 at 3:11 AM Diego Zuccato 
> <diego.zuccato at unibo.it> wrote:
> 
>     Hello all.
> 
>     We have one partition (b4) that's reserved for an account while the
>     others are "free for all".
>     The problem is that
>     sbatch --partition=b1,b2,b3,b4,b5 test.sh
>     fails with
>     sbatch: error: Batch job submission failed: Invalid account or
>     account/partition combination specified
>     while
>     sbatch --partition=b1,b2,b3,b5 test.sh
>     succeeds.
> 
>     Shouldn't Slurm (22.05.6) just "filter out" the inaccessible
>     partition and consider only the others, just as it does when I
>     request more cores than are available on a node?
> 
>     I'd really like to avoid having to replicate scheduler logic in
>     job_submit.lua... :)
> 
>     -- 
>     Diego Zuccato
>     DIFA - Dip. di Fisica e Astronomia
>     Servizi Informatici
>     Alma Mater Studiorum - Università di Bologna
>     V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>     tel.: +39 051 20 95786
> 
> 
> 
> -- 
> David Rhey
> ---------------
> Advanced Research Computing
> University of Michigan

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786


