[slurm-users] exclusive or not exclusive, that is the question

Marcus Wagner wagner at itc.rwth-aachen.de
Tue Aug 20 08:34:45 UTC 2019


Just made another test.


Thank god, the exclusivity is not "destroyed" completely: only one
job can run on the node when the job is exclusive. Nonetheless, this
is somewhat unintuitive.
I wonder if that also has an influence on the cgroups and the process
affinity/binding.
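
A quick check I plan to run from inside such a job (just a sketch;
the exact cgroup path depends on the local setup):

    # CPUs the task is actually allowed to run on
    srun --jobid=<jobid> grep Cpus_allowed_list /proc/self/status
    # CPUs in the job's cpuset cgroup (cgroup v1 layout assumed)
    cat /sys/fs/cgroup/cpuset/slurm/uid_$UID/job_<jobid>/cpuset.cpus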

I will do some more tests.
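
One workaround I want to try (untested; the sbatch documentation says
a memory specification of zero grants the job all of the memory on
each node):

    #SBATCH --exclusive
    #SBATCH --ntasks=2
    #SBATCH --nodes=1
    #SBATCH --mem=0    # instead of --mem-per-cpu: all memory of the node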


Best
Marcus

On 8/20/19 9:47 AM, Marcus Wagner wrote:
> Hi Folks,
>
>
> I think I've stumbled over a BUG in Slurm regarding exclusiveness.
> It might also be that I've misinterpreted something; in that case I
> would be happy if someone could explain it to me.
>
> Some background: I have set PriorityFlags=MAX_TRES.
> The TRESBillingWeights are "CPU=1.0,Mem=0.1875G" for a partition
> with 48-core nodes and RealMemory=187200.
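>
> With MAX_TRES, as I understand it, the billing should be the maximum
> over the weighted TRES:
>
>    billing = max(1.0 * CPUs, 0.1875 * memory_in_GB)
>
> so a full node gives max(1.0 * 48, 0.1875 * 182.8) = max(48, 34.3) = 48.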
>
> ---
>
> I have two jobs:
>
> job 1:
> #SBATCH --exclusive
> #SBATCH --ntasks=2
> #SBATCH --nodes=1
>
> scontrol show job <jobid> =>
>    NumNodes=1 NumCPUs=48 NumTasks=2 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
>    TRES=cpu=48,mem=187200M,node=1,billing=48
>
> Exactly what I expected: I got 48 CPUs, and therefore the billing is 48.
>
> ---
>
> job 2 (just added mem-per-cpu):
> #SBATCH --exclusive
> #SBATCH --ntasks=2
> #SBATCH --nodes=1
> #SBATCH --mem-per-cpu=5000
>
> scontrol show job <jobid> =>
>    NumNodes=1-1 NumCPUs=2 NumTasks=2 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
>    TRES=cpu=2,mem=10000M,node=1,billing=2
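>
> The billing itself is consistent with MAX_TRES, if I calculate
> correctly: max(1.0 * 2, 0.1875 * 9.8) = 2. But only 2 CPUs were
> allocated despite --exclusive.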
>
> Why "destroys" '--mem-per-cpu' exclusivity?
>
>
>
> Best
> Marcus
>

-- 
Marcus Wagner, Dipl.-Inf.

IT Center
Abteilung: Systeme und Betrieb
RWTH Aachen University
Seffenter Weg 23
52074 Aachen
Tel: +49 241 80-24383
Fax: +49 241 80-624383
wagner at itc.rwth-aachen.de
www.itc.rwth-aachen.de



