[slurm-users] RawUsage 0??
Matthias Leopold
matthias.leopold at meduniwien.ac.at
Wed Apr 7 10:52:24 UTC 2021
The "solution" for my problem was very simple: after reboot of all hosts
in this test cluster (login node with slurmctld/slurmdbd + 2 worker
nodes) I do get reasonable values in sshare. Maybe I didn't do that
after finishing deepops installation procedure (but I didn't know I had
to do it and had no hints)
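
For anyone who runs into the same thing: a full reboot is probably not
strictly required. Restarting the Slurm daemons may have the same
effect; a minimal sketch, assuming the systemd units that deepops sets
up (unit names may differ on other installations):

    # on the login node (controller + accounting daemon)
    systemctl restart slurmdbd
    systemctl restart slurmctld

    # on each worker node
    systemctl restart slurmd
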
Sorry for bothering you
Matthias
On 06.04.21 at 17:06, Matthias Leopold wrote:
> Hi,
>
> I'm very new to Slurm and am trying to understand its basic concepts.
> One of them is the "Multifactor Priority Plugin". To explore it I
> submitted some jobs and looked at the sshare output. To my surprise I
> don't get any numbers for "RawUsage": regardless of what I do, RawUsage
> stays 0 (same in "scontrol show assoc_mgr" output). When I look at the
> CPU stats for the jobs I submit and complete (with sacct) I do see
> usage counters there, and I also see counters for TRESRunMins in sshare
> while a job is running, but RawUsage (and also GrpTRESRaw) stays empty.
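>
> For reference, the kind of output in question (field names as listed in
> the sshare and sacct man pages; <jobid> is a placeholder):
>
>     sshare -o Account,User,RawShares,RawUsage,TRESRunMins,GrpTRESRaw
>     scontrol show assoc_mgr
>     sacct -j <jobid> -o JobID,Elapsed,CPUTime,TotalCPU,State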
>
> I found this discussion of a similar topic:
> https://www.mail-archive.com/slurm-dev@schedmd.com/msg04435.html
>
> I can confirm that I waited longer than 5 minutes for sshare to update
> its values, and I also tried "PriorityDecayHalfLife=0".
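>
> (For completeness, the values the controller actually applies can be
> cross-checked with, for example:
>
>     scontrol show config | grep -i priority
>
> which lists PriorityType, PriorityDecayHalfLife, PriorityCalcPeriod and
> the PriorityWeight* factors.)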
>
> My slurm.conf currently includes:
>
> ##
>
> AccountingStorageType=accounting_storage/slurmdbd
> AccountingStorageTRES=gres/gpu
> AccountingStorageEnforce=associations,limits,qos
>
> JobAcctGatherType=jobacct_gather/linux # also tried "cgroup"
>
> SelectType=select/cons_tres
> SelectTypeParameters=CR_Core_Memory,CR_CORE_DEFAULT_DIST_BLOCK,CR_ONE_TASK_PER_CORE
>
> PriorityType=priority/multifactor
>
> SchedulerType=sched/backfill
>
>
> # only partition
> PartitionName=batch Nodes=ALL Default=YES DefMemPerCPU=0 State=UP
> OverSubscribe=NO MaxTime=INFINITE
>
> ##
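>
> Since AccountingStorageType points at slurmdbd, one sanity check (just
> a sketch) is whether the cluster and its associations are actually
> registered there:
>
>     sacctmgr show cluster
>     sacctmgr show associations format=Cluster,Account,User,Fairshare
>
> Both should return entries for this cluster if slurmdbd is reachable.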
>
> Slurm is 20.11.3, installed in VMs via the NVIDIA/deepops
> (https://github.com/nvidia/deepops/) Ansible playbooks.
>
> What am I missing?
>
> Thanks for any advice
> Matthias
>