[slurm-users] seff Not Calculating
Diego Zuccato
diego.zuccato at unibo.it
Mon Nov 9 11:53:27 UTC 2020
On 15/09/20 10:14, Diego Zuccato wrote:
It seems my corrections actually work only for single-node jobs.
For multi-node jobs, seff only considers the memory used on one
node, and therefore underestimates the real efficiency.
Can someone more knowledgeable than me spot the error?
TIA!
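
Building on the tres_usage_in_max parsing in the patch quoted below, one
untested direction might be to scale the per-node maximum by the node
count as a rough upper bound. This is only a sketch: the 'nnodes' field
name is a guess on my part, and it assumes memory use is roughly
balanced across the nodes:

my %hash   = split /[,=]/, $step->{'stats'}{'tres_usage_in_max'};
my $nnodes = $step->{'nnodes'} // 1;          # hypothetical field name, unverified
my $lmem   = ($hash{'2'} / 1024) * $nnodes;   # TRES ID 2 = mem in bytes; estimated KiB across all nodes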
> I'm neither a Perl nor a Slurm expert, so I'm quite sure there's a better
> way to do it, but I "fixed" it by changing line 66 from:
> my $ncpus = $job->{'alloc_cpus'};
> to:
> my $ncpus = $job->{'req_cpus'};
>
> And at ~ line 106, changing:
> my $lmem = $step->{'stats'}{'rss_max'};
> to:
> my %hash = split /[,=]/, $step->{'stats'}{'tres_usage_in_max'};
> my $lmem = $hash{'2'} / 1024;
>
> It seems to give meaningful results, for some value of "meaningful" :)
> Corrections welcome.
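
For anyone puzzled by the split above: in slurmdb the tres_usage_in_max
string is keyed by numeric TRES IDs (1 = cpu, 2 = mem, with mem
apparently in bytes, hence the /1024), which is why $hash{'2'} is the
memory figure. A minimal self-contained sketch, with a made-up sample
string:

my $tres = '1=1234,2=52428800';      # fabricated sample: TRES 2 (mem) = 50 MiB in bytes
my %hash = split /[,=]/, $tres;      # pairs up as ('1' => '1234', '2' => '52428800')
my $lmem = $hash{'2'} / 1024;        # bytes -> KiB, matching the units rss_max used
print "max RSS: $lmem KiB\n";        # prints "max RSS: 51200 KiB"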
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786