[slurm-users] How to allocate SMT cores
Maik Schmidt
maik.schmidt at tu-dresden.de
Fri Dec 7 08:31:31 MST 2018
I've used "numactl -show" to see which cores I actually got allocated
from SLURM, and that's only 0-43 with task/cgroup.
If I add task/affinity and use --hint=multithread, I can get all of
0-175 with -c 176 --hint=multithread.
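For reference, the full command then looks like this (reconstructed
from the run above; output abbreviated):

$ srun -p ml -n 1 -N 1 -c 176 --hint=multithread numactl -show
[...]
physcpubind: 0 1 2 3 [...] 174 175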
The problem then is that it *always* allocates the virtual SMT threads
first, even when the --hint parameter is not given at all, e.g.:
$ srun -p ml -n 1 -N 1 -c 4 numactl -show
[...]
physcpubind: 0 1 2 3
as opposed to:
$ srun --pty -p ml -n 1 -N 1 -c 4 --hint=nomultithread numactl -show
[...]
physcpubind: 0 4 8 12
With --hint=nomultithread I thus get one thread on each of four
physical cores (CPUs 4N through 4N+3 apparently being the SMT siblings
of core N). To make matters worse, I still can't allocate more than 44
CPUs when omitting the --hint parameter:
$ srun -p ml -n 1 -N 1 -c 176 numactl -show
srun: error: CPU count per node can not be satisfied
As a user, this is highly unintuitive: if I cannot request more than 44
CPUs, I expect to get 44 "real" cores, but with task/affinity that is
not the case. You would *always* have to pass the --hint parameter to
get sane behavior. IMHO this is broken, but maybe there are other
settings that could fix it? Any hint would be much appreciated.
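For reference, the combined plugin configuration from my tests above
looks like this (the plugin order is my guess at what is sensible):

TaskPlugin = task/affinity,task/cgroup
TaskPluginParam = cpusets,autobind=threads

i.e. task/affinity provides the --hint handling while task/cgroup keeps
enforcing the cpuset.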
Best, Maik
On 07.12.18 at 15:11, Eli V wrote:
> On Fri, Dec 7, 2018 at 7:53 AM Maik Schmidt <maik.schmidt at tu-dresden.de> wrote:
>> I have found --hint=multithread, but this only works with task/affinity.
>> We use task/cgroup. Are there any downsides to activating both task
>> plugins at the same time?
>>
>> Best, Maik
>>
>> On 07.12.18 at 13:33, Maik Schmidt wrote:
>>> Hi all,
>>>
>>> we recently got ourselves some Power9 nodes with 4-way SMT. However,
>>> other than using --exclusive, I cannot seem to find a way to make
>>> SLURM allocate all SMT threads for me. There simply does not seem to
>>> be a parameter for that. One might think --threads-per-core would be
>>> the right one, but it only restricts the job to nodes that have
>>> ThreadsPerCore=x set; it does not actually allocate the CPUs.
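>>>
>>> For illustration, this is the kind of invocation I had expected to
>>> allocate all SMT threads (hypothetical; as described above, the
>>> option only filters nodes by their ThreadsPerCore value):
>>>
>>> $ srun -p ml -n 1 -N 1 -c 176 --threads-per-core=4 numactl -show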
>>>
>>> Also, I cannot use more than -c 44 even though the node has
>>> CPUTot=176, so it's not possible to allocate all (virtual) cores.
> I'd check your slurmd.log; it might already be doing it, seeing as
> 44 * 4 == 176. I think Slurm used to auto-assign all threads on a core
> to a job and noted it in slurmd.log (or maybe slurmctld.log). I think
> that has changed, though, since I recently upgraded from 17.02 to
> 18.03 and no longer see this. sbatch -n 64 -N 1 and sbatch
> --cpus-per-task 64 -N 1 both ran for me, reserving all 64 threads on
> my test 32-core host. I use SelectTypeParameters=CR_Core_Memory,CR_LLN
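>
> A quick way to verify what a job actually got (hypothetical job id):
>
> $ scontrol show job 1234 | grep -o 'NumCPUs=[0-9]*'
>
> which should report all threads (NumCPUs=64 on my 32-core test host)
> when whole cores are reserved.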
>
>>> So, how can this be done?
>>>
>>> Here are some relevant SLURM config settings:
>>>
>>> NodeName=taurusml[1-22] Feature=IB Gres=gpu:6 Procs=176 Sockets=2
>>> CoresPerSocket=22 ThreadsPerCore=4 RealMemory=450000 Weight=128
>>>
>>> TaskPlugin = task/cgroup
>>> TaskPluginParam = cpusets,autobind=threads
>>>
>>> Still on SLURM 17.02.11 if that is relevant.
>>>
>>> Thanks in advance.
>>>
--
Maik Schmidt
HPC Services
Technische Universität Dresden
Zentrum für Informationsdienste und Hochleistungsrechnen (ZIH)
Willers-Bau A116
D-01062 Dresden
Phone: +49 351 463-32836