[slurm-users] Using hyperthreaded processors
Sebastian T Smith
stsmith at unr.edu
Wed Nov 4 21:41:35 UTC 2020
Hi,
We have Hyper-threading/SMT enabled on our cluster. It's challenging to fully utilize threads, as Brian suggests. We have a few workloads that benefit from it being enabled, but they represent a minority of our overall workload.
We use SelectTypeParameters=CR_Core_Memory. With this configuration, each thread counts as a CPU. The biggest issue with this has been user support: most of our users are unaware of SMT as a feature and often require consultation to ensure their resource requests match their workload. I'm invested in SMT at this point, but if I had a do-over (in the same environment), I would disable Hyper-threading.
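For context, a minimal sketch of this kind of setup (the node name, counts, and memory below are hypothetical, not our actual hardware):

    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core_Memory
    # 2 sockets x 16 cores x 2 threads = 64 CPUs as seen by Slurm
    NodeName=node[01-10] Sockets=2 CoresPerSocket=16 ThreadsPerCore=2 RealMemory=192000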
Ping me with questions. I'm happy to discuss our config and experience in greater detail.
Sebastian
--
University of Nevada, Reno <http://www.unr.edu/>
Sebastian Smith
High-Performance Computing Engineer
Office of Information Technology
1664 North Virginia Street
MS 0291
work-phone: 775-682-5050
email: stsmith at unr.edu
website: http://rc.unr.edu
________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Brian Andrus <toomuchit at gmail.com>
Sent: Wednesday, November 4, 2020 10:12 AM
To: slurm-users at lists.schedmd.com <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Using hyperthreaded processors
JC,
One thing you will start finding in HPC is that, by its very goal, hyperthreading is usually a poor fit.
If you are properly utilizing your cores, your jobs will actually be slowed by using hyperthreading. Hyper-threads are not 'extra' cores, but a mechanism for switching a core to a different workload during an idle cycle. The operative word is 'idle'. The goal of HPC is to get resource usage as close to 100% as possible, so there should be no 'idle' cycles.
From Intel's own info:
With CPU Hyper-Threading, a PC can process more information in less time and run more background tasks without disruption.
In HPC, we have no background tasks (excluding daemons and such on the node), so there is little to nothing for hyperthreading to improve. You end up splitting a core's resources, such as its caches, between secondary tasks instead of letting your primary task use more of them, which leads to more fetches and wasted effort.
This is a VERY simplistic description, but the point is that hyperthreading is not a silver bullet that will improve HPC performance if you are maximizing your resource utilization.
Ok, I will get off my soapbox :)
Brian Andrus
On 11/4/2020 7:30 AM, Jean-Christophe HAESSIG wrote:
Hi,
I would like to make good use of hyperthreaded processors, and I have
already skimmed through a number of posts and documentation pages.
It is pretty clear that Slurm prefers to allocate processing units
down to the core level; to be able to allocate individual threads, one
has to either:
- not declare Sockets/Cores/Threads at all, declare only CPUs, and use
SelectTypeParameters=CR_CPU_Memory, or
- resort to some trickery to declare threads as cores (both sketched
below).
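Here is roughly what I mean, with made-up hostnames and counts for a
hypothetical 2-socket, 16-core, 2-thread machine:

    # Option 1: CPUs only, threads become the schedulable unit
    SelectTypeParameters=CR_CPU_Memory
    NodeName=n01 CPUs=64 RealMemory=192000

    # Option 2: the 'trickery' - declare each hardware thread as a core
    SelectTypeParameters=CR_Core_Memory
    NodeName=n01 Sockets=2 CoresPerSocket=32 ThreadsPerCore=1 RealMemory=192000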
However, both options have shortcomings:
By not declaring S/C/T, options like --ntasks-per-core seem to get
ignored, yet some programs would benefit from being isolated on their
own cores.
Declaring S/C/T differently from how they are in reality (e.g.
Cores=Cores*Threads, Threads=1) also seems to mess up CPU binding,
since Slurm no longer has a true view of the node's topology.
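As an aside, the way I check what binding a job actually gets is
srun's --cpu-bind=verbose option, which prints the affinity mask
applied to each task (./my_program is just a placeholder):

    srun --ntasks=2 --cpu-bind=verbose ./my_program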
When S/C/T are declared as they really are (with
SelectTypeParameters=CR_Core_Memory), I run into other problems: the
allocator thinks I want 2 CPUs even if I asked for --ntasks=1, since
whole cores are allocated. In combination with --mem-per-cpu, this
leads to twice the memory being allocated compared to what was asked
for (which resembles bug #3879).
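To illustrate the symptom, here is a hypothetical job script on a node
declared with ThreadsPerCore=2:

    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --mem-per-cpu=4G   # allocator charges 2 CPUs (the whole core), so 8G get reserved
    srun ./my_program

(./my_program stands for whatever the job runs.) The workarounds I can
think of, none fully satisfying, are requesting --mem=4G per node
instead of --mem-per-cpu, or accepting both threads of the core with
--cpus-per-task=2.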
Finally, while the documentation states in several places that
processing units are best allocated by core, I have not found an
explanation of why.
Any other options/advice? Feel free to correct me if I didn't get
something right.
Thanks,
J.C. Haessig