[slurm-users] Only one socket for SLURM
Sam Hawarden
sam.hawarden at otago.ac.nz
Thu Feb 21 21:00:52 UTC 2019
Hi Gestió,
To reliably load 32 cores and see where the work lands:
[user at headnode] srun -c 32 -t 10 --pty $SHELL
[user at worknode] stress -c 32 --vm 32 &
[user at worknode] htop; fg
^C
If stress isn't consuming all the cores you expect, you can view a task's CPU affinity by pressing 'a' in htop.
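Another quick, non-interactive check (just a sketch; it prints the list of CPUs the task is allowed to run on, assuming the task plugin is constraining it):
[user at headnode] srun -c 32 -t 1 grep Cpus_allowed_list /proc/self/status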
You will need to make sure you have set TaskPlugin=task/affinity (or task/cgroup) in your slurm.conf so processes are constrained to a given set of CPUs.
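For reference, a minimal sketch of that in slurm.conf (and, if you go the cgroup route, cgroup.conf); adjust to your own setup:
slurm.conf:  TaskPlugin=task/affinity,task/cgroup
cgroup.conf: ConstrainCores=yes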
Linux assigns logical CPU numbers to your hardware threads. On a node like yours, 0-15 will typically be socket 1, thread 1; 16-31 socket 2, thread 1; 32-47 socket 1, thread 2; and 48-63 socket 2, thread 2. Slurm will use the first N CPUs.
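You can confirm how your particular node numbers them with lscpu, e.g.:
[user at worknode] lscpu -e=CPU,CORE,SOCKET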
If your current configuration isn't restricting jobs to the first socket only, you could try:
Boards=1 SocketsPerBoard=1 CoresPerSocket=32 ThreadsPerCore=1
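For comparison, you can print the hardware layout slurmd actually detects on the node (handy when matching up the NodeName line):
[user at worknode] slurmd -C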
But I think you may be stuck either with half of each socket, or with starting slurmd under a fixed affinity mask via taskset.
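A rough sketch of that taskset approach, assuming socket 1 really is CPUs 0-15 and 32-47 on your node (check with lscpu first):
[root at worknode] taskset -c 0-15,32-47 slurmd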
Kind regards,
Sam
________________________________
Sam Hawarden
Assistant Research Fellow
Pathology Department
Dunedin School of Medicine
________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Gestió Servidors <sysadmin.caos at uab.cat>
Sent: Tuesday, 19 February 2019 03:52
To: slurm-users at lists.schedmd.com
Subject: [slurm-users] Only one socket for SLURM
Hi,
One node of my cluster has two CPU sockets (two 32-core CPUs). Now I would like to configure Slurm to share only the CPUs of the first socket. I have configured slurm.conf in this way:
NodeName=mynode CPUs=32 SocketsPerBoard=1 CoresPerSocket=16 ThreadsPerCore=2 RealMemory=515703 TmpDisk=270000
but, with this line, how can I know Slurm is using only the first 32 cores (or the second 32) and not 16 from the first socket plus 16 from the other socket?
Thanks.