[slurm-users] How to find core count per job per node
Jeffrey Frey
frey at udel.edu
Fri Oct 18 18:19:33 UTC 2019
Adding the "--details" flag to the scontrol lookup of the job shows the per-node allocation on the "Nodes=... CPU_IDs=... Mem=..." lines:
$ scontrol --details show job 1636832
JobId=1636832 JobName=R3_L2d
:
NodeList=r00g01,r00n09
BatchHost=r00g01
NumNodes=2 NumCPUs=60 NumTasks=60 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
TRES=cpu=60,mem=60G,node=2,billing=55350
Socks/Node=* NtasksPerN:B:S:C=30:0:*:* CoreSpec=*
Nodes=r00g01,r00n09 CPU_IDs=0-29 Mem=30720 GRES_IDX=
MinCPUsNode=30 MinMemoryCPU=1G MinTmpDiskNode=0
Features=(null) DelayBoot=00:00:00
Gres=(null) Reservation=(null)
OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
:
$ scontrol --details show job 1653838
JobId=1653838 JobName=v1.20
:
NodeList=r00g01,r00n[16,20],r01n16
BatchHost=r00g01
NumNodes=4 NumCPUs=20 NumTasks=20 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
TRES=cpu=20,mem=20G,node=4,billing=18450
Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
Nodes=r00g01 CPU_IDs=31-35 Mem=5120 GRES_IDX=
Nodes=r00n16 CPU_IDs=34-35 Mem=2048 GRES_IDX=
Nodes=r00n20 CPU_IDs=12-17,30-35 Mem=12288 GRES_IDX=
Nodes=r01n16 CPU_IDs=15 Mem=1024 GRES_IDX=
MinCPUsNode=1 MinMemoryCPU=1G MinTmpDiskNode=0
Features=(null) DelayBoot=00:00:00
Gres=(null) Reservation=(null)
OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
:
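If you want the per-node core counts summed rather than the raw CPU id lists, the "Nodes=... CPU_IDs=..." lines can be tallied with a short awk sketch (the job id and the exact field layout are assumed to match the output above; when several hosts share an identical layout Slurm collapses them into one Nodes= line, as in the first job, and the printed count then applies to each host listed):

# Sketch: count allocated CPUs per "Nodes=" entry of one job.
# Assumes the "Nodes=<host(s)> CPU_IDs=<list>" layout shown above.
JOBID=1653838
scontrol --details show job "$JOBID" |
  awk '/CPU_IDs=/ {
         node = ""; ids = ""
         for (i = 1; i <= NF; i++) {
           if ($i ~ /^Nodes=/)   { sub("Nodes=", "", $i);   node = $i }
           if ($i ~ /^CPU_IDs=/) { sub("CPU_IDs=", "", $i); ids = $i }
         }
         n = split(ids, ranges, ",")
         count = 0
         for (j = 1; j <= n; j++) {
           m = split(ranges[j], ab, "-")
           count += (m == 2 ? ab[2] - ab[1] + 1 : 1)
         }
         print node, count
       }'

For job 1653838 above, that would print r00g01 5, r00n16 2, r00n20 12, r01n16 1, which adds up to the NumCPUs=20 shown in the job record.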
> On Oct 18, 2019, at 1:56 PM, Tom Wurgler <twurgl at goodyear.com> wrote:
>
> I need to know how many cores a given job is using per node.
> Say my nodes have 24 cores each and I run a 36 way job.
> It takes a node and a half.
> scontrol show job <jobid>
> shows me 36 cores, and the 2 nodes it is running on.
> But I want to know how it split the job up between the nodes.
>
> Thanks for any info
::::::::::::::::::::::::::::::::::::::::::::::::::::::
Jeffrey T. Frey, Ph.D.
Systems Programmer V / HPC Management
Network & Systems Services / College of Engineering
University of Delaware, Newark DE 19716
Office: (302) 831-6034 Mobile: (302) 419-4976
::::::::::::::::::::::::::::::::::::::::::::::::::::::