[slurm-users] Running gpu and cpu jobs on the same node

Renfro, Michael Renfro at tntech.edu
Wed Sep 30 20:25:21 UTC 2020


We share our 28-core gpu nodes with non-gpu jobs through a set of ‘any’ partitions. The ‘any’ partitions have a setting of MaxCPUsPerNode=12, and the gpu partitions have a setting of MaxCPUsPerNode=16. That’s more or less documented in the slurm.conf documentation under “MaxCPUsPerNode”.
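For reference, a minimal slurm.conf sketch of that layout (node and partition names here are made up for illustration; the 12 + 16 split covers the 28 cores per node):

    NodeName=gpunode01 CPUs=28 Gres=gpu:4 State=UNKNOWN
    PartitionName=any Nodes=gpunode01 MaxCPUsPerNode=12 State=UP
    PartitionName=gpu Nodes=gpunode01 MaxCPUsPerNode=16 State=UP

With that split, jobs in the ‘any’ partition can never claim more than 12 of a node’s 28 cores, leaving 16 cores (and the GPUs) for jobs submitted to the gpu partition.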

From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Ahmad Khalifa <underoath006 at gmail.com>
Reply-To: Slurm User Community List <slurm-users at lists.schedmd.com>
Date: Wednesday, September 30, 2020 at 3:13 PM
To: "slurm-users at lists.schedmd.com" <slurm-users at lists.schedmd.com>
Subject: [slurm-users] Running gpu and cpu jobs on the same node


I have a machine with 4 RTX 2080 Ti GPUs and a Core i9. I submit jobs to it through MPI PMI2 (from Relion).

If I use 5 MPI processes and 4 threads each, then basically I'm using all 4 GPUs and 20 threads of my CPU.
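For concreteness, a batch script along these lines would express that layout (a sketch only; the partition name and the Relion command line are assumptions, not the actual script):

    #!/bin/bash
    #SBATCH --partition=gpu        # assumed partition name
    #SBATCH --ntasks=5             # 5 MPI ranks (Relion: 1 master + 4 workers)
    #SBATCH --cpus-per-task=4      # 4 threads per rank -> 20 threads total
    #SBATCH --gres=gpu:4           # all 4 RTX 2080 Ti cards
    srun --mpi=pmi2 relion_refine_mpi ...   # actual Relion arguments omitted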

My question is: my current configuration allows submitting jobs to the same node but through a different partition. If I use #SBATCH --partition=cpu, will the submitted jobs only use the remaining 2 cores (4 threads), or will they share resources with my GPU job?

Thanks.

