[slurm-users] Compiling Slurm with nvml support

Bas van der Vlies bas.vandervlies at surf.nl
Fri Sep 25 07:43:07 UTC 2020

That is why we switched to tarball installations with version directories, as suggested by SchedMD. No deb/rpm installations any more.
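A minimal sketch of that versioned-directory layout (the prefix, version numbers, and the `current` symlink name are illustrative assumptions, not SURF's actual setup): each release is installed into its own directory, and a symlink selects the active version.

```shell
#!/bin/sh
# Sketch of versioned tarball installs; paths are illustrative.
PREFIX=$(mktemp -d)                 # stand-in for something like /opt/slurm

# Each release gets its own prefix; a real build would do, per version:
#   ./configure --prefix=$PREFIX/20.02.5 && make && make install
mkdir -p "$PREFIX/20.02.5" "$PREFIX/20.11.0"

# A symlink selects the active version; daemons and PATH point at "current".
ln -sfn "$PREFIX/20.02.5" "$PREFIX/current"
readlink "$PREFIX/current"

# Upgrading (or rolling back) is a single atomic symlink flip.
ln -sfn "$PREFIX/20.11.0" "$PREFIX/current"
readlink "$PREFIX/current"
```

Because the install prefix carries no node-type-specific dependency metadata the way an RPM does, the same tree can be shared or synced to controller and compute nodes alike.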

Bas van der Vlies
| Operations, Support & Development | SURFsara | Science Park 140 | 1098 XG  Amsterdam
| T +31 (0) 20 800 1300  | bas.vandervlies at surf.nl | www.surf.nl |

> On 24 Sep 2020, at 21:31, Dana, Jason T. <Jason.Dana at jhuapl.edu> wrote:
> Hello,
> I hopefully have a quick question.
> I have compiled Slurm RPMs on a CentOS system with the NVIDIA drivers installed so that I can use the AutoDetect=nvml configuration in our GPU nodes’ gres.conf. All seems to be going well on the GPU nodes since I did that. However, I was unable to install the Slurm RPM on the control/master node, because the RPM requires libnvidia-ml.so. The control/master and other compute nodes don’t have any NVIDIA cards attached, so installing the drivers just to satisfy this dependency didn’t seem like the best idea. To get around it, I rebuilt the RPM without the drivers present, and everything has been working great as far as I can tell.
> I am now working on adding PMIx support, which I didn’t properly enable initially, and am running into the same situation again. I figured I would send up a flag before going further. Is it typical to have to build separate Slurm RPMs for different types of nodes, or am I going about this the wrong way entirely?
> Thanks in advance! 
> Jason
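[The dependency described above is decided at configure time: building with NVML present links slurmd against libnvidia-ml.so, so RPMs built that way cannot install on driverless nodes. A rough sketch of the two build variants from one source tree (the --with-nvml and --with-pmix flags and paths are assumptions; verify against ./configure --help for your Slurm version):

```shell
# GPU-node build: detect NVML so AutoDetect=nvml works in gres.conf.
# /usr/local/cuda and /opt/pmix are hypothetical install locations.
./configure --prefix=/opt/slurm/20.02.5 \
            --with-nvml=/usr/local/cuda \
            --with-pmix=/opt/pmix
make && make install

# Controller/CPU-node build: same source, NVML omitted, so the result
# does not depend on libnvidia-ml.so.
./configure --prefix=/opt/slurm/20.02.5 \
            --with-pmix=/opt/pmix
make && make install
```

Since only the GPU nodes read the NVML-backed gres.conf autodetection, the NVML-less build is sufficient everywhere else.]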
