[slurm-users] Nodeconfig for SGI UV300

Galloway, Michael D. gallowaymd at ornl.gov
Sat Feb 17 15:17:28 MST 2018


All,


We are planning to migrate our UV300 (512c/24T) from torque/moab to slurm and are wondering if other slurm users have done this and what approach they used.


Under the current system we chose to schedule against NUMA zones, and that has worked OK for us. This gives us 16 'nodes' that look like:


nodes=0 cpus=0-15,256-271 mems=0
nodes=1 cpus=16-31,272-287 mems=1

laid out largely along the hardware topology. It looks like there are a number of ways to approach this with slurm (CPU binding, cgroups, etc.). slurmd sees the UV like this:

CPUs=512 Boards=4 SocketsPerBoard=16 CoresPerSocket=16 ThreadsPerCore=2 RealMemory=23739448
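
For concreteness, something along these lines is what we have in mind for slurm.conf, treating the whole UV as one node and letting the task plugins handle NUMA placement (just a sketch; the hostname is a placeholder and we have not settled on the select or cgroup parameters):

# sketch only - hostname and partition name are placeholders
SelectType=select/cons_res
SelectTypeParameters=CR_Core_Memory
TaskPlugin=task/affinity,task/cgroup
NodeName=uv300 CPUs=512 Boards=4 SocketsPerBoard=16 CoresPerSocket=16 ThreadsPerCore=2 RealMemory=23739448 State=UNKNOWN
PartitionName=uv Nodes=uv300 Default=YES State=UP

and in cgroup.conf:

ConstrainCores=yes
ConstrainRAMSpace=yes

Does that seem like a reasonable starting point, or is it better to keep carving the machine into per-NUMA 'nodes' the way we do now?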

We are new to slurm, so we are wondering how others have configured a system like this, and perhaps how SchedMD would recommend approaching it.
Thanks!

---- michael


