[slurm-users] Exposing only requested CPUs to a job on a given node.

Luis R. Torres lrtorres at gmail.com
Fri May 14 20:35:03 UTC 2021


Hi Folks,

We are currently running Slurm 20.11.6 with cgroup constraints for
memory and CPU/core.  Can the scheduler expose only the requested number
of CPUs/cores to a job?  We have some users who run Python scripts with
the multiprocessing module, and the scripts apparently use all of the
CPUs/cores in a node, despite options that constrain the task to a given
number of CPUs.  We would like several multiprocessing jobs to run
simultaneously on the nodes without stepping on each other.
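For comparison, here is a small sketch of the discrepancy we see (assuming a Linux node; the CPU IDs shown are whatever the kernel reports): multiprocessing.cpu_count() reports every CPU installed in the node, while os.sched_getaffinity(0) reflects the affinity mask the process was actually given, which is what a cgroup-confined job is limited to.

```python
#!/usr/bin/env python3
import multiprocessing
import os

# cpu_count() reports all CPUs present in the node, even when the
# scheduler has confined the job to a subset via cgroups/affinity.
print("cpu_count():       %d" % multiprocessing.cpu_count())

# sched_getaffinity(0) returns only the CPUs this process may run on,
# so inside a job it reflects the cores actually allocated (Linux only).
allowed = os.sched_getaffinity(0)
print("sched_getaffinity: %d -> %s" % (len(allowed), sorted(allowed)))
```

On a node where the job was given 4 cores out of 64, the first line prints 64 and the second prints 4, which seems to match the behavior we observe.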

The sample script I use for testing is below; I'm looking for something
similar to the GPU GRES configuration, where only the number of GPUs
requested is exposed to the job requesting them.


#!/usr/bin/env python3

import multiprocessing


def worker():
    print("Worker on CPU #%s" % multiprocessing.current_process().name)
    result = 0
    for j in range(20):
        result += j ** 2
    print("Result on CPU {} is {}".format(
        multiprocessing.current_process().name, result))


if __name__ == '__main__':
    print("This host exposed {} CPUs".format(multiprocessing.cpu_count()))
    jobs = []
    for i in range(multiprocessing.cpu_count()):
        # Process names must be strings; keep a handle so we can join.
        p = multiprocessing.Process(target=worker, name=str(i))
        p.start()
        jobs.append(p)
    for p in jobs:
        p.join()
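As a workaround on the script side (a sketch, not a scheduler fix), sizing the worker pool from the affinity mask instead of cpu_count() keeps the job within its allocation even when all CPUs remain visible; the square-summing work here just stands in for the test workload above:

```python
#!/usr/bin/env python3
import os
from multiprocessing import Pool


def square(j):
    return j ** 2


if __name__ == "__main__":
    # Size the pool from the affinity mask rather than cpu_count(),
    # so the job spawns only as many workers as CPUs it was allocated.
    n_workers = len(os.sched_getaffinity(0))
    with Pool(processes=n_workers) as pool:
        results = pool.map(square, range(20))
    print("Workers:", n_workers, "sum of squares 0..19:", sum(results))
```

This doesn't stop a user from oversubscribing deliberately, but with cgroup CPU confinement the extra processes would at least be pinned to the allocated cores rather than spilling across the node.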

Thanks,
-- 
----------------------------------------
Luis R. Torres

