[slurm-users] lmod and slurm

Loris Bennett loris.bennett at fu-berlin.de
Tue Dec 19 08:33:16 MST 2017


Yair Yarom <irush at cs.huji.ac.il> writes:

> There are two issues:
>
> 1. For modules loaded manually by users, we can instruct (and are
>    instructing) them to load the modules within their sbatch scripts. The
>    problem is that not all users read the documentation properly, so in
>    the tensorflow example, they use the cpu version of tensorflow
>    (available on the submission node) instead of the gpu version
>    (available on the execution node). Their program works, but slowly,
>    and some of them simply accept it without knowing there's a problem.

To me, this is just what users do.  They make mistakes, and not just with
loading modules; their programs run badly, and I have to tell them what
they are doing wrong and point them to the documentation.  You obviously
need some sort of monitoring to help you spot poorly configured jobs.
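
For example, something along these lines (untested, and it assumes the
'seff' script from the Slurm contribs is installed) could be run
periodically to flag jobs with low CPU efficiency:

  # Sketch: print the CPU efficiency of yesterday's completed jobs.
  # Assumes GNU date and the 'seff' contrib script.
  for jobid in $(sacct -n -X -S "$(date -d '1 day ago' +%F)" \
                       -s COMPLETED -o JobID | tr -d ' '); do
      seff "$jobid" | grep -E 'Job ID|Efficiency'
  done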

> 2. We have modules which we want to be loaded by default, without
>    telling users to load them. These are mostly for programs used by all
>    users and for some settings we want to be set by default (and which
>    may differ per host). Letting users call 'module purge' or
>    "--export=NONE" will unload the default modules as well.

I'm not sure how you intend to prevent users from running 'module purge'
at a point where it will upset the environment you are trying to set up
for them.

> So I basically want to force modules to be unloaded for all jobs (to
> solve issue 1), while allowing modules to be loaded "automatically" by
> the system or the user (for issue 2).

There may well be a technical solution to your problem such that
everything works as it should without the users having to know what is
going on.  However, my approach would be to use a job submit plugin to
reject badly configured jobs and/or to set defaults such that badly
configured jobs fail quickly.  In my experience, if users' jobs fail
straight away, they generally learn to do the right thing fairly fast and
without getting frustrated, provided they get enough support.  That said,
your users may be different, so YMMV.
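
As a rough sketch of what I mean (untested; the partition name 'gpu' and
the check for "module load" are just placeholders for whatever convention
your site uses), a Lua job_submit plugin could look something like this:

  -- job_submit.lua (sketch): reject batch jobs sent to the GPU partition
  -- whose script never loads a module, so the user gets immediate
  -- feedback instead of a silently slow job.  Jobs that rely on the
  -- default partition are not checked in this sketch.
  function slurm_job_submit(job_desc, part_list, submit_uid)
      if job_desc.partition == "gpu" and job_desc.script ~= nil then
          if not string.find(job_desc.script, "module load", 1, true) then
              slurm.log_user("Please load the GPU modules in your batch " ..
                             "script (see the cluster documentation).")
              return slurm.ERROR
          end
      end
      return slurm.SUCCESS
  end

  function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
      return slurm.SUCCESS
  end

Such a plugin is enabled with JobSubmitPlugins=lua in slurm.conf.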

Cheers,

Loris


> Thanks,
>     Yair.
>
>
> On Tue, Dec 19 2017, Jeffrey Frey <frey at udel.edu> wrote:
>
>> Don't propagate the submission environment:
>>
>> srun --export=NONE myprogram
>>
>>
>>
>>> On Dec 19, 2017, at 8:37 AM, Yair Yarom <irush at cs.huji.ac.il> wrote:
>>> 
>>> 
>>> Thanks for your reply,
>>> 
>>> The problem is that users run something like the following on the
>>> submission node:
>>> 
>>> module load tensorflow
>>> srun myprogram
>>> 
>>> So they get the submission node's version of tensorflow (and its
>>> PATH/PYTHONPATH), together with any additional default modules.
>>> 
>>> There is never a chance to run "module add ${SLURM_CONSTRAINT}" or to
>>> remove the unwanted modules that were loaded (perhaps automatically) on
>>> the submission node and do not work on the execution node.
>>> 
>>> Thanks,
>>>    Yair.
>>> 
>>> On Tue, Dec 19 2017, "Loris Bennett" <loris.bennett at fu-berlin.de> wrote:
>>> 
>>>> Hi Yair,
>>>> 
>>>> Yair Yarom <irush at cs.huji.ac.il> writes:
>>>> 
>>>>> Hi list,
>>>>> 
>>>>> We use here lmod[1] for some software/version management. There are two
>>>>> issues encountered (so far):
>>>>> 
>>>>> 1. The submission node can have different software from the execution
>>>>>   nodes - different cpu, different gpu (if any), infiniband, etc. When
>>>>>   a user runs 'module load something' on the submission node, it will
>>>>>   pass the wrong environment to the task on the execution node,
>>>>>   e.g. "module load tensorflow" can load a different version
>>>>>   depending on the node.
>>>>> 
>>>>> 2. There are some modules we want to load by default, and again this can
>>>>>   be different between nodes (we do this by source'ing /etc/lmod/lmodrc
>>>>>   and ~/.lmodrc).
>>>>> 
>>>>> For issue 1, we instruct users to run the "module load" in their batch
>>>>> script and not before running sbatch, but issue 2 is more problematic.
>>>>> 
>>>>> My current solution is to write a TaskProlog script that runs "module
>>>>> purge" and "module load" and exports/unsets the changed environment
>>>>> variables. I was wondering if anyone has encountered this issue and
>>>>> has a less cumbersome solution.
>>>>> 
>>>>> Thanks in advance,
>>>>>    Yair.
>>>>> 
>>>>> [1] https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
>>>> 
>>>> I don't fully understand your use-case, but, assuming you can divide
>>>> your nodes up by some feature, could you define a module per feature
>>>> which just loads the specific modules needed for that category, e.g. in
>>>> the batch file you would have
>>>> 
>>>>   #SBATCH --constraint=shiny_and_new
>>>> 
>>>>   module add ${SLURM_CONSTRAINT}
>>>> 
>>>> and would have a module file 'shiny_and_new', with contents like, say,
>>>> 
>>>>  module add tensorflow/2.0
>>>>  module add cuda/9.0
>>>> 
>>>> whereas the module 'rusty_and_old' would contain
>>>> 
>>>>  module add tensorflow/0.1
>>>>  module add cuda/0.2
>>>> 
>>>> Would that help?
>>>> 
>>>> Cheers,
>>>> 
>>>> Loris
>>> 
>>
>>
>> ::::::::::::::::::::::::::::::::::::::::::::::::::::::
>> Jeffrey T. Frey, Ph.D.
>> Systems Programmer V / HPC Management
>> Network & Systems Services / College of Engineering
>> University of Delaware, Newark DE  19716
>> Office: (302) 831-6034  Mobile: (302) 419-4976
>> ::::::::::::::::::::::::::::::::::::::::::::::::::::::

-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de


