[slurm-users] lmod and slurm
Yair Yarom
irush at cs.huji.ac.il
Wed Dec 20 08:53:03 MST 2017
Thank you all for your advice and insights.
I understand that a fair portion of my time is spent helping the users.
However, when the same error repeats and I have to re-explain it to a
different user each time, I tend to believe there's something wrong with
the system configuration. And it's more fun writing plugins than
explaining the same point over and over again ;)
This specific issue is a very subtle point in the documentation: new
users won't pay attention to it (or understand it), and not-so-new users
won't read it again. So the documentation isn't really helpful here.
As such, I do want the system to enforce proper usage as much as
possible, and I'd rather it not work for them at all than seem to work
while being subtly flawed.
For future reference (if anyone else wants to overly complicate their
system), the plugins I'm currently testing are in
https://github.com/irush-cs/slurm-plugins/ - spank_lmod and
TaskProlog-lmod.
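
For anyone who just wants the general idea: a TaskProlog script talks to
slurmstepd through its standard output, where lines of the form
"export NAME=value" and "unset NAME" adjust the environment of the task
being launched. Below is a minimal sketch of the approach - it is not the
actual script from the repository above, the lmod init path and module
names are illustrative, and a real version would diff the whole
environment rather than hard-code a few variables.

#!/bin/bash
# Minimal TaskProlog sketch: rebuild the module environment on the
# execution node instead of trusting whatever was inherited from the
# submission node. Paths and module names are illustrative.

source /etc/profile.d/lmod.sh   # make the 'module' command available

module purge                    # drop modules inherited from the submission node
module load site-defaults       # per-host defaults (illustrative module name)

# Only stdout lines like "export NAME=value" / "unset NAME" reach the
# task's environment, so re-emit the variables the module commands changed:
echo "export PATH=$PATH"
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
echo "export PYTHONPATH=$PYTHONPATH"
echo "export MODULEPATH=$MODULEPATH"
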
Thanks again,
Yair.
On Tue, Dec 19 2017, Gerry Creager - NOAA Affiliate <gerry.creager at noaa.gov> wrote:
> I have to echo Loris' comments. My users tend to experiment, and a fair portion
> of my time is spent helping them correct errors they've inflicted upon
> themselves. I tend to provide guides for configuring and running our more usual
> applications, and then when they fail, I review the guidance with them in my
> office.
>
> Some of my bigger nightmares begin with one of my truly talented users trying
> something because the procedure he's trying is "just like" what he did on
> another, very different system. That's followed closely by "Well, it SHOULD work this
> way." We then spend some quality time going over how things really work, and he
> goes away a bit happier, and wiser.
>
> Plan to work with your users and be prepared to train them on nuance.
>
> Gerry
>
> On Tue, Dec 19, 2017 at 9:33 AM, Loris Bennett <loris.bennett at fu-berlin.de>
> wrote:
>
> Yair Yarom <irush at cs.huji.ac.il> writes:
>
> > There are two issues:
> >
> > 1. For the modules users load manually, we can (and do) instruct them
> > to load the modules within their sbatch scripts. The
> > problem is that not all users read the documentation properly, so in
> > the tensorflow example, they use the cpu version of tensorflow
> > (available on the submission node) instead of the gpu version
> > (available on the execution node). Their program works, but slowly,
> > and some of them simply accept it without knowing there's a problem.
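
As an illustration of that recommended pattern (the resource request and
program name below are placeholders), the point is simply that the
"module load" runs inside the batch script, i.e. on the execution node:

#!/bin/bash
#SBATCH --gres=gpu:1       # illustrative resource request
module load tensorflow     # resolved on the execution node, so the GPU build is used
srun python myprogram.py
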
>
> To me, this is just what users do. They make mistakes, and not just with
> loading modules; their programs run badly, so I have to tell them what
> they are doing wrong and point them to the documentation. You obviously
> need some sort of monitoring to help you spot the poorly configured jobs.
>
> > 2. We have modules which we want to be loaded by default, without
> > telling users to load them. These are mostly for programs used by all
> > users and for some settings we want to be set by default (and may be
> > different per host). Letting users call 'module purge' or
> > "--export=NONE" will unload the default modules as well.
>
> I'm not sure how you want to prevent users from doing 'module purge' at
> a point which will upset the environment you are trying to set up for them.
>
> > So I basically want to force modules to be unloaded for all jobs - to
> > solve issue 1, while allowing modules to be loaded "automatically" by
> > the system or user - for issue 2.
>
> There may well be a technical solution to your problem such that
> everything works as it should without the users having to know what is
> going on. However, my approach would be to use a submit plugin to
> reject some badly configured jobs and/or set defaults such that badly
> configured jobs fail quickly. In my experience, if users' jobs fail
> straight away, they mainly learn to do the right thing fairly fast and
> without getting frustrated, provided they get enough support. However,
> your users may be different, so YMMV.
>
> Cheers,
>
> Loris
>
>
>
>
> > Thanks,
> > Yair.
> >
> >
> > On Tue, Dec 19 2017, Jeffrey Frey <frey at udel.edu> wrote:
> >
> >> Don't propagate the submission environment:
> >>
> >> srun --export=NONE myprogram
> >>
> >>
> >>
> >>> On Dec 19, 2017, at 8:37 AM, Yair Yarom <irush at cs.huji.ac.il> wrote:
> >>>
> >>>
> >>> Thanks for your reply,
> >>>
> >>> The problem is that users are running on the submission node e.g.
> >>>
> >>> module load tensorflow
> >>> srun myprogram
> >>>
> >>> So they get the submission node's version of tensorflow (and its
> >>> PATH/PYTHONPATH), along with any additional default modules.
> >>>
> >>> There is never a chance to run the "module add ${SLURM_CONSTRAINT}" or
> >>> remove the unwanted modules that were loaded (maybe automatically) on
> >>> the submission node and aren't working on the execution node.
> >>>
> >>> Thanks,
> >>> Yair.
> >>>
> >>> On Tue, Dec 19 2017, "Loris Bennett" <loris.bennett at fu-berlin.de> wrote:
> >>>
> >>>> Hi Yair,
> >>>>
> >>>> Yair Yarom <irush at cs.huji.ac.il> writes:
> >>>>
> >>>>> Hi list,
> >>>>>
> >>>>> We use lmod[1] here for some software/version management. There are two
> >>>>> issues we've encountered (so far):
> >>>>>
> >>>>> 1. The submission node can have different software than the execution
> >>>>> nodes - different cpu, different gpu (if any), infiniband, etc. When
> >>>>> a user runs 'module load something' on the submission node, it will
> >>>>> pass the wrong environment to the task in the execution
> >>>>> node. e.g. "module load tensorflow" can load a different version
> >>>>> depending on the nodes.
> >>>>>
> >>>>> 2. There are some modules we want to load by default, and again this can
> >>>>> be different between nodes (we do this by source'ing /etc/lmod/lmodrc
> >>>>> and ~/.lmodrc).
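
Purely as an illustration, such a sourced per-host defaults file might
look something like the following (the module names and the hostname
test are made up):

# hypothetical fragment of the /etc/lmod/lmodrc described above,
# sourced from the shells' startup files
module load site-tools
case "$(hostname -s)" in
    gpu*) module load cuda ;;   # defaults that differ per host
esac
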
> >>>>>
> >>>>> For issue 1, we instruct users to run the "module load" in their batch
> >>>>> script and not before running sbatch, but issue 2 is more problematic.
> >>>>>
> >>>>> My current solution is to write a TaskProlog script that runs "module
> >>>>> purge" and "module load" and exports/unsets the changed environment
> >>>>> variables. I was wondering if anyone else has encountered this issue
> >>>>> and has a less cumbersome solution.
> >>>>>
> >>>>> Thanks in advance,
> >>>>> Yair.
> >>>>>
> >>>>> [1] https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
> >>>>
> >>>> I don't fully understand your use-case, but, assuming you can divide
> >>>> your nodes up by some feature, could you define a module per feature
> >>>> which just loads the specific modules needed for that category, e.g. in
> >>>> the batch file you would have
> >>>>
> >>>> #SBATCH --constraint=shiny_and_new
> >>>>
> >>>> module add ${SLURM_CONSTRAINT}
> >>>>
> >>>> and would have a module file 'shiny_and_new', with contents like, say,
> >>>>
> >>>> module add tensorflow/2.0
> >>>> module add cuda/9.0
> >>>>
> >>>> whereas the module 'rusty_and_old' would contain
> >>>>
> >>>> module add tensorflow/0.1
> >>>> module add cuda/0.2
> >>>>
> >>>> Would that help?
> >>>>
> >>>> Cheers,
> >>>>
> >>>> Loris
> >>>
> >>
> >>
> >> ::::::::::::::::::::::::::::::::::::::::::::::::::::::
> >> Jeffrey T. Frey, Ph.D.
> >> Systems Programmer V / HPC Management
> >> Network & Systems Services / College of Engineering
> >> University of Delaware, Newark DE 19716
> >> Office: (302) 831-6034 Mobile: (302) 419-4976
> >> ::::::::::::::::::::::::::::::::::::::::::::::::::::::
>
>
>
> --
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin Email loris.bennett at fu-berlin.de