[slurm-users] Memory prioritization?

John Hearns hearnsj at googlemail.com
Wed Jun 13 02:58:52 MDT 2018


Matt, I back up what Loris said regarding interactive jobs.

I am sorry to sound ranty here, but experience teaches me that in cases
like this you must ask why this functionality is being requested.
Hey - you are the systems expert. If you get the user to explain why they
want this functionality, it actually helps you do your job well.
You can do the following:
*) Say - wow, that's interesting. I know a better way we can implement
this, I read about technique XYZ only last week
*) Say - ah, OK, I will write a script or alter our configuration to do just
what you want
*) Say - well, no. What you want to do will have a huge impact, destroy our
system and prevent other users from running on the system

As I said, sorry to sound ranty, but if you tease out what is going on here
you might find there is a very elegant solution just waiting there.

At a guess, what is happening is that you have users who want to run
visualization sessions for a short time, using GPUs on large-memory servers?
Maybe you should look at checkpointing and restarting the compute jobs, and
see how fast this could be done.
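
One way this is sometimes handled is partition-based preemption, so that a
short, high-priority visualization job can push a running compute job off the
node. A rough sketch only - the partition names, node names and limits below
are made up, and the requeued compute jobs still need application-level
checkpointing (or --requeue-friendly restart logic) to carry on from where
they left off:

  # slurm.conf (excerpt)
  PreemptType=preempt/partition_prio
  PreemptMode=REQUEUE
  # low-priority partition: jobs here can be requeued to make room
  PartitionName=compute Nodes=node[01-16] PriorityTier=1 PreemptMode=REQUEUE Default=YES
  # high-priority partition on the large-memory GPU nodes, short time limit
  PartitionName=viz Nodes=bigmem[01-02] PriorityTier=10 PreemptMode=OFF MaxTime=04:00:00

With something like that in place, a job submitted to the viz partition would
requeue a running compute job instead of waiting for it to finish.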




On 13 June 2018 at 07:53, Loris Bennett <loris.bennett at fu-berlin.de> wrote:

> Hi Matt,
>
> Matt Hohmeister <hohmeister at psy.fsu.edu> writes:
>
> > Relatively new to Slurm here; I have someone who has asked if the
> > following is possible:
> >
> > Allow Slurm to use as much memory on a node as exists on the node
> > itself. If someone is running a process outside of Slurm, decrease
> > Slurm’s memory usage to make way for the non-Slurm process.
> >
> > Is such a thing possible?
>
>
> I don't think this is possible.  Once Slurm has allocated memory to a
> job, it cannot currently be changed.
>
> However, from my point of view, you really don't want to do this anyway.
> The whole idea of a resource manager like Slurm is to maximise the
> utilisation of your resources.  If you have people running jobs outside
> of Slurm as well, you will generate major headaches for yourself.  You
> can increase the responsiveness of some users within Slurm by tweaking the
> priorities (via accounts, partitions, QOS, ...)
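>
> For example - a rough sketch only, assuming accounting is enabled, the
> multifactor priority plugin is in use with a non-zero PriorityWeightQOS,
> and using a made-up QOS name and user name - you could give interactive
> work a higher-priority QOS:
>
>   # define a high-priority QOS with a short wall-time limit
>   sacctmgr add qos interactive
>   sacctmgr modify qos where name=interactive set Priority=1000 MaxWall=02:00:00
>   # allow a particular user to request it
>   sacctmgr modify user where name=alice set QOS+=interactive
>
> The user then asks for it explicitly, e.g.
>
>   srun --qos=interactive --ntasks=1 --mem=1000 --pty bash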
>
> You could look at interactive jobs, whereby a user can just use 'srun'
> to start a shell on a node, e.g.
>
>   srun --ntasks=1 --time=00:30:00 --mem=1000 bash
>
> Of course, this will only work well if your cluster is not full or you
> have a dedicated partition, and ultimately it also leads to resources
> being wasted.
>
> HTH,
>
> Loris
>
> --
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de
>
>