[slurm-users] How to avoid a feature?

Brian Andrus toomuchit at gmail.com
Thu Jul 1 17:50:14 UTC 2021


Lyn,

Yeah, I think this is it. Looks similar to what Tina has in place too.

So, we tag all the nodes with either "FEATURE" or "NOFEATURE", and in 
job_submit.lua we set the job's constraint to 'NOFEATURE' if none is set.

Sounds like what you are doing?

I may need some hints on what specifically to set in the Lua script. I 
do have one in place already to ensure time and account are set, but that 
is about it.
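
A minimal sketch of what that could look like (untested; the 
FEATURE/NOFEATURE names are just the placeholders from this thread, so 
substitute whatever the nodes are actually tagged with):

    -- job_submit.lua: default jobs onto the nodes tagged NOFEATURE
    -- unless the user requested a constraint themselves. Assumes every
    -- node carries Features=FEATURE or Features=NOFEATURE in slurm.conf.
    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- an unset constraint may arrive as nil or an empty string
        if job_desc.features == nil or job_desc.features == '' then
            job_desc.features = 'NOFEATURE'
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end

Users who do want the licensed node just submit with -C FEATURE and 
override the default. A duration-based variant of Lyn's approach is 
sketched after the quoted message below.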

Brian Andrus

On 7/1/2021 9:39 AM, Lyn Gerner wrote:
> Hey, Brian,
>
> Neither I nor you are going to like what I'm about to say (but I think 
> it's where you're headed). :)
>
> We have an equivalent use case, where we're trying to keep long work 
> off a certain set of nodes. Since we've already used "long" as a QoS 
> name, to keep from overloading "long" we had to establish a 
> "notshort" feature on all the nodes where we want to allow jobs 
> longer than N minutes to run. We use job_submit.lua to detect the job 
> duration and set the notshort feature as appropriate. No user action 
> required.
>
> Best,
> Lyn
>
> On Thu, Jul 1, 2021 at 7:10 AM Brian Andrus <toomuchit at gmail.com> wrote:
>
>     All,
>
>     I have a partition where one of the nodes has a node-locked license.
>     That license is not used by everyone who uses the partition.
>     They are cloud nodes, so weights do not work (there is an open bug
>     about
>     that).
>
>     I need to have jobs 'avoid' that node by default. I am thinking I can
>     use a feature constraint, but that seems to only apply to jobs that
>     request the feature. Since we have so many other users, it isn't
>     feasible
>     to have them all modify their scripts, so having jobs avoid the node
>     by default would work.
>
>     Any ideas how to do that? A job_submit.lua script, perhaps?
>
>     Brian Andrus
>
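
For completeness, a minimal sketch of the duration-based variant Lyn 
describes above (untested; "notshort" matches her feature name, and the 
N-minute threshold is a placeholder):

    -- job_submit.lua: add the "notshort" constraint to any job longer
    -- than N minutes, steering it onto nodes tagged Features=notshort.
    local THRESHOLD_MINUTES = 240  -- placeholder value for N

    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- time_limit is in minutes; if the user gave no limit it can be
        -- NO_VAL (a very large number), which this sketch treats as long
        if job_desc.time_limit ~= nil and
           job_desc.time_limit > THRESHOLD_MINUTES then
            if job_desc.features == nil or job_desc.features == '' then
                job_desc.features = 'notshort'
            else
                -- '&' ANDs the new feature with any user-supplied ones
                job_desc.features = job_desc.features .. '&notshort'
            end
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end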
>