<html><body><div style="font-family: trebuchet ms,sans-serif; font-size: 11pt; color: #000000"><div>Hi Miguel,<br><br><br>I modified my test configuration to evaluate the effect of NoDecay.<br><br></div><div><br><br>I modified all QOSs, adding the <strong>NoDecay</strong> flag:<br><br><br>toto@login1:~/TEST$ sacctmgr show QOS<br>      Name   Priority  GraceTime    Preempt   PreemptExemptTime PreemptMode                                    Flags UsageThres UsageFactor       GrpTRES   GrpTRESMins GrpTRESRunMin GrpJobs GrpSubmit     GrpWall       MaxTRES MaxTRESPerNode   MaxTRESMins     MaxWall     MaxTRESPU MaxJobsPU MaxSubmitPU     MaxTRESPA MaxJobsPA MaxSubmitPA       MinTRES <br>---------- ---------- ---------- ---------- ------------------- ----------- ---------------------------------------- ---------- ----------- ------------- ------------- ------------- ------- --------- ----------- ------------- -------------- ------------- ----------- ------------- --------- ----------- ------------- <br>    normal          0   00:00:00                                    cluster                                  NoDecay               1.000000                                                                                                                                                                                                                      <br>interactif         10   00:00:00                                    cluster                                  NoDecay               1.000000       node=50                                                                 node=22                               1-00:00:00       node=50                                                                         <br>     petit          4   00:00:00                                    cluster                                  NoDecay               1.000000     node=1500                                                            
     node=22                               1-00:00:00      node=300                                                                         <br>      gros          6   00:00:00                                    cluster                                  NoDecay               1.000000     node=2106                                                                node=700                               1-00:00:00      node=700                                                                         <br>     court          8   00:00:00                                    cluster                                  NoDecay               1.000000     node=1100                                                                node=100                                 02:00:00      node=300                                                                         <br>      long          4   00:00:00                                    cluster                                  NoDecay               1.000000      node=500                                                                node=200                               5-00:00:00      node=200                                                                         <br>   special         10   00:00:00                                    cluster                                  NoDecay               1.000000     node=2106                                                               node=2106                               5-00:00:00     node=2106                                                                         <br>   support         10   00:00:00                                    cluster                                  NoDecay               1.000000     node=2106                                                                node=700                               1-00:00:00     node=2106                                                                         <br>      visu         10   00:00:00                                    
cluster                                  NoDecay               1.000000        node=4                                                                node=700                                 06:00:00        node=4                       <br><br><br><br>I submitted a batch of jobs to check that NoDecay works as intended, and I noticed that <strong>RawUsage</strong> as well as <strong>GrpTRESRaw</strong> <strong>cpu</strong> are still decreasing:<br><br><br>toto@login1:~/TEST$ sshare -A dci -u " " -o account,user,GrpTRESRaw%80,<strong>GrpTRESMins</strong>,RawUsage<br>             Account       User                                                                       <strong>GrpTRESRaw                    GrpTRESMins    RawUsage</strong><br>-------------------- ----------                            ----------------------------------------------------- ------------------------------ -----------<br>dci                                <strong>cpu=6932</strong>,mem=12998963,energy=0,node=216,billing=6932,fs/disk=0,vmem=0,pages=0                      cpu=17150      <strong>415966</strong><br>toto@login1:~/TEST$ sshare -A dci -u " " -o account,user,GrpTRESRaw%80,<strong>GrpTRESMins</strong>,<strong>RawUsage</strong><br>             Account       User                                                                       <strong>GrpTRESRaw                    GrpTRESMins    RawUsage</strong><br>-------------------- ----------                            ----------------------------------------------------- ------------------------------ -----------<br>dci                                <strong>cpu=6931</strong>,mem=12995835,energy=0,node=216,billing=6931,fs/disk=0,vmem=0,pages=0                      cpu=17150      <strong>415866</strong><br>toto@login1:~/TEST$ sshare -A dci -u " " -o account,user,GrpTRESRaw%80,GrpTRESMins,RawUsage<br>             Account       User                                                                       <strong>GrpTRESRaw                    GrpTRESMins    
RawUsage</strong> <br>-------------------- ----------                            ----------------------------------------------------- ------------------------------ ----------- <br>dci                                <strong>cpu=6929</strong>,mem=12992708,energy=0,node=216,billing=6929,fs/disk=0,vmem=0,pages=0                      cpu=17150      <strong>415766</strong> <br><br><br>Is there something I forgot to do?<br><br><br>Best,<br>Gérard<br><br></div><div data-marker="__SIG_PRE__"><div><span style="color: #3333ff;"><span style="color: #000000;">Best regards,</span></span></div><div><span style="color: #3333ff;"><span style="color: #000000;">Gérard Gil</span><br><br>Département Calcul Intensif</span><br>Centre Informatique National de l'Enseignement Superieur<br>950, rue de Saint Priest<br>34097 Montpellier CEDEX 5<br>FRANCE<br><br>tel :  (334) 67 14 14 14<br>fax : (334) 67 52 37 63<br>web : <a href="http://www.cines.fr" target="_blank">http://www.cines.fr</a><br></div></div><div><br></div><hr id="zwchr" data-marker="__DIVIDER__"><div data-marker="__HEADERS__"><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Gérard Gil" <gerard.gil@cines.fr><br><b>To: </b>"Slurm-users" <slurm-users@lists.schedmd.com><br><b>Cc: </b>"slurm-users" <slurm-users@schedmd.com><br><b>Sent: </b>Friday, 24 June 2022 14:52:12<br><b>Subject: </b>Re: [slurm-users] GrpTRESMins and GrpTRESRaw usage<br></blockquote></div><div data-marker="__QUOTED_TEXT__"><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"> Hi Miguel,<br> <br> Good!<br> <br> I'll try these options on all existing QOSs and see if everything works as<br> expected.<br> I'll inform you of the results.<br> <br> <br> 
Thanks a lot<br> <br> Best,<br> Gérard<br> <br> <br> ----- Original Message -----<br><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"> From: "Miguel Oliveira" <miguel.oliveira@uc.pt><br> To: "Slurm-users" <slurm-users@lists.schedmd.com><br> Cc: "slurm-users" <slurm-users@schedmd.com><br> Sent: Friday, 24 June 2022 14:07:16<br> Subject: Re: [slurm-users] GrpTRESMins and GrpTRESRaw usage</blockquote><br> <br><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"> Hi Gérard,<br> <br> I believe so. All our accounts correspond to one project and all have an<br> associated QoS with NoDecay and DenyOnLimit. This is enough to restrict usage<br> on each individual project.<br> You only need these flags on the QoS. The association will carry on as usual and<br> fairshare will not be impacted.<br> <br> Hope that helps,<br> <br> Miguel Oliveira<br> <br><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"> On 24 Jun 2022, at 12:56, gerard.gil@cines.fr wrote:<br> <br> Hi Miguel,<br> <br><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"> Why not? 
You can have multiple QoSs and you have other techniques to change<br> priorities according to your policies.</blockquote><br> <br> Does this answer my question?<br> <br> "If all configured QOSs use NoDecay, can we take advantage of the FairShare<br> priority with decay while all jobs' GrpTRESRaw use NoDecay?"<br> <br> Thanks<br> <br> Best,</blockquote></blockquote><br> Gérard<br><br></blockquote></div></div></body></html>