[slurm-users] [EXTERNAL] Re: Job in "priority" status - resources available

Greg Wickham greg.wickham at kaust.edu.sa
Wed Aug 2 20:17:07 UTC 2023


Following on from what Michael said, the default Slurm configuration is to allocate only one job per node. If the a100_1g.10gb GRES is on the same node, make sure to enable “SelectType=select/cons_res” (info at https://slurm.schedmd.com/cons_res.html) so that multiple jobs can share the node.
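
For illustration, that would look something like this in slurm.conf (the SelectTypeParameters value below is just one common choice, not a recommendation for your site):

    # slurm.conf - treat cores and memory as consumable resources
    # so that several jobs can be packed onto one node
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core_Memory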

Also, using “TaskPlugin=task/cgroup” is useful to ensure that users cannot inadvertently access resources allocated to other jobs on the same node (refer to the slurm.conf man page).
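
As a sketch, assuming the cgroup plugin is available on your nodes, the relevant settings would be along these lines (see the cgroup.conf man page for the full list of options):

    # slurm.conf
    TaskPlugin=task/cgroup

    # cgroup.conf - confine each job to its allocated cores, memory
    # and devices; ConstrainDevices keeps a job from seeing GPUs it
    # was not allocated
    ConstrainCores=yes
    ConstrainRAMSpace=yes
    ConstrainDevices=yes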

   -Greg

From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Michael Gutteridge <michael.gutteridge at gmail.com>
Date: Wednesday, 2 August 2023 at 5:22 pm
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: [EXTERNAL] Re: [slurm-users] Job in "priority" status - resources available
I'm not sure there's enough information in your message: Slurm version and configs are often necessary to make a more confident diagnosis. However, the behaviour you are looking for (lower-priority jobs skipping the line) is called "backfill". There's docs here: https://slurm.schedmd.com/sched_config.html#backfill
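
For reference, backfill is selected in slurm.conf roughly as below (the SchedulerParameters values are illustrative, not recommendations). Note that backfill can only plan around running jobs that have time limits set, since it needs to know when resources will free up:

    # slurm.conf - sched/backfill is the default scheduler type
    SchedulerType=sched/backfill
    # optional tuning; bf_window is in minutes
    SchedulerParameters=bf_window=4320,bf_continue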

Backfill should be loaded and active by default, which is why I'm not super confident here. There may also be something else going on with the node configuration, as it looks like 1596 might need the same node. Maybe there's not enough CPU or memory to accommodate both jobs (1596 and 1739)?
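
If it helps, the pending reason and what is left on that node can be checked with something like the following ("gpu01" is a placeholder for whichever node hosts the a100_1g.10gb GRES):

    scontrol show job 1739    # check Reason= and the requested TRES
    scontrol show node gpu01  # compare CfgTRES= against AllocTRES=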

HTH
 - Michael

On Wed, Aug 2, 2023 at 5:13 AM Cumer Cristiano <CristianoMaria.Cumer at unibz.it> wrote:

Hello,

I'm quite a newbie regarding Slurm. I recently created a small Slurm instance to manage our GPU resources. I have this situation:

 JOBID        STATE         TIME   ACCOUNT    PARTITION    PRIORITY              REASON CPU MIN_MEM              TRES_PER_NODE
    1739    PENDING         0:00  standard      gpu-low           5            Priority   1     80G    gres:gpu:a100_1g.10gb:1
    1738    PENDING         0:00  standard      gpu-low           5            Priority   1     80G  gres:gpu:a100-sxm4-80gb:1
    1737    PENDING         0:00  standard      gpu-low           5            Priority   1     80G  gres:gpu:a100-sxm4-80gb:1
    1736    PENDING         0:00  standard      gpu-low           5           Resources   1     80G  gres:gpu:a100-sxm4-80gb:1
    1740    PENDING         0:00  standard      gpu-low           1            Priority   1      8G      gres:gpu:a100_3g.39gb
    1735    PENDING         0:00  standard      gpu-low           1            Priority   8     64G  gres:gpu:a100-sxm4-80gb:1
    1596    RUNNING   1-13:26:45  standard      gpu-low           3                None   2     64G    gres:gpu:a100_1g.10gb:1
    1653    RUNNING     21:09:52  standard      gpu-low           2                None   1     16G                 gres:gpu:1
    1734    RUNNING        59:52  standard      gpu-low           1                None   8     64G  gres:gpu:a100-sxm4-80gb:1
    1733    RUNNING      1:01:54  standard      gpu-low           1                None   8     64G  gres:gpu:a100-sxm4-80gb:1
    1732    RUNNING      1:02:39  standard      gpu-low           1                None   8     40G  gres:gpu:a100-sxm4-80gb:1
    1731    RUNNING      1:08:28  standard      gpu-low           1                None   8     40G  gres:gpu:a100-sxm4-80gb:1
    1718    RUNNING     10:16:40  standard      gpu-low           1                None   2      8G              gres:gpu:v100
    1630    RUNNING   1-00:21:21  standard      gpu-low           1                None   1     30G      gres:gpu:a100_3g.39gb
    1610    RUNNING   1-09:53:23  standard      gpu-low           1                None   2      8G              gres:gpu:v100



Job 1736 is in the PENDING state since there are no more a100-sxm4-80gb GPUs available, and its priority has been rising over time (it is now 5), as expected. Now another user submits job 1739 requesting a gres:gpu:a100_1g.10gb:1 that is available, but the job does not start since its priority is only 1. This is obviously not the desired outcome, and I believe I must change the scheduling strategy. Could someone with more experience than me give me some hints?
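
For what it's worth, the per-factor breakdown behind those priority values can be inspected with sprio, e.g.:

    sprio -l -j 1736,1739    # long format: age, fairshare, etc. per job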

Thanks, Cristiano

