[slurm-users] ignore GPU resources to schedule CPU-based jobs

navin srivastava navin.altair at gmail.com
Sat Jun 13 15:41:09 UTC 2020


Thanks Renfro.

Yes, we have both types of nodes, with and without GPUs.
Also, some users' jobs require a GPU while some applications use only CPUs.

So the issue happens when a high-priority job is waiting for GPU resources
that are not available, while a lower-priority job that needs only CPUs keeps
waiting even though enough CPUs are free.

When I hold the GPU jobs, the CPU jobs go through.
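
For illustration, and assuming the cluster is currently running sched/builtin
(plain FIFO order) rather than the backfill scheduler, a slurm.conf change
along these lines is what usually lets lower-priority CPU-only jobs start
while a higher-priority job sits pending for GPUs (the SchedulerParameters
values are placeholders, not tuned recommendations):

    # slurm.conf (sketch; adjust values for your site)
    # sched/backfill starts lower-priority jobs as long as doing so does not
    # delay the expected start time of higher-priority pending jobs.
    SchedulerType=sched/backfill
    SchedulerParameters=bf_continue,bf_window=4320,bf_max_job_test=500

    # keep the existing fair-tree priority setup
    PriorityType=priority/multifactor
    PriorityFlags=FAIR_TREE

Backfill can only reason about pending jobs that have sensible time limits, so
jobs (or their partitions) need a real TimeLimit for this to help. A partition
split of the kind Renfro asks about is sketched at the end of this message.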

Regards
Navin

On Sat, Jun 13, 2020, 20:37 Renfro, Michael <Renfro at tntech.edu> wrote:

> Will probably need more information to find a solution.
>
> To start, do you have separate partitions for GPU and non-GPU jobs? Do you
> have nodes without GPUs?
>
> On Jun 13, 2020, at 12:28 AM, navin srivastava <navin.altair at gmail.com>
> wrote:
>
> Hi All,
>
> In our environment we have GPUs. What I found is that if a user with high
> priority has a job in the queue waiting for GPU resources, which are almost
> full and not available, then jobs from other users that do not require GPU
> resources also stay in the queue even though plenty of CPU resources are
> available.
>
> Our scheduling mechanism is FIFO with fair tree enabled. Is there any way
> we can make some change so that CPU-based jobs go through while GPU-based
> jobs wait until the GPU resources are free?
>
> Regards
> Navin.
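
For reference, a minimal sketch of the separation Renfro's question points at,
with GPU and CPU-only nodes in separate partitions (hostnames, CPU counts and
GPU counts below are made up):

    # slurm.conf (sketch; node names and resources are placeholders)
    GresTypes=gpu

    # each GPU node also needs a matching gres.conf entry
    NodeName=gpunode[01-04] CPUs=32 RealMemory=192000 Gres=gpu:2 State=UNKNOWN
    NodeName=cpunode[01-08] CPUs=32 RealMemory=192000 State=UNKNOWN

    # GPU jobs go to "gpu"; CPU-only jobs land in the default "cpu" partition,
    # so a pending GPU job never queues in front of CPU-only work.
    PartitionName=gpu Nodes=gpunode[01-04] MaxTime=7-00:00:00 State=UP
    PartitionName=cpu Nodes=cpunode[01-08] Default=YES MaxTime=7-00:00:00 State=UP

With a layout like that, a GPU job would be submitted with something like
sbatch -p gpu --gres=gpu:1 job.sh, while CPU-only jobs simply take the default
partition and are scheduled against the CPU-only nodes.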