[slurm-users] Ignore GPU resources to schedule CPU-based jobs

navin srivastava navin.altair at gmail.com
Tue Jun 30 07:11:48 UTC 2020


Hi Team,

I have separated the CPU nodes and GPU nodes into two different queues (partitions).

Now I have 20 nodes with CPUs only (20 cores each) and no GPUs.
Another set of nodes has both GPUs and CPUs: some have 2 GPUs and 20 CPUs,
and some have 8 GPUs and 48 CPUs. These GPU nodes are assigned to the GPU queue.
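
For illustration, a minimal slurm.conf sketch of this kind of layout might look
like the one below (node names, node counts per type, and partition names are
placeholders, not the exact configuration):

  # CPU-only nodes: 20 cores each, no GPUs
  NodeName=cpu[01-20] CPUs=20 State=UNKNOWN
  # GPU nodes: some with 2 GPUs + 20 CPUs, some with 8 GPUs + 48 CPUs
  NodeName=gpu[01-04] CPUs=20 Gres=gpu:2 State=UNKNOWN
  NodeName=gpu[05-06] CPUs=48 Gres=gpu:8 State=UNKNOWN
  # Two queues (partitions), one per node type
  PartitionName=cpu Nodes=cpu[01-20] Default=YES State=UP
  PartitionName=gpu Nodes=gpu[01-06] Default=NO State=UP
  # (GresTypes=gpu in slurm.conf and matching gres.conf entries are also needed
  # for the GPU GRES to be usable)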

Users are facing issues in the GPU queue. The scenario is as follows:

Users submit jobs requesting 4 CPUs + 1 GPU, and also jobs requesting 4 CPUs
only. The problem arises when all the GPUs are in use: the jobs submitted with
GPU requests wait in the queue, and although a large number of CPUs are still
free, the CPU-only jobs do not start, because the 4 CPU + 1 GPU jobs have
higher priority than the CPU-only jobs.
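
To make the two cases concrete, the submissions look roughly like this (the
partition name and script names are just placeholders):

  # GPU job: 4 CPUs + 1 GPU in the GPU queue
  sbatch --partition=gpu --ntasks=1 --cpus-per-task=4 --gres=gpu:1 gpu_job.sh

  # CPU-only job submitted to the same GPU queue: 4 CPUs, no GPU
  sbatch --partition=gpu --ntasks=1 --cpus-per-task=4 cpu_job.sh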

Is there any mechanism so that, once all the GPUs are in use, the CPU-only
jobs are still allowed to run on the idle CPUs?
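
(As a sketch of one mechanism that might apply here, not something I have
verified on this cluster: the backfill scheduler can start lower-priority
CPU-only jobs on the idle CPUs as long as they do not delay the expected start
of the pending GPU jobs, which requires jobs or partitions to have realistic
time limits. The slurm.conf values below are only placeholders.)

  # Backfill scheduling: lower-priority jobs may start early if they will not
  # delay the expected start time of higher-priority pending jobs
  SchedulerType=sched/backfill
  SchedulerParameters=bf_continue,bf_window=1440,bf_resolution=300
  # Backfill needs time limits; give the GPU partition a default and a maximum
  PartitionName=gpu Nodes=gpu[01-06] DefaultTime=01:00:00 MaxTime=7-00:00:00 State=UP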

Regards
Navin.






On Mon, Jun 22, 2020 at 6:09 PM Diego Zuccato <diego.zuccato at unibo.it>
wrote:

> On 16/06/20 at 16:23, Loris Bennett wrote:
>
> > Thanks for pointing this out - I hadn't been aware of this.  Is there
> > anywhere in the documentation where this is explicitly stated?
> I don't remember. It seems Michael's experience is different. Possibly some
> other setting influences that behaviour. Maybe different partition
> priorities?
> But on the small cluster I'm managing it works this way. I'm not an expert,
> and I'd like to understand.
>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
>
>