Ah, of course, that makes sense, thanks. I guess if we're constraining the devices into job-specific cgroups, then the slurmd on the node may know which device is assigned to which job and be able to interrogate resource usage from that, but there's no mechanism to do anything beyond that.
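Something like the sketch below is roughly what I had in mind (untested, and it assumes cgroup v1 with ConstrainDevices=yes and the usual /sys/fs/cgroup/devices/slurm/uid_<uid>/job_<jobid> layout, plus the proprietary NVIDIA driver's major-195 character devices): read the job's device whitelist and poll just those GPUs with nvidia-smi.

    #!/usr/bin/env python3
    """Rough sketch, not production code: map a job's cgroup device whitelist
    to GPU indices and poll their usage with nvidia-smi. Paths and the
    devices.list format assume cgroup v1 with ConstrainDevices=yes; the
    /dev/nvidiaN devices use char major 195 and minor N with the
    proprietary driver. Adjust for your site's cgroup.conf before use."""
    import re
    import subprocess
    import sys

    NVIDIA_CHAR_MAJOR = 195  # /dev/nvidia0 -> c 195:0, /dev/nvidia1 -> c 195:1, ...

    def gpus_for_job(uid, jobid):
        """Return GPU indices whitelisted in the job's devices cgroup."""
        path = "/sys/fs/cgroup/devices/slurm/uid_%d/job_%d/devices.list" % (uid, jobid)
        indices = []
        with open(path) as fh:
            for line in fh:
                # entries look like "c 195:0 rwm"; catch-all rules like
                # "a *:* rwm" won't match and are skipped
                m = re.match(r"c (\d+):(\d+) ", line)
                if m and int(m.group(1)) == NVIDIA_CHAR_MAJOR:
                    indices.append(int(m.group(2)))
        return indices

    def gpu_usage(indices):
        """Ask nvidia-smi for utilisation and memory on just those GPUs."""
        ids = ",".join(str(i) for i in indices)
        out = subprocess.run(
            ["nvidia-smi", "-i", ids,
             "--query-gpu=index,utilization.gpu,memory.used,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True)
        return out.stdout

    if __name__ == "__main__":
        uid, jobid = int(sys.argv[1]), int(sys.argv[2])
        gpus = gpus_for_job(uid, jobid)
        print("job %d: GPUs %s" % (jobid, gpus))
        if gpus:
            print(gpu_usage(gpus), end="")

A prolog/epilog or a cron job on the node could run something like that per job, but as you say there's nothing in Slurm itself that accounts for it.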
On Fri, 23 Nov 2018, 6:36 pm Mark Hahn <hahn@mcmaster.ca> wrote:
>> We have a use-case in that the GRES being tracked on a particular partition
>> are GPU cards, but aren't being used by applications that would require them
>> exclusively (lightweight direct rendering rather than GP-GPU/CUDA)
>
> the issue is that slurm/kernel can't arbitrate resources on the GPU,
> so oversubscription is likely to run out of device memory or SMs, no?