<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<blockquote itemscope="" itemtype="https://schemas.microsoft.com/QuotedText" style="border-left: 3px solid rgb(200, 200, 200); border-top-color: rgb(200, 200, 200); border-right-color: rgb(200, 200, 200); border-bottom-color: rgb(200, 200, 200); padding-left: 1ex; margin-left: 0.8ex;">
<div style="font-family: Arial, Helvetica, sans-serif; font-size: 10pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);">
I have a large compute node with 10 RTX8000 cards at a remote colo.<br>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt">
<div class="PlainText">One of the cards on it is acting up "falling of the bus" once a day<br>
requiring a full power cycle to reset.<br>
<br>
I want jobs to avoid that card as well as the card it is NVLINK'ed to.<br>
<br>
So I modified gres.conf on that node as follows:<br>
<br>
<br>
# cat /etc/slurm/gres.conf<br>
AutoDetect=nvml<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia0<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia1<br>
#Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia2<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia3<br>
#Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia4<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia5<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia6<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia7<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia8<br>
Name=gpu Type=quadro_rtx_8000 File=/dev/nvidia9<br>
<br>
and in slurm.conf I changed the node def from Gres=gpu:quadro_rtx_8000:10<br>
to Gres=gpu:quadro_rtx_8000:8. I restarted slurmctld and slurmd<br>
after this.<br>
<br>
I then put the node back from drain to idle. Jobs were submitted and<br>
started on the node, but they are using the GPUs I told it to avoid:<br>
<br>
+--------------------------------------------------------------------+<br>
| Processes: |<br>
| GPU GI CI PID Type Process name GPU Memory |<br>
| ID ID Usage |<br>
|====================================================================|<br>
| 0 N/A N/A 63426 C python 11293MiB |<br>
| 1 N/A N/A 63425 C python 11293MiB |<br>
| 2 N/A N/A 63425 C python 10869MiB |<br>
| 2 N/A N/A 63426 C python 10869MiB |<br>
| 4 N/A N/A 63425 C python 10849MiB |<br>
| 4 N/A N/A 63426 C python 10849MiB |<br>
+--------------------------------------------------------------------+<br>
<br>
How can I make SLURM not use GPUs 2 and 4?<br>
<br>
---------------------------------------------------------------<br>
Paul Raines <a href="http://help.nmr.mgh.harvard.edu">http://help.nmr.mgh.harvard.edu</a><br>
MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging<br>
149 (2301) 13th Street Charlestown, MA 02129 USA<br>
<br>
</div>
</span></font></div>
</blockquote>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt">
<div class="PlainText"><br>
You can use the nvidia-smi command to 'drain' the GPUs, which will power them down so that no applications will use them.<br>
<br>
This Unix Stack Exchange answer explains how to do that:<br>
<br>
<a href="https://unix.stackexchange.com/a/654089/94412" id="LPlnk217751">https://unix.stackexchange.com/a/654089/94412</a><br>
<br>
You can create a script that runs at boot and 'drains' the cards; a rough sketch is below.</div>
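<div class="PlainText"><br>
</div>
<div class="PlainText">Something along these lines (untested here; the script name and the PCI bus IDs are placeholders -- look up the real IDs of GPUs 2 and 4 first, and check 'nvidia-smi drain --help' on your driver version, since the drain subcommand needs root and the exact options can differ between releases):<br>
<br>
#!/bin/bash<br>
# drain-bad-gpus.sh -- run once at boot (e.g. from a systemd oneshot<br>
# unit or rc.local), as root, before slurmd starts.<br>
<br>
# Find the PCI bus IDs of the GPUs to block (indexes 2 and 4):<br>
#   nvidia-smi --query-gpu=index,pci.bus_id --format=csv<br>
<br>
# Substitute the IDs reported above for these example values.<br>
for id in 00000000:41:00.0 00000000:81:00.0 ; do<br>
    nvidia-smi drain -p "$id" -m 1<br>
done<br>
<br>
# Confirm the drain state took effect.<br>
nvidia-smi drain -q<br>
<br>
As far as I know the drain state does not persist across a reboot, which is why it has to be reapplied from a boot-time script.<br>
</div>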
<div class="PlainText"><br>
</div>
<div class="PlainText">Regards</div>
<div class="PlainText">--Mick<br>
<br>
<br>
</div>
</span></font></div>
</body>
</html>