<div dir="ltr">Many thanks! I added " AccountingStorageEnforce=limits" in slurm.conf. See below that configuration: <div><br></div><div><div><br><div>##################### slurm.conf ########################3</div><div><div>ClusterName=localcluster<br>SlurmctldHost=gag<br>MpiDefault=none<br>#ProctrackType=proctrack/linuxproc<br>ProctrackType=proctrack/cgroup<br>ReturnToService=2<br>SlurmctldPidFile=/var/run/slurmctld.pid<br>SlurmctldPort=6817<br>SlurmdPidFile=/var/run/slurmd.pid<br>SlurmdPort=6818<br>SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd<br>SlurmUser=slurm<br>StateSaveLocation=/var/lib/slurm-llnl/slurmctld<br>SwitchType=switch/none<br>#TaskPlugin=task/none<br>TaskPlugin=task/cgroup<br>#<br>GresTypes=gpu<br>#SlurmdDebug=debug2<br><br># TIMERS<br>InactiveLimit=0<br>KillWait=30<br>MinJobAge=300<br>SlurmctldTimeout=120<br>SlurmdTimeout=300<br>Waittime=0<br># SCHEDULING<br>SchedulerType=sched/backfill<br>SelectType=select/cons_tres<br>SelectTypeParameters=CR_Core<br>#AccountingStorageTRES=gres/gpu<br>#<br>## added by SM for enabling DBD ##<br>AccountingStorageEnforce=limits <br>#AccountingStoragePort=<br>AccountingStorageType=accounting_storage/none<br>JobCompType=jobcomp/none<br>JobAcctGatherFrequency=30<br>JobAcctGatherType=jobacct_gather/none<br>SlurmctldDebug=info<br>SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log<br>SlurmdDebug=info<br>SlurmdLogFile=/var/log/slurm-llnl/slurmd.log<br>#<br># COMPUTE NODES<br>NodeName=gag Procs=96 CoresPerSocket=24 Sockets=2 ThreadsPerCore=2 Gres=gpu:8<br>#NodeName=mannose NodeAddr=130.74.2.86 CPUs=1 State=UNKNOWN<br><br># Partitions list<br>PartitionName=LocalQ Nodes=ALL Default=YES State=UP<br>#PartitionName=gpu_short  MaxCPUsPerNode=32 DefMemPerNode=65556 DefCpuPerGPU=8 DefMemPerGPU=65556 MaxMemPerNode=532000 MaxTime=01-00:00:00 State=UP Nodes=localhost  Default=YES<br></div></div><div><br></div><div><br></div><div>#################### /etc/slurm-llnl/slurmdbd.conf ###########</div><div><br></div><div>#<br># Example slurmdbd.conf file.<br>#<br># See the slurmdbd.conf man page for more information.<br>#<br># Archive info<br>#ArchiveJobs=yes<br>#ArchiveDir="/tmp"<br>#ArchiveSteps=yes<br>#ArchiveScript=<br>#JobPurge=12<br>#StepPurge=1<br>#<br># Authentication info<br>AuthType=auth/munge<br>#AuthInfo=/var/run/munge/munge.socket.2<br>#<br># slurmDBD info<br>DbdAddr=localhost<br>DbdHost=localhost<br>#DbdPort=7031<br>SlurmUser=slurm<br>#MessageTimeout=300<br>DebugLevel=verbose<br>#DefaultQOS=normal,standby<br>LogFile=/var/log/slurm/slurmdbd.log<br>PidFile=/var/run/slurmdbd.pid<br>#PluginDir=/usr/lib/slurm<br>#PrivateData=accounts,users,usage,jobs<br>#TrackWCKey=yes<br>#<br># Database info<br>StorageType=accounting_storage/mysql<br>#StorageHost=localhost<br>#StoragePort=1234<br>StoragePass=password<br>StorageUser=slurm<br>#StorageLoc=slurm_acct_db<br></div><div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Oct 10, 2022 at 10:03 AM Sudeep Narayan Banerjee <<a href="mailto:snbanerjee@iitgn.ac.in">snbanerjee@iitgn.ac.in</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Dear Sushil: please share the slurm.conf, if possible.<div><br><div><div><div dir="ltr"><div dir="ltr">Thanks & Regards,<div>Sudeep Narayan Banerjee</div><div>System Analyst | Scientist B</div><div><span style="color:rgb(34,34,34)">Supercomputing Facility &</span><span style="color:rgb(34,34,34)"> </span>Information System and Technology 
On Mon, Oct 10, 2022 at 10:03 AM Sudeep Narayan Banerjee <snbanerjee@iitgn.ac.in> wrote:

Dear Sushil: please share the slurm.conf, if possible.

Thanks & Regards,
Sudeep Narayan Banerjee
System Analyst | Scientist B
Supercomputing Facility & Information System and Technology Facility
Academic Block 5, Room 110A
Indian Institute of Technology Gandhinagar [https://iitgn.ac.in/]
Palaj, Gujarat 382055, INDIA
IITGN: Celebrating 10 years of educational excellence [http://sites.iitgn.ac.in/10/]

On Mon, Oct 10, 2022 at 8:27 PM Jörg Striewski <striewski@ismll.de> wrote:

did you enter the information slurm needs for the database in slurmdbd.conf and slurm.conf?
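if yes, a rough sketch of how the database side is usually checked (plain sacctmgr commands; <clustername> stands for the ClusterName set in your slurm.conf, so adjust as needed):

sacctmgr add cluster <clustername>   # register the cluster in the database, once
sacctmgr show cluster                # should then list the cluster
sacctmgr show associations           # users/accounts known to slurmdbd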

Mit freundlichen Grüßen / kind regards

--
Jörg Striewski

Information Systems and Machine Learning Lab (ISMLL)
Institute of Computer Science
University of Hildesheim, Germany
post address: Universitätsplatz 1, D-31141 Hildesheim, Germany
visitor address: Samelsonplatz 1, D-31141 Hildesheim, Germany
Tel. (+49) 05121 / 883-40392
http://www.ismll.uni-hildesheim.de

On 10.10.22 16:38, Sushil Mishra wrote:
> Dear all,
>
> I am pretty new to system administration and am looking for some help
> setting up slurmdbd or MariaDB on a GPU cluster. We bought a machine, but the
> vendor simply installed Slurm and did not install any database for
> accounting. I tried installing MariaDB and then slurmdbd as described
> in the manual, but it looks like I am missing something. I wonder if
> someone can help us with this off the list? I only need to keep an
> account of the core hours used by each user. Is there any
> alternate way of keeping an account of core-hour usage per user
> without installing a DB?
>
> Best,
> Sushil
>