[slurm-users] Spec-ing a Slurm DB server

Marcin Stolarek stolarek.marcin at gmail.com
Sat Jul 21 13:40:06 MDT 2018


In my experience, it comes down to what your future sacct queries will look
like. If you are not going to list large numbers of jobs that were executed a
long time ago, a VM with a few GB of RAM should be fine. It also depends on
the number of jobs you expect: many small ones or a few big ones.
Nevertheless, if you think you'll be generating a lot of reports from your
Slurm accounting data, you should consider exporting it to something like
XDMoD and querying that instead of slurmdbd.
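As a rough starting point, you can check how much space the existing
accounting database already takes; if the RAM (and the InnoDB buffer pool) on
the new box comfortably covers that plus expected growth, most slurmdbd
queries will be served from memory. A minimal sketch, assuming the default
database name slurm_acct_db and a local MariaDB instance (the dates in the
sacct example are just placeholders):

  # Size of the existing accounting DB (data + indexes), in MB:
  mysql -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024) AS size_mb \
            FROM information_schema.tables WHERE table_schema = 'slurm_acct_db';"

  # The kind of query that gets expensive as the job tables grow --
  # listing every job from a long-past period:
  sacct -a -S 2017-01-01 -E 2017-12-31 \
        --format=JobID,User,Account,Elapsed,State

Whatever size you land on, the stock MariaDB buffer pool is quite small, so
raising innodb_buffer_pool_size to cover the accounting tables is usually the
first tuning step.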

Cheers,
Marcin
http://funinit.wordpress.com

On Thu, 19 Jul 2018, 19:54 Will Dennis, <wdennis at nec-labs.com> wrote:

> Hi all,
>
>
>
> We currently have a small Slurm cluster for a research group here, where
> as a part of that setup, we have a Slurm DBD setup that we utilize for
> fair-share scheduling. The current server platform this runs on is an older
> Dell PowerEdge R210 system, with a single 4-core Intel Xeon X3430 CPU, and
> 16GB of memory. The MySQL (actually MariaDB) server running on this seems
> to be performing well enough for current needs.
>
>
>
> Now we are planning an (initially small, but hopefully growing) labs-wide
> Slurm cluster, and wish to have a labs-wide Slurm DBD server that will
> handle this new cluster as well as any other Slurm clusters we have or will
> have (including the existing dept-specific Slurm cluster). I have a couple
> of questions on this point:
>
>
>
> 1) How to size the hardware for this server? (What kind of resources does
> a moderate Slurm DBD server require?)
>
> 2) How to port over the existing Slurm DBD database to the newer server?
>
>
>
> Pointers to existing docs that answer these questions gratefully accepted
> (I looked, but didn’t find any that addressed my concerns.)
>
>
>
> Thanks!
>
>
>
> Will Dennis
>
> NEC Laboratories America
>