[slurm-users] Allocate more memory

Krieger, Donald N. kriegerd at upmc.edu
Wed Feb 7 08:34:20 MST 2018


Hi David –

You might consider running your more memory-intensive jobs on the XSEDE machine at the Pittsburgh Supercomputing Center. It’s called Bridges.

Bridges has a set of 42 large-memory (LM) nodes, each with 3 TB of private memory. Nine of the nodes have 64 cores; the rest have 80 each.

There are also 4 XML nodes with 64 cores and 12 TB each. Comet at the San Diego Supercomputer Center also has large-memory nodes, but Bridges is much less heavily oversubscribed.

You can get time through xsede.org. A startup allocation will likely be granted overnight, and both machines run Slurm.

Best – Don

From: slurm-users [mailto:slurm-users-bounces at lists.schedmd.com] On Behalf Of david vilanova
Sent: Wednesday, February 7, 2018 10:23 AM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Allocate more memory

Yes, when working with the human genome you can easily go up to 16 GB.

On Wed, 7 Feb 2018 at 16:20, Krieger, Donald N. <kriegerd at upmc.edu> wrote:
Sorry for jumping in without full knowledge of the thread.
But it sounds like the key issue is that each job requires 3 GBytes.
Even if that's true, won't jobs start on cores with less memory and then just page?
Of course, as the previous post states, you must tailor your Slurm request to the physical limits of your cluster.

But the real question is whether the jobs really require 3 GB of resident memory.
Most code declares far more memory than it needs and ends up running in a much smaller resident set.
You can tell by running a job and watching its memory statistics with top or something similar.
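As a rough sketch (with a hypothetical job ID of 12345), Slurm's own accounting reports the peak resident set size of a finished job:

    sacct -j 12345 --format=JobID,JobName,MaxRSS,Elapsed

and sstat shows the same for a job that is still running:

    sstat -j 12345 --format=JobID,MaxRSS

If MaxRSS comes back well under 3 GB, a smaller memory request should let the jobs fit on the 2 GB nodes.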

Anyway - best - Don

-----Original Message-----
From: slurm-users [mailto:slurm-users-bounces at lists.schedmd.com] On Behalf Of rhc at open-mpi.org
Sent: Wednesday, February 7, 2018 10:03 AM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Allocate more memory

Afraid not - since you don’t have any nodes that meet the 3G requirement, you’ll just hang.
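
As a sketch (again with a hypothetical job ID of 12345), you can confirm this by looking at the pending reason Slurm records for the job:

    squeue -j 12345 -o "%i %T %r"

where %r prints the reason the job is in its current state.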

> On Feb 7, 2018, at 7:01 AM, david vilanova <vilanew at gmail.com> wrote:
>
> Thanks for the quick response.
>
> Should the following script do the trick, i.e. use as many nodes as needed to have at least 3 GB of total memory, even though my nodes were set up with 2 GB each?
>
> #SBATCH --array=1-10%10
>
> #SBATCH --mem-per-cpu=3000M
>
> srun R CMD BATCH myscript.R
>
>
>
> thanks
>
>
>
>
> On 07/02/2018 15:50, Loris Bennett wrote:
>> Hi David,
>>
>>> david martin <vilanew at gmail.com> writes:
>>
>>>
>>>
>>> Hi,
>>>
>>> I would like to submit a job that requires 3 GB. The problem is that I have 70 nodes available, each with 2 GB of memory.
>>>
>>> So the command sbatch --mem=3G will wait for resources to become available.
>>>
>>> Can I run sbatch and tell the cluster to use 3 GB out of the 70 GB
>>> available, or does that need a particular setup? That is, is the
>>> memory restricted to each node, or should I allocate two nodes so
>>> that I have 2x4 GB available?
>> Check
>>
>>   man sbatch
>>
>> You'll find that --mem means memory per node.  Thus, if you specify
>> 3GB but all the nodes have 2GB, your job will wait forever (or until
>> you buy more RAM and reconfigure Slurm).
>>
>> You probably want --mem-per-cpu, which is actually more like memory
>> per task.  This is obviously only going to work if your job can
>> actually run on more than one node, e.g. is MPI enabled.
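>>
>> As a rough sketch (assuming a hypothetical MPI program my_prog), a
>> request like
>>
>>   sbatch --ntasks=2 --mem-per-cpu=1500M --wrap "srun ./my_prog"
>>
>> asks for 2 x 1500 MB spread across the tasks, which can land on
>> different 2 GB nodes, whereas a single-process job still has to fit
>> within one node.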
>>
>> Cheers,
>>
>> Loris
>>
>
>
