<div><div dir="auto">Yes, when working with the human genome you can easily go up to 16Gb.</div><br><div class="gmail_quote"><div>El El mié, 7 feb 2018 a las 16:20, Krieger, Donald N. <<a href="mailto:kriegerd@upmc.edu">kriegerd@upmc.edu</a>> escribió:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sorry for jumping in without full knowledge of the thread.<br>
But it sounds like the key issue is that each job requires 3 GBytes.
Even if that's true, won't jobs start on cores with less memory and then just page?
Of course, as the previous post states, you must tailor your Slurm request to the physical limits of your cluster.

But the real question is whether the jobs really require 3 GBytes of resident memory.
Most code declares far more memory than it actually needs and ends up running within what it actually uses.
You can tell by running a job and viewing the memory statistics with top or something similar.
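For example, once a job has run, something along these lines should show the peak resident memory Slurm recorded for it, assuming job accounting is enabled (the job ID is just whatever sbatch printed):

  sacct -j <jobid> --format=JobID,JobName,ReqMem,MaxRSS,State

For a job that is still running, sstat gives the same kind of figure for the batch step:

  sstat -j <jobid>.batch --format=JobID,MaxRSS

If MaxRSS comes back well under 3 GBytes, the jobs can safely request less.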

Anyway - best - Don

-----Original Message-----
From: slurm-users [mailto:slurm-users-bounces@lists.schedmd.com] On Behalf Of rhc@open-mpi.org
Sent: Wednesday, February 7, 2018 10:03 AM
To: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: Re: [slurm-users] Allocate more memory

Afraid not - since you don’t have any nodes that meet the 3G requirement, you’ll just hang.
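If each R task really fits within the 2 GB your nodes have, a request along these lines should at least get scheduled (just a sketch - the 1900M figure is an example, and assumes the nodes expose close to their full 2 GB to Slurm; check what the jobs actually use first):

  #SBATCH --array=1-10%10
  #SBATCH --mem-per-cpu=1900M
  srun R CMD BATCH myscript.R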

> On Feb 7, 2018, at 7:01 AM, david vilanova <vilanew@gmail.com> wrote:
>
> Thanks for the quick response.
>
> Should the following script do the trick? That is, use as many nodes as needed to get at least 3 GB of total memory, even though my nodes were set up with 2 GB each?
>
> #SBATCH --array=1-10%10
>
> #SBATCH --mem-per-cpu=3000m
>
> srun R CMD BATCH myscript.R
>
> thanks
>
> On 07/02/2018 15:50, Loris Bennett wrote:
>> Hi David,
>>
>> david martin <vilanew@gmail.com> writes:
>>
>>> Hi,
>>>
>>> I would like to submit a job that requires 3 GB. The problem is that I have 70 nodes available, each with 2 GB of memory.
>>>
>>> So the command sbatch --mem=3G will wait for resources to become available.
>>>
>>> Can I run sbatch and tell the cluster to use 3 GB out of the memory
>>> available across the 70 nodes, or is that a particular setup? That is,
>>> is the memory restricted to each node, or should I allocate two nodes
>>> so that I have 2 x 2 GB = 4 GB available?
>> Check
>>
>> man sbatch
>>
>> You'll find that --mem means memory per node. Thus, if you specify
>> 3 GB but all the nodes have 2 GB, your job will wait forever (or until
>> you buy more RAM and reconfigure Slurm).
>>
>> You probably want --mem-per-cpu, which is actually more like memory
>> per task. This is obviously only going to work if your job can
>> actually run on more than one node, e.g. is MPI-enabled.
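>>
>> As a sketch (and assuming the program really is MPI-enabled, which a
>> plain R CMD BATCH job is not), something like this spreads the 3 GB
>> over two tasks that Slurm can place on different 2 GB nodes:
>>
>>   #SBATCH --ntasks=2
>>   #SBATCH --mem-per-cpu=1500M
>>   srun ./my_mpi_program
>>
>> Here my_mpi_program is just a placeholder for whatever MPI binary you
>> would actually run.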
>>
>> Cheers,
>>
>> Loris
>>