<p class="MsoNormal">Hi David –<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">You might consider running your more memory intensive jobs on the XSede machine at the Pittsburgh Supercomputing Center. It’s called Bridges.
<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Bridges has a set of 42 large memory (LM) nodes, each with 3 TBytes of private memory. 9 of the nodes have 64 cores; the rest each have 80.
<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">There are also 4 XML nodes with 64 cores and 12 TBytes each. Comet at the San Diego Supercomputing Center also has large memory nodes. But Bridges is much less heavily over-subscribed.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">You can get time through xsede.org . A startup allocation will likely be granted overnight and both machines use slurm.
<o:p></o:p></p>
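
If it helps, a minimal sketch of what a request for one of the LM nodes might look like is below; the partition name "LM", the memory figure, and the script name are assumptions to check against the Bridges user guide before submitting:

    #!/bin/bash
    #SBATCH -p LM                 # large-memory partition (name assumed; check the Bridges docs)
    #SBATCH -t 24:00:00           # wall time
    #SBATCH --mem=1500GB          # ask for what the job actually needs, up to ~3 TB on an LM node
    #SBATCH -J bigmem-job         # hypothetical job name

    # myscript.R stands in for your own analysis script
    srun R CMD BATCH myscript.R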
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Best – Don <o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><b>From:</b> slurm-users [mailto:slurm-users-bounces@lists.schedmd.com]
<b>On Behalf Of </b>david vilanova<br>
<b>Sent:</b> Wednesday, February 7, 2018 10:23 AM<br>
<b>To:</b> Slurm User Community List <slurm-users@lists.schedmd.com><br>
<b>Subject:</b> Re: [slurm-users] Allocate more memory<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
Yes, when working with the human genome you can easily go up to 16 GB.

On Wed, 7 Feb 2018 at 16:20, Krieger, Donald N. <kriegerd@upmc.edu> wrote:

<p class="MsoNormal" style="margin-bottom:12.0pt">Sorry for jumping in without full knowledge of the thread.<br>
But it sounds like the key issue is that each job requires 3 GBytes.<br>
Even if that's true, won't jobs start on cores with less memory and then just page?<br>
Of course as the previous post states, you must tailor your slurm request to the physical limits of your cluster.<br>
<br>
But the real question is do the jobs really require 3 GBytes of resident memory.<br>
Most code declares far more than required and then ends up running in what it actually uses.<br>
You can tell by running a job and viewing the memory statistics with top or something similar.<br>
<br>
Anyway - best - Don<br>
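
(For reference: one concrete way to check this, assuming the cluster gathers job accounting and using a hypothetical job ID of 12345, is Slurm's own accounting data:

    sacct -j 12345 --format=JobID,JobName,MaxRSS,Elapsed

For a job that is still running, "sstat -j 12345 --format=JobID,MaxRSS" reports the same peak resident set size, or you can watch the RES column in top on the compute node.)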

-----Original Message-----
From: slurm-users [mailto:slurm-users-bounces@lists.schedmd.com] On Behalf Of rhc@open-mpi.org
Sent: Wednesday, February 7, 2018 10:03 AM
To: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: Re: [slurm-users] Allocate more memory

Afraid not - since you don't have any nodes that meet the 3G requirement, you'll just hang.

> On Feb 7, 2018, at 7:01 AM, david vilanova <vilanew@gmail.com> wrote:
>
> Thanks for the quick response.
>
> Should the following script do the trick? That is, use as many nodes as needed to have at least 3 GB of total memory, even though my nodes were set up with 2 GB each?
>
> #SBATCH --array=1-10:1%10
>
> #SBATCH --mem-per-cpu=3000M
>
> srun R CMD BATCH myscript.R
>
> thanks
>
> On 07/02/2018 15:50, Loris Bennett wrote:
>> Hi David,
>>
>> david martin <vilanew@gmail.com> writes:
>>
>>> Hi,
>>>
>>> I would like to submit a job that requires 3 GB. The problem is that I have 70 nodes available, each node with 2 GB of memory.
>>>
>>> So the command sbatch --mem=3G will wait for resources to become available.
>>>
>>> Can I run sbatch and tell the cluster to use the 3 GB out of the 70 GB
>>> available, or is that a particular setup? Meaning, is the memory
>>> restricted to each node? Or should I allocate two nodes so that I
>>> have 2x4 GB available?
>> Check
>>
>> man sbatch
>>
>> You'll find that --mem means memory per node. Thus, if you specify
>> 3GB but all the nodes have 2GB, your job will wait forever (or until
>> you buy more RAM and reconfigure Slurm).
>>
>> You probably want --mem-per-cpu, which is actually more like memory
>> per task. This is obviously only going to work if your job can
>> actually run on more than one node, e.g. is MPI enabled.
>>
>> Cheers,
>>
>> Loris
>>
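
To make the --mem versus --mem-per-cpu distinction above concrete, here is a minimal sketch of a submission that spreads a 3 GB requirement across 2 GB nodes; the task count and program name are assumptions, and it only works if the program can really run as multiple tasks (e.g. via MPI):

    #!/bin/bash
    #SBATCH --ntasks=2            # two tasks, which Slurm may place on different nodes
    #SBATCH --mem-per-cpu=1500M   # 1500 MB per single-CPU task, 3 GB in total
    #SBATCH --time=01:00:00

    # my_mpi_prog is a placeholder; a serial R script cannot be split this way
    srun ./my_mpi_prog

By contrast, "sbatch --mem=3G" asks for 3 GB on a single node, which no 2 GB node can ever satisfy, so the job simply waits in the queue.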