[slurm-users] Allocate more memory
rhc at open-mpi.org
Wed Feb 7 08:03:01 MST 2018
Afraid not - since you don’t have any nodes that meet the 3G requirement, you’ll just hang.
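
For what it's worth, a quick way to double-check the per-node memory Slurm has configured (a minimal sketch; the output depends on the RealMemory values in your slurm.conf):

    # Print each node name and its configured memory in MB
    sinfo -N -o "%N %m"
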
> On Feb 7, 2018, at 7:01 AM, david vilanova <vilanew at gmail.com> wrote:
>
> Thanks for the quick response.
>
> Should the following script do the trick, i.e. use as many nodes as required to have at least 3G of total memory, even though my nodes were set up with 2G each?
>
> #SBATCH --array=1-10:1%10
>
> #SBATCH --mem-per-cpu=3000M
>
> srun R CMD BATCH myscript.R
>
>
>
> thanks
>
>
>
>
> On 07/02/2018 15:50, Loris Bennett wrote:
>> Hi David,
>>
>> david martin <vilanew at gmail.com> writes:
>>
>>>
>>>
>>> Hi,
>>>
>>> I would like to submit a job that requires 3GB. The problem is that I have 70 nodes available, each with 2GB of memory.
>>>
>>> So the command sbatch --mem=3G will wait for resources to become available.
>>>
>>> Can I run sbatch and tell the cluster to use 3GB out of the total memory
>>> available across the 70 nodes, or does that require a particular setup?
>>> In other words, is the memory restricted to each node, or should I
>>> allocate two nodes so that I have 2x2GB available?
>> Check
>>
>> man sbatch
>>
>> You'll find that --mem means memory per node. Thus, if you specify 3GB
>> but all the nodes have 2GB, your job will wait forever (or until you buy
>> more RAM and reconfigure Slurm).
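>>
>> For example (just a sketch based on the 2GB nodes described above;
>> myjob.sh is a placeholder batch script), a per-node request that can
>> actually be satisfied would be:
>>
>>     # --mem is a per-node limit, so 2000MB fits on a 2GB node
>>     sbatch --mem=2000M myjob.sh
>>
>> whereas --mem=3G can never be met by any single node here.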
>>
>> You probably want --mem-per-cpu, which is actually more like memory per
>> task. This is obviously only going to work if your job can actually run
>> on more than one node, e.g. is MPI enabled.
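>>
>> As a rough sketch (assuming an MPI-enabled program; ./my_mpi_app is a
>> placeholder name), splitting the request over two tasks keeps each
>> node's share within its 2GB:
>>
>>     #!/bin/bash
>>     #SBATCH --ntasks=2            # two MPI tasks, possibly on different nodes
>>     #SBATCH --mem-per-cpu=1500M   # 2 x 1500MB = 3GB total, 1.5GB per task
>>     srun ./my_mpi_app             # placeholder MPI program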
>>
>> Cheers,
>>
>> Loris
>>
>
>