[slurm-users] Question about memory allocation

Kraus, Sebastian sebastian.kraus at tu-berlin.de
Mon Dec 16 07:04:11 UTC 2019


Sorry Mahmood,

You are requesting 10 GB per node, not 200 GB per node. Since you request 4 nodes, this adds up to 40 GB in total. The number of tasks per node does not affect this limit.
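As a quick sanity check, the arithmetic can be sketched in plain shell (values taken from the #SBATCH lines quoted below):

```shell
# --mem is a per-node request, so the job total is mem * nodes.
mem_per_node_gb=10   # from: #SBATCH --mem=10GB
nodes=4              # from: #SBATCH --nodes=4
# --ntasks-per-node=5 plays no role in this calculation.

total_gb=$((mem_per_node_gb * nodes))
echo "total requested: ${total_gb} GB"   # prints: total requested: 40 GB
```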


Best ;-)
Sebastian



Sebastian Kraus
Team IT am Institut für Chemie
Gebäude C, Straße des 17. Juni 115, Raum C7

Technische Universität Berlin
Fakultät II
Institut für Chemie
Sekretariat C3
Straße des 17. Juni 135
10623 Berlin


Tel.: +49 30 314 22263
Fax: +49 30 314 29309
Email: sebastian.kraus at tu-berlin.de

________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Mahmood Naderan <mahmood.nt at gmail.com>
Sent: Monday, December 16, 2019 07:56
To: Slurm User Community List
Subject: Re: [slurm-users] Question about memory allocation

>your job will only be runnable on nodes that offer at least 200 GB main memory (sum of memory on all sockets/CPUs of the node)

But according to the manual

--mem=<size[units]>
Specify the real memory required per node.

so, with

#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5

the total requested memory should be 40 GB, not 200 GB.

Regards,
Mahmood
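As a side note (not from the original exchange): if memory really should scale with the number of tasks, sbatch offers --mem-per-cpu instead; --mem and --mem-per-cpu are mutually exclusive. A minimal sketch, with ./my_app as a placeholder:

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
#SBATCH --mem-per-cpu=2G    # per allocated CPU, not per node
# With one CPU per task this requests 4 * 5 * 2 GB = 40 GB in total.
srun ./my_app               # ./my_app is a placeholder
```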




On Mon, Dec 16, 2019 at 10:19 AM Mahmood Naderan <mahmood.nt at gmail.com> wrote:
>No, this indicates the amount of resident/real memory as requested per node. Your job will only be runnable on nodes that offer at least 200 GB main memory (sum of memory on all sockets/CPUs of the node). Please also have a closer look at man sbatch.


Thanks.
Regarding the status of the nodes, I see
    RealMemory=120705 AllocMem=1024 FreeMem=309 Sockets=32 Boards=1

The question is: why is FreeMem so low while AllocMem is far less than RealMemory?

Regards,
Mahmood
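For context, the three figures measure different things. A sketch with the numbers from the scontrol output above; the page-cache remark is general Linux behavior, not something confirmed in this thread:

```shell
# All values in MB, copied from the node status line above.
real_mem=120705   # RealMemory: memory Slurm may allocate on the node
alloc_mem=1024    # AllocMem: memory currently allocated to jobs by Slurm
free_mem=309      # FreeMem: memory the node's OS reports as free

# Memory still schedulable by Slurm:
echo $((real_mem - alloc_mem))   # prints: 119681

# FreeMem can be far smaller because the kernel keeps otherwise idle RAM
# in the page cache; cached pages are reclaimable but not counted as free.
```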




On Mon, Dec 16, 2019 at 10:12 AM Kraus, Sebastian <sebastian.kraus at tu-berlin.de> wrote:

Hi Mahmood,


>> will it reserve (look for) 200 GB of memory for the job? Or is this a hard limit on the memory required by the job?


No, this indicates the amount of resident/real memory as requested per node. Your job will only be runnable on nodes that offer at least 200 GB main memory (sum of memory on all sockets/CPUs of the node). Please also have a closer look at man sbatch.

Best
Sebastian




________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Mahmood Naderan <mahmood.nt at gmail.com>
Sent: Monday, December 16, 2019 07:19
To: Slurm User Community List
Subject: [slurm-users] Question about memory allocation

Hi,
If I write

#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5

will it reserve (look for) 200 GB of memory for the job? Or is this a hard limit on the memory required by the job?

Regards,
Mahmood

