[slurm-users] Question about memory allocation

Mahmood Naderan mahmood.nt at gmail.com
Mon Dec 16 06:49:00 UTC 2019


> No, this indicates the amount of residual/real memory as requested per
> node. Your job will only be runnable on nodes that offer at least 200 GB
> main memory (sum of memory over all sockets/CPUs of the node). Please also
> have a closer look at man sbatch.


Thanks.
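Just to make sure I read that correctly, here is a minimal sketch of how I
now understand those directives (job name and command are placeholders):

    #!/bin/bash
    #SBATCH --job-name=mem-demo       # placeholder job name
    #SBATCH --nodes=4                 # four nodes
    #SBATCH --ntasks-per-node=5       # five tasks on each node
    #SBATCH --mem=10GB                # per man sbatch: real memory required per node

    srun hostname                     # placeholder command

So each allocated node has to be able to provide the 10 GB on its own, if I
read man sbatch correctly.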
Regarding the status of the nodes, I see
    RealMemory=120705 AllocMem=1024 FreeMem=309 Sockets=32 Boards=1

The question is: why is FreeMem so low while AllocMem is far less than
RealMemory?
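
In case it helps, this is roughly how I am checking those numbers (the node
name is just a placeholder):

    # Slurm's view of the node
    scontrol show node NODENAME | grep -Ei 'RealMemory|AllocMem|FreeMem'

    # the operating system's view, run on the node itself
    free -m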

Regards,
Mahmood




On Mon, Dec 16, 2019 at 10:12 AM Kraus, Sebastian <
sebastian.kraus at tu-berlin.de> wrote:

> Hi Mahmood,
>
>
> >> will it reserve (look for) 200GB of memory for the job? Or is this the
> >> hard limit on the memory required by the job?
>
>
> No, this indicates the amount of residual/real memory as requested per
> node. Your job will only be runnable on nodes that offer at least 200 GB
> main memory (sum of memory over all sockets/CPUs of the node). Please also
> have a closer look at man sbatch.
>
> Best
> Sebastian
>
>
>
>
> Sebastian Kraus
> Team IT am Institut für Chemie
>
> Technische Universität Berlin
> Fakultät II
> Institut für Chemie
> Sekretariat C3
> Straße des 17. Juni 135
> 10623 Berlin
>
> Email: sebastian.kraus at tu-berlin.de
>
> ------------------------------
> *From:* slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of
> Mahmood Naderan <mahmood.nt at gmail.com>
> *Sent:* Monday, December 16, 2019 07:19
> *To:* Slurm User Community List
> *Subject:* [slurm-users] Question about memory allocation
>
> Hi,
> If I write
>
> #SBATCH --mem=10GB
> #SBATCH --nodes=4
> #SBATCH --ntasks-per-node=5
>
> will it reserve (look for) 200GB of memory for the job? Or is this the
> hard limit on the memory required by the job?
>
> Regards,
> Mahmood
>
>
>