[slurm-users] Question about memory allocation

Mahmood Naderan mahmood.nt at gmail.com
Mon Dec 16 16:41:30 UTC 2019


Excuse me, I still have a problem. Although I have freed memory on the
nodes, as shown below,

   RealMemory=64259 AllocMem=1024 FreeMem=61882 Sockets=32 Boards=1
   RealMemory=120705 AllocMem=1024 FreeMem=115257 Sockets=32 Boards=1
   RealMemory=64259 AllocMem=26624 FreeMem=61795 Sockets=32 Boards=1
   RealMemory=64259 AllocMem=1024 FreeMem=51937 Sockets=10 Boards=1

the job is still in PD (Resources).
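
These per-node figures can be read with scontrol show node; a grep such
as the following would reproduce the lines above (assuming that is how
they were collected):

$ scontrol show node | grep RealMemory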

$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               119       SEA    qe-fb  mahmood PD       0:00      4 (Resources)
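
To see exactly what the pending job requested, scontrol show job is
useful; for example (119 is the job ID from the squeue output above):

$ scontrol show job 119 | grep -Ei 'reason|numnodes|tres'

This shows the pending reason alongside the requested node count and
trackable resources (cpu, mem, node).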
$ cat slurm_qe.sh
#!/bin/bash
#SBATCH --job-name=qe-fb
#SBATCH --output=my_fb.log
#SBATCH --partition=SEA
#SBATCH --account=fish
#SBATCH --mem=10GB
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5

mpirun -np $SLURM_NTASKS /share/apps/q-e-qe-6.5/bin/pw.x -in f_borophene_scf.in



Regards,
Mahmood




On Mon, Dec 16, 2019 at 10:35 AM Kraus, Sebastian <
sebastian.kraus at tu-berlin.de> wrote:

> Sorry Mahmood,
>
> 10 GB per node is requested, not 200 GB per node. Across all 4 nodes this
> comes to 40 GB in total. The number of tasks per node does not matter for
> this limit.
>
>
> Best ;-)
> Sebastian
>
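
To make Sebastian's point concrete: --mem is a per-node limit, so the
script above asks for 4 x 10 GB = 40 GB in total. The same budget could
instead be expressed per CPU, as a sketch (not what the original script
uses; note that --mem and --mem-per-cpu are mutually exclusive):

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=5
#SBATCH --mem-per-cpu=2G   # 5 tasks/node x 2 GB = 10 GB per node, 40 GB total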
>

