Hello,
Brian Andrus via slurm-users <slurm-users@lists.schedmd.com> writes:
> Unless you are using cgroups and constraints, there is no limit imposed.
>
> [...]
>
> So your request did not exceed what slurm sees as available (1 cpu using
> 4GB), so it is happy to let your script run. I suspect if you look at the
> usage, you will see that 1 cpu spiked high while the others did nothing.
Thanks for the input.
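Once the job that did go through (133982, below) finishes, I'll check its actual usage along these lines to see whether one CPU indeed spiked while the others idled (standard sacct accounting fields):

,----
| $ sacct -j 133982 --format=JobID,AllocCPUS,AveCPU,MaxRSS,Elapsed
`----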
I'm aware that without cgroups and constraints there is no real limit imposed, but what I don't understand is why the first three submissions below are rejected by sbatch while the last one happily goes through:
,----
| $ sbatch -N 1 -n 1 -c 76 -p short --mem-per-cpu=4000M test.batch
| sbatch: error: Batch job submission failed: Memory required by task is not available
|
| $ sbatch -N 1 -n 76 -c 1 -p short --mem-per-cpu=4000M test.batch
| sbatch: error: Batch job submission failed: Memory required by task is not available
|
| $ sbatch -n 1 -c 76 -p short --mem-per-cpu=4000M test.batch
| sbatch: error: Batch job submission failed: Memory required by task is not available
`----
,----
| $ sbatch -n 76 -c 1 -p short --mem-per-cpu=4000M test.batch
| Submitted batch job 133982
`----
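For reference, this is roughly how I have been checking what Slurm itself considers available on the partition and its nodes (the node name is just a placeholder):

,----
| $ scontrol show partition short | grep -iE 'mem|cpu'
| $ scontrol show node <nodename> | grep -iE 'RealMemory|CPUTot'
`----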
Cheers,