[slurm-users] How to request ONLY one CPU instead of one socket or one node?

David Rhey drhey at umich.edu
Fri Feb 15 12:53:37 UTC 2019


Are you sure you're NOT getting 1 CPU when you run your job? You might want
to put some echo logic into your job to print the Slurm environment
variables on the node your job lands on, as a way of checking. E.g.:
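A minimal sketch of that echo logic, to drop into the batch script (these are standard Slurm-set environment variables; the `:-unset` fallbacks are only there so the lines also print something when run outside a job):

```shell
# Print what Slurm actually allocated on the node the job landed on.
# Each variable is set by Slurm inside the job; outside a job the
# ${VAR:-unset} fallback prints "unset" instead of an empty string.
echo "Node list:     ${SLURM_JOB_NODELIST:-unset}"
echo "CPUs on node:  ${SLURM_CPUS_ON_NODE:-unset}"
echo "CPUs per task: ${SLURM_CPUS_PER_TASK:-unset}"
echo "Tasks:         ${SLURM_NTASKS:-unset}"
```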


I don't see anything wrong with your script. As a test I took the basic
parameters you've outlined and ran an interactive `srun` session,
requesting 1 CPU per task and 4 CPUs per task, and then looked at the
aforementioned variable output within each session. For example, requesting
1 CPU per task:

[drhey at beta-login ~]$ srun --cpus-per-task=1 --ntasks-per-node=1 \
    --partition=standard --mem=1G --pty bash
[drhey at bn19 ~]$ echo $SLURM_CPUS_ON_NODE
1

And again, running the same command but now asking for 4 CPUs per task, then
echoing the env var:

[drhey at beta-login ~]$ srun --cpus-per-task=4 --ntasks-per-node=1 \
    --partition=standard --mem=1G --pty bash
[drhey at bn19 ~]$ echo $SLURM_CPUS_ON_NODE
4
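Another way to confirm the allocation after submission is to look at Slurm's own job record. A minimal sketch (the live `scontrol` call is commented out since it needs a running cluster; the sample line below stands in for its output, with placeholder values):

```shell
# On a real cluster you would run, with your actual job ID:
#   scontrol show job <jobid> | grep -oE 'NumNodes=[0-9]+|NumCPUs=[0-9]+'
# Sample text standing in for live scontrol output (placeholder values):
sample='JobId=9625 JobName=wcnqn NumNodes=1 NumCPUs=1 NodeList=bn19'
echo "$sample" | grep -oE 'NumNodes=[0-9]+|NumCPUs=[0-9]+'
```

If NumCPUs comes back as the full node's CPU count despite `--cpus-per-task=1`, the cluster may be configured to allocate whole nodes (e.g. `SelectType=select/linear`), in which case no sbatch option will get you a single CPU and you'd need to ask your admins.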



On Wed, Feb 13, 2019 at 9:24 PM Wang, Liaoyuan <wangly at alfred.edu> wrote:

> Dear all,
> I wrote an analytic program to analyze my data. The analysis takes around
> twenty days to run over all the data for one species. When I submit my job
> to the cluster, it always requests one node instead of one CPU. I am
> wondering how I can request ONLY one CPU using the “sbatch” command? Below
> is my batch file. Any comments and help would be highly appreciated.
> Appreciatively,
> Leon
> ================================================
> #!/bin/sh
> #SBATCH --ntasks=1
> #SBATCH --cpus-per-task=1
> #SBATCH -t 45-00:00:00
> #SBATCH -J 9625%j
> #SBATCH -o 9625.out
> #SBATCH -e 9625.err
> /home/scripts/wcnqn.auto.pl
> ===========================================
> Where wcnqn.auto.pl is my program. 9625 denotes the species number.

David Rhey
Advanced Research Computing - Technology Services
University of Michigan
