[slurm-users] [ext] Re: [External] Defining a default --nodes=1

Holtgrewe, Manuel manuel.holtgrewe at bihealth.de
Sun May 10 16:28:26 UTC 2020


Hi,

thank you for correcting my misconception. I'll check that out. You are probably right.

Cheers,
Manuel

--
Dr. Manuel Holtgrewe, Dipl.-Inform.
Bioinformatician
Core Unit Bioinformatics – CUBI
Berlin Institute of Health / Max Delbrück Center for Molecular Medicine in the Helmholtz Association / Charité – Universitätsmedizin Berlin

Visiting Address: Invalidenstr. 80, 3rd Floor, Room 03 028, 10117 Berlin
Postal Address: Chariteplatz 1, 10117 Berlin

E-Mail: manuel.holtgrewe at bihealth.de
Phone: +49 30 450 543 607
Fax: +49 30 450 7 543 901
Web: cubi.bihealth.org  www.bihealth.org  www.mdc-berlin.de  www.charite.de
________________________________
From: slurm-users [slurm-users-bounces at lists.schedmd.com] on behalf of Michael Robbert [mrobbert at mines.edu]
Sent: Friday, May 08, 2020 17:43
To: Slurm User Community List
Subject: [ext] Re: [slurm-users] [External] Defining a default --nodes=1

Manuel,
You may want to instruct your users to use ‘-c’ or ‘—cpus-per-task’ to define the number of cpus that they need. Please correct me if I’m wrong, but I believe that will restrict the jobs to a singe node whereas ‘-n’ or ‘—ntasks’ is really for multi process jobs which can be spread amongst multiple nodes.

Mike

From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of "Holtgrewe, Manuel" <manuel.holtgrewe at bihealth.de>
Reply-To: Slurm User Community List <slurm-users at lists.schedmd.com>
Date: Friday, May 8, 2020 at 03:28
To: "slurm-users at lists.schedmd.com" <slurm-users at lists.schedmd.com>
Subject: [External] [slurm-users] Defining a default --nodes=1


Dear all,

we're running a cluster where the large majority of jobs use multi-threading and no message passing. Sometimes jobs requesting more than one CPU are scheduled across more than one node (which would be fine for MPI jobs, of course, but not for multi-threaded ones).

Is it possible to automatically set "--nodes=1" for all jobs outside of the "mpi" partition (which we set up for message-passing jobs)?
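For example (just a sketch, with made-up partition and node names), I imagine something like capping the non-MPI partitions at a single node in slurm.conf, so that the limit is enforced without users having to pass "--nodes=1" themselves:

    # hypothetical partition definitions in slurm.conf
    PartitionName=medium Nodes=cn[001-100] MaxNodes=1 Default=YES State=UP
    PartitionName=mpi    Nodes=cn[001-100] State=UP

or some equivalent default applied at submission time.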

Thank you,
Manuel

--
Dr. Manuel Holtgrewe, Dipl.-Inform.
Bioinformatician
Core Unit Bioinformatics – CUBI
Berlin Institute of Health / Max Delbrück Center for Molecular Medicine in the Helmholtz Association / Charité – Universitätsmedizin Berlin

Visiting Address: Invalidenstr. 80, 3rd Floor, Room 03 028, 10117 Berlin
Postal Address: Chariteplatz 1, 10117 Berlin

E-Mail: manuel.holtgrewe at bihealth.de
Phone: +49 30 450 543 607
Fax: +49 30 450 7 543 901
Web: cubi.bihealth.org  www.bihealth.org  www.mdc-berlin.de  www.charite.de