[slurm-users] Submit Slurm Job in PE environment

Anson Abraham anson.abraham at gmail.com
Thu Jun 21 09:22:34 MDT 2018


Thanks.
However, I'm on Ubuntu, so I installed it with apt install slurm-llnl and
slurm-llnl-basic-plugins{-dev}.

I added the con_res entry; however, the controller can't find the
select/con_res plugin.  Is there something else I need to install to get
the plugin recognized?
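
For reference, this is roughly how I've been checking it (the paths and
package names below assume the Ubuntu slurm-llnl package layout, so they
may differ):

    # what the controller is configured to load
    grep -i '^SelectType' /etc/slurm-llnl/slurm.conf

    # which select plugins the Ubuntu plugin package actually ships
    dpkg -L slurm-llnl-basic-plugins | grep select_

    # then restart/reconfigure the controller as suggested below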

On Wed, Jun 20, 2018 at 2:49 AM Thomas HAMEL <hmlth at t-hamel.fr> wrote:

> Hello,
>
> Slurm can behave much like what you were used to on SGE; it's not the
> default, but it's very commonly used. It's called Consumable Resources:
> instead of allocating full nodes, you allocate resources on a node,
> such as cores or memory. It's enabled in slurm.conf (and takes effect
> after restarting or reconfiguring the controller):
>
> SelectType=select/cons_res
> SelectTypeParameters=CR_Core_Memory
>
> https://slurm.schedmd.com/cons_res.html
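>
> For example, something along these lines (node names, core counts and
> memory are placeholders for your three 8-core machines; slurmd -C on
> each node prints the real values):
>
>     SelectType=select/cons_res
>     SelectTypeParameters=CR_Core_Memory
>     NodeName=node[1-3] CPUs=8 RealMemory=16000 State=UNKNOWN
>     PartitionName=main Nodes=node[1-3] Default=YES State=UP
>
> With CR_Core_Memory, memory is consumable too, so the nodes need a
> RealMemory value and a DefMemPerCPU default can be useful.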
>
> MPI serves a different purpose and doesn't really fit your model. The
> solution to your issue is Consumable Resources, but you could also take
> a look at Job Arrays to group all the jobs of your loop together.
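>
> For example, a job array version could look like this (a sketch only;
> the script name and file list are placeholders for your setup):
>
>     #!/bin/bash
>     #SBATCH --job-name=myfiles
>     #SBATCH --ntasks=1
>     #SBATCH --cpus-per-task=1
>     #SBATCH --array=0-23
>
>     # pick the input file for this array task
>     FILES=(data/*)
>     ./script.sh "${FILES[$SLURM_ARRAY_TASK_ID]}"
>
> You submit that once with sbatch, and each array task asks for a single
> core, so with cons_res your three 8-core nodes can run all 24 at once.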
>
> Regards,
>
> Thomas HAMEL
>
>
> On 19/06/2018 23:33, Anson Abraham wrote:
> > Hi,
> > I'm relatively new to Slurm; I've mostly been using Sun Grid Engine.
> > I have a cluster of 3 machines, each with 8 cores.  In SGE I allocate
> > the PE slots per machine, so if I submit 24 jobs it will run all 24
> > at once (because each job uses 1 core).
> > However, if I submit jobs in Slurm through sbatch, I can only get it to
> > run 3 jobs at a time, even when I define --cpus-per-task.  I was
> > told to use Open MPI for this.
> > I'm not familiar with Open MPI, so I did an apt install of libopenmpi-dev.
> >
> > Do I have to loop through my job submissions with mpirun
> > and run an sbatch outside of it?
> > Again, I'm still new to this, and with SGE it was pretty straightforward;
> > all I had to do was:
> >
> >     loop through files
> >         qsub -N {name of job} script.sh {filename}
> >
> > I'm not sure how I would do that here.
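> >
> > Roughly, what I'm hoping the Slurm side can look like is something
> > along these lines (placeholder paths):
> >
> >     for f in data/*; do
> >         sbatch --job-name="$(basename "$f")" --ntasks=1 --cpus-per-task=1 script.sh "$f"
> >     done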
>
>