[slurm-users] Chaining srun commands
Jake Jellinek
jakejellinek at outlook.com
Tue Feb 28 22:25:53 UTC 2023
Hi Brian
Thanks for your response
> I am guessing you are using srun to get an interactive session on a node. That approach is being deprecated; salloc now gives you a shell by default.
This is exactly what I'm trying to do. I didn't know about the salloc behaviour.
Let me do some more testing and I'll see if you've just resolved my issue.
Will be in touch very soon
Jake
-----Original Message-----
From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of Brian Andrus
Sent: 28 February 2023 18:47
To: slurm-users at lists.schedmd.com
Subject: Re: [slurm-users] Chaining srun commands
Jake,
It may help more to understand what you are trying to accomplish, rather than how to do it the way you expect.
I am guessing you are using srun to get an interactive session on a node. That approach is being deprecated; salloc now gives you a shell by default.
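As a minimal sketch of that salloc approach (the resource values here are placeholders, not anything from this thread):

```shell
# Request a one-node interactive allocation; salloc starts a shell
# under the allocation once it is granted. With recent Slurm and
# LaunchParameters=use_interactive_step set in slurm.conf, that
# shell runs on the allocated compute node itself.
salloc --nodes=1 --cpus-per-task=4 --mem=8G --time=01:00:00
```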
If you are trying to start new jobs on other nodes, you would want to use salloc/sbatch to launch them.
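For that case, a sketch of submitting an independent job from inside an interactive session (the resource values are placeholders):

```shell
# sbatch creates a new, separate allocation with its own resource
# requirements, so it can land on a different node type; it does not
# reuse the allocation you are currently sitting in.
sbatch --nodes=1 --mem=4G --wrap="hostname"
```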
If you want multiple nodes in a single job, IIRC, you would request them with the initial salloc and then pass options to srun to launch job steps appropriately.
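A sketch of that multi-node pattern, assuming a two-node allocation is what you want:

```shell
# Request both nodes up front...
salloc --nodes=2
# ...then, inside the resulting shell, place job steps with srun.
# --nodes/--ntasks scope each step to a slice of the allocation;
# --exact keeps the steps from sharing the same CPUs.
srun --nodes=1 --ntasks=1 --exact hostname &
srun --nodes=1 --ntasks=1 --exact hostname &
wait
```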
What specifically do you want to get (resource-wise) and how do you want to use them?
Brian Andrus
On 2/28/2023 9:49 AM, Jake Jellinek wrote:
> Hi all
>
> I come from a SGE/UGE background and am used to the convention that I can qrsh to a node and, from there, start a new qrsh to a different node with different parameters.
> I've tried this with Slurm and found that this doesn’t work the same.
>
> For example, if I issue an 'srun' command, I get a new node.
> However, if I then try to start a new srun session for a different node type (with different resource requirements), it just puts me back on the same box.
>
> I did find a post from 12 years ago that suggested that this was by design but am hoping that this has now changed or that there is a config option which turns off this feature.
>
> Thank you
> Jake