[slurm-users] salloc with bash scripts problem

Terry Jones terry at jon.es
Wed Jan 2 10:20:39 MST 2019


I know very little about how Slurm works, but this sounds like a
configuration issue: the cluster apparently hasn't been configured in a
way that marks the login node as unavailable for compute jobs. When I run
salloc on the cluster I use, I *always* get a shell on a compute node,
never on the login node where I ran salloc.
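
For reference, a sketch of what such a configuration might look like in
slurm.conf. The node and partition names here are hypothetical, not taken
from any cluster in this thread:

```
# Hypothetical slurm.conf fragment: only the compute nodes are declared
# and placed in the partition, so salloc/srun allocations can only land
# on node001-node004. The login node is simply never named in any
# partition, so Slurm will not schedule jobs onto it.
NodeName=node[001-004] CPUs=16 State=UNKNOWN
PartitionName=batch Nodes=node[001-004] Default=YES MaxTime=INFINITE State=UP
```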

Terry
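
One way to see the distinction discussed further down in the thread: Slurm
sets SLURM_JOB_ID inside every allocation, so a launch script can check
whether it is already inside one and, if not, hand itself off to srun. This
is only a sketch under that assumption, not a tested recipe, and the
commented-out srun line assumes an X11 SPANK plugin is available for the
--spankx11 style of forwarding mentioned below:

```shell
#!/bin/sh
# Sketch: run the real work on a compute node rather than the login node.
# SLURM_JOB_ID is set by Slurm for every job; it is absent in a plain
# login shell.
if [ -z "${SLURM_JOB_ID:-}" ]; then
    echo "not inside an allocation"
    # Not inside an allocation: re-launch this same script on a compute
    # node (uncomment on a real cluster; flags depend on local config):
    # exec srun --pty "$0" "$@"
else
    echo "inside allocation ${SLURM_JOB_ID}"
    # qemu-system-x86_64 ...   # the real work would go here
fi
```

Running it twice, once from a login shell and once from inside an
allocation, shows the two branches.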


On Wed, Jan 2, 2019 at 4:56 PM Mahmood Naderan <mahmood.nt at gmail.com> wrote:

> Currently, users run "salloc --spankx11 ./qemu.sh", where qemu.sh is a
> script that runs a qemu-system-x86_64 command.
> When user (1) runs that command, qemu runs on the login node, since that
> is where the user is logged in. When user (2) runs the same command, their
> qemu process also ends up on the login node, and so on.
>
> That is not what I want!
> I expected slurm to dispatch the jobs on compute nodes.
>
>
> Regards,
> Mahmood
>
>
>
>
> On Wed, Jan 2, 2019 at 7:39 PM Renfro, Michael <Renfro at tntech.edu> wrote:
>
>> Not sure what the reasons are behind “have to manually ssh to a node”,
>> but salloc and srun can be used to allocate resources and run commands on
>> the allocated resources:
>>
>> Before allocation, regular commands run locally, and no Slurm-related
>> variables are present:
>>
>> =====
>>
>> [renfro at login ~]$ hostname
>> login
>> [renfro at login ~]$ echo $SLURM_TASKS_PER_NODE
>>
>>
>> =====
>>
>> After allocation, regular commands still run locally, Slurm-related
>> variables are present, and srun runs commands on the allocated node (my
>> prompt change inside a job is a local thing, not done by default):
>>
>> =====
>>
>> [renfro at login ~]$ salloc
>> salloc: Granted job allocation 147867
>> [renfro at login(job 147867) ~]$ hostname
>> login
>> [renfro at login(job 147867) ~]$ echo $SLURM_TASKS_PER_NODE
>> 1
>> [renfro at login(job 147867) ~]$ srun hostname
>> node004
>> [renfro at login(job 147867) ~]$ exit
>> exit
>> salloc: Relinquishing job allocation 147867
>> [renfro at login ~]$
>>
>> =====
>>
>> Lots of people get interactive shells on a reserved node with some
>> variant of ‘srun --pty $SHELL -I’, which doesn’t require explicitly running
>> salloc or ssh, so what are you trying to accomplish in the end?
>>
>> --
>> Mike Renfro, PhD / HPC Systems Administrator, Information Technology
>> Services
>> 931 372-3601     / Tennessee Tech University
>>
>> > On Jan 2, 2019, at 9:24 AM, Mahmood Naderan <mahmood.nt at gmail.com>
>> wrote:
>> >
>> > I want to know if there is any way to push the node selection onto
>> Slurm, rather than leaving it as a manual step done by the user.
>> > Currently, I have to manually ssh to a node and try to "allocate
>> resources" using salloc.
>> >
>> >
>> > Regards,
>> > Mahmood
>>
>>

