[slurm-users] [External] Re: What is an easy way to prevent users run programs on the master/login node.

Prentice Bisbal pbisbal at pppl.gov
Tue Apr 27 15:48:24 UTC 2021


But won't that first process still be able to use 100% of a core? What if
enough users do this that every core is at 100% utilization? Or, what if
the application is MPI + OpenMP? In that case, that one process on the
login node could spawn multiple threads and use up the remaining cores.
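
To make that concrete, a single rank left on the login node could do
something like the following (./openmp_app is just a placeholder for any
OpenMP binary) and keep every core busy:

    OMP_NUM_THREADS=$(nproc) ./openmp_app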

Prentice

On 4/26/21 2:01 AM, Marcus Wagner wrote:
> Hi,
>
> we also have a wrapper script, together with a number of "MPI-Backends".
> If mpiexec is called on the login nodes, only the first process is 
> started on the login node, the rest runs on the MPI backends.
>
> Best
> Marcus
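
If I follow the idea, the effect is similar to an Open MPI hostfile that
gives the login node a single slot, for example (host names and ./mpi_app
are made up; this is only a sketch of the concept, not Marcus's actual
wrapper):

    # hostfile: one slot on the login node, the rest on the MPI backends
    login01    slots=1
    backend01  slots=16
    backend02  slots=16

    mpiexec --hostfile hostfile --map-by slot -np 33 ./mpi_app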
>
> On 25.04.2021 at 09:46, Patrick Begou wrote:
>> Hi,
>>
>> I also saw a cluster setup where mpirun or mpiexec commands were
>> replaced by a shell script just saying "please use srun or sbatch...".
>>
>> Patrick
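
A minimal sketch of such a replacement script (the install path and the
exact wording are invented for illustration):

    #!/bin/sh
    # installed as /usr/local/bin/mpirun on the login nodes only
    echo "mpirun is disabled on the login nodes." >&2
    echo "Please submit your job with srun or sbatch instead." >&2
    exit 1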
>>
>>> On 24/04/2021 at 10:03, Ole Holm Nielsen wrote:
>>> On 24-04-2021 04:37, Cristóbal Navarro wrote:
>>>> Hi Community,
>>>> I have a set of users who are still not so familiar with Slurm, and
>>>> yesterday they bypassed srun/sbatch and just ran their CPU program
>>>> directly on the head/login node, thinking it would still run on the
>>>> compute node.
>>>> I am aware that I will need to teach them some basic usage, but in
>>>> the meantime, how have you solved this type of user-behavior
>>>> problem? Is there a preferred way to restrict the master/login
>>>> resources, or actions, to regular users?
>>>
>>> We set limits in /etc/security/limits.conf so users can't run very
>>> long or very large tasks on the login nodes:
>>>
>>> # Normal user limits
>>> *               hard    cpu             20
>>> *               hard    rss             50000000
>>> *               hard    data            50000000
>>> *               soft    stack           40000000
>>> *               hard    stack           50000000
>>> *               hard    nproc           250
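>>> #
>>> # (per limits.conf(5): cpu is CPU time in minutes; rss, data and stack
>>> #  are in KB; nproc is a maximum process count)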
>>>
>>> /Ole
>>>
>>
>>
>


