[slurm-users] [External] Re: PropagateResourceLimits

Prentice Bisbal pbisbal at pppl.gov
Thu Apr 29 16:35:52 UTC 2021


On 4/28/21 2:26 AM, Diego Zuccato wrote:

> On 27/04/2021 17:31, Prentice Bisbal wrote:
>
>> I don't think PAM comes into play here. Since Slurm is starting the 
>> processes on the compute nodes as the user, etc., PAM is being bypassed.
> Then maybe slurmd somehow goes through the PAM stack another way, 
> since limits on the frontend got propagated (as implied by the 
> PropagateResourceLimits default value of ALL).
> And I can confirm that setting it to NONE seems to have solved the 
> issue: users on the frontend get limited resources, and jobs on the 
> nodes get the resources they asked for.
>
In this case, Slurm is deliberately looking at the resource limits in 
effect when the job is submitted on the submission host, and then 
copying them into the job's environment. From the slurm.conf 
documentation (https://slurm.schedmd.com/slurm.conf.html):

> *PropagateResourceLimits*
>     A comma-separated list of resource limit names. The slurmd daemon
>     uses these names to obtain the associated (soft) limit values from
>     the user's process environment on the submit node. These limits
>     are then propagated and applied to the jobs that will run on the
>     compute nodes.
>
Then later on, it indicates that all resource limits are propagated by 
default:

> The following limit names are supported by Slurm (although some 
> options may not be supported on some systems):
>
> *ALL*
>     All limits listed below (default)
>
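As an aside, the fix Diego mentions is just a one-line slurm.conf change. 
A minimal sketch, assuming you either want to stop propagation entirely or 
only exclude a specific limit (the exact limit names depend on your site):

    # slurm.conf -- do not propagate any submit-host limits to jobs
    PropagateResourceLimits=NONE

    # ...or, alternatively, propagate everything except a given limit:
    #PropagateResourceLimitsExcept=MEMLOCK

Push the updated file out to the nodes and re-read the configuration 
(e.g. with 'scontrol reconfigure') afterwards.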
You should be able to verify the default propagation behavior yourself 
in the following manner:

1. Start two separate shells on the submission host

2. Change a limit in one of the shells. For example, reduce the core 
file size to 0 with 'ulimit -c 0' in just one shell.

3. Then run 'srun bash -c "ulimit -a"' from each shell (ulimit is a 
shell builtin, so srun has to launch a shell to run it).

4. Compare the output. The shell where you lowered the limit should 
show that the core file size is now zero on the compute node as well 
(a sample session follows these steps).
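To make step 3 concrete, here is roughly the session I have in mind (the 
'unlimited' value is just an illustrative default; your nodes may report 
something else):

    # Shell A -- limits left untouched
    $ srun bash -c 'ulimit -c'
    unlimited

    # Shell B -- core file size lowered before submitting
    $ ulimit -c 0
    $ srun bash -c 'ulimit -c'
    0

With PropagateResourceLimits=ALL (the default), the two outputs differ as 
above; with NONE, both shells should report whatever limits the compute 
node itself sets.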

--

Prentice


