Ole Holm Nielsen Ole.H.Nielsen@fysik.dtu.dk writes:
> Hi Bjørn-Helge,
>
> That sounds interesting, but which limit might affect the kernel's fs.file-max? For example, a user already has a narrow limit:
>
> ulimit -n 1024
>
> AFAIK, the fs.file-max limit is a node-wide limit, whereas "ulimit -n" is per user.
Now that I think of it, fs.file-max of 65536 seems *very* low. On our CentOS 7-based clusters, we have on the order of tens of millions, and on our Rocky 9-based clusters, we have 9223372036854775807(!).
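FWIW, you can see both the node-wide limit and the current usage with something like this (the numbers on your nodes will of course differ):

  sysctl fs.file-max
  cat /proc/sys/fs/file-nr    # three fields: allocated handles, allocated-but-unused, and the fs.file-max limit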
Also, a per-user limit of 1024 seems low to me; I think we have on the order of 200K files per user on most clusters.
But if you have ulimit -n == 1024, then no user should be able to hit the fs.file-max limit, even if it is 65536. (Technically, 96 jobs from 96 users each trying to open 1024 files would do it, though, since 96 × 1024 = 98304 > 65536.)
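If you want a rough idea of how many files are actually open across the whole node (and thus how close you are to fs.file-max), a sketch like this should do; it just walks /proc, so run it as root to see every process (the first field of /proc/sys/fs/file-nr gives much the same number):

  # total open fds on the node vs. the node-wide limit
  total=0
  for d in /proc/[0-9]*/fd; do
      total=$((total + $(ls "$d" 2>/dev/null | wc -l)))
  done
  echo "open fds: $total   fs.file-max: $(cat /proc/sys/fs/file-max)"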
> whereas the permitted number of user processes is a lot higher:
>
> ulimit -u 3092846
I guess any process will have at least a few files open, and I believe those count against that process's ulimit -n (and against the node-wide fs.file-max).
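Just to illustrate: even a bare shell has a handful of fds open, and you can see both the count and the per-process limit directly, e.g. for your current shell:

  ls /proc/$$/fd | wc -l                  # fds the shell has open right now (at least 0, 1 and 2)
  grep 'Max open files' /proc/$$/limits   # the soft/hard nofile limit for this process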
> I'm not sure how the number 3092846 got set, since it's not defined in /etc/security/limits.conf. The "ulimit -u" varies quite a bit among our compute nodes, so which dynamic service might affect the limits?
There is a vague thing in my head saying that I've looked into this before, and found that the default value depended on the amount of RAM in the machine. But the vague thing might of course be lying to me. :)
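If I remember correctly, the kernel sizes kernel.threads-max from the amount of RAM at boot, and the default ulimit -u ends up as roughly half of that, so on a node where nothing (limits.conf, systemd, Slurm, ...) overrides it, you might compare:

  cat /proc/sys/kernel/threads-max    # scaled from RAM at boot
  ulimit -u                           # if the kernel default applies, roughly threads-max / 2

But as said, take that with a grain of salt.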