[slurm-users] Cgroup file write content error in 20.11.7

smmzkd at mail.ustc.edu.cn smmzkd at mail.ustc.edu.cn
Tue May 25 13:01:40 UTC 2021


Hi all,
I have upgraded my cluster to version 20.11.7. However, the cgroup support no longer seems to work. In my log files I can see:


[2021-05-25T20:21:44.185] [18.0] debug:  xcgroup.c:1366: _file_write_content: safe_write (11 of 11) failed: Operation not permitted
[2021-05-25T20:21:44.185] [18.0] error: _file_write_content: unable to write 11 bytes to cgroup /sys/fs/cgroup/devices/slurm/uid_0/job_18/step_0/devices.allow: Operation not permitted
[2021-05-25T20:21:44.185] [18.0] debug2: xcgroup_set_param: unable to set parameter 'devices.allow' to 'c 195:1 rwm' for '/sys/fs/cgroup/devices/slurm/uid_0/job_18/step_0'
[2021-05-25T20:21:44.185] [18.0] debug2: task/cgroup: _cgroup_create_callback: Default access allowed to device c 195:0 rwm(/dev/nvidia0) for step
[2021-05-25T20:21:44.185] [18.0] debug:  xcgroup.c:1366: _file_write_content: safe_write (11 of 11) failed: Operation not permitted
[2021-05-25T20:21:44.185] [18.0] error: _file_write_content: unable to write 11 bytes to cgroup /sys/fs/cgroup/devices/slurm/uid_0/job_18/step_0/devices.allow: Operation not permitted
[2021-05-25T20:21:44.185] [18.0] debug2: xcgroup_set_param: unable to set parameter 'devices.allow' to 'c 195:0 rwm' for '/sys/fs/cgroup/devices/slurm/uid_0/job_18/step_0'
[2021-05-25T20:21:44.185] [18.0] debug2: task/cgroup: _cgroup_create_callback: Default access allowed to device c 195:255 rwm(/dev/nvidiactl) for step


My package build command was:
rpmbuild -ta --with slurmrestd --with mysql --with cgroup --with lua "slurm-20.11.7.tar.bz2"
SlurmUser in my slurm.conf is root.


My slurm.conf and cgroup.conf are the same as before the upgrade, when everything worked well.
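For context, the device-constraint part of a cgroup.conf typically looks like the fragment below. This is an illustrative sketch of common settings, not a copy of my actual file:

```
# cgroup.conf (illustrative fragment)
CgroupAutomount=yes
# ConstrainDevices=yes is what makes slurmstepd write to devices.allow,
# the operation that is now failing with "Operation not permitted".
ConstrainDevices=yes
ConstrainCores=yes
ConstrainRAMSpace=yes
```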


Does anyone have advice on how to solve this problem? I have tried every method I could think of. Thanks a lot!

