Ugh, I think I did not catch up with the docs.

I started with a system that defaults to cgroup v1, but the Slurm doc for that plugin was NOT available at the time, so I converted everything to cgroup v2.

It appears that both are supported, and that documentation issue is more on the dev side than the admin side.

Thanks for pointing that out. I misinterpreted the "coming soon" note on the cgroup v1 plugin and the "legacy" naming as "do not use". It should be fine.

Thu, Mar 27, 2025 at 0:48 Williams, Jenny Avis <jenny_williams@unc.edu>:

“ … As cgroup is likely not supposed to be used in newer deployments of Slurm.”


I am curious about this statement. Would someone expand on it, either to support or to counter it?


Jenny Williams

UNC Chapel Hill


From: Shunran Zhang via slurm-users <slurm-users@lists.schedmd.com>
Sent: Wednesday, March 26, 2025 10:52 AM
To: Gestió Servidors <sysadmin.caos@uab.cat>
Cc: Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: [slurm-users] Re: Using more cores/CPUs that requested with


If you are letting systemd take over most things, you have systemd-cgtop, which works better than top for your case. There is also systemd-cgls for a non-interactive listing.
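For example, something along these lines gives you scriptable snapshots (a sketch; it assumes systemd is PID 1 on the node, and the `|| echo` fallbacks are only there so the snippet degrades gracefully where systemd is absent):

```shell
# One batch-mode snapshot of per-cgroup CPU/memory/IO usage
# (no interactive UI, suitable for logs and scripts).
systemd-cgtop --batch --iterations=1 || echo "systemd-cgtop unavailable"

# Non-interactive tree of cgroups and the processes inside them;
# limiting it to system.slice keeps the output short.
systemd-cgls --no-pager /system.slice || echo "systemd-cgls unavailable"
```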


Also, would you mind checking whether you are using cgroup2? Looking at the mount output for your cgroup filesystem would suffice. As cgroup is likely not supposed to be used in newer deployments of Slurm.
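Concretely, a check along these lines should tell the two versions apart (a sketch; the paths are the standard locations):

```shell
# On a pure cgroup v2 (unified) system there is a single cgroup2 mount
# at /sys/fs/cgroup; on v1 you see one cgroup mount per controller.
grep cgroup /proc/mounts || echo "no cgroup mounts found"

# The filesystem type is the quickest tell:
# "cgroup2fs" means v2; "tmpfs" here usually means a v1/hybrid layout.
stat -fc %T /sys/fs/cgroup 2>/dev/null || echo "no /sys/fs/cgroup"
```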


Wed, Mar 26, 2025 at 17:14 Gestió Servidors via slurm-users <slurm-users@lists.schedmd.com>:

Hello,


Thanks for your answers. I will try now!! One more question: is there any way to check whether cgroup restrictions are working fine during a "running" job or during the Slurm scheduling process?


Thanks again!



--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-leave@lists.schedmd.com