
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
3595 views
Cannot set memory.memsw.limit_in_bytes in cgroup on Ubuntu server using cgm
I'm trying to limit resource usage for a cgroup without having root access. I can set memory.limit_in_bytes using
cgm setvalue memory memory.limit_in_bytes 150G
but I cannot set memory.memsw.limit_in_bytes the same way, regardless of whether memsw.limit_in_bytes is greater than memory.limit_in_bytes. (Setting it is necessary, because the memsw option sets the maximum memory + swap limit.) All I receive is:
Error org.freedesktop.DBus.Error.InvalidArgs: invalid request
Any ideas?
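For comparison, on a v1 memory hierarchy with direct write access the two limits are order-sensitive; a minimal sketch, assuming root and a hypothetical group named mygroup:
# memory.memsw.limit_in_bytes must stay >= memory.limit_in_bytes, so set the
# plain memory limit first, then the memory+swap limit
echo 150G > /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes
echo 160G > /sys/fs/cgroup/memory/mygroup/memory.memsw.limit_in_bytes
# note: the memory.memsw.* files only exist when swap accounting is enabled
# (e.g. the swapaccount=1 boot parameter on many distributions)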
trevore (149 rep)
Jan 10, 2016, 01:21 PM • Last activity: Aug 4, 2025, 12:05 AM
2 votes
0 answers
25 views
cgroup / cpu scheduler tuning questions, cpu pressure & uclamp behavior
I've been working on this for a few days and I'm scratching my head. The kernel docs for cgroups, pressure stall information, and the scheduler have not helped me shed any light on this so far, so I am hoping you can help. I have 3 related processes which run in their own cgroup. It is a root partition and the only partition which may use cpus 3-10. One application runs on CPU 11 and all other processes run on cpus 0-2.
$ cat cpuset.cpus.effective
3-10
Aside from the cpuset the other cgroup properties are currently defaults. It's just using the normal SCHED_OTHER scheduler class right now.
$ for PROC in $(cat cgroup.procs); do chrt -p $PROC; done
pid 16049's current scheduling policy: SCHED_OTHER
pid 16049's current scheduling priority: 0
pid 16058's current scheduling policy: SCHED_OTHER
pid 16058's current scheduling priority: 0
pid 16059's current scheduling policy: SCHED_OTHER
pid 16059's current scheduling priority: 0
The cgroup is never getting throttled
$ cat cpu.stat
usage_usec 1724410414
user_usec 737614077
system_usec 986796337
nr_periods 0
nr_throttled 0
throttled_usec 0
Yet somehow cpu.pressure is full ~1% of the time
$ cat cpu.pressure
some avg10=6.85 avg60=5.87 avg300=3.98 total=45578161
full avg10=1.00 avg60=0.73 avg300=0.27 total=9479354
If I change cpu.uclamp.min to max:
- full drops to 0%
- some drops to ~3%
- the main application in the group drops from ~156% CPU usage to ~100%
- average CPU utilization drops (the drop in the graph below is when I set cpu.uclamp.min to max)
[Graph: lower CPU utilization once cpu.uclamp.min = max]
$ cat cpu.pressure
some avg10=6.85 avg60=5.87 avg300=3.98 total=45578161
full avg10=1.00 avg60=0.73 avg300=0.27 total=9479354

$ echo max > cpu.uclamp.min

# wait a lil bit

$ cat cpu.pressure
some avg10=3.00 avg60=3.18 avg300=4.09 total=61530804
full avg10=0.00 avg60=0.06 avg300=0.32 total=12734850
No CPU core is ever fully loaded in either case, and it isn't being throttled, so I'm really confused how CPU pressure can ever have a nonzero value for full - wouldn't that mean there are CPU cores sitting idle while threads are not being scheduled?
# with cpu.uclamp.min = 0
CPU [22%@1971,16%@1971,17%@1971,4%@1971,39%@729,38%@729,40%@729,36%@729,36%@729,37%@729,37%@729,10%@729]

# with cpu.uclamp.min = max
CPU [26%@729,14%@1971,21%@729,2%@1971,29%@806,26%@1971,26%@1971,27%@1971,20%@729,25%@729,25%@729,7%@727]
I am assuming that setting the minimum uclamp value causes the scheduler to prioritize scheduling this cgroup's threads, but given that the group has exclusive access to the cores it runs on, and no core in the system is fully utilized, I'm struggling to understand the exact mechanism at play here. There's no memory or io pressure system-wide with either cpu.uclamp.min setting. The system has 47 GiB of free memory, ~5 GiB in use, and ~1.5 GiB as caches.
1. How can I have CPU pressure when none of my CPUs are full, and there's no memory or io pressure?
2. Why does changing the uclamp.min value result in lower CPU usage when the system has excess CPU, memory, and IO resources available? (Especially since no other process is allowed to use those cores either way.)
3. Are there other scheduler classes or settings I can use to tune the performance so that my processes aren't waiting for CPU while my cores sit idle?
4. Any other ways I can debug what the bottleneck is?
Edit: Additional details - I'm running a 5.15.148 kernel on aarch64 using a (customized) poky-based yocto image.
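One way to see how the uclamp change affects per-cgroup stall time is to poll cpu.pressure while toggling the knob; a minimal sketch, assuming the cgroup lives at the hypothetical path /sys/fs/cgroup/mygroup:
CG=/sys/fs/cgroup/mygroup
# one timestamped PSI sample per second; "some" is time when at least one task
# was stalled on CPU, "full" is time when all non-idle tasks were stalled at once
while sleep 1; do date +%T; cat "$CG/cpu.pressure"; done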
tbot (21 rep)
Aug 1, 2025, 07:52 PM • Last activity: Aug 1, 2025, 07:56 PM
0 votes
0 answers
39 views
I'm trying to resolve "Failed to open cgroup2 by ID" from my socket statistics "ss"
I'm learning to investigate my socket statistics, so I do:
sudo ss -tulerp
I get the following in the output:
Failed to open cgroup2 by ID
Failed to open cgroup2 by ID
Failed to open cgroup2 by ID
Failed to open cgroup2 by ID
Failed to open cgroup2 by ID
Failed to open cgroup2 by ID
udp UNCONN 0 0 0.0.0.0:rpc.nlockmgr 0.0.0.0:* ino:9653 sk:379 cgroup:unreachable:1696
udp UNCONN 0 0 [::]:34245 [::]:* ino:14892 sk:387 cgroup:unreachable:1696 v6only:1
tcp LISTEN 0 64 0.0.0.0:rpc.nfs 0.0.0.0:* ino:7020 sk:395 cgroup:unreachable:1696
tcp LISTEN 0 64 0.0.0.0:rpc.nlockmgr 0.0.0.0:* ino:9654 sk:398 cgroup:unreachable:1696
tcp LISTEN 0 64 [::]:rpc.nfs [::]:* ino:9648 sk:39c cgroup:unreachable:1696 v6only:1
tcp LISTEN 0 64 [::]:34827 [::]:* ino:3924 sk:39d cgroup:unreachable:1696 v6only:1
I try to close port 34827 with:
sudo ss -K dport = 34827
but it just fails silently. I assume each "Failed to open cgroup2 by ID" corresponds to one of the "cgroup:unreachable" entries. What is happening? And how do I resolve this? This is on Ubuntu 22.04, in case it is relevant.
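For what it's worth, the number after cgroup:unreachable: appears to be the cgroup's inode number on the cgroup2 filesystem, so it can be looked up directly; a minimal sketch:
# if this prints a directory, the owning cgroup still exists; if it prints
# nothing, the socket belongs to a cgroup that has since been removed
sudo find /sys/fs/cgroup -inum 1696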
slowcoder (71 rep)
Jul 30, 2025, 11:14 PM • Last activity: Jul 30, 2025, 11:27 PM
1 vote
0 answers
41 views
Too slow tiered memory demotion and CPU lock-up (maybe) with cgroup v2 memory.high
We are currently testing tiered memory demotion on a machine equipped with a CXL device. To facilitate this, we created a specific script (https://github.com/hyun-sa/comem) and are using the memory.high setting within a cgroup to force memory demotion. These are the commands we used to enable demotion:
echo 1 > /sys/kernel/mm/numa/demotion_enabled
echo 2 > /proc/sys/kernel/numa_balancing
The issue we're facing is that while demotion does occur, it proceeds extremely slowly, even slower than swapping to disk. Furthermore, during a 7-Zip benchmark, we observe a severe drop in CPU utilization, as if some process is causing a lock. This is our running example (7zr b -md25 while memory is limited to 8G by memory.high):
7-Zip (r) 23.01 (x64) : Igor Pavlov : Public domain : 2023-06-20
64-bit locale=C.UTF-8 Threads:128 OPEN_MAX:1024 d25
Compiler: 13.2.0 GCC 13.2.0: SSE2
Linux : 6.15.6 : #1 SMP PREEMPT_DYNAMIC Tue Jul 15 06:39:48 UTC 2025 : x86_64
PageSize:4KB THP:madvise hwcap:2 hwcap2:2
AMD EPYC 9554 64-Core Processor (A10F11)
1T CPU Freq (MHz): 3710 3731 3732 3733 3733 3732 3732
64T CPU Freq (MHz): 6329% 3674 6006% 3495
RAM size: 386638 MB, # CPU hardware threads: 128
RAM usage: 28478 MB, # Benchmark threads: 128

                Compressing                |        Decompressing
Dict     Speed  Usage    R/U  Rating       |     Speed  Usage    R/U  Rating
         KiB/s      %   MIPS    MIPS       |     KiB/s      %   MIPS    MIPS
22:     477942  10925   4256  464943       |   5843081  12451   4001  498193
23:     337115   8816   3896  343480       |   5826376  12606   3999  504053
24:       1785    108   1772    1919       |   5654618  12631   3928  496161
25:        960     63   1739    1097       |   1767869   4606   3415  157287
----------------------------------         |  ------------------------------
Avr:    204451   4978   2916  202860       |   4772986  10573   3836  413924
Tot:             7776   3376  308392

execution_time(ms): 2807639
Is there a potential misunderstanding of how cgroups function or a misconfiguration in my setup that could be causing this behavior? Our machine specifications are as follows:
Mainboard : Supermicro H13SSL-NT
CPU : Epyc 9554 (nps 1)
DRAM : 128G
CXL device : SMART Modular Technologies Device c241
OS : Ubuntu 24.04 LTS
Kernel : Linux 6.15.6
numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 0 size: 128640 MB
node 0 free: 117909 MB
node 1 cpus:
node 1 size: 257998 MB
node 1 free: 257840 MB
node distances:
node   0   1
  0:  10  50
  1: 255  10
Thank you for your help.
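A minimal sketch of the cgroup side of such a test, assuming a v2 hierarchy and a hypothetical group named demo:
mkdir /sys/fs/cgroup/demo
echo 8G > /sys/fs/cgroup/demo/memory.high    # reclaim/throttle above 8G
echo $$ > /sys/fs/cgroup/demo/cgroup.procs   # move this shell (and children) in
7zr b -md25
# demotion activity can be watched from another shell; exact counter names
# vary by kernel version
grep pgdemote /proc/vmstat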
Hyunsa (11 rep)
Jul 22, 2025, 05:43 AM • Last activity: Jul 22, 2025, 06:04 AM
1 votes
1 answer
1025 views
How to add a PID to cgroup.procs with non-root privileges in cgroup v2 on Ubuntu
I created a cgroup in /sys/fs/cgroup called testGrp. I need this cgroup to be controlled by a non-root user, so I changed the ownership of the whole directory.
/sys/fs/cgroup$ sudo chown -R normUser testGrp/
I made sure that all files inside testGrp are owned by the new user normUser. This user can change interface files like io.max normally, but is not permitted to add any PID to cgroup.procs.
/sys/fs/cgroup/testGrp$ ll cgroup.procs 
-rw-r--r-- 1 normUser root 0 Aug 21 14:13 cgroup.procs
/sys/fs/cgroup/testGrp$ whoami
normUser 
/sys/fs/cgroup/testGrp$ echo $$ > cgroup.procs 
bash: echo: write error: Permission denied
I thought that changing the ownership of the cgroup would solve the issue of needing root privileges, but apparently it doesn't. So how can I control the cgroup without using the root user?
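A minimal sketch of the piece that is likely missing: per the delegation containment rules in cgroups(7), the writer also needs write access to cgroup.procs of the nearest common ancestor of the source and destination cgroups, which here is the root cgroup, not just the files inside testGrp:
sudo chown normUser /sys/fs/cgroup/cgroup.procs   # delegate the ancestor's procs file too
echo $$ > /sys/fs/cgroup/testGrp/cgroup.procs     # should now succeed as normUser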
Belal Elkady (13 rep)
Aug 21, 2023, 07:19 PM • Last activity: Jul 5, 2025, 07:39 AM
3 votes
1 answer
2380 views
Setting up cgroups with /etc/cgconfig.conf failed with Cgroup, requested group parameter does not exist
I'm looking at getting cgroups working on my Linux machine, and it's been a pain. I feel like this should be a lot easier, since resource management is pretty key to a healthy desktop environment (which is why I'm trying to use it), but I've run into so many problems. I have an /etc/cgconfig.conf file that looks like this:
group "chromium_slack" {
    perm {
            admin {
                    uid = "nate";
                    gid = "nate";
            }
            task {
                    uid = "nate";
                    gid = "nate";
            }
    }
    cpu {
            shares="50";
    }
    memory {
            swappiness="60";
            limit_in_bytes="256000000";
    }
}
And when I start the cgconfig service with this:
sudo systemctl start cgconfig.service
it fails with Cgroup, requested group parameter does not exist, and the service status looks like this:
× cgconfig.service - Control Group configuration service
     Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Thu 2022-12-15 15:17:16 EST; 11min ago
    Process: 9559 ExecStart=/usr/bin/cgconfigparser -l /etc/cgconfig.conf -s 1664 (code=exited, status=95)
   Main PID: 9559 (code=exited, status=95)
        CPU: 8ms

Dec 15 15:17:16 nx systemd: Starting Control Group configuration service...
Dec 15 15:17:16 nx cgconfigparser: /usr/bin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup, requested group parameter does not exist
Dec 15 15:17:16 nx systemd: cgconfig.service: Main process exited, code=exited, status=95/n/a
Dec 15 15:17:16 nx systemd: cgconfig.service: Failed with result 'exit-code'.
Dec 15 15:17:16 nx systemd: Failed to start Control Group configuration service.
But when I try to do this all manually with cgcreate like so:
sudo cgcreate -a $USER -g memory,cpu:chromium_slack
sudo echo 256M > /sys/fs/cgroup/chromium_slack/memory.limit_in_bytes
I get a permission denied: /sys/fs/cgroup/chromium_slack/memory.limit_in_bytes error. So I guess my question is... How the heck do I get this working?
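Two things worth checking, sketched on the assumption that /sys/fs/cgroup is a v2 mount (the path without a controller prefix suggests it is): the redirection in sudo echo ... > file is opened by the unprivileged shell before sudo runs, and the v1 name memory.limit_in_bytes does not exist on v2:
stat -fc %T /sys/fs/cgroup   # prints cgroup2fs on a v2-only system
echo 256M | sudo tee /sys/fs/cgroup/chromium_slack/memory.max   # v2 equivalent knob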
Nate-Wilkins (133 rep)
Dec 20, 2022, 02:35 AM • Last activity: Jun 29, 2025, 10:00 PM
2 votes
1 answer
86 views
How does a cgroup namespace work?
I’m trying to understand how cgroup namespaces work, but I’m stuck on something that doesn’t make sense to me. My understanding is that a cgroup namespace should virtualize the cgroup hierarchy for a process, so that the process sees its current cgroup as / and doesn’t see the full host hierarchy. So I tried to create a cgroup namespace like this:
sudo unshare --cgroup

cat /proc/self/cgroup
0::/

echo $$
3183
Then, from another terminal on the host, I checked the cgroup for that process:
cat /proc/3183/cgroup 
0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-ffe09412-f0d6-413e-b480-6d14f1290f84.scope
This matches what the man page says:
Cgroup namespaces virtualize the view of a process's cgroups (see cgroups(7)) as seen via /proc/[pid]/cgroup and /proc/[pid]/mountinfo.

Each cgroup namespace has its own set of cgroup root directories.
These root directories are the base points for the relative locations displayed in the corresponding records in the /proc/[pid]/cgroup file.
However, when I create a new cgroup inside my cgroup namespace, it appears in the host’s hierarchy too:
# Inside the namespace:
mkdir /sys/fs/cgroup/test

# On the host:
ls /sys/fs/cgroup/
...
test
...
So it seems that the entire host hierarchy is still visible and any new cgroup I make is visible system-wide. There’s no real isolation — from inside the namespace I can still see and modify all the host’s cgroups. I also tried combining it with a user namespace to avoid sudo but the result is the same:
unshare --map-root-user
unshare --cgroup
ls /sys/fs/cgroup/
Again, I see the full host hierarchy. So my questions are:
- Am I misunderstanding how cgroup namespaces are supposed to work?
- Is the cgroup namespace not designed to isolate the entire hierarchy like mount or PID namespaces do?
- Is there a correct way to use them to limit what cgroups are visible or writable?
Any clarification would be really appreciated!
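For reference, the namespace only virtualizes cgroup paths; the existing cgroupfs mount still shows the host's tree. A mount made inside the namespace is rooted at the namespaced cgroup; a minimal sketch combining cgroup and mount namespaces:
sudo unshare --cgroup --mount bash -c '
  mount -t cgroup2 none /sys/fs/cgroup   # remount inside both new namespaces
  ls /sys/fs/cgroup                      # now shows only the namespaced subtree
'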
Liric Ramer (85 rep)
Jun 27, 2025, 10:22 AM • Last activity: Jun 29, 2025, 09:52 AM
8 votes
1 answer
2826 views
cgroups analogue in Darwin
Is there an analogue to cgroups in Darwin for preventing processes from escaping the control/monitoring of their parent process by means of fork()? If yes, what is it? For some background, consider a process P, a direct descendant Q, and the descendants of Q, R: cgroups allows P to control and monitor Q and R. If P launches Q, but Q spawns a process (r in R), then without something akin to cgroups, P is unable to monitor r. A real-world example of this would be systemd (P) spawning openssh's sshd (Q) as a daemon, which then spawns other instances of sshd (R) to handle each opened session. Without cgroups, systemd would not be able to interact with the per-session sshd's. (In the NT environment, cgroups are analogous to Job objects.)
user314104 (379 rep)
Feb 4, 2014, 01:59 PM • Last activity: Jun 27, 2025, 08:04 PM
1 vote
0 answers
2711 views
Debian 12 - grub gets systemd.unified_cgroup_hierarchy=false by default?
I'm running Debian 12 with kernel 6.9.12-amd64. When grub generates its config file (e.g. via grub-mkconfig), all the Debian entries have systemd.unified_cgroup_hierarchy=false appended in their Linux command. This is *not* present in /etc/default/grub, where GRUB_CMDLINE_LINUX_DEFAULT="quiet" and GRUB_CMDLINE_LINUX="" (no mention of cgroups appears). So where is this coming from? This is a problem, because when I boot now, I get a fatal error from systemd saying > Refusing to run under cgroup v1. SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 not specified on kernel command line. When I add that parameter, it boots, but I get a deprecation message and boot is delayed by 30s. It seems like Debian is preventing using cgroups v2, but that the latest versions of systemd require it. What should I do here? It seems to me like I should switch to cgroups v2 and find a way to not have grub appending systemd.unified_cgroup_hierarchy=false by default, which is why I'm wondering where that's coming from. Is there some danger or downside to enabling cgroups v2? (I also manually removed that line and booted with cgroups v2, things seem fine?)
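A minimal sketch for hunting down where the flag is injected, since grub-mkconfig on Debian also reads drop-in files, not just /etc/default/grub:
grep -r unified_cgroup_hierarchy /etc/default/grub /etc/default/grub.d/ /etc/grub.d/ 2>/dev/null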
davewy (163 rep)
Jul 31, 2024, 03:54 AM • Last activity: May 29, 2025, 08:46 AM
2 votes
1 answer
2165 views
unable to append to cgroup v2 cgroup.subtree_control
I have a Tinker Board 2S (like a Raspberry Pi) running Debian on kernel 4.4.194. I enabled cgroups v2 by adding systemd.unified_cgroup_hierarchy=1 to the /boot/cmdline.txt file, as one is supposed to. The result of ls /sys/fs/cgroup/ is:
cgroup.controllers cgroup.procs cgroup.subtree_control init.scope system.slice user.slice
which seems correct. However, according to this guide, I now need to add cpu and cpuset to cgroup.subtree_control as well, and this is where I am stuck:
echo '+cpu' >> /sys/fs/cgroup/cgroup.subtree_control
echo '+cpuset' >> /sys/fs/cgroup/cgroup.subtree_control
These result in permission denied errors, even when I sudo the echo. ls -l for /sys/fs/cgroup shows:
-r--r--r-- 1 root root 0 Dec 2 06:52 cgroup.controllers
-rw-r--r-- 1 root root 0 Dec 2 06:29 cgroup.procs
-rw-r--r-- 1 root root 0 Dec 2 06:53 cgroup.subtree_control
drwxr-xr-x 2 root root 0 Dec 2 06:19 init.scope
drwxr-xr-x 53 root root 0 Dec 2 06:33 system.slice
drwxr-xr-x 4 root root 0 Dec 2 06:19 user.slice
I'm at a loss as to how to add cpu and cpuset to cgroup v2. My purpose is to install Kubernetes and connect the boards up as a cluster, but kubeadm failed saying that CPU and CPUSET were not found, which is the problem that led me to cgroups v2.
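One likely culprit, sketched: with sudo echo '+cpu' >> file, the >> redirection is opened by the unprivileged shell before sudo ever runs, so the write has to go through something like tee; and a controller can only be enabled if the kernel exposes it in the first place:
cat /sys/fs/cgroup/cgroup.controllers   # must list cpu and cpuset
echo '+cpu +cpuset' | sudo tee /sys/fs/cgroup/cgroup.subtree_control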
jake wong (141 rep)
Dec 2, 2021, 11:42 AM • Last activity: May 27, 2025, 08:04 AM
0 votes
0 answers
41 views
cgroups: how safe is changing the cpuset of a process?
I've recently started to observe unexpected output (and later stty -a reports terminal settings different from a fresh shell tab) from my shell scripts project. As of now, I'm very close to sure the reason is the recent introduction of a "cpuset" change of the main process by an asynchronously run subprocess (another script run with & doing cgset -r cpuset.cpus= for the cgroup into which the main process had moved itself before running that subprocess). It got me thinking: how safe is changing the cpuset of a process? Say the process is restricted to run on cpu1 and the next command sets its cgroup to run on cpu2 instead. Could some data running on the cores about to be cut off be lost? How does cgroups handle a "cpuset" change internally? So far I have observed issues related to changing the cpuset from an asynchronously run subprocess; is changing the "cpuset" by the process itself safe? Is changing time allotments (cgset -r cpu.max=) always safe? TIA. P.S. Mostly bash; the system is Linux Mint 21 based.
Alex Martian (1287 rep)
May 18, 2025, 05:48 AM • Last activity: May 18, 2025, 08:21 AM
4 votes
2 answers
3265 views
Limiting users' RAM with cgroups not working (for me)
I registered because I didn't manage to get cgroups running with the several tutorials/comments/whatever you find on Google. I want to limit the amount of RAM a specific user may use. The internet says "cgroups". My test server is running Ubuntu 14.04. You can divide the mentioned tutorials into two categories: directly set limits using echo, or use a specific config. Neither is working for me.
**Setting limits using echo**
cgcreate -g cpu,cpuacct,...:/my_group
finishes without any notices. When I try to run
echo 100M > memory.limit_in_bytes
it just says "not permitted", even when using sudo. I don't even reach the point of limiting another user.
**Setting limits using config**
I read about two config files. So here are my config files:
*cgconfig.conf*
mount { memory = /cgroup/memory; }
group limit_grp {
    memory {
        memory.limit_in_bytes=100M;
        memory.memsw.limit_in_bytes=125M;
    }
}
*cgrules.conf*
testuser memory limit_grp
When I run cgconfigparser -l /etc/cgconfig.conf it mounts to systemd. Now I log on as testuser and run a memory-intensive task, and it runs without caring about my limit. I tried rebooting; nothing changed. Even some strange attempts using kernel config didn't work. I'm new to cgroups and didn't expect it to be that complicated. I'd appreciate any suggestions on this topic. Thank you in advance!
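On systemd-based distributions newer than 14.04 with a unified (v2) hierarchy, the per-user limit can be expressed without hand-rolled cgroup plumbing; a minimal sketch, where 1234 is a hypothetical UID:
sudo systemctl set-property user-1234.slice MemoryMax=1G
# on the manual v1 route, "not permitted" from "sudo echo 100M > file" is usually
# the shell opening the redirection unprivileged; tee avoids that:
echo 100M | sudo tee /sys/fs/cgroup/memory/my_group/memory.limit_in_bytes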
darkspirit510 (41 rep)
Jun 25, 2016, 05:47 PM • Last activity: May 14, 2025, 12:07 PM
7 votes
1 answer
8237 views
"cannot allocate memory" error when trying to create folder in cgroup hierarchy
We ran into an interesting bug today. On our servers we put users into cgroup folders to monitor and control usage of resources like CPU and memory. We started getting errors when trying to add user-specific memory cgroup folders:
mkdir /sys/fs/cgroup/memory/users/newuser
mkdir: cannot create directory '/sys/fs/cgroup/memory/users/newusers': Cannot allocate memory
That seemed a little strange, because the machine actually had a reasonable amount of free memory and swap. Changing the sysctl value for vm.overcommit_memory from 0 to 1 had no effect. We did notice that we were running with quite a lot of user-specific subfolders (about 7,000 in fact), and most of them were for users that were no longer running processes on that machine.
ls /sys/fs/cgroup/memory/users/ | wc -l
7298
Deleting unused folders in the cgroup hierarchy actually fixed the problem:
cd /sys/fs/cgroup/memory/users/
ls | xargs -n1 rmdir # errors for folders in-use, succeeds for unused
mkdir /sys/fs/cgroup/memory/users/newuser # now works fine
Interestingly, the problem only affected the memory cgroup. The cpu/accounting cgroup was fine, even though it actually had more users in the hierarchy:
ls /sys/fs/cgroup/cpu,cpuacct/users/ | wc -l
7450
mkdir /sys/fs/cgroup/cpu,cpuacct/users/newuser # fine
So, what was causing these out-of-memory errors? Does the memory-cgroup subsystem itself have some sort of memory limit of its own? The contents of the cgroup mounts may be found [here](https://pastebin.com/tkmw60Df).
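One diagnostic sketch: for the v1 memory controller, removed groups can linger internally as zombies while their page cache remains charged, and (to my understanding) the controller also has a bounded internal ID space, so its bookkeeping can run out long before system memory does. Comparing the kernel's per-controller counts against the visible directories may show this:
cat /proc/cgroups   # compare the num_cgroups column for memory vs cpu,cpuacct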
hwjp (123 rep)
Aug 21, 2017, 04:21 PM • Last activity: May 5, 2025, 08:00 PM
6 votes
1 answer
377 views
When using cgroup v2, is there a command-line tool equivalent to the cgexec command?
As we know, cgroup v1 has a command-line management suite that includes the cgexec command, with which we can start a process under a control group immediately. When using cgroup v2, is there a command-line tool equivalent to cgexec, so that we can start a process under control immediately, instead of starting the process first and then moving it to some cgroup by echoing its PID?
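On systemd systems, systemd-run fills this role: it starts the command directly inside a fresh scope with resource settings applied up front; a minimal sketch (./my_workload is a hypothetical command):
systemd-run --scope -p MemoryMax=512M -p CPUQuota=50% ./my_workload
# run as root, or add --user to place the scope in your user session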
Cu635 (69 rep)
Mar 27, 2023, 02:51 PM • Last activity: May 2, 2025, 09:13 AM
0 votes
0 answers
11 views
KDE Desktop file to open a URL: Errors about cgroups no such process
I'm trying to get mailto URLs to open in Gmail on KDE. [Viagee](https://github.com/davesteele/viagee) used to be the accepted approach, but this seems to have been killed by Google [revoking the app from its application store](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1083120) and ignoring the developer. You'd think they would care more about all the Linux users in the world. Google does have a special URL, https://mail.google.com/mail/?extsrc=mailto&url=%u, which allows you to write emails in a "secure" environment with no access to other information about your account. So I'm using this. I create a new desktop file with a MIME type for the mailto: URL scheme (x-scheme-handler/mailto), like so:
[Desktop Entry]
Name=gmail mailto
MimeType=x-scheme-handler/mailto
Exec= bash -c 'echo %U | xargs -I ARG open "https://mail.google.com/mail/?extsrc=mailto&url=ARG "'

Terminal=false
StartupNotify=true
This works fine, but when I open a mailto URL from the command line I get the following warning, presumably because open exits quickly and KDE wants to track it.
kf.kio.gui: Failed to register new cgroup: "app-gmail\\x2dmailto-181d9665b276498992d078f40c9bba08.scope" "org.freedesktop.DBus.Error.UnixProcessIdUnknown" "Failed to set unit properties: No such process"
How do I get rid of this warning?
Att Righ (1412 rep)
Apr 28, 2025, 06:36 PM • Last activity: Apr 29, 2025, 06:37 AM
1 vote
1 answer
164 views
isolcpus kernel option appears to break taskset
I have a laptop with an Intel Core i7-12700H CPU, running Ubuntu 24.04 LTS. This CPU has 6 "performance" cores, each one running 2 threads, and 8 "efficient" cores. Most of the time, the "efficient" cores can provide much more power than I need, so I'd rather keep the "performance" cores idle, to save power, and only use the "efficient" ones. Occasionally, I have to run and benchmark some intensive (multi-threaded) computation code. Then I want to use the "performance" cores, without being disturbed by other processes. I may want either to run a single thread per core, to get the maximum per-thread performance, or to use both threads of each core.
At first, I considered using cgroup's cpusets, with the cset tool. I wrote the following script to create the sets:
cset set -c 12-19 --cpu_exclusive -s efficient
cset set -c 0-11 --cpu_exclusive -s performance
cset set -c 0,2,4,6,8,10 -s performance/no-ht
cset proc -m -f root -t efficient
It works as expected, but I find it troublesome since I need root to escape the efficient cgroup, with ugly commands like
sudo cset proc --exec performance/no-ht -- sudo -u $(whoami) ~/bin/my_code
Moreover, because it switches to root, I can't wrap that whole command in time or a profiler. I can still run time or the profiler within the performance cgroup, but then they share the same cores as the code under test. Hence I considered taskset, with the following command:
/usr/bin/time taskset -c 0-11:2 ~/bin/my_code
It works as expected and does not need root, but other processes may use the "performance" cores. One solution is to use both systems, with an awful command like:
sudo cset proc --exec root -- sudo -u $(whoami) taskset -c 12-19 /usr/bin/time taskset -c 0-11:2 ~/bin/my_code
It works fine but looks over-complicated and still needs root. Then I read about the isolcpus kernel parameter and, despite it being deprecated, it looked great to me. So I added isolcpus=0-11 to the GRUB_CMDLINE_LINUX_DEFAULT parameter in /etc/default/grub, ran sudo update-grub and rebooted. At first sight it looked fine: only a few kernel threads were running on CPUs 0 to 11, and all userspace processes were running on CPUs 12 to 19. So I tried:
/usr/bin/time taskset -c 0-11:2 ~/bin/my_code
As expected, OpenMP correctly discovered it could use 6 cores and launched 6 threads, but I figured out all of them were running on CPU 0. I checked the Cpus_allowed_list line in /proc/<pid>/status and it was correct (0,2,4,6,8,10). Running taskset to pin each thread to a single core works, but it's not very convenient and does not allow sharing load if there are more threads than cores.
Any idea why the threads seem to be stuck on the first allowed CPU when using the isolcpus kernel option? Is there a better way to keep everything running on CPUs 12 to 19 by default, as if everything was run through taskset -c 12-19, without using cgroup jails that need root to escape?
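A side note that may explain the pile-up: isolcpus removes the isolated CPUs from the scheduler's load-balancing domains, so threads sharing an affinity mask over those CPUs are never migrated apart. One workaround under that assumption is to have the OpenMP runtime pin the threads itself; a minimal sketch:
# ask the OpenMP runtime to spread threads across the cores in the taskset mask
OMP_PLACES=cores OMP_PROC_BIND=spread /usr/bin/time taskset -c 0-11:2 ~/bin/my_code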
user2233709 (1709 rep)
Apr 9, 2025, 08:46 AM • Last activity: Apr 24, 2025, 07:47 AM
1 vote
1 answer
109 views
Cgroups permission denied on Ubuntu 22.04.5 but works on Ubuntu 22.04.03
I'm trying to learn about cgroups. I ran these two commands:
root@localhost:~# mkdir /sys/fs/cgroup/container_cpu
root@localhost:~# echo '50000 100000' | sudo tee /sys/fs/cgroup/container_cpu/cpu.max
This worked fine on Ubuntu 22.04.3 in my VirtualBox VM, but on Ubuntu 22.04.5 on linode.com I get the error:
tee: /sys/fs/cgroup/container_cpu/cpu.max: Permission denied
Can anyone tell me what I did wrong?
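A minimal diagnostic sketch, on the assumption that the controller setup differs between the two machines: cpu.max in a child group is only present and writable if the cpu controller is available and enabled in the parent's subtree:
cat /sys/fs/cgroup/cgroup.controllers       # is cpu available at all?
cat /sys/fs/cgroup/cgroup.subtree_control   # is it enabled for child groups?
echo '+cpu' | sudo tee /sys/fs/cgroup/cgroup.subtree_control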
learningtech (631 rep)
Apr 17, 2025, 08:43 PM • Last activity: Apr 18, 2025, 12:18 PM
3 votes
0 answers
41 views
Implement a recovery virtual console for a hung system
This might be a duplicate of "[reserve memory for a set of processes](https://unix.stackexchange.com/questions/401769/reserve-memory-for-a-set-of-processes)", but I think my question is a little broader. I have a system that likes to hang a lot. I tend to use a lot of browser tabs and a bunch of Electron apps; sometimes when too many of these are open, my system comes to a complete stop. Usually my solution is to just Alt+SysRq+F to forcefully invoke oom_kill, which usually results in my browser getting killed. I have been meaning to install a user-space service killer for a while so that I get more proactive prevention of a system freeze, but honestly, it wouldn't change much (unless I was doing something time-sensitive and couldn't afford the ~2.5s before oom_kill completes). I would much rather have the ability to choose what I want to be killed.
To that end, I would like to have a "holy virtual terminal", one that I can simply reach with Ctrl+Alt+F1 (my graphical session lives in Ctrl+Alt+F2) and then use either top or btop to kill whatever I want to restore my system to a working state. I want a guarantee that it _will have enough memory/cpu priority to function_. Right now, if the hang is bad enough, it either takes minutes for a login prompt to show up on virtual consoles, or the login itself times out after 60 seconds. Is this possible? How? Would "niceness" be relevant? Perhaps there is a kernel module for it? I wouldn't mind permanently losing 1GB or less of memory in my system if that was necessary (if it needed to be "allocated" to such a console).
Previously I thought one solution would be to create a default cgroup that would only allow up to (system RAM - 1GB), and then somehow have tty1 be in a different cgroup without that memory limit. I don't really know how that would be achieved, but it sounds possible. However, there seems to be a kernel parameter designed specifically for this purpose: [vm.admin_reserve_kbytes](https://www.kernel.org/doc/Documentation/sysctl/vm.txt)
> The amount of free memory in the system that should be reserved for users with the capability cap_sys_admin.
>
> admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
>
> That should provide enough for the admin to log in and kill a process, if necessary, under the default overcommit 'guess' mode.
My system is already in 'guess' mode, but if I can't even get a login prompt, I don't really see the point of reserving memory for root, since there isn't enough memory to log in as root in the first place. It sounds like some combination of these parameters would allow me to get what I want, but it isn't clear to me right now what that would be.
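A sketch of the cgroup v2 idea above, expressed with systemd properties; the values are illustrative, and MemoryMin generally has to be set on every ancestor of a unit for the reservation to propagate down to it:
sudo systemctl set-property user.slice MemoryMax=90%            # leave headroom
sudo systemctl set-property system-getty.slice MemoryMin=512M
sudo systemctl set-property getty@tty1.service MemoryMin=512M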
Mathias Sven (273 rep)
Apr 15, 2025, 05:27 PM • Last activity: Apr 16, 2025, 01:06 PM
1 vote
1 answer
43 views
Performance Degradation with rsync in container or cgroupv2 with MEM limit
I'm experiencing a significant performance degradation when using rsync to copy files over the network from within a container or cgroup with memory limits on Oracle Linux 9.2. The issue occurs with the Red Hat Compatible Kernel (RHCK) 5.14.0-284.11.1.el9_2.x86_64 but not with the Unbreakable Enterprise Kernel (UEK) 5.15.0-101.103.2.1.el9uek.x86_64.
Details:
Setup: Oracle Linux 9.2 with containers/cgroups having memory limits.
Issue: Network file copying speed drops drastically when memory limits are hit, specifically when the page cache (inactive files) fills up.
Tests:
- Using rsync from within a container or a cgroup to copy data from a remote source.
- Using pg_basebackup for PostgreSQL data replication between two PG containers (leader vs replica).
Results:
- Initial high speeds (~100MBps) drop significantly (to ~1MBps) once memory limits are reached.
Commands to reproduce:
1. Create a cgroup with a memory limit and run rsync:
sudo systemd-run --scope --property=MemoryMax=1G rsync -av --progress rsync:///files /destination_path
2. Test with drop_caches on the hosting OS during the slow rsync:
free && sync && echo 3 > /proc/sys/vm/drop_caches && free
After the cache is dropped, rsync is fast again until the MEM limit is reached again.
Observations:
- When the container's memory limit is reached, the page cache (inactive files) fills up, leading to network bandwidth degradation.
- This affects, for example, PostgreSQL replication, causing lag and potential data loss.
Has anyone else encountered this issue? Any insights or suggestions on how to address this correctly (or maybe workarounds) would be greatly appreciated!
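A minimal sketch for correlating the slowdown with page-cache growth inside the limited cgroup (run-r1234.scope is a hypothetical unit name; systemd-run prints the real one when the scope starts):
watch -n1 "grep -E '^(active_file|inactive_file)' /sys/fs/cgroup/system.slice/run-r1234.scope/memory.stat"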
ALZ (961 rep)
Mar 26, 2025, 11:48 AM • Last activity: Apr 9, 2025, 12:35 PM
0 votes
0 answers
244 views
cgroups v2 and systemd missing memory controller on individual user slices
I have several Rocky 8 systems configured for cgroups v2 with systemd.unified_cgroup_hierarchy=1 set in kernel boot parameters. On these systems I set something like ``` systemctl set-property user.slice MemoryMax=498G systemctl set-property user.slice MemoryHigh=494G systemctl set-property user-0.s...
I have several Rocky 8 systems configured for cgroups v2 with systemd.unified_cgroup_hierarchy=1 set in kernel boot parameters. On these systems I set something like
systemctl set-property user.slice MemoryMax=498G
systemctl set-property user.slice MemoryHigh=494G
systemctl set-property user-0.slice MemoryMin=100M
which is supposed to prevent normal users, in sum, from using all the system memory, and to make sure the root user always has at least 100M available for its slice so I can ssh into the box even when the other users are using all 498G. Anyway, on some boxes these settings seem to have applied just fine: after reboot I see
/sys/fs/cgroup/user.slice/memory.max:534723428352
/sys/fs/cgroup/user.slice/memory.high:530428461056
/sys/fs/cgroup/user.slice/user-0.slice/memory.min:104857600
On other boxes, though, the user-0.slice settings do not work. In fact there are no memory.min or other memory limit files at all in /sys/fs/cgroup/user.slice/user-0.slice, or for other users on these boxes. Also, on these boxes /sys/fs/cgroup/user.slice/cgroup.subtree_control is empty (while it has "memory pids" on the boxes where it works). I cannot figure out what controls this. On the boxes where it is not working, I do
echo "+memory +pids" > /sys/fs/cgroup/user.slice/cgroup.subtree_control
I then see the memory files needed in /sys/fs/cgroup/user.slice/user-0.slice but the property for memory.min is not set. So I try doing a
systemctl daemon-reload
to see if systemd will set it. Well, instead of setting it, systemd for some reason removes my change to user.slice/cgroup.subtree_control, so the memory.min file disappears from user-0.slice. I cannot figure out what is going on here. How do I force "+memory +pids" at boot for user.slice/cgroup.subtree_control and have it survive a systemctl daemon-reload? Why does it work on some systems and not on others? I see no difference in configuration between the working and non-working systems.
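One plausible explanation, sketched: systemd owns the top of the v2 hierarchy and writes subtree_control itself, enabling a controller only while some descendant unit needs the matching accounting, so a manual echo gets undone on the next daemon-reload. Forcing accounting on globally should make the controller stick (the drop-in path is illustrative):
# /etc/systemd/system.conf.d/10-accounting.conf
[Manager]
DefaultMemoryAccounting=yes
DefaultTasksAccounting=yes
Then run systemctl daemon-reexec (or reboot) so the manager picks it up.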
raines (324 rep)
Dec 9, 2024, 08:59 PM • Last activity: Apr 8, 2025, 01:34 PM