Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
32
views
Analysis of memory consumption in Linux
I am analyzing issues with an unexpected increase of memory consumption in Linux. It is an embedded system with no swap.
What is happening is that over some period of time, around 2GB of **MemFree/MemAvailable** (they change over time nearly identically) from **/proc/meminfo** is being dropped.
But analyzing just **/proc/meminfo** doesn't seem to explain "where" the memory went. What I mean is that while **MemFree/MemAvailable** decreases, the other fields don't seem to increase in a way that fully accounts for the drop; there is an increase in some of the fields, but not by the amount that was lost.
So I have several questions:
1. Is **/proc/meminfo** a sufficient source to analyze memory consumption? I don't mean to blame a specific process exactly, but at least to see that, let's say, the Slab, AnonPages, etc. areas increased their consumption and therefore dropped the overall free/available memory. If it is not, what should I analyze together with it for a better picture? (A sketch of such a diff is below.)
2. A bit more specific question: are **AnonHugePages** accounted within **AnonPages**, or are they tracked separately?
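A minimal sketch of the field-by-field diff mentioned in question 1 (the interval and temp path are arbitrary):
```
# Snapshot /proc/meminfo, wait, then print the kB fields that changed,
# so a MemFree drop can be matched against Slab, AnonPages, etc.
cat /proc/meminfo > /tmp/mem.1
sleep 600
awk 'NR == FNR { before[$1] = $2; next }
     $3 == "kB" { d = $2 - before[$1]; if (d) printf "%d kB\t%s\n", d, $1 }' \
    /tmp/mem.1 /proc/meminfo | sort -n
```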
exbluesbreaker
(101 rep)
Jul 28, 2025, 02:23 PM
• Last activity: Jul 30, 2025, 08:48 AM
1
votes
2
answers
6870
views
Why is memory (rss) from ps command different than memory seen in top command?
Here on MacOS Catalina, when checking the memory usage of a process, I see that the `ps` command shows an RSS value which is different from the memory usage shown in `top`:
$> ps e -o command,vsize,rss,%mem|grep "myapplication"|head -n 1
myapplication 4594896 51364 0.3
**RSS -> 51364**
top
PID COMMAND %CPU TIME #TH #WQ #PORT MEM
48106 myapplication 115.7 09:06.12 69/1 1 101 37M+
**MEM -> 37M**
Why this difference?
**UPDATE:**
Another example with IntelliJ process:
top -pid 357
PID COMMAND %CPU TIME #TH #WQ #POR MEM PURG CMPRS PGRP PPID STATE BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW MSGSENT MSGRECV SYSBSD
357 idea 2.6 03:16:46 112 1 925 4906M 0B 1583M 357 1 sleeping 0 0.00000 0.00000 281451937 28337096 54627 8404446+ 2733245+ 156093159+
Top shows **4906M**
ps aux
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
xxxxxxx 357 3.6 14.5 180050484 2430728 ?? S 1:44PM 196:48.70 /Applications/IntelliJ IDEA.app/Contents/MacOS/idea -psn_0_73746
ps shows RSS **2430728** (KB)
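For unit comparison, note that `ps` on macOS reports RSS in 1024-byte units; a hedged one-liner to put it next to top's MEM column (PID 357 from the example above):
```
# Convert ps's KB-denominated RSS to MB for comparison with top's MEM.
# top's MEM also includes things like compressed memory, so a gap can
# remain even after the units match.
ps -o rss= -p 357 | awk '{ printf "%.1f MB resident\n", $1 / 1024 }'
```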
codependent
(123 rep)
Mar 30, 2020, 12:01 PM
• Last activity: Jul 26, 2025, 04:06 PM
4
votes
1
answers
180
views
Why are 6GB unavailable on 32GB? (Debian 12)
(This is updated with each new piece of information.)
My HP ProLiant ML350 Gen 10 server has 32 GB RAM, which seems to be detected correctly.
But only 26GB are available, according to `free`/`htop`//proc/meminfo…
free -m
total used free shared buff/cache available
Mem: 31755 6185 25533 6 436 25570
Swap: 976 0
MemTotal: 32518056 kB
MemFree: 26173304 kB
MemAvailable: 26250792 kB
OS : Debian 12 bookworm
Other details and command outputs are too long for this forum, I've put them here:
https://semestriel.framapad.org/p/phqm1kqhph-afqg?lang=fr
Here are the different tests/ideas/options:
* These are not buffers.
* tmpfs system mounts are almost empty.
* I have no huge pages.
* I have killed or uninstalled more or less anything consuming memory that I could find.
* I have no graphic card (VGA), and only one CPU.
* No ZRAM
* No ZFS, only ext4
* I've removed the NVMe disk and the SSD drives. The OS lives on a 128 GB SD card.
* A Live Debian with nothing installed shows the same already-used 6GB (with all graphical interfaces off)
* Same problem in emergency mode.
* An old Kali on LiveUSB with a 5.10 kernel does not show this. But a new Kali with Linux 6.12 shows the same problem as Debian 12. Kali is Linux-based, so this is consistent. This confirms that the RAM is not being eaten by software that I have installed.
* I've erased the LVM partitions, removed the disks (physically)
* I've deactivated iLO (the remote administration interface from HP): /proc/iomem does not show any memory release. But it should not amount to more than 300MB.
I have discovered things like /proc/iomem or /proc/modules far from my area of expertise, without finding huge amounts of hidden memory.
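Since /proc/iomem came up: a hedged sketch to total what the kernel marks as reserved there (gawk's strtonum assumed; run as root, otherwise the addresses read as zeros):
```
# Sum the sizes of /proc/iomem regions labelled "Reserved"/"reserved".
# Only a rough audit: nested regions may overlap their parents.
sudo awk '/[Rr]eserved/ {
    split($1, r, "-")
    total += strtonum("0x" r[2]) - strtonum("0x" r[1]) + 1
} END { printf "reserved: %.2f GiB\n", total / 1024^3 }' /proc/iomem
```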
I've found [this thread](https://unix.stackexchange.com/a/730970) on memory used by the kernel, but this is a 1.5% loss, not 20%!
I found [this other thread on "non-cached kernel dynamic memory"](https://unix.stackexchange.com/questions/62066/what-is-kernel-dynamic-memory-as-reported-by-smem) that does not explain anything in fact.
* By comparison, an (empty) Vagrant machine with 32GB RAM shows 32'476'804 available.
* And my laptop with 64 GB RAM, Debian 12 too, can go down to less than 2GB used in emergency mode.
I really would like to know where these 6GB are gone, and if I can recover a part of it.
And yes, the system really feels like it has only 25-26GB. As a test, I've created 24GB of static huge pages (more seems impossible), and the free/available memory becomes almost nothing.
(I've found this problem in a test, where a PG cluster with a 24GB shared_buffers collapses, with massive IOwait on the slow root partition.)
Thank you for any advice.
Krysztt
(61 rep)
Jul 20, 2025, 06:57 PM
• Last activity: Jul 23, 2025, 10:11 AM
1
votes
0
answers
40
views
Too slow Tiered Memory Demotion and CPU Lock-up(maybe) with cgroup v2 memory.high
We are currently testing tiered memory demotion on a machine equipped with a CXL device.
To facilitate this, we created a specific script (https://github.com/hyun-sa/comem) and are using the memory.high setting within a cgroup to force memory demotion.
These are the commands we used to enable demotion:
echo 1 > /sys/kernel/mm/numa/demotion_enabled
echo 2 > /proc/sys/kernel/numa_balancing
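The memory.high part of the setup is a plain cgroup v2 limit; a minimal sketch of that piece (group name assumed, the 8G value taken from the benchmark description below):
```
# Create a cgroup, cap it with memory.high, and move the benchmark shell in.
mkdir /sys/fs/cgroup/demotest
echo 8G > /sys/fs/cgroup/demotest/memory.high   # reclaim/demotion kicks in above 8G
echo $$ > /sys/fs/cgroup/demotest/cgroup.procs  # this shell and its children
7zr b -md25                                     # now runs under the limit
```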
The issue we're facing is that while demotion does occur, it proceeds extremely slowly—even slower than swapping to disk. Furthermore, during a 7-Zip benchmark, we observe a severe drop in CPU utilization, as if some process is causing a lock.
This is our running example (`7zr b -md25` while memory is limited to 8G by memory.high):
7-Zip (r) 23.01 (x64) : Igor Pavlov : Public domain : 2023-06-20
64-bit locale=C.UTF-8 Threads:128 OPEN_MAX:1024
d25
Compiler: 13.2.0 GCC 13.2.0: SSE2
Linux : 6.15.6 : #1 SMP PREEMPT_DYNAMIC Tue Jul 15 06:39:48 UTC 2025 : x86_64
PageSize:4KB THP:madvise hwcap:2 hwcap2:2
AMD EPYC 9554 64-Core Processor (A10F11)
1T CPU Freq (MHz): 3710 3731 3732 3733 3733 3732 3732
64T CPU Freq (MHz): 6329% 3674 6006% 3495
RAM size: 386638 MB, # CPU hardware threads: 128
RAM usage: 28478 MB, # Benchmark threads: 128
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 477942 10925 4256 464943 | 5843081 12451 4001 498193
23: 337115 8816 3896 343480 | 5826376 12606 3999 504053
24: 1785 108 1772 1919 | 5654618 12631 3928 496161
25: 960 63 1739 1097 | 1767869 4606 3415 157287
---------------------------------- | ------------------------------
Avr: 204451 4978 2916 202860 | 4772986 10573 3836 413924
Tot: 7776 3376 308392
execution_time(ms): 2807639
Is there a potential misunderstanding of how cgroups function or a misconfiguration in my setup that could be causing this behavior?
Our machine specifications are as follows:
Mainboard : Supermicro H13SSL-NT
CPU : Epyc 9554 (nps 1)
Dram : 128G
CXL device : SMART Modular Technologies Device c241
OS : Ubuntu 24.04 LTS
Kernel : Linux 6.15.6
numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 0 size: 128640 MB
node 0 free: 117909 MB
node 1 cpus:
node 1 size: 257998 MB
node 1 free: 257840 MB
node distances:
node   0   1
  0:  10  50
  1: 255  10
Thank you for your help.
Hyunsa
(11 rep)
Jul 22, 2025, 05:43 AM
• Last activity: Jul 22, 2025, 06:04 AM
4
votes
2
answers
181
views
How to find out a process's proportional use of system-wide Committed_AS memory on Linux?
On Linux, it's possible to [disable overcommitting memory](https://unix.stackexchange.com/questions/797835/disabling-overcommitting-memory-seems-to-cause-allocs-to-fail-too-early-what-co/797836#797836), which makes it behave like Windows, in that `malloc()` will fail once all physical memory is used up. As explained [in this insightful and good answer](https://unix.stackexchange.com/a/797888/104885), in that mode the `Committed_AS` memory statistic shown in /proc/meminfo becomes the relevant value for used-up memory, rather than any of the other metrics like calculating it based on `MemFree` and so on.
**Here's my question:** So when running in that mode, how do I find out a process's proportional use of the system-wide `Committed_AS` total on Linux? Is there an easy way to do so?
**As for more background info**, I've been using this mode for some days now. It's useful, for example, to test how software I work on would behave on Windows when hitting the memory limit.
However, I ran into the practical issue that when I run out of memory, it's hard to find the biggest offenders. It seems that no common system monitor tool shows how much memory a process actually committed, since in my understanding the usual resident memory, shared memory, and so on only apply to memory *actually written into* (which I think is smaller than committed memory).
Hence, it becomes difficult to judge which program actually committed the most memory and may be worth terminating when I run out. Seeing the committed memory might also help identify programs that accidentally use `fork()` in situations where they perhaps should be using `vfork()`.
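The kernel doesn't appear to expose an exact per-process counterpart of Committed_AS, but summing the writable private mappings in smaps gives a rough per-process estimate (a sketch under that assumption; the PID is hypothetical):
```
# Approximate a process's commit charge: total Size of writable,
# private mappings in /proc/PID/smaps. Shared and read-only mappings
# are ignored, so edge cases are miscounted.
pid=1234
awk '/^[0-9a-f]+-/ { w = ($2 ~ /w/ && $2 ~ /p$/) }   # header line: note perms
     w && /^Size:/ { kb += $2 }                      # count Size if rw + private
     END { printf "~%d kB committed\n", kb }' "/proc/$pid/smaps"
```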
E. K.
(153 rep)
Jul 16, 2025, 08:15 AM
• Last activity: Jul 17, 2025, 06:33 PM
2
votes
1
answers
44
views
Are the page tables of the process preempted swapped out if there is a dearth of memory for new process
Suppose process A has been preempted to allow process B to run. If system memory is low and the kernel needs to reclaim memory for process B, is it possible for the page tables of process A to be swapped out to disk?
I understand that when a page belonging to a process is swapped out, the corresponding page table entry (PTE) must be updated to indicate that the page has been swapped. But if the page tables of process A were already swapped out before its pages are selected for swapping, how does the kernel update the PTEs to reflect this?
In such a scenario, will the kernel swap the page tables of process A back into memory just to update the PTEs? Or is there some alternative mechanism used?
I tried reading other sources on the internet, but didn't find much.
Padala Teja Sai Kumar Reddy
(23 rep)
Jul 13, 2025, 01:12 PM
• Last activity: Jul 13, 2025, 01:57 PM
0
votes
3
answers
497
views
Does the time command include the memory claimed by forked processes?
I want to benchmark some scripts with the `time` command. I am wondering if this command catches child processes' memory usage.
command time -f '%M' python my_script.py
If not, what are my options? Is `valgrind` suitable for this purpose?
I also don't want to double count copy-on-write memory that is not actually filling up space.
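For what it's worth, GNU time's `%M` comes from the rusage that `wait4()` returns, which rolls in waited-for descendants, but as a high-water mark rather than a sum of concurrent children; a quick check of the reporting (python3 assumed available):
```
# The ~200 MiB allocated by the child shows up in %M; two children
# allocating concurrently would still report only the largest peak.
command time -f 'peak RSS: %M kB' \
    python3 -c 'x = bytearray(200 * 1024 * 1024)'
```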
HappyFace
(1694 rep)
Jan 19, 2022, 12:29 PM
• Last activity: Jul 13, 2025, 01:44 PM
1
votes
1
answers
106
views
Disabling overcommitting memory seems to cause allocs to fail too early, what could be the reason?
I tested out `echo 2 > /proc/sys/vm/overcommit_memory`, which I know isn't a commonly used or recommended mode, but for various reasons it could be beneficial for some of my workloads.
However, when I tested this out on a desktop system with 15.6GiB RAM, with barely a quarter of memory used, most programs would already start crashing or erroring, and Brave would fail to open tabs:
$ dmesg
...
[24551.333140] __vm_enough_memory: pid: 19014, comm: brave, bytes: 268435456 not enough memory for the allocation
[24551.417579] __vm_enough_memory: pid: 19022, comm: brave, bytes: 268435456 not enough memory for the allocation
[24552.506934] __vm_enough_memory: pid: 19033, comm: brave, bytes: 268435456 not enough memory for the allocation
$ ./smem -tkw
Area Used Cache Noncache
firmware/hardware 0 0 0
kernel image 0 0 0
kernel dynamic memory 4.0G 3.5G 519.5M
userspace memory 3.4G 1.3G 2.1G
free memory 8.2G 8.2G 0
----------------------------------------------------------
15.6G 13.0G 2.7G
I understand that with overcommitting memory disabled, `fork()` (which many Linux programs suboptimally use instead of `vfork()`) can cause issues once the process has a lot of memory allocated. But it seems like this isn't the case here, since 1. the affected processes seem to use at most a few hundred megabytes of memory, 2. the allocation listed in `dmesg` as failing is way smaller than what's listed as free, and 3. the overall system memory doesn't seem to be even a quarter filled up.
Some more system info:
# /sbin/sysctl vm.overcommit_ratio vm.overcommit_kbytes vm.admin_reserve_kbytes vm.user_reserve_kbytes
vm.overcommit_ratio = 50
vm.overcommit_kbytes = 0
vm.admin_reserve_kbytes = 8192
vm.user_reserve_kbytes = 131072
I'm therefore wondering what the cause here is. Is there some obvious reason for this, perhaps some misconfiguration on my part that could be improved?
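For context, with `overcommit_memory=2` the kernel's commit limit (ignoring `overcommit_kbytes` and hugetlb reserves) is roughly SwapTotal + MemTotal × overcommit_ratio / 100, which the default ratio of 50 makes much smaller than RAM; a hedged check against the live values:
```
# With ratio 50, ~15.6GiB RAM and 2GiB swap this yields only ~9.8GiB,
# which Committed_AS can exhaust long before memory looks full.
awk -v ratio="$(cat /proc/sys/vm/overcommit_ratio)" '
    /^(MemTotal|SwapTotal):/ { v[$1] = $2 }
    END { printf "CommitLimit ~ %.1f GiB\n",
          (v["SwapTotal:"] + v["MemTotal:"] * ratio / 100) / 1024^2 }' /proc/meminfo
```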
**Update:** So, in part it seems to have been the overcommit_ratio that @StephenKitt helped me find, which needed adjustment like this:
echo 2 > /proc/sys/vm/overcommit_memory
echo 100 > /proc/sys/vm/overcommit_ratio
But now I seem to be running into another wall. At first I thought it would be the `fork()` vs `vfork()` issue, but instead it seems to hit once app memory usage reaches the dynamic kernel memory.
**Update 2:**
Here's more information collected when hitting this weird condition again, where the dynamic kernel memory won't get out of the way:
[32915.298484] __vm_enough_memory: pid: 24347, comm: brave, bytes: 268435456 not enough memory for the allocation
[32916.293690] __vm_enough_memory: pid: 24355, comm: brave, bytes: 268435456 not enough memory for the allocation
# exit
~/Develop/smem $ ./smem -tkw
Area Used Cache Noncache
firmware/hardware 0 0 0
kernel image 0 0 0
kernel dynamic memory 7.8G 7.4G 384.0M
userspace memory 5.2G 1.5G 3.7G
free memory 2.7G 2.7G 0
----------------------------------------------------------
15.6G 11.5G 4.1G
~/Develop/smem $ cat /proc/sys/vm/overcommit_ratio
100
~/Develop/smem $ cat /proc/sys/vm/overcommit_memory
2
~/Develop/smem $ cat /proc/meminfo
MemTotal: 16384932 kB
MemFree: 2803496 kB
MemAvailable: 10297132 kB
Buffers: 1796 kB
Cached: 8749580 kB
SwapCached: 0 kB
Active: 7032032 kB
Inactive: 4760088 kB
Active(anon): 4698776 kB
Inactive(anon): 0 kB
Active(file): 2333256 kB
Inactive(file): 4760088 kB
Unevictable: 825908 kB
Mlocked: 1192 kB
SwapTotal: 2097148 kB
SwapFree: 2097148 kB
Zswap: 0 kB
Zswapped: 0 kB
Dirty: 252 kB
Writeback: 0 kB
AnonPages: 3866720 kB
Mapped: 1520696 kB
Shmem: 1658104 kB
KReclaimable: 570808 kB
Slab: 743788 kB
SReclaimable: 570808 kB
SUnreclaim: 172980 kB
KernelStack: 18720 kB
PageTables: 53772 kB
SecPageTables: 0 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 18482080 kB
Committed_AS: 17610184 kB
VmallocTotal: 261087232 kB
VmallocUsed: 86372 kB
VmallocChunk: 0 kB
Percpu: 864 kB
CmaTotal: 65536 kB
CmaFree: 608 kB
I'm guessing it may not be intended that the kernel keeps sitting on this dynamic memory of more than 6GiB forever without it being usable. Does anybody have an idea why it behaves like that with overcommitting disabled? Perhaps I'm missing something here.
E. K.
(153 rep)
Jul 11, 2025, 04:59 AM
• Last activity: Jul 13, 2025, 08:26 AM
3
votes
3
answers
7003
views
What is paging space in AIX?
I get that paging space in AIX is essentially like swap in Linux. On one of my AIX servers at work, I'm seeing 99.7% of physical memory being utilized when my application is running (handling quite some data). Most of the time the server is utilizing 95% of physical memory (RAM). From the pic attached, we can see paging space is being utilized. And I believe my application could run a little faster if I upgraded the RAM.
But I am not able to convince the management. They say that paging space is still there, and until it's fully utilized there is no need to upgrade the RAM.
Isn't paging space actually on the hard disk?
Doesn't the OS transfer data between paging space (hard disk) and RAM back and forth in case of high memory utilization?
Can someone please shed light on whether using up 99.7% of physical memory (RAM) in the server is a good reason to upgrade the RAM?
Note: I'm posting here as a last resort, in need of proof to convince my management to upgrade the RAM in my server before Christmas, as I will be seeing quite a lot of data during Christmas. So please, before down-voting, at least tell me what's wrong with my question and help me get an answer.
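One generic way to gather the kind of proof being asked about, hedged as a standard AIX approach: sustained paging I/O in vmstat indicates the system is actively short of RAM, not merely holding allocated paging space:
```
# AIX: watch the pi (paged in) and po (paged out) columns; persistent
# non-zero values under load mean real paging to disk, not just reservation.
vmstat 5 12
```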

Bruce
(31 rep)
Dec 5, 2017, 06:59 PM
• Last activity: Jul 11, 2025, 02:05 AM
0
votes
2
answers
373
views
How to use vm.overcommit_memory=1 without getting system hung?
I am using `vm.overcommit_memory=1` on my Linux system, which has been helpful to allow starting multiple applications that otherwise wouldn't even start with the default value of 0. However, sometimes my system just freezes, and it seems the OOM killer is unable to do anything to prevent this situation. I have some swap memory, which also got consumed. I've also noticed some instances when the system is unresponsive where even the magic SysRq keys don't work. Sorry, no logs are available at this time to include here.
In general, is there any configuration or tunable that can get the OOM killer to kill the highest memory-consuming process(es) immediately, without ever letting the system go unresponsive, when using `vm.overcommit_memory=1`?
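One commonly suggested mitigation, sketched under the assumption that the hangs come from reclaim thrashing before the kernel OOM killer fires (values illustrative, not tuned):
```
# Keep a larger emergency reserve so the kernel can act before total thrash.
sysctl -w vm.min_free_kbytes=262144
# Or intervene from userspace before the kernel's OOM killer is needed:
# earlyoom kills the largest process when free RAM and swap drop below 5%.
earlyoom -m 5 -s 5
```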
eagle007
(3 rep)
Nov 27, 2024, 10:10 PM
• Last activity: Jul 10, 2025, 10:16 AM
9
votes
1
answers
739
views
When suspending to disk, does the memory cache get dumped to disk?
My system has 64GiB of memory. I noticed it usually uses about 20GiB for cache. I wonder: if I do a "suspend to disk", does the cached part get dumped to disk as well, or is it written out before the system is suspended?
I want to reduce the actual total used memory before suspending to disk so that the system can wake up faster.
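A sketch of two common ways to shrink the hibernation image before suspending (needs a root shell; /sys/power/image_size is the kernel's knob for capping the image):
```
sync                               # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches  # drop clean page cache and slab caches
echo 0 > /sys/power/image_size     # ask for the smallest possible image
systemctl hibernate
```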
Thanks
David S.
(5823 rep)
Jul 2, 2025, 10:32 AM
• Last activity: Jul 3, 2025, 09:05 AM
5
votes
2
answers
1887
views
Measuring peak memory usage of many processes
I have a bash script that calls various other scripts, one of which has a bunch of commands that launch scripts inside a screen session like this:
screen -S $SESSION_NAME -p $f -X stuff "$CMD\n"
Will running my top-level script with /usr/bin/time -v capture the peak memory usage of everything? I want to run this script as a cron job, but I need to know how much memory it will take before I cause problems for other users on the machine.
Thanks
Matt Feldman
(51 rep)
Oct 10, 2016, 05:33 PM
• Last activity: Jul 2, 2025, 08:44 AM
8
votes
3
answers
3118
views
How to measure peak memory usage of a process tree?
By a process tree I mean the process and everything it executes in any way.
I tried `/usr/bin/time -v`, but the results are entirely wrong. For example, running an `npm test` in one of our projects with 14GiB of free RAM and 8GiB of free swap results in the OOM killer starting to kill my applications (most commonly a browser and IDE). `time` reports only 800MiB was used, even though the real memory consumption must have been very high, over 20GiB...
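A cgroup keeps honest accounting for the whole tree, including children that were never waited for; a minimal sketch (cgroup v2, memory.peak needs kernel 5.19 or newer, path assumed):
```
# Run the workload in its own cgroup and read its high-water mark after.
sudo mkdir /sys/fs/cgroup/peaktest
echo $$ | sudo tee /sys/fs/cgroup/peaktest/cgroup.procs   # move this shell in
npm test                                                  # the tree to measure
cat /sys/fs/cgroup/peaktest/memory.peak                   # peak usage in bytes
```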
menfon
(262 rep)
Nov 22, 2022, 09:45 AM
• Last activity: Jul 1, 2025, 04:22 PM
0
votes
0
answers
9
views
BUG: Bad page state in process swapper pfn:801bd while booting linux (5.15.68, arm64)
I'm trying to boot Linux on an SoC (arm64-based) and seeing a trap at an early stage.
Here is the boot message.
Booting Linux on physical CPU 0x0000000002 [0x411fd401]
Linux version 5.15.68 (etri@AB21-T07) (aarch64-none-elf-gcc (GNU Toolchain for the Arm Architecture 11.2-2022.02 (arm-11.14)) 11.2.1 20220111, GNU ld (GNU Toolchain for the Arm Architecture 11.2-2022.02 (arm-11.14)) 2.37.20220122) #106 SMP PREEMPT Sun Jun 29 20:47:09 KST 2025
Machine model: ETRI ab21m
earlycon: ns16550a0 at MMIO 0x0000000010220020 (options '115200n8')
printk: bootconsole [ns16550a0] enabled
Reserved memory: created CMA memory pool at 0x00000000f0000000, size 256 MiB
OF: reserved mem: initialized node linux,cma, compatible id shared-dma-pool
Reserved memory: created DMA memory pool at 0x00000000e0000000, size 128 MiB
OF: reserved mem: initialized node axpursvd@e0000000, compatible id shared-dma-pool
Zone ranges:
DMA [mem 0x0000000080000000-0x00000000ffffffff]
DMA32 empty
Normal empty
Movable zone start for each node
Early memory node ranges
node 0: [mem 0x0000000080000000-0x00000000dfffffff]
node 0: [mem 0x00000000e0000000-0x00000000e7ffffff]
node 0: [mem 0x00000000e8000000-0x00000000ffffffff]
Initmem setup node 0 [mem 0x0000000080000000-0x00000000ffffffff]
percpu: Embedded 23 pages/cpu s57312 r8192 d28704 u94208
Detected PIPT I-cache on CPU0
CPU features: detected: GIC system register CPU interface
CPU features: detected: Virtualization Host Extensions
CPU features: detected: Hardware dirty bit management
CPU features: detected: Spectre-v4
CPU features: detected: Spectre-BHB
alternatives: patching kernel code
Built 1 zonelists, mobility grouping on. Total pages: 517120
Kernel command line: earlycon root=/dev/ram init=/init nokaslr, cpuidle.off=1
Unknown kernel command line parameters "nokaslr,", will be passed to user space.
Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
entering mm_init..
mem auto-init: stack:off, heap alloc:off, heap free:off
BUG: Bad page state in process swapper pfn:801bd
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/traps.c:498!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.15.68 #106
Hardware name: ETRI ab21m (DT)
pstate: 000000c9 (nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : do_undefinstr+0x180/0x1a4
lr : do_undefinstr+0x194/0x1a4
sp : ffffffc008693ad0
x29: ffffffc008693ad0 x28: ffffffc00869a100 x27: 000000000030f231
x26: fffffffe00007000 x25: 0000000080000000 x24: 0000000000000008
x23: 00000000800000c9 x22: ffffffc00813c810 x21: ffffffc008693c90
x20: 00000000ffffffff x19: ffffffc008693b40 x18: ffffffffffffffff
x17: 2c73657479622036 x16: 373538343031202c x15: ffffffc008701fcd
x14: 0000000000000000 x13: ffffffc0086a78f8 x12: 0000000000000072
x11: 0000000000000026 x10: ffffffc0086a78f8 x9 : ffffffc0086a78f8
x8 : 00000000ffffff7f x7 : ffffffc0086aa4f8 x6 : ffffffc0086aa4f8
x5 : ffffffc00869bca8 x4 : fffffffffefffffe x3 : 0000000000000000
x2 : 0000000000000002 x1 : ffffffc00869a100 x0 : 00000000800000c9
Call trace:
do_undefinstr+0x180/0x1a4
el1_undef+0x28/0x44
el1h_64_sync_handler+0x80/0xc0
el1h_64_sync+0x74/0x78
__dump_page+0x0/0x410
bad_page+0xe8/0x120
__free_pages_ok+0x418/0x460
__free_pages_core+0xa8/0xc0
memblock_free_pages+0x10/0x18
memblock_free_all+0x1ac/0x258
mem_init+0x70/0x88
mm_init+0x2c/0x78
start_kernel+0x260/0x64c
__primary_switched+0xa0/0xac
Code: b94037f4 a9025bf5 17ffffca a9025bf5 (d4210000)
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/traps.c:498!
I think my device tree has something wrong. These are the lines related to memory, and I don't know what is wrong. Can anyone notice anything suspicious?
memory@80000000 {
    device_type = "memory";
    /* reg = , trusted ram */
    /* , non-trusted ram */
    reg = ;
};
reserved-memory {
    #address-cells = ;
    #size-cells = ;
    ranges;
    axpu_reserved_mem: axpursvd@e0000000 {
        compatible = "shared-dma-pool";
        no-map;
        reg = ;
    };
    linux,cma {
        compatible = "shared-dma-pool";
        reusable;
        size = ;
        alloc-ranges = ;
        linux,cma-default;
    };
};
Chan Kim
(459 rep)
Jun 30, 2025, 03:13 AM
3
votes
1
answers
2380
views
Setting up cgroups with /etc/cgconfig.conf failed with Cgroup, requested group parameter does not exist
I'm looking at getting `cgroups` working on my Linux machine, and well, it's been a pain.
I feel like this should be a lot easier, as resource management is pretty key to a healthy desktop environment, which is why I'm trying to use it, but I've just run into so many problems.
I have a `/etc/cgconfig.conf` file that looks like this:
group "chromium_slack" {
perm {
admin {
uid = "nate";
gid = "nate";
}
task {
uid = "nate";
gid = "nate";
}
}
cpu {
shares="50";
}
memory {
swappiness="60";
limit_in_bytes="256000000";
}
}
And when I start the `cgconfig` service with this:
sudo systemctl start cgconfig.service
I get a service status `Cgroup, requested group parameter does not exist` that looks like this:
× cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; enabled; preset: disabled)
Active: failed (Result: exit-code) since Thu 2022-12-15 15:17:16 EST; 11min ago
Process: 9559 ExecStart=/usr/bin/cgconfigparser -l /etc/cgconfig.conf -s 1664 (code=exited, status=95)
Main PID: 9559 (code=exited, status=95)
CPU: 8ms
Dec 15 15:17:16 nx systemd: Starting Control Group configuration service...
Dec 15 15:17:16 nx cgconfigparser: /usr/bin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup, requested group parameter does not exist
Dec 15 15:17:16 nx systemd: cgconfig.service: Main process exited, code=exited, status=95/n/a
Dec 15 15:17:16 nx systemd: cgconfig.service: Failed with result 'exit-code'.
Dec 15 15:17:16 nx systemd: Failed to start Control Group configuration service.
But when I try to do this all manually with `cgcreate`, like so:
sudo cgcreate -a $USER -g memory,cpu:chromium_slack
sudo echo 256M > /sys/fs/cgroup/chromium_slack/memory.limit_in_bytes
I get a `permission denied: /sys/fs/cgroup/chromium_slack/memory.limit_in_bytes` error.
So I guess my question is... How the heck do I get this working?
Nate-Wilkins
(133 rep)
Dec 20, 2022, 02:35 AM
• Last activity: Jun 29, 2025, 10:00 PM
0
votes
1
answers
2411
views
How do I determine the nominal memory bandwidth of my system?
I'm running some (not so new) Linux distribution. I want to determine what the memory bandwidth of my system is: not the effective bandwidth I can get from benchmarking/testing, but the _nominal_ bandwidth, given my board, CPU sockets, memory channels and RAM DIMMs.
I should mention that when I try to figure this out in my head, I always get the calculations mixed up: gigabytes versus gigabits, transfers per second versus bytes per second, the number of channels versus the number of DIMMs, etc.
Note: If possible, assume I don't have utilities such as lshw or inxi installed.
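The arithmetic is transfer rate (MT/s) × 8 bytes per 64-bit channel × number of populated channels; for example, one DDR4-2400 channel gives 2400 × 8 / 1000 ≈ 19.2 GB/s. A hedged sketch pulling the figures from the DMI tables (assumes `dmidecode` is installed and prints "Configured Memory Speed" in MT/s, which varies across versions; it also counts each DIMM as a full channel, an over-estimate when two DIMMs share one):
```
sudo dmidecode -t memory | awk '
    /Configured Memory Speed: [0-9]/ { mts = $4; dimms++ }
    END { printf "%d DIMMs at %d MT/s -> ~%.1f GB/s upper bound\n",
          dimms, mts, dimms * mts * 8 / 1000 }'
```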
einpoklum
(10753 rep)
Jul 5, 2022, 11:01 AM
• Last activity: Jun 25, 2025, 02:05 PM
1
votes
1
answers
1938
views
How to use three monitors on a laptop with linux mint through hdmi and usb-c
I have an HP EliteBook 840 G7 Notebook, with Linux Mint 20.3.
My goal is to connect two more monitors to the machine.
I have a monitor connected via HDMI port, and a monitor connected via USB-C port.
Both monitors at the moment work if connected individually, but if I connect both, the one on HDMI takes over, and no signal goes to the one connected via USB-C.
The strange part is that I was able to work with three monitors for some time, up until (several months ago) I upgraded the system (from 20.1 to 20.2, I think) and I lost this ability.
After reading this answer, I tried changing settings in the BIOS (allocated bigger video memory, and enabled "high resolution mode", something involving USB devices in a dock and Gigabit NIC), but I got no results.
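A first diagnostic step that may help narrow this down (standard Xorg tools, offered only as a suggestion): compare what the driver detects with what the desktop actually drives while both cables are plugged in:
```
xrandr | grep -w connected   # outputs the driver sees as physically connected
xrandr --listmonitors        # outputs the X server is actually driving
```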
Fabio
(535 rep)
Dec 19, 2022, 11:26 AM
• Last activity: Jun 22, 2025, 08:06 AM
1
votes
1
answers
744
views
Constant hdd write but iotop shows nothing
The disk activity monitor widget in KDE (Debian) shows constant HDD writes of around 12 MiB/s. When I run `iotop`, there is nothing that would be constantly using the HDD. When I run `atop`, at first `PAG` is red and blinking, but after about 3 seconds it disappears. When I run `free -h`, I get:
total used free shared buff/cache available
Mem: 7.7Gi 2.2Gi 3.0Gi 1.1Gi 2.5Gi 4.2Gi
Swap: 7.9Gi 0.0Ki 7.9Gi
Any idea what can be causing this, or how to find out?
Also, I tried to clear the cache; it dropped to 1.5 Gi, but after less than 5 minutes it was back to 2.5 Gi as shown above. I'm also thinking that Debian is using quite a lot of memory, given that only Firefox with the Stack Exchange window is open.
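Two hedged ways to catch short-lived or kernel-side writers that iotop's default view misses (the device name sda is an assumption):
```
sudo iotop -aoP          # -a accumulate totals, -o only active, -P per process
cat /sys/block/sda/stat  # raw sector counters, to confirm writes at block level
```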
atapaka
(675 rep)
Sep 3, 2022, 02:56 PM
• Last activity: Jun 9, 2025, 05:57 PM
1
votes
1
answers
48
views
Obtain memory values in SAR the same way as in FREE
I'd like to know if I can get memory in `sar` the same way I get it from `free`.
Currently `free` shows me a memory usage of 47.06% ((16956/36027)×100, used/total × 100), whereas `sar` is showing me a usage of almost 100%. The benefit of `sar` is that on my system I can read values for the last 30 days, whereas `free` is just the current moment.
Is there any way to see in `sar` values similar to those shown by `free`?
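The gap is usually because sar's %memused counts buffers and cache as used, while free's "used" excludes them; a sketch that subtracts them out (column positions vary across sysstat versions; this assumes the layout `time kbmemfree kbmemused %memused kbbuffers kbcached ...`):
```
# Recompute a free-style "used" percentage from sar -r history.
sar -r | awk '$2 ~ /^[0-9]+$/ {
    used = $3 - $5 - $6
    printf "%s  ~%.1f%% used (free-style)\n", $1, used * 100 / ($2 + $3)
}'
```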

Álvaro
(111 rep)
Jun 7, 2025, 09:01 AM
• Last activity: Jun 8, 2025, 05:14 AM
611
votes
10
answers
1083608
views
How to display `top` results sorted by memory usage in real time?
How can I display the `top` results in my terminal in real time so that the list is sorted by memory usage?
How can I display the `top` results in my terminal in real time so that the list is sorted by memory usage?
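For reference, with procps-ng top (the implementation shipped by most Linux distributions) either of these works; other top implementations spell the option differently (macOS uses `top -o mem`):
```
top -o %MEM   # start already sorted by memory
# ...or press Shift+M inside a running top session
```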
Theodor Coogan
(6111 rep)
May 11, 2014, 08:27 PM
• Last activity: May 23, 2025, 01:08 PM
Showing page 1 of 20 total questions