
How are /proc/meminfo values calculated?

5 votes
1 answer
6587 views
/!\ Current state: Update 4 /!\

Some /proc/meminfo values are a sum or a difference of other values. However, not much is said about how they are calculated in these two places (just do ctrl-f meminfo to get there):

- https://www.kernel.org/doc/Documentation/filesystems/proc.txt
- man 5 proc

Besides, I've also dug here and there, and here's what I've found so far (a small sketch for checking these relations against a live system is at the end of this question):

- MemFree: LowFree + HighFree
- Active: Active(anon) + Active(file)
- Inactive: Inactive(anon) + Inactive(file)

I have not found much about the other fields, and where I have, the results don't match, as in these Stack Overflow posts:

- How to Calculate MemTotal in /proc/meminfo (2035272 kB vs expected 2034284 kB)
- Entry in /proc/meminfo - on Stack Overflow

Are those values correctly calculated, or is there some variability due to external factors? Also, some values obviously can't be calculated without external inputs, but I'm still interested in those.

How are /proc/meminfo values calculated?

----

If that helps, here's an example of /proc/meminfo:

```
MemTotal: 501400 kB
MemFree: 38072 kB
MemAvailable: 217652 kB
Buffers: 0 kB
Cached: 223508 kB
SwapCached: 11200 kB
Active: 179280 kB
Inactive: 181680 kB
Active(anon): 69032 kB
Inactive(anon): 70908 kB
Active(file): 110248 kB
Inactive(file): 110772 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal:
HighFree:
LowTotal:
LowFree:
MmapCopy:
SwapTotal: 839676 kB
SwapFree: 785552 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 128964 kB
Mapped: 21840 kB
Shmem: 2488 kB
Slab: 71940 kB
SReclaimable: 41372 kB
SUnreclaim: 30568 kB
KernelStack: 2736 kB
PageTables: 5196 kB
Quicklists:
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1090376 kB
Committed_AS: 486916 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 4904 kB
VmallocChunk: 34359721736 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages:
ShmemPmdMapped:
CmaTotal:
CmaFree:
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 36800 kB
DirectMap2M: 487424 kB
DirectMap4M:
DirectMap1G:
```

----

**Update 1**: Here's the code /proc/meminfo uses to fill in its data: http://elixir.free-electrons.com/linux/v4.15/source/fs/proc/meminfo.c#L46 However, since I'm not much of a coder, I'm having a hard time figuring out where the enums (e.g. NR_LRU_LISTS) and global variables (e.g. totalram_pages, read by si_meminfo in page_alloc.c#L4673) are filled in.

**Update 2**: The enums part is now solved: NR_LRU_LISTS equals 5. But the totalram_pages part seems to be harder to track down...

**Update 3**: It looks like I won't be able to read the code myself, as it is very complex. Whoever manages to do it and shows how /proc/meminfo values are calculated can have the bounty. The more detailed the answer, the higher the bounty will be.

**Update 4**: A year and a half later, I learned that one of the reasons behind this very question is in fact the infamous OOM (Out Of Memory) bug, which was finally acknowledged in August 2019 after **AT LEAST 16 YEARS** of "**wontfix**", when a well-known Linux figure (thank you again, Artem S Tashkinov! :) ) finally got the non-elitist Linux community's voices heard: "Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop".
Also, most Linux distributions have calculated the real **available** RAM more precisely since around 2017 (I hadn't updated my distro at the time of this question), even though the kernel fix landed in 3.14 (March 2014). That commit also gives a few more clues (a rough userspace re-derivation of its formula is sketched at the end of this question): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=34e431b0ae398fc54ea69ff85ec700722c9da773

But the OOM problem is still here in 2021, even if it happens less often thanks to somewhat stopgap fixes (earlyoom and systemd-oomd), while the calculated *available* RAM still doesn't correctly report the RAM really in use.

Also, these related questions might have some answers:

- https://unix.stackexchange.com/questions/440558/30-of-ram-is-buffers-what-is-it
- https://unix.stackexchange.com/questions/390518/what-do-the-buff-cache-and-avail-mem-fields-in-top-mean
- https://unix.stackexchange.com/questions/261247/how-can-i-get-the-amount-of-available-memory-portably-across-distributions

So my point in Update 3 about how /proc/meminfo gets its values still stands. **However**, there are more insights into the OOM issue at the next link, which also talks about a very promising project against it, one that even comes with a bit of a GUI: https://github.com/hakavlad/nohang The first tests I did seem to show that this nohang tool really does what it promises, and even better than earlyoom does.
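----

For what it's worth, the sum relations listed at the top of the question can at least be sanity-checked from userspace without reading kernel code. Below is a minimal sketch of my own (an illustration, not kernel code) that parses /proc/meminfo and compares the composite fields against their supposed parts. The MemFree = LowFree + HighFree relation is left out because LowFree/HighFree only exist on 32-bit highmem kernels (they are empty in my sample above); the Slab = SReclaimable + SUnreclaim row is an extra identity I believe holds, so treat it as an assumption:

```c
#include <stdio.h>
#include <string.h>

/* Look up one field of /proc/meminfo, in kB. Returns -1 if the field
 * is absent on this kernel (e.g. LowFree/HighFree on x86_64). */
static long meminfo_kb(const char *name)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256], key[64];
    long v, val = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "%63[^:]: %ld", key, &v) == 2 &&
            strcmp(key, name) == 0) {
            val = v;
            break;
        }
    }
    fclose(f);
    return val;
}

int main(void)
{
    /* Each row is one claimed identity: sum == a + b. */
    static const struct { const char *sum, *a, *b; } rel[] = {
        { "Active",   "Active(anon)",   "Active(file)"   },
        { "Inactive", "Inactive(anon)", "Inactive(file)" },
        { "Slab",     "SReclaimable",   "SUnreclaim"     },
    };

    for (unsigned i = 0; i < sizeof(rel) / sizeof(rel[0]); i++) {
        long s = meminfo_kb(rel[i].sum);
        long a = meminfo_kb(rel[i].a);
        long b = meminfo_kb(rel[i].b);

        /* The file is reopened per field, so values can move a little
         * between reads on a busy system; small drift is expected. */
        printf("%-10s %9ld kB  vs  %s + %s = %ld kB\n",
               rel[i].sum, s, rel[i].a, rel[i].b, a + b);
    }
    return 0;
}
```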
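And for MemAvailable, here is a rough userspace re-derivation of the estimate described in the 3.14 commit message linked in Update 4: free memory above the low watermark, plus the halves of the page cache and of SReclaimable that the kernel considers cheap to reclaim. Summing the per-zone "low" watermark lines of /proc/zoneinfo is my own approximation of the kernel's wmark_low, so expect the result to land close to, but not exactly on, the kernel's value:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Same /proc/meminfo helper as in the previous sketch. */
static long meminfo_kb(const char *name)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256], key[64];
    long v, val = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "%63[^:]: %ld", key, &v) == 2 &&
            strcmp(key, name) == 0) {
            val = v;
            break;
        }
    }
    fclose(f);
    return val;
}

/* Sum the per-zone "low" watermarks from /proc/zoneinfo and convert
 * pages to kB. Assumes the only lines matching " low <n>" are the
 * watermark lines, which holds on the kernels I have looked at. */
static long low_watermark_kb(void)
{
    FILE *f = fopen("/proc/zoneinfo", "r");
    char line[256];
    long v, pages = 0;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, " low %ld", &v) == 1)
            pages += v;
    fclose(f);
    return pages * (sysconf(_SC_PAGESIZE) / 1024);
}

static long min_l(long a, long b) { return a < b ? a : b; }

int main(void)
{
    long wmark     = low_watermark_kb();
    long memfree   = meminfo_kb("MemFree");
    long filepages = meminfo_kb("Active(file)") + meminfo_kb("Inactive(file)");
    long sreclaim  = meminfo_kb("SReclaimable");

    /* The commit's estimate: free pages above the watermark, plus the
     * easily reclaimable halves of the page cache and of slab. */
    long est = memfree - wmark
             + filepages - min_l(filepages / 2, wmark)
             + sreclaim  - min_l(sreclaim  / 2, wmark);

    printf("estimated MemAvailable: %ld kB\n", est);
    printf("kernel's  MemAvailable: %ld kB\n", meminfo_kb("MemAvailable"));
    return 0;
}
```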
Asked by X.LINK (1362 rep)
Jan 31, 2018, 01:14 AM
Last activity: Feb 16, 2021, 10:10 AM