Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4
votes
1
answers
180
views
Why are 6GB unavailable on 32GB? (Debian 12)
(This is updated as new information comes in.)
My HP Proliant ML350 Gen 10 server has 32 GB of RAM, which seems to be detected correctly.
But only 26 GB are available, according to free, htop and /proc/meminfo…
free -m
total used free shared buff/cache available
Mem: 31755 6185 25533 6 436 25570
Swap: 976 0
MemTotal: 32518056 kB
MemFree: 26173304 kB
MemAvailable: 26250792 kB
OS : Debian 12 bookworm
Other details and command outputs are too long for this forum, I've put them here:
https://semestriel.framapad.org/p/phqm1kqhph-afqg?lang=fr
Here are the different tests/ideas/options:
* These are not buffers.
* tmpfs system mounts are almost empty.
* I have no huge pages.
* I have killed or uninstalled more or less everything memory-consuming that I could find.
* I have no graphic card (VGA), and only one CPU.
* No ZRAM
* No ZFS, only ext4
* I've removed the NVMe disk and the SSD drives. The OS lives on a 128 GB SD card.
* A Live Debian with nothing installed shows the same already-used 6GB (with all graphical interfaces off)
* Same problem in emergency mode.
* An old Kali live USB with a 5.10 kernel does not show this, but a new Kali with Linux 6.12 shows the same problem as Debian 12. Kali is Linux-based, so this is consistent. It confirms that the RAM is not being eaten by software that I have installed.
* I've erased the LVM partitions, removed the disks (physically)
* I've deactivated iLO (the remote administration interface from HP): /proc/iomem does not show any memory released. But it should not amount to more than 300 MB anyway.
I have discovered things like /proc/iomem or /proc/modules, far from my area of expertise, without finding huge amounts of hidden memory.
I've found [this thread](https://unix.stackexchange.com/a/730970) on memory used by the kernel, but this is a 1.5% loss, not 20%!
I found [this other thread on "non-cached kernel dynamic memory"](https://unix.stackexchange.com/questions/62066/what-is-kernel-dynamic-memory-as-reported-by-smem), which in fact does not explain anything.
* By comparison, an (empty) Vagrant machine with 32GB RAM shows 32'476'804 available.
* And my laptop with 64 GB RAM, Debian 12 too, can go down to less than 2GB used in emergency mode.
I would really like to know where these 6 GB have gone, and whether I can recover part of them.
And yes, the system really feels like it has only 25-26 GB. As a test, I created 24 GB of static huge pages (more seems impossible), and the free/available memory became almost nothing.
(I found this problem in a test where a PostgreSQL cluster with 24 GB of shared_buffers collapses, with massive I/O wait on the slow root partition.)
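For reference, a rough sketch of one more check along those lines: sum the "System RAM" ranges in /proc/iomem and compare them with MemTotal (this assumes GNU awk for strtonum, and needs root to see real addresses):

```
# Rough sketch: physical RAM the kernel sees in /proc/iomem vs MemTotal.
# Memory reserved before MemTotal is computed (firmware, crashkernel, ...)
# shows up as the difference. Requires root and GNU awk (strtonum).
sudo awk '/System RAM/ {
    split($1, r, "-")
    sum += strtonum("0x" r[2]) - strtonum("0x" r[1]) + 1
} END {
    printf "System RAM in iomem: %.0f MiB\n", sum / 1048576
}' /proc/iomem
grep MemTotal /proc/meminfo
```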
Thank you for any advice.
Krysztt
(61 rep)
Jul 20, 2025, 06:57 PM
• Last activity: Jul 23, 2025, 10:11 AM
1
votes
1
answers
5524
views
Why is VmallocTotal 34359738367 kB?
/proc/meminfo has a memory statistic VmallocTotal. It is described as
> Total size of vmalloc memory area.
in [proc's man page](https://man7.org/linux/man-pages/man5/proc.5.html)
and elsewhere as
> Total memory available in kernel for vmalloc allocations
It sparked my curiosity because it is a very high number, and everywhere I searched it is exactly 34359738367 kB. It seems like an arbitrary maximum. But what is the significance of 34359738367 kB? It is not a multiple of 2 and not a prime number, but it is 0x7FFFFFFFF in hexadecimal. I also noticed that pmap process memory map addresses max out at 0x7FFFFFFFF. But then what is the practical significance of 0x7FFFFFFFF?
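For what it's worth, a little arithmetic (just a sketch, not an authoritative answer) shows what the number is:

```
# 34359738367 kB is 2^35 - 1 kB, i.e. essentially 32 TiB, which matches the
# size documented for the x86_64 vmalloc area with 4-level page tables.
echo $(( 34359738367 + 1 ))   # 34359738368
echo $(( 1 << 35 ))           # 34359738368 kB = 32 TiB
printf '%x\n' 34359738367     # 7ffffffff
```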
Arthur Tarasov
(113 rep)
Apr 28, 2022, 01:46 PM
• Last activity: Mar 25, 2025, 11:10 AM
0
votes
0
answers
89
views
What's left unaccounted by /proc/meminfo
What is the proper formula to understand distribution of MemTotal - MemFree?
On a system where Huge pages are almost not utilized I assume that it should be the sum of the following stats:
* page cache ('Buffers', 'Cached', 'SwapCached')
* kernel memory ('VmallocUsed', 'Slab', 'PageTables', 'KernelStack')
* user-space allocated memory ('AnonPages')
On my system:
MemTotal - MemFree = 3970852
Buffers + Cached + SwapCached + VmallocUsed + Slab + PageTables + KernelStack + AnonPages = 1747364
Why is the difference so huge?
Is it possible that something is not included in the meminfo stats?
As stated here:
> The memory reported by the non overlapping counters may not
add up to the overall memory usage and the difference for some workloads
can be substantial.
Full meminfo:
MemTotal 4040824
MemFree 69972
MemAvailable 116364
Buffers 35720
Cached 118712
SwapCached 67364
Active 625652
Inactive 827128
Active(anon) 566120
Inactive(anon) 757488
Active(file) 59532
Inactive(file) 69640
Unevictable 1396
Mlocked 1396
SwapTotal 7340028
SwapFree 5805036
Dirty 256
Writeback 136
AnonPages 1280096
Mapped 86192
Shmem 23924
Slab 219608
SReclaimable 89480
SUnreclaim 130128
KernelStack 12096
PageTables 13768
NFS_Unstable 0
Bounce 0
WritebackTmp 0
CommitLimit 9360440
Committed_AS 6649808
VmallocTotal 263061440
VmallocUsed 0
VmallocChunk 0
Percpu 2144
AnonHugePages 2048
ShmemHugePages 0
ShmemPmdMapped 0
HugePages_Total 0
HugePages_Free 0
HugePages_Rsvd 0
HugePages_Surp 0
Hugepagesize 2048
Hugetlb 0
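For reference, a small sketch (assuming the field names listed above) that recomputes both sides of the comparison directly from a live /proc/meminfo:

```
# Rough sketch: MemTotal - MemFree vs the sum of the counters listed above
# (the live file spells the fields with a trailing colon).
awk '{ v[$1] = $2 }
  END {
    used = v["MemTotal:"] - v["MemFree:"]
    acc  = v["Buffers:"] + v["Cached:"] + v["SwapCached:"]
    acc += v["VmallocUsed:"] + v["Slab:"] + v["PageTables:"]
    acc += v["KernelStack:"] + v["AnonPages:"]
    printf "used: %d kB  accounted: %d kB  gap: %d kB\n", used, acc, used - acc
  }' /proc/meminfo
```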
Maksym L
(1 rep)
May 23, 2024, 04:03 PM
5
votes
2
answers
1671
views
How to diagnose high `Shmem`? (was: Why `echo 3 > drop_caches` cannot zero the cache?)
I have a problem with my Linux machine where the system now seems to run out of RAM easily (and trigger the OOM killer) when it normally handles a similar load just fine. Inspecting free -tm shows that buff/cache is eating lots of RAM. Normally this would be fine because I want to cache disk IO, but it now seems that the kernel cannot release this memory even when the system is running out of RAM.
The system currently looks like this:
total used free shared buff/cache available
Mem: 31807 15550 1053 14361 15203 1707
Swap: 993 993 0
Total: 32801 16543 1053
but when I try to force the cache to be released I get this:
$ grep -E "^MemTotal|^Cached|^Committed_AS" /proc/meminfo
MemTotal: 32570668 kB
Cached: 15257208 kB
Committed_AS: 47130080 kB
$ time sync
real 0m0.770s
user 0m0.000s
sys 0m0.002s
$ time echo 3 | sudo tee /proc/sys/vm/drop_caches
3
real 0m3.587s
user 0m0.008s
sys 0m0.680s
$ grep -E "^MemTotal|^Cached|^Committed_AS" /proc/meminfo
MemTotal: 32570668 kB
Cached: 15086932 kB
Committed_AS: 47130052 kB
So writing all dirty pages to disks and dropping all caches was only able to release about 130 MB out of 15 GB cache? As you can see, I'm running pretty heavy overcommit already so I really cannot waste 15 GB of RAM for a non-working cache.
Kernel slabtop also claims to use less than 600 MB:
$ sudo slabtop -sc -o | head
Active / Total Objects (% used) : 1825203 / 2131873 (85.6%)
Active / Total Slabs (% used) : 57745 / 57745 (100.0%)
Active / Total Caches (% used) : 112 / 172 (65.1%)
Active / Total Size (% used) : 421975.55K / 575762.55K (73.3%)
Minimum / Average / Maximum Object : 0.01K / 0.27K / 16.69K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
247219 94755 0% 0.57K 8836 28 141376K radix_tree_node
118864 118494 0% 0.69K 5168 23 82688K xfrm_state
133112 125733 0% 0.56K 4754 28 76064K ecryptfs_key_record_cache
$ cat /proc/version_signature
Ubuntu 5.4.0-80.90~18.04.1-lowlatency 5.4.124
$ cat /proc/meminfo
MemTotal: 32570668 kB
MemFree: 1009224 kB
MemAvailable: 0 kB
Buffers: 36816 kB
Cached: 15151936 kB
SwapCached: 760 kB
Active: 13647104 kB
Inactive: 15189688 kB
Active(anon): 13472248 kB
Inactive(anon): 14889144 kB
Active(file): 174856 kB
Inactive(file): 300544 kB
Unevictable: 117868 kB
Mlocked: 26420 kB
SwapTotal: 1017824 kB
SwapFree: 696 kB
Dirty: 200 kB
Writeback: 0 kB
AnonPages: 13765260 kB
Mapped: 879960 kB
Shmem: 14707664 kB
KReclaimable: 263184 kB
Slab: 601400 kB
SReclaimable: 263184 kB
SUnreclaim: 338216 kB
KernelStack: 34200 kB
PageTables: 198116 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 17303156 kB
Committed_AS: 47106156 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 67036 kB
VmallocChunk: 0 kB
Percpu: 1840 kB
HardwareCorrupted: 0 kB
AnonHugePages: 122880 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 9838288 kB
DirectMap2M: 23394304 kB
**Can you suggest any explanation for what could be causing Cached in /proc/meminfo to take about 50% of the system RAM without the ability to release it?** I know that PostgreSQL shared_buffers with huge pages enabled would show up as Cached, but I'm not running PostgreSQL on this machine. I see that Shmem in meminfo looks suspiciously big, but how do I figure out which processes are using it?
I guess it could be some misbehaving program, but how can I query the system to figure out which process is holding that RAM? I currently have 452 processes / 2144 threads, so investigating all of those manually would be a huge task.
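A rough sketch of one way to attribute Shmem to processes without closing them one by one: sum the Rss of mappings that look shmem/tmpfs-backed per process from /proc/*/smaps (the path patterns below are assumptions and will not catch every kind of shmem mapping):

```
# Rough sketch: per-process Rss of mappings whose path looks shmem/tmpfs-backed
# (/dev/shm, memfd:, SysV shm). Not exhaustive, but a starting point.
for p in /proc/[0-9]*; do
  sudo awk -v pid="${p#/proc/}" '
    /^[0-9a-f]+-[0-9a-f]+ / { keep = ($0 ~ /\/dev\/shm\/|memfd:|\/SYSV/) }
    keep && /^Rss:/          { kb += $2 }
    END { if (kb) printf "%10d kB  pid %s\n", kb, pid }
  ' "$p/smaps" 2>/dev/null
done | sort -n | tail
```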
I also checked that the cause of this RAM usage is not (only?) System V shared memory:
$ ipcs -m | awk 'BEGIN{ sum=0 } { sum += $5 } END{print sum}'
1137593612
While the total bytes reported by ipcs is big, it's still "only" 1.1 GB.
I also found a similar question https://askubuntu.com/questions/762717/high-shmem-memory-usage where high Shmem usage was caused by crap in a tmpfs-mounted directory. However, that doesn't seem to be the problem with my system either, as tmpfs is using only 221 MB:
$ df -h -B1M | grep tmpfs
tmpfs 3181 3 3179 1% /run
tmpfs 15904 215 15689 2% /dev/shm
tmpfs 5 1 5 1% /run/lock
tmpfs 15904 0 15904 0% /sys/fs/cgroup
tmpfs 3181 1 3181 1% /run/user/1000
tmpfs 3181 1 3181 1% /run/user/1001
I found another answer that explained that files which used to live on a tmpfs filesystem and have already been deleted, but whose file handles are still open, don't show up in df output yet still eat RAM. I found out that Google Chrome wastes about 1.6 GB on deleted files that it has forgotten(?) to close:
$ sudo lsof -n | grep "/dev/shm" | grep deleted | grep -o 'REG.*' | awk 'BEGIN{sum=0}{sum+=$3}END{print sum}'
1667847810
(Yeah, the above doesn't filter for chrome, but I also tested with filtering and it's pretty much just Google Chrome wasting my RAM via deleted files with open file handles.)
**Update:** It seems that the real culprit is Shmem: 14707664 kB. Deleted files in tmpfs explain 1.6 GB, System V shared memory explains 1.1 GB, and existing files in tmpfs about 220 MB. So I'm still missing about 11.8 GB somewhere.
**At least with Linux kernel 5.4.124 it appears that Cached includes all of Shmem, which explains why echo 3 > drop_caches cannot zero the Cached field even though it does free the cache.**
So the real question is: why is Shmem taking over 10 GB of RAM when I wasn't expecting any?
**Update:** I checked out top and found that the fields RSan ("RES Anonymous") and RSsh ("RES Shared") pointed to thunderbird and Eclipse. Closing Thunderbird didn't release any cached memory, but closing Eclipse freed 3.9 GB of Cached. I'm running Eclipse with the JVM flag -Xmx4000m, so it seems that JVM memory usage may appear as Cached! I'd still prefer to find a method to map memory usage to processes instead of randomly closing processes and checking whether that freed any memory.
**Update:** File systems that use tmpfs behind the scenes could also cause Shmem to increase. I tested it like this:
$ df --output=used,source,fstype -B1M | grep -v '/dev/sd' | grep -v ecryptfs | tail -n +2 | awk 'BEGIN{sum=0}{sum+=$1}END{print sum}'
4664
So it seems that even if I only exclude filesystems backed by real block devices (my ecryptfs is mounted on those block devices, too), I can only explain about 4.7 GB of lost memory. And 4.3 GB of that is explained by snapd-created squashfs mounts, which to my knowledge do not use Shmem.
**Update:** For some people, the explanation has been GEM objects reserved by the GPU driver. There doesn't seem to be any standard interface to query these, but for my Intel integrated graphics I get the following results:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | perl -npe 's#([0-9]+) bytes#sprintf("%.1f", $1/1024/1024)." MB"#e'
1166 shrinkable [0 free] objects, 776.8 MB
Xorg: 114144 objects, 815.9 MB (38268928 active, 166658048 inactive, 537980928 unbound, 0 closed)
calibre-paralle: 1 objects, 0.0 MB (0 active, 0 inactive, 32768 unbound, 0 closed)
Xorg: 595 objects, 1329.9 MB (0 active, 19566592 inactive, 1360146432 unbound, 0 closed)
chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed)
chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed)
chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed)
firefox: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
GLXVsyncThread: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
chrome: 1100 objects, 635.1 MB (0 active, 0 inactive, 180224 unbound, 0 closed)
chrome: 1100 objects, 635.1 MB (0 active, 665772032 inactive, 180224 unbound, 0 closed)
chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed)
[k]contexts: 3 objects, 0.0 MB (0 active, 40960 inactive, 0 unbound, 0 closed)
Those results do not look sensible to me. If each of those lines were an actual memory allocation, the total would be in hundreds of gigabytes!
Even if I assume that the GPU driver just reports some lines multiple times, I get this:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | sort | uniq | perl -npe 's#([0-9]+) bytes#sprintf("%.1f", $1/1024/1024)." MB"#e'
1218 shrinkable [0 free] objects, 797.6 MB
calibre-paralle: 1 objects, 0.0 MB (0 active, 0 inactive, 32768 unbound, 0 closed)
chrome: 1134 objects, 645.0 MB (0 active, 0 inactive, 163840 unbound, 0 closed)
chrome: 1134 objects, 645.0 MB (0 active, 676122624 inactive, 163840 unbound, 0 closed)
chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed)
chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed)
firefox: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
GLXVsyncThread: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
[k]contexts: 2 objects, 0.0 MB (0 active, 24576 inactive, 0 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Xorg: 114162 objects, 826.8 MB (0 active, 216350720 inactive, 537980928 unbound, 0 closed)
Xorg: 594 objects, 1329.8 MB (14794752 active, 4739072 inactive, 1360146432 unbound, 0 closed)
That's still way over the expected total, which should be in the 4-8 GB range. (The system currently has two seats logged in, so I'm expecting to see two Xorg processes.)
**Update:** Looking at the GPU debug output a bit more, I now think that those unbound numbers mean virtual blocks without actual RAM used. If I do this, I get more sensible numbers for GPU memory usage:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | perl -npe 's#^(.*?): .*?([0-9]+) bytes.*?([0-9]+) unbound.*#sprintf("%s: %.1f", $1, ($2-$3)/1024/1024)." MB"#eg' | grep -v '0.0 MB'
1292 shrinkable [0 free] objects, 848957440 bytes
Xorg: 303.1 MB
Xorg: 32.7 MB
chrome: 667.5 MB
chrome: 667.5 MB
That could explain about 1.5 GB of RAM, which seems normal for the data I'm handling. I'm still missing multiple gigabytes somewhere!
**Update:** I'm currently thinking that the problem is actually caused by deleted files backed by RAM. These could be caused by broken software that leaks an open file handle after deleting/discarding the file. When I run
$ sudo lsof -n | grep -Ev ' /home/| /tmp/| /lib/| /usr/' | grep deleted | grep -o " REG .*" | awk 'BEGIN{sum=0}{sum+=$3}END{print sum / 1024 / 1024 " MB"}'
4560.65 MB
(The manually collected list of path prefixes is actually backed by real block devices - since my root is backed by a real block device, I cannot just list all the block mount points here. A more clever script could list all non-mount-point directories in the root and also list all block mounts longer than just / here.)
This explains nearly 4.6 GB of lost RAM. Combined with the output from ipcs, GPU RAM (with the assumption about unbound memory) and tmpfs usage, I'm still missing about 4 GB of Shmem somewhere!
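Along the same lines, a sketch (the path prefixes are just the tmpfs-style locations used above, nothing canonical) that groups deleted, RAM-backed files by the process still holding them open:

```
# Rough sketch: group deleted tmpfs-style files that are still open by the
# process holding the file descriptor; sizes come from stat on /proc/*/fd/*.
sudo sh -c '
  for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd") || continue
    case "$target" in
      /dev/shm/*" (deleted)"|/run/*" (deleted)"|/tmp/*" (deleted)")
        pid=${fd#/proc/}; pid=${pid%%/*}
        size=$(stat -Lc %s "$fd" 2>/dev/null) || continue
        echo "$pid $size" ;;
    esac
  done
' | awk '{ kb[$1] += $2 / 1024 }
  END { for (p in kb) printf "%10d kB  pid %s\n", kb[p], p }' | sort -n | tail
```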
Mikko Rantalainen
(4399 rep)
Aug 29, 2021, 07:17 PM
• Last activity: May 8, 2023, 08:07 AM
308
votes
9
answers
471432
views
How to display meminfo in megabytes in top?
Sometimes it is not comfortable to see meminfo in kilobytes when you have several gigs of RAM. In Linux, it looks like:
[screenshot: top, with memory stats all scaled to kB]
And here is how it looks in Mac OS X:
[screenshot: top, with memory stats scaled to MB and GB]
Is there a way to display meminfo in Linux top in terabytes, gigabytes and megabytes?
Anthony Ananich
(7492 rep)
Dec 19, 2013, 03:44 PM
• Last activity: Feb 25, 2023, 08:37 AM
2
votes
1
answers
1225
views
Linux server high memory usage without applications
I have an Ubuntu 20.04.4 server with 32GB RAM.
The server is running a bunch of LXD containers and two VMs (libvirt+qemu+kvm).
After startup, with all services running, the RAM utilization is about ~12GB.
After 3-4 weeks the RAM utilization reaches ~90%.
If I stop all containers and VMs the utilization is still ~20GB.
However, I cannot figure out who is claiming this memory.
I have already tried clearing the cache, but that doesn't change much.
I compiled the kernel with support for *kmemleak*, but it did not detect anything useful; it does show up in slabtop, though.
systemd-cgtop:
/ 593 - 23.7G - -
machine.slice - - 1.4G - -
system.slice 116 - 301.1M - -
user.slice 11 - 141.9M - -
user.slice/user-1000.slice 11 - 121.6M - -
system.slice/systemd-journald.service 1 - 83.8M - -
user.slice/user-1000.slice/session-297429.scope 5 - 81.0M - -
system.slice/libvirtd.service 22 - 46.2M - -
user.slice/user-1000.slice/user@1000.service 6 - 39.8M - -
system.slice/snapd.service 36 - 19.8M - -
system.slice/cron.service 1 - 19.3M - -
init.scope 1 - 14.0M - -
system.slice/systemd-udevd.service 1 - 13.2M - -
system.slice/multipathd.service 7 - 10.8M - -
system.slice/NetworkManager.service 3 - 5.8M - -
system.slice/networkd-dispatcher.service 1 - 5.4M - -
system.slice/ssh.service 1 - 5.0M - -
system.slice/ModemManager.service 3 - 4.5M - -
system.slice/systemd-networkd.service 1 - 3.5M - -
system.slice/accounts-daemon.service 3 - 3.5M - -
system.slice/udisks2.service 5 - 3.4M - -
system.slice/polkit.service 3 - 3.0M - -
system.slice/rsyslog.service 4 - 2.8M - -
system.slice/systemd-resolved.service 1 - 2.4M - -
system.slice/unattended-upgrades.service 2 - 1.8M - -
system.slice/dbus.service 1 - 1.8M - -
system.slice/systemd-logind.service 1 - 1.7M - -
system.slice/smartmontools.service 1 - 1.5M - -
system.slice/systemd-machined.service 1 - 1.5M - -
system.slice/systemd-timesyncd.service 2 - 1.4M - -
system.slice/virtlogd.service 1 - 1.3M - -
system.slice/rtkit-daemon.service 3 - 1.2M - -
/proc/meminfo:
MemTotal: 32718604 kB
MemFree: 11480728 kB
MemAvailable: 11612788 kB
Buffers: 28 kB
Cached: 144512 kB
SwapCached: 855404 kB
Active: 520504 kB
Inactive: 541588 kB
Active(anon): 441708 kB
Inactive(anon): 484240 kB
Active(file): 78796 kB
Inactive(file): 57348 kB
Unevictable: 18664 kB
Mlocked: 18664 kB
SwapTotal: 33043136 kB
SwapFree: 32031680 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 94680 kB
Mapped: 126592 kB
Shmem: 660 kB
KReclaimable: 432484 kB
Slab: 10784740 kB
SReclaimable: 432484 kB
SUnreclaim: 10352256 kB
KernelStack: 10512 kB
PageTables: 5052 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 49402436 kB
Committed_AS: 1816364 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 152512 kB
VmallocChunk: 0 kB
Percpu: 8868864 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 19383100 kB
DirectMap2M: 14053376 kB
DirectMap1G: 0 kB
slabtop:
Active / Total Objects (% used) : 30513607 / 33423869 (91.3%)
Active / Total Slabs (% used) : 1384092 / 1384092 (100.0%)
Active / Total Caches (% used) : 123 / 203 (60.6%)
Active / Total Size (% used) : 9965969.20K / 10757454.91K (92.6%)
Minimum / Average / Maximum Object : 0.01K / 0.32K / 16.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
27156909 26001970 95% 0.30K 1194104 26 9552832K kmemleak_object
754624 742232 98% 0.06K 11791 64 47164K kmalloc-64
654675 278378 42% 0.57K 23382 28 374112K radix_tree_node
593436 348958 58% 0.08K 11636 51 46544K Acpi-State
559744 418325 74% 0.03K 4373 128 17492K kmalloc-32
496320 483104 97% 0.12K 15510 32 62040K kernfs_node_cache
487104 155952 32% 0.06K 7611 64 30444K vmap_area
394240 165965 42% 0.14K 14080 28 56320K btrfs_extent_map
355580 342674 96% 0.09K 7730 46 30920K trace_event_file
339573 338310 99% 4.00K 42465 8 1358880K kmalloc-4k
306348 154794 50% 0.19K 7410 42 59280K dentry
145931 104400 71% 1.13K 11552 28 369664K btrfs_inode
137728 137174 99% 0.02K 538 256 2152K kmalloc-16
112672 74034 65% 0.50K 3671 32 58736K kmalloc-512
102479 62366 60% 0.30K 4093 26 32744K btrfs_delayed_node
68880 66890 97% 2.00K 4305 16 137760K kmalloc-2k
66656 48345 72% 0.25K 2083 32 16664K kmalloc-256
64110 47818 74% 0.59K 2376 27 38016K inode_cache
50176 50176 100% 0.01K 98 512 392K kmalloc-8
44710 43744 97% 0.02K 263 170 1052K lsm_file_cache
43056 11444 26% 0.25K 1418 32 11344K pool_workqueue
36480 29052 79% 0.06K 570 64 2280K kmalloc-rcl-64
33920 25846 76% 0.06K 530 64 2120K anon_vma_chain
24822 14264 57% 0.19K 832 42 6656K kmalloc-192
23552 23552 100% 0.03K 184 128 736K fsnotify_mark_connector
23517 17994 76% 0.20K 603 39 4824K vm_area_struct
19572 14909 76% 0.09K 466 42 1864K kmalloc-rcl-96
18262 15960 87% 0.09K 397 46 1588K anon_vma
14548 12905 88% 1.00K 459 32 14688K kmalloc-1k
14162 14162 100% 0.05K 194 73 776K file_lock_ctx
13104 12141 92% 0.09K 312 42 1248K kmalloc-96
13062 13062 100% 0.19K 311 42 2488K cred_jar
13056 10983 84% 0.12K 408 32 1632K kmalloc-128
12192 8922 73% 0.66K 508 24 8128K proc_inode_cache
11730 11444 97% 0.69K 1444 46 46208K squashfs_inode_cache
11067 11067 100% 0.08K 217 51 868K task_delay_info
10752 10752 100% 0.03K 84 128 336K kmemleak_scan_area
10656 8666 81% 0.25K 333 32 2664K filp
10252 10252 100% 0.18K 235 44 1880K kvm_mmu_page_header
10200 10200 100% 0.05K 120 85 480K ftrace_event_field
10176 10176 100% 0.12K 318 32 1272K pid
9906 9906 100% 0.10K 254 39 1016K Acpi-ParseExt
9600 9213 95% 0.12K 300 32 1200K kmalloc-rcl-128
9520 9520 100% 0.07K 170 56 680K Acpi-Operand
8502 8063 94% 0.81K 218 39 6976K sock_inode_cache
7733 7733 100% 0.70K 169 46 5408K shmem_inode_cache
7392 7231 97% 0.19K 176 42 1408K skbuff_ext_cache
6552 6552 100% 0.19K 163 42 1304K kmalloc-rcl-192
6480 6480 100% 0.11K 180 36 720K khugepaged_mm_slot
6144 6144 100% 0.02K 24 256 96K ep_head
5439 5439 100% 0.42K 147 37 2352K btrfs_ordered_extent
5248 4981 94% 0.25K 164 32 1312K skbuff_head_cache
4792 4117 85% 4.00K 606 8 19392K biovec-max
4326 4326 100% 0.19K 103 42 824K proc_dir_entry
4125 4125 100% 0.24K 125 33 1000K tw_sock_TCPv6
3978 3978 100% 0.10K 102 39 408K buffer_head
3975 3769 94% 0.31K 159 25 1272K mnt_cache
3328 3200 96% 1.00K 104 32 3328K RAW
3136 3136 100% 1.12K 112 28 3584K signal_cache
3072 2560 83% 0.03K 24 128 96K dnotify_struct
2910 2820 96% 1.06K 97 30 3104K UNIX
2522 2396 95% 1.19K 97 26 3104K RAWv6
2448 2448 100% 0.04K 24 102 96K pde_opener
2400 2400 100% 0.50K 75 32 1200K skbuff_fclone_cache
2112 2080 98% 1.00K 66 32 2112K biovec-64
1695 1587 93% 2.06K 113 15 3616K sighand_cache
1518 1518 100% 0.69K 33 46 1056K files_cache
1500 1500 100% 0.31K 60 25 480K nf_conntrack
1260 894 70% 6.06K 252 5 8064K task_struct
1260 1260 100% 1.06K 42 30 1344K mm_struct
1222 1158 94% 2.38K 94 13 3008K TCPv6
1150 1150 100% 0.34K 25 46 400K taskstats
924 924 100% 0.56K 33 28 528K task_group
888 888 100% 0.21K 24 37 192K file_lock_cache
864 864 100% 0.11K 24 36 96K btrfs_trans_handle
855 855 100% 2.19K 62 14 1984K TCP
851 851 100% 0.42K 23 37 368K uts_namespace
816 816 100% 0.12K 24 34 96K seq_file
816 816 100% 0.04K 8 102 32K ext4_extent_status
792 792 100% 0.24K 24 33 192K tw_sock_TCP
782 782 100% 0.94K 23 34 736K mqueue_inode_cache
720 720 100% 0.13K 24 30 96K pid_namespace
704 704 100% 0.06K 11 64 44K kmem_cache_node
648 648 100% 1.16K 24 27 768K perf_event
640 640 100% 0.12K 20 32 80K scsi_sense_cache
624 624 100% 0.30K 24 26 192K request_sock_TCP
624 624 100% 0.15K 24 26 96K fuse_request
596 566 94% 8.00K 149 4 4768K kmalloc-8k
576 576 100% 1.31K 24 24 768K UDPv6
494 494 100% 0.30K 19 26 152K request_sock_TCPv6
480 480 100% 0.53K 16 30 256K user_namespace
432 432 100% 1.15K 16 27 512K ext4_inode_cache
416 416 100% 0.25K 13 32 104K kmem_cache
416 416 100% 0.61K 16 26 256K hugetlbfs_inode_cache
390 390 100% 0.81K 10 39 320K fuse_inode
306 306 100% 0.04K 3 102 12K bio_crypt_ctx
292 292 100% 0.05K 4 73 16K mbcache
260 260 100% 1.56K 13 20 416K bdev_cache
256 256 100% 0.02K 1 256 4K jbd2_revoke_table_s
232 232 100% 4.00K 29 8 928K names_cache
192 192 100% 1.98K 12 16 384K request_queue
170 170 100% 0.02K 1 170 4K mod_hash_entries
168 168 100% 4.12K 24 7 768K net_namespace
155 155 100% 0.26K 5 31 40K numa_policy
132 132 100% 0.72K 3 44 96K fat_inode_cache
128 128 100% 0.25K 4 32 32K dquot
128 128 100% 0.06K 2 64 8K ext4_io_end
108 108 100% 2.61K 9 12 288K x86_emulator
84 84 100% 0.19K 2 42 16K ext4_groupinfo_4k
68 68 100% 0.12K 2 34 8K jbd2_journal_head
68 68 100% 0.12K 2 34 8K abd_t
64 64 100% 8.00K 16 4 512K irq_remap_cache
64 64 100% 2.00K 4 16 128K biovec-128
63 63 100% 4.06K 9 7 288K x86_fpu
56 56 100% 0.07K 1 56 4K fsnotify_mark
56 56 100% 0.14K 2 28 8K ext4_allocation_context
42 42 100% 0.75K 1 42 32K dax_cache
40 40 100% 0.20K 1 40 8K ip4-frags
36 36 100% 7.86K 9 4 288K kvm_vcpu
30 30 100% 1.06K 1 30 32K dmaengine-unmap-128
24 24 100% 0.66K 1 24 16K ovl_inode
15 15 100% 2.06K 1 15 32K dmaengine-unmap-256
6 6 100% 16.00K 3 2 96K zio_buf_comb_16384
0 0 0% 0.01K 0 512 0K kmalloc-rcl-8
0 0 0% 0.02K 0 256 0K kmalloc-rcl-16
0 0 0% 0.03K 0 128 0K kmalloc-rcl-32
0 0 0% 0.25K 0 32 0K kmalloc-rcl-256
0 0 0% 0.50K 0 32 0K kmalloc-rcl-512
0 0 0% 1.00K 0 32 0K kmalloc-rcl-1k
0 0 0% 2.00K 0 16 0K kmalloc-rcl-2k
0 0 0% 4.00K 0 8 0K kmalloc-rcl-4k
0 0 0% 8.00K 0 4 0K kmalloc-rcl-8k
0 0 0% 0.09K 0 42 0K dma-kmalloc-96
0 0 0% 0.19K 0 42 0K dma-kmalloc-192
0 0 0% 0.01K 0 512 0K dma-kmalloc-8
0 0 0% 0.02K 0 256 0K dma-kmalloc-16
0 0 0% 0.03K 0 128 0K dma-kmalloc-32
0 0 0% 0.06K 0 64 0K dma-kmalloc-64
0 0 0% 0.12K 0 32 0K dma-kmalloc-128
0 0 0% 0.25K 0 32 0K dma-kmalloc-256
0 0 0% 0.50K 0 32 0K dma-kmalloc-512
0 0 0% 1.00K 0 32 0K dma-kmalloc-1k
0 0 0% 2.00K 0 16 0K dma-kmalloc-2k
0 0 0% 4.00K 0 8 0K dma-kmalloc-4k
0 0 0% 8.00K 0 4 0K dma-kmalloc-8k
0 0 0% 0.12K 0 34 0K iint_cache
0 0 0% 1.00K 0 32 0K PING
0 0 0% 0.75K 0 42 0K xfrm_state
0 0 0% 0.37K 0 43 0K request_sock_subflow
0 0 0% 1.81K 0 17 0K MPTCP
0 0 0% 0.62K 0 25 0K dio
0 0 0% 0.19K 0 42 0K userfaultfd_ctx_cache
0 0 0% 0.03K 0 128 0K ext4_pending_reservation
0 0 0% 0.08K 0 51 0K ext4_fc_dentry_update
0 0 0% 0.04K 0 102 0K fat_cache
0 0 0% 0.81K 0 39 0K ecryptfs_auth_tok_list_item
0 0 0% 0.02K 0 256 0K ecryptfs_file_cache
0 0 0% 0.94K 0 34 0K ecryptfs_inode_cache
0 0 0% 2.82K 0 11 0K dm_uevent
0 0 0% 3.23K 0 9 0K kcopyd_job
0 0 0% 1.19K 0 26 0K PINGv6
0 0 0% 0.18K 0 44 0K ip6-frags
0 0 0% 2.00K 0 16 0K MPTCPv6
0 0 0% 0.13K 0 30 0K fscrypt_info
0 0 0% 0.25K 0 32 0K fsverity_info
0 0 0% 1.25K 0 25 0K AF_VSOCK
0 0 0% 0.19K 0 42 0K kcf_sreq_cache
0 0 0% 0.50K 0 32 0K kcf_areq_cache
0 0 0% 0.19K 0 42 0K kcf_context_cache
0 0 0% 4.00K 0 8 0K zfs_btree_leaf_cache
0 0 0% 0.44K 0 36 0K ddt_entry_cache
0 0 0% 1.22K 0 26 0K zio_cache
0 0 0% 0.05K 0 85 0K zio_link_cache
0 0 0% 0.50K 0 32 0K zio_buf_comb_512
0 0 0% 1.00K 0 32 0K zio_buf_comb_1024
0 0 0% 1.50K 0 21 0K zio_buf_comb_1536
0 0 0% 2.00K 0 16 0K zio_buf_comb_2048
0 0 0% 2.50K 0 12 0K zio_buf_comb_2560
0 0 0% 3.00K 0 10 0K zio_buf_comb_3072
0 0 0% 3.50K 0 9 0K zio_buf_comb_3584
0 0 0% 4.00K 0 8 0K zio_buf_comb_4096
0 0 0% 8.00K 0 4 0K zio_buf_comb_5120
0 0 0% 8.00K 0 4 0K zio_buf_comb_6144
0 0 0% 8.00K 0 4 0K zio_buf_comb_7168
0 0 0% 8.00K 0 4 0K zio_buf_comb_8192
0 0 0% 12.00K 0 2 0K zio_buf_comb_10240
0 0 0% 12.00K 0 2 0K zio_buf_comb_12288
0 0 0% 16.00K 0 2 0K zio_buf_comb_14336
0 0 0% 16.00K 0 2 0K lz4_cache
0 0 0% 0.24K 0 33 0K sa_cache
0 0 0% 0.96K 0 33 0K dnode_t
0 0 0% 0.32K 0 24 0K arc_buf_hdr_t_full
0 0 0% 0.38K 0 41 0K arc_buf_hdr_t_full_crypt
0 0 0% 0.09K 0 42 0K arc_buf_hdr_t_l2only
0 0 0% 0.08K 0 51 0K arc_buf_t
0 0 0% 0.38K 0 42 0K dmu_buf_impl_t
0 0 0% 0.37K 0 43 0K zil_lwb_cache
0 0 0% 0.15K 0 26 0K zil_zcw_cache
0 0 0% 0.13K 0 30 0K sio_cache_0
0 0 0% 0.15K 0 26 0K sio_cache_1
0 0 0% 0.16K 0 24 0K sio_cache_2
0 0 0% 1.06K 0 30 0K zfs_znode_cache
0 0 0% 0.09K 0 46 0K zfs_znode_hold_cache
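Since SUnreclaim dominates here (and the kmemleak_object cache alone accounts for most of it in the slabtop output above), here is a rough sketch for ranking slab caches by total size straight from /proc/slabinfo (column layout as on current kernels):

```
# Rough sketch: rank slab caches by total size (num_objs * objsize), skipping
# the two /proc/slabinfo header lines and anything below 10 MB; largest last.
sudo awk 'NR > 2 { kb = $3 * $4 / 1024; if (kb > 10240) printf "%12d kB  %s\n", kb, $1 }' \
    /proc/slabinfo | sort -n | tail
```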
user2653646
(51 rep)
Sep 19, 2022, 02:23 PM
• Last activity: Jan 2, 2023, 12:20 PM
12
votes
2
answers
5201
views
Is "Cached" memory de-facto free?
When running cat /proc/meminfo, you get these 3 values at the top:
MemTotal: 6291456 kB
MemFree: 4038976 kB
Cached: 1477948 kB
As far as I know, the "Cached" value is disk cache maintained by the Linux kernel that will be freed immediately if any application needs more RAM; thus Linux will never run out of memory until both MemFree and Cached are at zero.
Unfortunately, "MemAvailable" is not reported by /proc/meminfo, probably because it is running in a virtual server. (Kernel version is 4.4)
Thus for all practical purposes, the RAM available for applications is MemFree + Cached.
Is that view correct?
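A minimal sketch of the estimate described above, with the caveat that Cached also counts tmpfs/Shmem pages, which cannot simply be dropped, so this can overestimate:

```
# Rough sketch: the "MemFree + Cached" estimate from the question.
awk '{ v[$1] = $2 }
  END { printf "MemFree + Cached = %d kB\n", v["MemFree:"] + v["Cached:"] }' /proc/meminfo
```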
Roland Seuhs
(395 rep)
Feb 9, 2019, 01:50 PM
• Last activity: Sep 9, 2021, 10:11 AM
1
votes
1
answers
3321
views
How to find the right memory size?
We have a Linux machine with 32G of RAM. We capture the memory size as follows:
mem=`cat /proc/meminfo | grep MemTotal | awk '{print $2}'`
echo $mem
32767184
and now we convert it to gigabytes:
mem_in_giga=`echo $(( $mem / 1024 / 1024 ))`
echo $mem_in_giga
31
but from the results we get 31 and not 32G.
The same story with the free command:
free -g
total used free shared buff/cache available
Mem: 31 9 17 0 4 20
Swap: 7 0 7
So how do we get "32G" from any command?
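A couple of hedged alternatives (assuming lsmem and dmidecode are installed; output formats vary by version): MemTotal excludes RAM reserved by firmware and the kernel, so it never rounds back up to the installed size, and asking the hardware is more direct:

```
# Rough sketch: ask the hardware/kernel for installed memory instead of MemTotal.
sudo lsmem --summary 2>/dev/null | grep -i 'online memory'   # RAM known to the kernel, in memory blocks
sudo dmidecode -t memory | grep -i '^[[:space:]]*Size:'      # installed DIMM sizes
```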
yael
(13936 rep)
Feb 1, 2018, 12:33 PM
• Last activity: Aug 31, 2021, 11:51 AM
5
votes
1
answers
6586
views
How are /proc/meminfo values calculated?
/!\ Current state: Update 4 /!\
Some /proc/meminfo values are a sum or a difference of some other values.
However, not much is said about how they are calculated in these two links (just do ctrl-f meminfo to get there):
- https://www.kernel.org/doc/Documentation/filesystems/proc.txt
- man 5 proc
Besides, I've also dug here and there, and here's what I found so far:
MemFree: LowFree + HighFree
Active: Active(anon) + Active(file)
Inactive: Inactive(anon) + Inactive(file)
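As a quick sanity check, here is a small sketch that verifies these identities (plus Slab = SReclaimable + SUnreclaim, which should hold on current kernels) against a live /proc/meminfo:

```
# Rough sketch: verify the identities against a live /proc/meminfo.
# LowFree/HighFree only exist on 32-bit highmem kernels, so they are skipped.
awk '{ gsub(/[:()]/, "", $1); v[$1] = $2 }
  END {
    printf "Active:   %d = %d\n", v["Active"],   v["Activeanon"]   + v["Activefile"]
    printf "Inactive: %d = %d\n", v["Inactive"], v["Inactiveanon"] + v["Inactivefile"]
    printf "Slab:     %d = %d\n", v["Slab"],     v["SReclaimable"] + v["SUnreclaim"]
  }' /proc/meminfo
```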
I have not found much about the other fields, and where I have, the results don't match, like in these Stack Overflow posts:
- How to Calculate MemTotal in /proc/meminfo (2035272 kB vs expected 2034284 kB)
- Entry in /proc/meminfo - on Stack Overflow
Are these two values correctly calculated? Or is there some variability due to external factors?
Also, some values obviously can't be calculated without external values, but I'm still interested in those.
How are /proc/meminfo values calculated?
----
If that helps, here's an example of /proc/meminfo
:
MemTotal: 501400 kB
MemFree: 38072 kB
MemAvailable: 217652 kB
Buffers: 0 kB
Cached: 223508 kB
SwapCached: 11200 kB
Active: 179280 kB
Inactive: 181680 kB
Active(anon): 69032 kB
Inactive(anon): 70908 kB
Active(file): 110248 kB
Inactive(file): 110772 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal:
HighFree:
LowTotal:
LowFree:
MmapCopy:
SwapTotal: 839676 kB
SwapFree: 785552 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 128964 kB
Mapped: 21840 kB
Shmem: 2488 kB
Slab: 71940 kB
SReclaimable: 41372 kB
SUnreclaim: 30568 kB
KernelStack: 2736 kB
PageTables: 5196 kB
Quicklists:
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1090376 kB
Committed_AS: 486916 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 4904 kB
VmallocChunk: 34359721736 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages:
ShmemPmdMapped:
CmaTotal:
CmaFree:
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 36800 kB
DirectMap2M: 487424 kB
DirectMap4M:
DirectMap1G:
----
**Update 1**:
Here's the code used by /proc/meminfo to fill its data:
http://elixir.free-electrons.com/linux/v4.15/source/fs/proc/meminfo.c#L46
However, since I'm not much of a coder, I'm having a hard time figuring out where these enums (e.g. NR_LRU_LISTS, etc.) and global variables (e.g. totalram_pages from si_meminfo in page_alloc.c#L4673) are filled.
**Update 2**:
The enums part is now solved, and NR_LRU_LISTS equals 5.
But the totalram_pages part seems to be harder to find out...
**Update 3**:
It looks like I won't be able to read the code since it looks very complex.
If someone manages to do it and shows how /proc/meminfo values are calculated, he/she can have the bounty.
The more detailed the answer is, the higher the bounty will be.
**Update 4**:
A year and a half later, I learned that one of the reasons behind this very question is in fact related to the infamous OOM (Out Of Memory) bug that was finally recognized in August 2019 after **AT LEAST 16 YEARS** of "**wontfix**", until some famous Linux guy (thank you again, Artem S Tashkinov! :) ) finally got the non-elitist Linux community's voice heard: "Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop".
Also, most Linux distributions have calculated the real **available** RAM more precisely since around 2017 (I hadn't updated my distro at the time of this question), even though the kernel fix landed in 3.14 (March 2014), which also gives a few more clues: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=34e431b0ae398fc54ea69ff85ec700722c9da773
But the OOM problem is still here in 2021, even if it happens less often thanks to somewhat borderline fixes (earlyoom and systemd-oomd), while the calculated *available* RAM still does not correctly report the real used RAM.
Also, these related questions might have some answers:
- https://unix.stackexchange.com/questions/440558/30-of-ram-is-buffers-what-is-it
- https://unix.stackexchange.com/questions/390518/what-do-the-buff-cache-and-avail-mem-fields-in-top-mean
- https://unix.stackexchange.com/questions/261247/how-can-i-get-the-amount-of-available-memory-portably-across-distributions
So, my point in "Update 3" about how /proc/meminfo gets its values still stands.
**However**, there are more insights about the OOM issue at the next link, which also talks about a very promising project against it, and it even comes with a little bit of a GUI: https://github.com/hakavlad/nohang
The first tests I did seem to show that this nohang tool really does what it promises, and even better than earlyoom.
X.LINK
(1362 rep)
Jan 31, 2018, 01:14 AM
• Last activity: Feb 16, 2021, 10:10 AM
16
votes
2
answers
16799
views
How can I get the amount of available memory portably across distributions?
The standard files/tools that report memory seem to have different formats on different Linux distributions. For example, on Arch and Ubuntu.
* Arch
$ free
total used free shared buff/cache available
Mem: 8169312 3870392 2648348 97884 1650572 4110336
Swap: 16777212 389588 16387624
$ head /proc/meminfo
MemTotal: 8169312 kB
MemFree: 2625668 kB
MemAvailable: 4088520 kB
Buffers: 239688 kB
Cached: 1224520 kB
SwapCached: 17452 kB
Active: 4074548 kB
Inactive: 1035716 kB
Active(anon): 3247948 kB
Inactive(anon): 497684 kB
* Ubuntu
$ free
total used free shared buffers cached
Mem: 80642828 69076080 11566748 3063796 150688 58358264
-/+ buffers/cache: 10567128 70075700
Swap: 20971516 5828472 15143044
$ head /proc/meminfo
MemTotal: 80642828 kB
MemFree: 11565936 kB
Buffers: 150688 kB
Cached: 58358264 kB
SwapCached: 2173912 kB
Active: 27305364 kB
Inactive: 40004480 kB
Active(anon): 7584320 kB
Inactive(anon): 4280400 kB
Active(file): 19721044 kB
So, how can I portably (across Linux distros only) and reliably get the amount of memory—excluding swap—that is available for my software to use at a particular time? Presumably that's what's shown as "available" and "MemAvailable" in the output of free and cat /proc/meminfo in Arch, but how do I get the same in Ubuntu or another distribution?
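Not an authoritative answer, but a sketch of the usual fallback: read MemAvailable when the kernel provides it (3.14 and later), and otherwise fall back to a cruder estimate that overcounts tmpfs pages held in Cached:

```
# Rough sketch: prefer MemAvailable; otherwise fall back to
# MemFree + Buffers + Cached, which is only a crude estimate.
awk '{ v[$1] = $2 }
  END {
    if ("MemAvailable:" in v) print v["MemAvailable:"] " kB (MemAvailable)"
    else print v["MemFree:"] + v["Buffers:"] + v["Cached:"] " kB (estimate)"
  }' /proc/meminfo
```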
terdon
(251585 rep)
Feb 10, 2016, 12:53 PM
• Last activity: Oct 16, 2020, 03:46 PM
0
votes
1
answers
218
views
HighTotal not showing up in /proc/meminfo
I'm trying to evaluate the peak memory of a program (in a Docker image).
I'm running cat proc/meminfo at the end, but I don't see HighTotal; any idea why that is?
(using Docker's debian:latest)
cat /proc/meminfo
MemTotal: 2046752 kB
MemFree: 1781060 kB
MemAvailable: 1782308 kB
Buffers: 7004 kB
Cached: 169056 kB
SwapCached: 2480 kB
Active: 116740 kB
Inactive: 93680 kB
Active(anon): 42712 kB
Inactive(anon): 43016 kB
Active(file): 74028 kB
Inactive(file): 50664 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 898920 kB
Dirty: 124 kB
Writeback: 0 kB
AnonPages: 32032 kB
Mapped: 38144 kB
Shmem: 51332 kB
Slab: 37356 kB
SReclaimable: 16256 kB
SUnreclaim: 21100 kB
KernelStack: 3664 kB
PageTables: 1172 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 2071948 kB
Committed_AS: 743920 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 28672 kB
DirectMap2M: 2068480 kB
DirectMap1G: 3145728 kB
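Side note on the stated goal of measuring one program's peak memory: /proc/meminfo is system-wide, so a per-process counter is usually a better fit; ./my_program and <pid> below are placeholders:

```
# Rough sketch: per-process peak memory instead of system-wide /proc/meminfo.
/usr/bin/time -v ./my_program 2>&1 | grep 'Maximum resident set size'  # GNU time package
grep VmHWM /proc/<pid>/status   # peak RSS ("high water mark") of a running process
```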
Thomas
(953 rep)
Apr 30, 2020, 06:24 PM
• Last activity: Apr 30, 2020, 06:40 PM
2
votes
1
answers
12255
views
How to troubleshoot what eats memory?
I'm trying to figure out why memory consumption started to increase constantly on my server during the last few hours. I've tried to find the cause at the application level, but with no success. That is why I'm now looking into a possible server-side cause. I'm not a pro in server administration, so any help is appreciated. First the main memory was eaten up, and now swap consumption is also constantly increasing.
My server runs on CentOS 7 with the kernel 3.10.0-514.26.2.el7.x86_64
**SOLUTION**
Finally, the issue was identified as being caused by a recently updated server library. The accepted answer is a good reminder that, when you're stressed out by memory usage, you should trace back what had been changed in your system before the issue appeared.
Some tips I've been looking for and found to be very useful are described in https://unix.stackexchange.com/questions/4999/how-to-find-which-processes-are-taking-all-the-memory
I'm listing below the commands that I used and that may help in such a situation.
**ps auwx --sort rss** - processes sorted by memory usage
**ps -fu username** - processes by a user
**htop** usage/analysis showed many hung cron-launched application processes in my case. I configured htop to output both PID and PPID, because I needed to correlate the **PPID** with the processes logged in **/var/log/cron**.
**free -m**
total used free shared buff/cache available
Mem: 7565 6525 440 47 599 657
Swap: 8191 2612 5579
**cat /proc/meminfo**
MemTotal: 7747260 kB
MemFree: 253960 kB
MemAvailable: 498904 kB
Buffers: 6160 kB
Cached: 189076 kB
SwapCached: 467788 kB
Active: 5572588 kB
Inactive: 1258540 kB
Active(anon): 5498664 kB
Inactive(anon): 1185908 kB
Active(file): 73924 kB
Inactive(file): 72632 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 8388604 kB
SwapFree: 5686452 kB
Dirty: 104 kB
Writeback: 0 kB
AnonPages: 6168400 kB
Mapped: 68668 kB
Shmem: 48676 kB
Slab: 456672 kB
SReclaimable: 389064 kB
SUnreclaim: 67608 kB
KernelStack: 7232 kB
PageTables: 106848 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 12262232 kB
Committed_AS: 10244216 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 26276 kB
VmallocChunk: 34359705340 kB
HardwareCorrupted: 0 kB
AnonHugePages: 5191680 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 83968 kB
DirectMap2M: 8300544 kB
How can I proceed to find out whether there's an issue in how the server is functioning or configured in terms of memory usage?
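Complementing the ps/htop tips above, a small sketch that totals resident memory per command name (RSS double-counts shared pages, so treat the figures as an upper bound):

```
# Rough sketch: resident memory grouped by command name, largest last.
ps -eo rss=,comm= | awk '{ kb[$2] += $1 }
  END { for (c in kb) printf "%10d kB  %s\n", kb[c], c }' | sort -n | tail
```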
Y. E.
(123 rep)
Jul 18, 2017, 10:10 PM
• Last activity: Apr 29, 2019, 09:00 AM
0
votes
1
answers
1283
views
MemAvailable higher than expected
$ free -h
total used free shared buff/cache available
Mem: 7.7Gi 4.5Gi 692Mi 305Mi 2.5Gi 2.6Gi
Swap: 2.0Gi 25Mi 2.0Gi
How can MemAvailable be this high on my system?
When I read [the kernel code](https://github.com/torvalds/linux/commit/34e431b0ae39), I thought we could approximate MemAvailable with a formula like MemFree + (Buffers + Cached - Shmem)/2 + SReclaimable/2. So I would have guessed MemAvailable would be more like 1.8G.
I don't think the 0.8G difference is due to the Reclaimable Slabs part, because I only have 100M of them:
$ grep SReclaimable /proc/meminfo
SReclaimable: 106492 kB
---
$ uname -r
4.20.3-200.fc29.x86_64
$ cat /proc/meminfo
MemTotal: 8042592 kB
MemFree: 708864 kB
MemAvailable: 2740432 kB
Buffers: 225472 kB
Cached: 2289436 kB
SwapCached: 1768 kB
Active: 4367844 kB
Inactive: 2538636 kB
Active(anon): 3443868 kB
Inactive(anon): 1265012 kB
Active(file): 923976 kB
Inactive(file): 1273624 kB
Unevictable: 11528 kB
Mlocked: 11528 kB
SwapTotal: 2097148 kB
SwapFree: 2071412 kB
Dirty: 80 kB
Writeback: 44 kB
AnonPages: 4402684 kB
Mapped: 554452 kB
Shmem: 313044 kB
KReclaimable: 106492 kB
Slab: 249164 kB
SReclaimable: 106492 kB
SUnreclaim: 142672 kB
KernelStack: 17888 kB
PageTables: 37020 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6118444 kB
Committed_AS: 12077056 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 2688 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2160640 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 297524 kB
DirectMap2M: 7968768 kB
DirectMap1G: 1048576 kB
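For comparison, a sketch of the estimate from the linked commit with the low-watermark corrections dropped (so it gives an upper bound); the key difference from the formula quoted above is that it uses Active(file) + Inactive(file) rather than Buffers + Cached - Shmem:

```
# Rough sketch: MemFree + Active(file) + Inactive(file) + SReclaimable,
# i.e. the linked commit's estimate without its watermark subtractions.
awk '{ gsub(/[:()]/, "", $1); v[$1] = $2 }
  END {
    est = v["MemFree"] + v["Activefile"] + v["Inactivefile"] + v["SReclaimable"]
    printf "upper bound: %d kB   reported MemAvailable: %d kB\n", est, v["MemAvailable"]
  }' /proc/meminfo
```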
sourcejedi
(53222 rep)
Feb 13, 2019, 02:25 PM
• Last activity: Feb 13, 2019, 02:44 PM
0
votes
1
answers
1747
views
What does ShmemHugePages mean?
I'm using grep Huge /proc/meminfo and getting:
AnonHugePages: 16384 kB
ShmemHugePages: 0 kB
HugePages_Total: 33
HugePages_Free: 18
HugePages_Rsvd: 18
HugePages_Surp: 1
Hugepagesize: 2048 kB
What does ShmemHugePages mean or refer to?
Adrian
(773 rep)
Jan 9, 2019, 05:15 PM
• Last activity: Jan 14, 2019, 08:51 AM
4
votes
1
answers
356
views
different kernels report different amounts of total memory on the same machine
I have two x86_64 kernels compiled on the same machine against the same code (4.15.0 in [Linus' source tree](https://github.com/torvalds/linux)).
The config files were produced by running make localmodconfig against that source, using different, larger original config files coming from different distros: Arch and Slackware respectively. I'll nickname them __arch__ [config](https://pastebin.com/vXd4w34R) and __slk__ [config](https://pastebin.com/NWrCCDab) for that reason.
The issue: running cat /proc/meminfo consistently reports about 55-60 MB more in the MemTotal field for __arch__ than for __slk__:
MemTotal: 32600808 kB (for __arch__)
vs
MemTotal: 32544992 kB (for __slk__)
I say 'consistently' because I've tried the experiment with the same config files against earlier versions of the source (a bunch of the 4.15-rc kernels, 4.14 before that, etc., rolling over from one source to the next with make oldconfig).
This is reflected in the figures reported by htop, with __slk__ reporting ~60MB less usage on bootup than __arch__. This is consistent with the [htop dev's explanation](https://stackoverflow.com/questions/41224738/how-to-calculate-system-memory-usage-from-proc-meminfo-like-htop) of how htop's used memory figures are based on MemTotal.
My question is: any suggestions for which config options I should look at that would make the difference?
I of course don't mind the 60MB (the machine the kernels run on has 32 GB..), but it's an interesting puzzle to me and I'd like to use it as a learning opportunity.
Memory reporting on Linux is discussed heavily on these forums and outside in general, but searching for this specific type of issue (different kernels / same machine => different outcome in memory reporting) has not produced anything I found relevant.
---
__Edit__
As per the suggestions in the post linked by @ErikF, I had a look at the output of journalctl --boot=#, where # stands for 0 or -1 for the current and previous boots respectively (corresponding to the two kernels). These lines do seem to reflect the difference, so it is now a little clearer to me where it stems from:
__arch__ (the one reporting larger MemTotal):
Memory: 32587752K/33472072K available (10252K kernel code, 1157K rwdata, 2760K rodata, 1364K init, 988K bss, 884320K reserved, 0K cma-reserved)
__slk__ (the one reporting smaller MemTotal):
Memory: 32533996K/33472072K available (14348K kernel code, 1674K rwdata, 3976K rodata, 1616K init, 784K bss, 938076K reserved, 0K cma-reserved)
That's a difference of ~55 MB, as expected!
I know the __slk__ kernel is larger, as verified by comparing the sizes of the two vmlinuz files in my /boot/ folder, but the brunt of the difference seems to come from how much memory the two respective kernels reserve.
I'd like to better understand what in the config files affects *that* to the extent that it does, but this certainly sheds some light.
---
__Second edit__
Answering the questions in the comment by @Tim Kennedy.
> Do you have a dedicated GPU, or use shared video memory
No dedicated GPU; it's a laptop with on-board Intel graphics.
> and do both kernels load the same graphics driver?
Yes, i915.
> Also, compare the output of dmesg | grep BIOS-e820 | grep reserved
As you expected, does not change. In all cases it's 12 lines, identical in every respect (memory addresses and all).
---
__(Final?) edit__
I believe it may just be as simple as this: the kernel reporting less MemTotal has *much* more of the driver suite built in; I just hadn't realized it would make such a noticeable difference.
I compared the two modules.builtin files: du -sh /lib/modules//modules.builtin returns 4K for __arch__, while the same command for __slk__ returns 16K.
So in the end, I believe I was barking up the wrong tree: it won't be a single config option (or a handful), but rather the accumulated effect of many more built-in drivers.
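A quick way to quantify that, assuming the usual modules.builtin layout under /lib/modules (one line per built-in module):
```
# One line per built-in module, so the line count tracks how much is compiled in
wc -l /lib/modules/*/modules.builtin
```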
grobber
(59 rep)
Feb 3, 2018, 10:51 PM
• Last activity: Feb 4, 2018, 01:34 PM
11
votes
1
answers
874
views
What is using 4GB of memory? (Not cache, not a process, not slab, not shm)
We have some EC2 servers that experience a memory leak over days or weeks. Eventually there gets to be many GB of memory that is used (according to tools like `free` and `htop`) and, if we don't restart the server, our processes start getting OOM-killed. One such server has 15GB of ram. Here's the o...
We have some EC2 servers that experience a memory leak over days or weeks. Eventually many GB of memory end up used (according to tools like free and htop) and, if we don't restart the server, our processes start getting OOM-killed.
One such server has 15 GB of RAM. Here's the output of free -m:
total used free shared buffers cached
Mem: 15039 3921 11118 0 0 7
-/+ buffers/cache: 3913 11126
Swap: 0 0 0
This server is sitting idle; I've killed most userland processes. No process in htop is showing >100k VIRT. I recently ran echo 3 > /proc/sys/vm/drop_caches, to no effect (that's why buffers and cached are so small). Additionally:
* Poking around in /proc/slabinfo and slabtop doesn't show anything promising
* There's nothing in /run/shm
Here's the output of cat /proc/meminfo:
MemTotal: 15400880 kB
MemFree: 11385688 kB
Buffers: 564 kB
Cached: 7792 kB
SwapCached: 0 kB
Active: 27668 kB
Inactive: 2012 kB
Active(anon): 21368 kB
Inactive(anon): 380 kB
Active(file): 6300 kB
Inactive(file): 1632 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 21380 kB
Mapped: 7208 kB
Shmem: 380 kB
Slab: 39260 kB
SReclaimable: 16456 kB
SUnreclaim: 22804 kB
KernelStack: 1352 kB
PageTables: 2872 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 7700440 kB
Committed_AS: 39072 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 30336 kB
VmallocChunk: 34359691552 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 36864 kB
DirectMap2M: 15822848 kB
You can see that there's a large gap between MemFree and MemTotal that isn't explained by the other meminfo metrics.
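For reference, a rough way to put a number on that gap from /proc/meminfo alone; the fields summed here are an approximation, not an exhaustive accounting:
```
# Tally the memory that meminfo does account for and compare with MemTotal (values in kB)
awk '/^MemTotal:/ {total = $2}
     /^(MemFree|Buffers|Cached|Slab|AnonPages|PageTables|KernelStack):/ {sum += $2}
     END {printf "accounted: %d kB, unaccounted: %d kB\n", sum, total - sum}' /proc/meminfo
```
Against the dump above this reports roughly 3.9 GB unaccounted for, which matches the figure in the title.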
Any idea where this memory went, or how I can debug further?
More server info, in case it's relevant:
$ lsb_release -d
Description: Ubuntu 14.04.1 LTS
$ uname -r
3.13.0-36-generic
**Update:** Here are some more commands and their output:
# dmesg | fgrep 'Memory:'
[ 0.000000] Memory: 15389980K/15728244K available (7373K kernel code, 1144K rwdata, 3404K rodata, 1336K init, 1440K bss, 338264K reserved)
# awk '{print $2 " " $1}' /proc/modules | sort -nr | head -5
106678 psmouse
97812 raid6_pq
86484 raid456
69418 floppy
55624 aesni_intel
# cat /proc/mounts | grep tmp
udev /dev devtmpfs rw,relatime,size=7695004k,nr_inodes=1923751,mode=755 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=1540088k,mode=755 0 0
none /sys/fs/cgroup tmpfs rw,relatime,size=4k,mode=755 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
# df -h /dev /run /sys/fs/cgroup /run/lock /run/shm /run/user
Filesystem Size Used Avail Use% Mounted on
udev 7.4G 12K 7.4G 1% /dev
tmpfs 1.5G 368K 1.5G 1% /run
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.4G 0 7.4G 0% /run/shm
none 100M 0 100M 0% /run/user
**Update 2**: Here's the entire output of ps aux:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 33636 2368 ? Ss 2015 0:03 /sbin/init
root 2 0.0 0.0 0 0 ? S 2015 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 2015 0:11 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/0:0H]
root 7 0.0 0.0 0 0 ? S 2015 1:31 [rcu_sched]
root 8 0.0 0.0 0 0 ? S 2015 0:30 [rcuos/0]
root 9 0.0 0.0 0 0 ? S 2015 0:25 [rcuos/1]
root 10 0.0 0.0 0 0 ? S 2015 0:33 [rcuos/2]
root 11 0.0 0.0 0 0 ? S 2015 0:25 [rcuos/3]
root 12 0.0 0.0 0 0 ? S 2015 0:14 [rcuos/4]
root 13 0.0 0.0 0 0 ? S 2015 0:14 [rcuos/5]
root 14 0.0 0.0 0 0 ? S 2015 0:14 [rcuos/6]
root 15 0.0 0.0 0 0 ? S 2015 0:33 [rcuos/7]
root 16 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/8]
root 17 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/9]
root 18 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/10]
root 19 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/11]
root 20 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/12]
root 21 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/13]
root 22 0.0 0.0 0 0 ? S 2015 0:00 [rcuos/14]
root 23 0.0 0.0 0 0 ? S 2015 0:00 [rcu_bh]
root 24 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/0]
root 25 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/1]
root 26 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/2]
root 27 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/3]
root 28 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/4]
root 29 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/5]
root 30 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/6]
root 31 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/7]
root 32 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/8]
root 33 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/9]
root 34 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/10]
root 35 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/11]
root 36 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/12]
root 37 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/13]
root 38 0.0 0.0 0 0 ? S 2015 0:00 [rcuob/14]
root 39 0.0 0.0 0 0 ? S 2015 0:01 [migration/0]
root 40 0.0 0.0 0 0 ? S 2015 0:06 [watchdog/0]
root 41 0.0 0.0 0 0 ? S 2015 0:05 [watchdog/1]
root 42 0.0 0.0 0 0 ? S 2015 0:00 [migration/1]
root 43 0.0 0.0 0 0 ? S 2015 0:08 [ksoftirqd/1]
root 45 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/1:0H]
root 46 0.0 0.0 0 0 ? S 2015 0:05 [watchdog/2]
root 47 0.0 0.0 0 0 ? S 2015 0:01 [migration/2]
root 48 0.0 0.0 0 0 ? S 2015 0:08 [ksoftirqd/2]
root 50 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/2:0H]
root 51 0.0 0.0 0 0 ? S 2015 0:06 [watchdog/3]
root 52 0.0 0.0 0 0 ? S 2015 0:01 [migration/3]
root 53 0.0 0.0 0 0 ? S 2015 0:17 [ksoftirqd/3]
root 55 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/3:0H]
root 56 0.0 0.0 0 0 ? S 2015 0:07 [watchdog/4]
root 57 0.0 0.0 0 0 ? S 2015 0:01 [migration/4]
root 58 0.0 0.0 0 0 ? S 2015 0:02 [ksoftirqd/4]
root 60 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/4:0H]
root 61 0.0 0.0 0 0 ? S 2015 0:06 [watchdog/5]
root 62 0.0 0.0 0 0 ? S 2015 0:01 [migration/5]
root 63 0.0 0.0 0 0 ? S 2015 0:07 [ksoftirqd/5]
root 65 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/5:0H]
root 66 0.0 0.0 0 0 ? S 2015 0:06 [watchdog/6]
root 67 0.0 0.0 0 0 ? S 2015 0:01 [migration/6]
root 68 0.0 0.0 0 0 ? S 2015 0:04 [ksoftirqd/6]
root 70 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/6:0H]
root 71 0.0 0.0 0 0 ? S 2015 0:06 [watchdog/7]
root 72 0.0 0.0 0 0 ? S 2015 0:02 [migration/7]
root 73 0.0 0.0 0 0 ? S 2015 0:17 [ksoftirqd/7]
root 74 0.0 0.0 0 0 ? S 2015 0:14 [kworker/7:0]
root 75 0.0 0.0 0 0 ? S< 2015 0:00 [kworker/7:0H]
root 76 0.0 0.0 0 0 ? S< 2015 0:00 [khelper]
root 77 0.0 0.0 0 0 ? S 2015 0:00 [kdevtmpfs]
root 78 0.0 0.0 0 0 ? S< 2015 0:00 [netns]
root 79 0.0 0.0 0 0 ? S 2015 0:00 [xenwatch]
root 80 0.0 0.0 0 0 ? S 2015 0:00 [xenbus]
root 81 0.0 0.0 0 0 ? S 2015 0:39 [kworker/0:1]
root 82 0.0 0.0 0 0 ? S< 2015 0:00 [writeback]
root 83 0.0 0.0 0 0 ? S< 2015 0:00 [kintegrityd]
root 84 0.0 0.0 0 0 ? S< 2015 0:00 [bioset]
root 86 0.0 0.0 0 0 ? S< 2015 0:00 [kblockd]
root 88 0.0 0.0 0 0 ? S< 2015 0:00 [ata_sff]
root 89 0.0 0.0 0 0 ? S 2015 0:00 [khubd]
root 90 0.0 0.0 0 0 ? S< 2015 0:00 [md]
root 91 0.0 0.0 0 0 ? S< 2015 0:00 [devfreq_wq]
root 92 0.0 0.0 0 0 ? S 2015 0:12 [kworker/1:1]
root 95 0.0 0.0 0 0 ? S 2015 0:10 [kworker/4:1]
root 97 0.0 0.0 0 0 ? S 2015 0:11 [kworker/6:1]
root 99 0.0 0.0 0 0 ? S 2015 0:00 [khungtaskd]
root 100 0.0 0.0 0 0 ? S 2015 7:26 [kswapd0]
root 101 0.0 0.0 0 0 ? SN 2015 0:00 [ksmd]
root 102 0.0 0.0 0 0 ? SN 2015 0:29 [khugepaged]
root 103 0.0 0.0 0 0 ? S 2015 0:00 [fsnotify_mark]
root 104 0.0 0.0 0 0 ? S 2015 0:00 [ecryptfs-kthrea]
root 105 0.0 0.0 0 0 ? S< 2015 0:00 [crypto]
root 117 0.0 0.0 0 0 ? S< 2015 0:00 [kthrotld]
root 119 0.0 0.0 0 0 ? S 2015 0:00 [scsi_eh_0]
root 120 0.0 0.0 0 0 ? S 2015 0:00 [scsi_eh_1]
root 141 0.0 0.0 0 0 ? S< 2015 0:00 [deferwq]
root 142 0.0 0.0 0 0 ? S< 2015 0:00 [charger_manager]
root 199 0.0 0.0 0 0 ? S< 2015 0:00 [kpsmoused]
root 223 0.0 0.0 0 0 ? S< 2015 0:00 [bioset]
root 265 0.0 0.0 0 0 ? S< 2015 0:00 [raid5wq]
root 291 0.0 0.0 0 0 ? S 2015 0:22 [jbd2/xvda1-8]
root 292 0.0 0.0 0 0 ? S< 2015 0:00 [ext4-rsv-conver]
root 445 0.0 0.0 0 0 ? S 2015 0:16 [jbd2/md0-8]
root 446 0.0 0.0 0 0 ? S< 2015 0:00 [ext4-rsv-conver]
root 516 0.0 0.0 19604 564 ? S 2015 0:00 upstart-udev-bridge --daemon
root 522 0.0 0.0 49864 1048 ? Ss 2015 0:00 /lib/systemd/systemd-udevd --daemon
root 671 0.0 0.0 15256 408 ? S 2015 0:00 upstart-socket-bridge --daemon
root 800 0.0 0.0 10220 2900 ? Ss 2015 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -l
message+ 1048 0.0 0.0 39224 1048 ? Ss 2015 0:00 dbus-daemon --system --fork
root 1077 0.0 0.0 0 0 ? S Jan04 0:00 [kworker/u30:2]
root 1082 0.0 0.0 43448 1196 ? Ss 2015 0:00 /lib/systemd/systemd-logind
root 1116 0.0 0.0 15272 512 ? S 2015 0:00 upstart-file-bridge --daemon
root 1339 0.0 0.0 14536 412 tty4 Ss+ 2015 0:00 /sbin/getty -8 38400 tty4
root 1344 0.0 0.0 14536 416 tty5 Ss+ 2015 0:00 /sbin/getty -8 38400 tty5
root 1360 0.0 0.0 14536 408 tty2 Ss+ 2015 0:00 /sbin/getty -8 38400 tty2
root 1361 0.0 0.0 14536 416 tty3 Ss+ 2015 0:00 /sbin/getty -8 38400 tty3
root 1363 0.0 0.0 14536 404 tty6 Ss+ 2015 0:00 /sbin/getty -8 38400 tty6
root 1418 0.0 0.0 61364 1296 ? Ss 2015 0:07 /usr/sbin/sshd -D
root 1432 0.0 0.0 23652 552 ? Ss 2015 0:02 cron
daemon 1433 0.0 0.0 19136 180 ? Ss 2015 0:00 atd
root 1461 0.0 0.0 19316 644 ? Ss 2015 1:57 /usr/sbin/irqbalance
root 1518 0.0 0.0 4364 404 ? Ss 2015 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.
root 1521 0.0 0.0 0 0 ? S 2015 0:00 [kworker/5:1]
root 1641 0.0 0.0 0 0 ? S Jan04 0:00 [kworker/u30:1]
root 1863 0.0 0.0 14536 404 tty1 Ss+ 2015 0:00 /sbin/getty -8 38400 tty1
root 1864 0.0 0.0 12784 388 ttyS0 Ss+ 2015 0:00 /sbin/getty -8 38400 ttyS0
ntp 2075 0.0 0.0 31448 1252 ? Ss 2015 1:17 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 10
root 2087 0.0 0.0 0 0 ? S 2015 0:00 [kauditd]
ubuntu 2393 0.0 0.0 105628 2028 ? S Jan04 0:00 sshd: ubuntu@notty
root 2496 0.0 0.0 0 0 ? S Jan04 0:00 [kworker/2:2]
root 2713 0.0 0.0 0 0 ? S 2015 0:00 [kworker/6:2]
root 2722 0.0 0.0 0 0 ? S 2015 0:12 [kworker/5:2]
root 3678 0.0 0.0 0 0 ? S Jan05 0:01 [kworker/0:0]
root 3716 0.0 0.0 0 0 ? S Jan05 0:00 [kworker/3:0]
root 3941 0.0 0.0 0 0 ? S Jan05 0:00 [kworker/2:0]
root 4732 0.0 0.0 0 0 ? S Jan05 0:00 [kworker/1:2]
root 6896 0.0 0.0 105628 4228 ? Ss 08:00 0:00 sshd: ubuntu [priv]
ubuntu 7008 0.0 0.0 105628 1876 ? S 08:00 0:00 sshd: ubuntu@pts/0
ubuntu 7014 0.0 0.0 21308 3908 pts/0 Ss 08:00 0:00 -bash
root 7234 0.0 0.0 63668 2096 pts/0 S 08:10 0:00 sudo su
root 7235 0.0 0.0 63248 1776 pts/0 S 08:10 0:00 su
root 7236 1.0 0.0 21088 3456 pts/0 S 08:10 0:00 bash
root 7248 0.0 0.0 17164 1320 pts/0 R+ 08:10 0:00 ps aux
root 13299 0.0 0.0 0 0 ? S 2015 0:19 [kworker/3:2]
root 19933 0.0 0.0 0 0 ? S 2015 0:00 [kworker/7:1]
root 20305 0.0 0.0 0 0 ? S 2015 0:00 [kworker/4:2]
root 29814 0.0 0.0 0 0 ? S< Jan04 0:00 [kworker/u31:2]
root 30693 0.0 0.0 0 0 ? S< Jan04 0:00 [kworker/u31:1]
Caleb Spare
(211 rep)
Jan 5, 2016, 02:04 AM
• Last activity: Feb 18, 2017, 05:16 AM
6
votes
1
answers
2048
views
Recover from faking /proc/meminfo
So, without really thinking too much, I ran this script: #!/bin/bash SWAP="${1:-512}" NEW="$[SWAP*1024]"; TEMP="${NEW//?/ }"; OLD="${TEMP:1}0" sed "/^Swap\(Total\|Free\):/s,$OLD,$NEW," /proc/meminfo > /etc/fake_meminfo mount --bind /etc/fake_meminfo /proc/meminfo from here: http://linux-problem-solv...
So, without really thinking too much, I ran this script:
#!/bin/bash
SWAP="${1:-512}"
NEW="$[SWAP*1024]"; TEMP="${NEW//?/ }"; OLD="${TEMP:1}0"
sed "/^Swap\(Total\|Free\):/s,$OLD,$NEW," /proc/meminfo > /etc/fake_meminfo
mount --bind /etc/fake_meminfo /proc/meminfo
from here: http://linux-problem-solver.blogspot.com.ee/2013/08/create-fake-swap-in-openvz-vps-if-you-get-swapon-failed-operation-not-permitted-error.html
It worked really well for lying about my swap space, but now I'd like good old commands like free -m to work again. However, /proc/meminfo is now totally empty, and the server doesn't seem to know anything about its RAM any more, even in atop and the like.
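My guess is that the bind mount just needs to be removed so the real /proc/meminfo shows through again; a minimal sketch of what I have in mind, assuming the mount from the script above is the only thing covering it:
```
# Undo the bind mount so the real /proc/meminfo shows through again, then check
umount /proc/meminfo
head -n 3 /proc/meminfo
```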
Thanks for reading.
Lauri Elias
(163 rep)
Jan 27, 2016, 11:25 PM
• Last activity: Jan 27, 2016, 11:49 PM
1
votes
0
answers
181
views
Investigating Active MemInfo
I am seeing high Active usage when I do a cat /proc/meminfo : MemTotal: 65965328 kB MemFree: 51640992 kB Buffers: 1050332 kB Cached: 8516112 kB SwapCached: 0 kB Active: 11512732 kB Inactive: 1878028 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 65965328 kB LowFree: 51640992 kB SwapTotal: 2096472 kB Sw...
I am seeing high Active usage when I run cat /proc/meminfo:
MemTotal: 65965328 kB
MemFree: 51640992 kB
Buffers: 1050332 kB
Cached: 8516112 kB
SwapCached: 0 kB
Active: 11512732 kB
Inactive: 1878028 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 65965328 kB
LowFree: 51640992 kB
SwapTotal: 2096472 kB
SwapFree: 2096472 kB
Dirty: 51340 kB
Writeback: 0 kB
AnonPages: 3823896 kB
Mapped: 132288 kB
Slab: 876208 kB
PageTables: 15060 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 35079136 kB
Committed_AS: 4945780 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 264384 kB
VmallocChunk: 34359473967 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
You can see that total memory is 65 GB while Active usage is around 11 GB. The main Java process running on the server isn't consuming much memory (~5% according to the output of top). I would like to know how I can dig deeper and investigate what is causing this high Active usage. My understanding is that this value covers memory (anonymous pages as well as buffer and page cache) that has been used recently and is usually not reclaimed unless absolutely necessary.
I am running RHEL 5 with kernel version 2.6.18. How can I break the "Active" figure into finer parts and identify what exactly is contributing to this high usage?
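This kernel predates the Active(anon)/Active(file) split in /proc/meminfo, so as a first rough cut I can at least separate the anonymous portion using AnonPages and treat the remainder as active page cache; this is an approximation, not an exact accounting:
```
# Rough split of Active into anonymous vs. file-backed pages (values in kB)
awk '/^Active:/    {active = $2}
     /^AnonPages:/ {anon   = $2}
     END {printf "Active: %d, ~anonymous: %d, ~file-backed: %d\n", active, anon, active - anon}' /proc/meminfo
```
With the numbers above that leaves roughly 7.7 GB of active file-backed pages, which is consistent with the Buffers + Cached figures (the active subset of the page cache) and would normally be reclaimable under memory pressure.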
Sirish Chandraa
(11 rep)
Jan 2, 2015, 11:49 PM
3
votes
0
answers
338
views
In meminfo, sometimes Mapped more than Cached
In `/proc/meminfo`, AFAIK > Cached >= Mapped But after `/proc/sys/vm/drop_caches`, it goes to > Cached < Mapped Cached: 66132 kB Mapped: 67792 kB `/proc/meminfo` contents after: MemTotal: 369020 kB MemFree: 34588 kB Buffers: 184 kB Cached: 66132 kB SwapCached: 0 kB Active: 246624 kB Inactive: 29688...
In /proc/meminfo, AFAIK
> Cached >= Mapped
But after dropping the caches via /proc/sys/vm/drop_caches, it goes to
> Cached < Mapped
Cached: 66132 kB
Mapped: 67792 kB
Full /proc/meminfo contents after dropping caches:
MemTotal: 369020 kB
MemFree: 34588 kB
Buffers: 184 kB
Cached: 66132 kB
SwapCached: 0 kB
Active: 246624 kB
Inactive: 29688 kB
Active(anon): 209728 kB
Inactive(anon): 9920 kB
Active(file): 36896 kB
Inactive(file): 19768 kB
Unevictable: 12 kB
Mlocked: 12 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 369020 kB
LowFree: 34588 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 210040 kB
Mapped: 67792 kB
Shmem: 9644 kB
Slab: 27520 kB
SReclaimable: 7500 kB
SUnreclaim: 20020 kB
KernelStack: 3160 kB
PageTables: 5004 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 184508 kB
Committed_AS: 1832660 kB
VmallocTotal: 499712 kB
VmallocUsed: 2708 kB
VmallocChunk: 417128 kB
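The observation is reproducible with essentially this sequence (run as root):
```
# Snapshot the two counters, drop the (clean) caches, snapshot again
grep -E '^(Cached|Mapped):' /proc/meminfo
sync
echo 3 > /proc/sys/vm/drop_caches
grep -E '^(Cached|Mapped):' /proc/meminfo
```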
Does anyone know what causes this?
Hwangho Kim
(131 rep)
Jul 1, 2014, 07:51 AM
• Last activity: Jul 1, 2014, 01:20 PM
4
votes
1
answers
3639
views
When is swap triggered or how to calculate swap_tendency?
I'm trying to use Redis for production services and trying to avoiding swapping, which is bad for performance. I had learn that swap is triggered by swap_tendency which is depending on > swap_tendency = mapped_ratio/2 + swappiness + distress How can I get mapped_ratio/distress from `/proc/meminfo` f...
I'm trying to use Redis for production services and want to avoid swapping, which is bad for performance.
I have learned that swapping is triggered based on swap_tendency, which depends on
> swap_tendency = mapped_ratio/2 + swappiness + distress
How can I get mapped_ratio and distress from /proc/meminfo for my monitoring script?
Or is there any other parameter that can tell me the system is about to swap pages?
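My understanding from reading older vmscan.c sources is that mapped_ratio is roughly the mapped pages (file-mapped plus anonymous) as a percentage of total pages, while distress is derived from the reclaim priority and isn't exported through /proc/meminfo at all. A rough sketch of the approximation I have in mind for the monitoring script:
```
# Approximate mapped_ratio (percent): mapped file pages + anonymous pages over total
awk '/^MemTotal:/  {total  = $2}
     /^Mapped:/    {mapped += $2}
     /^AnonPages:/ {mapped += $2}
     END {printf "mapped_ratio ~ %.1f%%\n", mapped * 100 / total}' /proc/meminfo
```
For the "is the system about to swap" signal, watching swap activity directly (e.g. the si/so columns of vmstat) may be more practical than reconstructing swap_tendency.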
Zhuo.M
(143 rep)
Jun 3, 2014, 06:00 AM
• Last activity: Jun 10, 2014, 08:46 AM
Showing page 1 of 20 total questions