Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4
votes
0
answers
93
views
How to find out what’s using 10GiB of my RAM when ps is only showing ~1GB?
I’ve had mysterious memory usage on a Thinkpad E495 for the longest time. Starting with Ubuntu 20.04, through several Ubuntu versions with default kernels and xanmod kernels and now under openSUSE Leap 15.6.
After a few weeks of uptime (using suspend2ram during the night) I end up with excessive memory usage:
free -m
total used free shared buff/cache available
Mem: 13852 9654 2345 406 2578 4198
Swap: 2047 1144 903
The worst processes according to
sudo ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
account for not even 1.5GB in total (I scrolled through the list; there are not 10k 1MiB processes hiding further up):
8.75 MB sudo
8.85156 MB /usr/bin/akonadi_sendlater_agent
8.94531 MB /usr/bin/akonadi_indexing_agent
9.08594 MB /usr/sbin/NetworkManager
9.49609 MB /usr/bin/kalendarac
9.98047 MB /usr/bin/X
10.9453 MB /usr/bin/akonadi_pop3_resource
11.2227 MB /usr/lib/systemd/systemd-journald
11.4492 MB /usr/lib/kdeconnectd
12.4375 MB /usr/lib/xdg-desktop-portal
14.1133 MB /usr/bin/Xwayland
14.3477 MB /usr/bin/X
17.3867 MB /usr/lib/xdg-desktop-portal-kde
22.8555 MB /usr/sbin/mysqld
24.6055 MB /usr/bin/kded5
24.8555 MB weechat
27.0703 MB /usr/bin/akonadiserver
92.5195 MB /usr/bin/konsole
113.832 MB /usr/bin/krunner
155.871 MB /usr/bin/kwin_wayland
660.578 MB /usr/bin/plasmashell
If I keep using the laptop at this stage, the out-of-memory daemon eventually kills my Firefox or plasmashell (after the laptop freezes for 10 minutes while it does who knows what).
Any ideas on how to find the culprit? At this point I’m almost suspecting some kind of UEFI issue or kernel module issue.
Edit:
cat /proc/meminfo as requested:
MemTotal: 14184696 kB
MemFree: 2064340 kB
MemAvailable: 4136252 kB
Buffers: 1076 kB
Cached: 2677164 kB
SwapCached: 4416 kB
Active: 1996252 kB
Inactive: 2999396 kB
Active(anon): 603672 kB
Inactive(anon): 2156904 kB
Active(file): 1392580 kB
Inactive(file): 842492 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 2097148 kB
SwapFree: 924940 kB
Zswap: 0 kB
Zswapped: 0 kB
Dirty: 2140 kB
Writeback: 0 kB
AnonPages: 2275668 kB
Mapped: 577764 kB
Shmem: 443168 kB
KReclaimable: 163648 kB
Slab: 1060872 kB
SReclaimable: 163648 kB
SUnreclaim: 897224 kB
KernelStack: 22320 kB
PageTables: 60920 kB
SecPageTables: 0 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 9189496 kB
Committed_AS: 11361760 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 67168 kB
VmallocChunk: 0 kB
Percpu: 7296 kB
HardwareCorrupted: 0 kB
AnonHugePages: 743424 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
Unaccepted: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 13922424 kB
DirectMap2M: 632832 kB
DirectMap1G: 0 kB
Edit2:
For what it’s worth, here are also the top few lines of a raw sudo ps aux --sort -rss:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mike 20562 0.1 4.9 6180236 704660 ? Sl 12:14 0:33 /usr/bin/plasmashell
mike 16053 1.9 1.3 2411024 190684 ? Sl Apr16 418:04 /usr/bin/kwin_wayland --wayland-fd 7 --socket wayland-0 --xwayland-fd 8 --xwayland-fd 9 --xwayland-display :2 --xwayland-xauthority /run/user/1000/xauth_CCFtQW --xwayland
mike 17485 0.0 0.8 3702672 117056 ? Ssl Apr16 0:51 /usr/bin/krunner
mike 18262 0.0 0.7 1608452 112084 ? Sl Apr16 16:29 /usr/bin/konsole
mike 16153 0.0 0.2 2224152 38100 ? Ssl Apr16 4:07 /usr/bin/kded5
mike 18307 0.0 0.1 86288 28288 pts/1 S+ Apr16 4:13 weechat
mike 3300 0.0 0.1 190896 27528 ? Sl 15:38 0:06 /usr/lib/kf5/kio_http_cache_cleaner
mike 18440 0.0 0.1 3120756 27224 ? Sl Apr16 0:43 /usr/bin/akonadiserver
root 25945 0.0 0.1 28580 25352 pts/4 S+ 13:18 0:01 /bin/bash
mike 16237 0.0 0.1 1633312 22884 ? Ssl Apr16 0:13 /usr/lib/xdg-desktop-portal-kde
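One number in the meminfo dump above is worth flagging: SUnreclaim is 897224 kB, i.e. roughly 0.9 GiB of unreclaimable kernel memory that no ps listing will ever show, since it belongs to no process. A minimal accounting sketch along those lines (standard procps/slabtop tools; the exact fields and cut-offs are illustrative, not a definitive diagnosis):
```
# Userspace total as ps sees it (RSS is column 6, in KiB):
ps aux --no-headers | awk '{rss += $6} END {printf "ps RSS total: %.0f MiB\n", rss/1024}'

# Kernel-side consumers that never appear in ps:
grep -E 'Shmem:|Slab:|SUnreclaim|KernelStack|PageTables' /proc/meminfo

# If SUnreclaim is large (here ~876 MiB), see which slab caches hold it:
sudo slabtop -o -s c | head -20
```
If one slab cache dominates and keeps growing across weeks of suspend/resume cycles, that points at a specific kernel module rather than firmware.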
Michael
(190 rep)
May 1, 2025, 10:43 AM
• Last activity: May 1, 2025, 04:38 PM
5
votes
1
answers
2096
views
Xorg taking huge amounts of memory
Lately, there is a lot of memory leaking on my (Arch) Linux laptop. A command named
Xorg -nolisten tcp :0 vt1 --keeptty -auth /tmp/serverauth.mWgFYYiRdF
is continually taking 27.2% of my 8GB of RAM (and around 2G of swap is consumed).
How do I troubleshoot? (I use no login manager, just startx.)
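A first-pass sketch for narrowing this down (standard tools; nothing Arch-specific assumed): xrestop attributes pixmap memory held inside the X server to the client that created it, and a pmap diff over time shows whether the growth is in the heap or in mappings:
```
# Which X11 client owns pixmap memory accounted to the Xorg process:
xrestop

# Where Xorg's growth lives (heap vs. mappings), sampled an hour apart:
pid=$(pgrep -x Xorg)
pmap -x "$pid" > /tmp/xorg-before.txt
sleep 3600
pmap -x "$pid" > /tmp/xorg-after.txt
diff /tmp/xorg-before.txt /tmp/xorg-after.txt
```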
Akshat Vats
(489 rep)
Nov 18, 2020, 02:24 PM
• Last activity: Apr 21, 2025, 06:05 PM
1
votes
0
answers
110
views
Unaccounted Swap Memory Usage (leak?)
I'm running GNOME on Debian 12 (Trixie/sid) (i.e. the current testing distribution). Packages are up to date, I don't have any super weird configurations (I do run with --mitigations=off), and I've been seeing this problem for a few months. My laptop is a Dell 7424, with an Intel i5-8350U as well as a discrete Radeon 540 GPU.
I've noticed what seems to be a memory leak into swap; swap space which is not attributed to any process. For instance, a screenshot showed a total swap usage of **2,000** megabytes, but htop running as root showed maybe **200** across all processes. No, I don't need to scroll up. That screenshot was taken immediately after I logged out then back in.
Please note that the problem is *not* with memory; it is purely with swap usage. Resident memory is irrelevant; the problem is purely that the sum of the per-process swap usage (displayed in the SWAP column of htop) does not correspond to the total swap usage (displayed at the top, as the label on the SWP meter). The bug is not in htop; I have confirmed that the values displayed correspond to the data in /proc.
I suspect (without evidence) that it is tied somehow to the window manager interacting with multiple displays and/or X windows. I could see the unaccounted space being attributed to shared memory being swapped out, but that raises the question of what processes are sharing that memory.
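A way to make the discrepancy concrete (a sketch; VmSwap and the meminfo fields are standard, the arithmetic is mine): sum per-process swap and compare it to the global figure, then check the usual suspect for the remainder, swapped-out shmem/tmpfs, which is charged to no process:
```
# Per-process swap, summed from VmSwap in /proc/<pid>/status:
sudo awk '/^VmSwap/ {sum += $2} END {print sum " kB per-process swap"}' /proc/[0-9]*/status

# Global swap actually in use:
awk '/^SwapTotal/ {t=$2} /^SwapFree/ {f=$2} END {print t-f " kB swap used"}' /proc/meminfo

# tmpfs/shmem segments (including orphaned ones) that could be swapped out:
ipcs -m; df -h -t tmpfs
```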
Calum McConnell
(200 rep)
Dec 2, 2024, 03:32 PM
• Last activity: Dec 8, 2024, 10:36 PM
0
votes
0
answers
54
views
Puzzlement on memory statistics in Linux
I am working on an embedded Linux system (kernel-5.10.186) on a SoC platform.
I am now doing long-duration testing to collect memory statistics on the system.
I have a shell script that collects memory usage every 5 seconds, and I got the following 2 outputs, which puzzled me:
================ No. 37 =================
total used free shared buff/cache available
Mem: 55096 15864 28336 100 10896 36220
Swap: 0 0 0
Active(anon): 148 kB
Inactive(anon): 3736 kB
AnonPages: 3848 kB
108 dummyx.sh 3292 1056
112 test_proc 178m 5528
136 dbus-daemon 4144 1856
171 adbd 35m 1620
173 udc_daemon 3292 1288
174 sh 3292 1392
175 watchdog 3160 1080
test_proc's RSS: === 5660672
......
================ No. 2584 =================
total used free shared buff/cache available
Mem: 55096 15808 27612 100 11676 36272
Swap: 0 0 0
Active(anon): 132 kB
Inactive(anon): 3256 kB
AnonPages: 3332 kB
108 dummyx.sh 3292 1056
112 test_proc 178m 6220
136 dbus-daemon 4144 1856
171 adbd 43m 1860
173 udc_daemon 3292 1288
174 sh 3292 1392
175 watchdog 3160 1080
test_proc's RSS: === 6369280
I am watching the memory usage of the system and of test_proc. The RSS of test_proc increased from 5660672 to 6369280, i.e. by about 700KB, but the used and available memory are almost the same!! Why?
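One thing to keep in mind when reading these numbers (my interpretation, not from the question): the ~700KB RSS growth is smaller than the movement in buff/cache between the two samples (10896 → 11676), and RSS growth from file-backed pages shows up in Cached rather than AnonPages, so free's "used" can stay flat. A sketch for sampling both sides at matching granularity (PID 112 is test_proc in the listings above):
```
# Sample system-wide and per-process counters together every 5 seconds:
while true; do
    date
    grep -E 'MemFree|MemAvailable|Cached|AnonPages' /proc/meminfo
    grep -E 'VmRSS|RssAnon|RssFile|RssShmem' /proc/112/status
    sleep 5
done
```
If RssFile grows while RssAnon stays flat, the "growth" is just the page cache being counted against the process.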
wangt13
(631 rep)
Jul 17, 2024, 06:59 AM
• Last activity: Jul 17, 2024, 07:06 AM
5
votes
1
answers
779
views
nginx reload - effectively memory leak
When running nginx -s reload (nginx version: nginx/1.24.0), nginx is meant to soft reload: gradually close existing connections on the old process, and service new requests on the new process.
It does that, except it seems that (perhaps) an active request isn't completing on the old process, producing a runaway condition whereby, if multiple reloads are attempted, the server will eventually run out of RAM.
Is there a way (a C++ module, maybe?) to dump what connections nginx is servicing on a **specific** Linux PID?
I'm unsure how I can solve this unless I can find out precisely what isn't allowing nginx to shut itself down.
*(And I can't just turn sites off to divide and conquer; it's a live client server with 200+ websites.)*
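For the "what is this specific PID still servicing" part, no custom module should be needed; standard socket tools can be filtered by PID (12345 is an illustrative worker PID):
```
# Established connections still held by one (old) nginx worker:
ss -tnp | grep 'pid=12345,'

# Or everything that worker still has open (sockets, files, logs):
sudo lsof -nP -p 12345
```
An old worker that never exits while holding established sockets usually points at long-lived connections (WebSockets, long polling) or a stuck upstream rather than a leak proper.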

Barry
(152 rep)
Mar 22, 2024, 04:07 PM
• Last activity: Mar 22, 2024, 07:33 PM
3
votes
1
answers
2320
views
How to fix mbuf exhaustion under FreeBSD 10?
A FreeBSD server which was recently upgraded from FreeBSD 10.3-STABLE to FreeBSD 10.3-RELEASE-p21 is exhibiting mbuf exhaustion. In the course of normal operation, we see mbuf usage steadily increasing until we reach the
kern.ipc.nmbufs
limit, at which point the machine becomes unresponsive over the network (due to lack of mbufs for network access) and the console displays:
cxl0: Interface stopped DISTRIBUTING, possible flapping
cxl1: Interface stopped DISTRIBUTING, possible flapping
[zone: mbuf] kern.ipc.nmbufs limit reached
[zone: mbuf] kern.ipc.nmbufs limit reached
The machine runs pf and acts as a packet filter, router, gateway and DHCP/DNS server. It has two Chelsio NICs in it, and is a CARP master with a secondary. The secondary has identical configuration of hardware and software and does not exhibit this issue.
Given the downtime this causes, we set up our Nagios/Check_MK to graph the output of netstat -m
and alert when mbufs in use
approaches kern.ipc.nmbufs
and we see a steady linear increase in mbuf usage until we reboot:
![stairway to heaven... where servers go when they die ][1]
*mbuf clusters in use* does not change when this happens and increasing mbuf cluster limits has no effect:
![mbuf clusters in use ][2]
This looks to me like a kernel bug of some sort; I'm looking for advice on further troubleshooting, or assistance in resolving this!
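A small addition to the Nagios tracking that can help localize the leak (FreeBSD's netstat -m text format as shown above; the parsing is illustrative): log the raw counters so jumps can be correlated with pf state churn or interface events:
```
# Append a timestamped "mbufs in use" sample (cron-able):
echo "$(date '+%F %T') $(netstat -m | awk -F/ '/mbufs in use/ {print $1}')" >> /var/log/mbuf-usage.log

# Per-zone detail, to distinguish plain mbuf leaks from cluster leaks:
vmstat -z | grep -E '^ITEM|mbuf'
```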
---
Helpful (maybe) information:
netstat -m:
679270/3080/682350 mbufs in use (current/cache/total)
10243/1657/11900/985360 mbuf clusters in use (current/cache/total/max)
10243/1648 mbuf+clusters out of packet secondary zone in use (current/cache)
8128/482/8610/124025 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/36748 9k jumbo clusters in use (current/cache/total/max)
128/0/128/20670 16k jumbo clusters in use (current/cache/total/max)
224863K/6012K/230875K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
vmstat -z | grep -E '^ITEM|mbuf':
ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP
mbuf_packet: 256, 1587540, 10239, 1652,84058893, 0, 0
mbuf: 256, 1587540, 671533, 1206,914478880, 0, 0
mbuf_cluster: 2048, 985360, 11891, 9, 11891, 0, 0
mbuf_jumbo_page: 4096, 124025, 8128, 512,15011847, 0, 0
mbuf_jumbo_9k: 9216, 36748, 0, 0, 0, 0, 0
mbuf_jumbo_16k: 16384, 20670, 128, 0, 128, 0, 0
mbuf_ext_refcnt: 4, 0, 0, 0, 0, 0, 0
vmstat -m:
Type InUse MemUse HighUse Requests Size(s)
NFSD lckfile 1 1K - 1 256
filedesc 103 383K - 1134731 16,32,128,2048,4096,8192,16384,65536
sigio 1 1K - 1 64
filecaps 0 0K - 973 64
kdtrace 292 59K - 1099386 64,256
kenv 121 13K - 125 16,32,64,128,8192
kqueue 14 22K - 5374 256,2048,8192
proc-args 54 5K - 578448 16,32,64,128,256
hhook 2 1K - 2 256
ithread 146 24K - 146 32,128,256
KTRACE 100 13K - 100 128
NFS fh 1 1K - 584 32
linker 207 1052K - 234 16,32,64,128,256,512,1024,2048,4096,8192,16384,65536
lockf 29 3K - 20042 64,128
loginclass 2 1K - 1192 64
devbuf 17205 36362K - 17523 16,32,64,128,256,512,1024,2048,4096,8192,65536
temp 149 51K - 1280113 16,32,64,128,256,512,1024,2048,4096,8192,16384,65536
ip6opt 5 2K - 6 256
ip6ndp 27 2K - 27 64,128
module 230 29K - 230 128
mtx_pool 2 16K - 2 8192
osd 3 1K - 5 16,32,64
pmchooks 1 1K - 1 128
pgrp 30 4K - 2222 128
session 29 4K - 2187 128
proc 2 32K - 2 16384
subproc 211 368K - 1099014 512,4096
cred 204 32K - 6025704 64,256
plimit 19 5K - 3985 256
uidinfo 9 5K - 11892 128,4096
NFSD session 1 1K - 1 1024
sysctl 0 0K - 63851 16,32,64
sysctloid 7196 365K - 7369 16,32,64,128
sysctltmp 0 0K - 17834 16,32,64,128
tidhash 1 32K - 1 32768
callout 5 2184K - 5
umtx 522 66K - 522 128
p1003.1b 1 1K - 1 16
SWAP 2 549K - 2 64
bus 802 86K - 6536 16,32,64,128,256,1024
bus-sc 57 1671K - 2431 16,32,64,128,256,512,1024,2048,4096,8192,16384,65536
newnfsmnt 1 1K - 1 1024
devstat 8 17K - 8 32,4096
eventhandler 116 10K - 116 64,128
kobj 124 496K - 296 4096
acpiintr 1 1K - 1 64
Per-cpu 1 1K - 1 32
acpica 14355 1420K - 216546 16,32,64,128,256,512,1024,2048,4096
pci_link 16 2K - 16 64,128
pfs_nodes 21 6K - 21 256
rman 316 37K - 716 16,32,128
sbuf 1 1K - 41375 16,32,64,128,256,512,1024,2048,4096,8192,16384
sglist 8 8K - 8 1024
GEOM 88 15K - 1871 16,32,64,128,256,512,1024,2048,8192,16384
acpipwr 5 1K - 5 64
taskqueue 43 7K - 43 16,32,256
Unitno 22 2K - 1208250 32,64
vmem 3 144K - 6 1024,4096,8192
ioctlops 0 0K - 185700 256,512,1024,2048,4096
select 89 12K - 89 128
iov 0 0K - 19808992 16,64,128,256,512,1024
msg 4 30K - 4 2048,4096,8192,16384
sem 4 106K - 4 2048,4096
shm 1 32K - 1 32768
tty 20 20K - 499 1024
pts 1 1K - 480 256
accf 2 1K - 2 64
mbuf_tag 0 0K - 291472282 32,64,128
shmfd 1 8K - 1 8192
soname 32 4K - 1210442 16,32,128
pcb 36 663K - 76872 16,32,64,128,1024,2048,8192
CAM CCB 0 0K - 182128 2048
acl 0 0K - 2 4096
vfscache 1 2048K - 1
cl_savebuf 0 0K - 480 64
vfs_hash 1 1024K - 1
vnodes 1 1K - 1 256
entropy 1026 65K - 49107 32,64,4096
mount 64 3K - 140 16,32,64,128,256
vnodemarker 0 0K - 4212 512
BPF 112 20504K - 131 16,64,128,512,4096
CAM path 11 1K - 63 32
ifnet 29 57K - 30 128,256,2048
ifaddr 315 105K - 315 32,64,128,256,512,2048,4096
ether_multi 232 13K - 282 16,32,64
clone 10 2K - 10 128
arpcom 23 1K - 23 16
gif 4 1K - 4 32,256
lltable 155 53K - 551 256,512
UART 6 5K - 6 16,1024
vlan 56 5K - 74 64,128
acpitask 1 16K - 1 16384
acpisem 110 14K - 110 128
raid_data 0 0K - 108 32,128,256
routetbl 516 136K - 101735 32,64,128,256,512
igmp 28 7K - 28 256
CARP 76 30K - 83 16,32,64,128,256,512,1024
ipid 2 24K - 2 8192,16384
in_mfilter 112 112K - 112 1024
in_multi 43 11K - 43 256
ip_moptions 224 35K - 224 64,256
CAM periph 7 2K - 19 16,32,64,128,256
acpidev 128 8K - 128 64
CAM queue 15 5K - 39 16,32,512
encap_export_host 4 4K - 4 1024
sctp_a_it 0 0K - 36 16
sctp_vrf 1 1K - 1 64
sctp_ifa 115 15K - 204 128
sctp_ifn 21 3K - 23 128
sctp_iter 0 0K - 36 256
hostcache 1 32K - 1 32768
syncache 1 64K - 1 65536
in6_mfilter 1 1K - 1 1024
in6_multi 15 2K - 15 32,256
ip6_moptions 2 1K - 2 32,256
CAM dev queue 6 1K - 6 64
kbdmux 6 22K - 6 16,512,1024,2048,16384
mld 26 4K - 26 128
LED 20 2K - 20 16,128
inpcbpolicy 365 12K - 119277 32
secasvar 7 2K - 214 256
sahead 10 3K - 10 256
ipsecpolicy 748 187K - 241562 256
ipsecrequest 18 3K - 72 128
ipsec-misc 56 2K - 1712 16,32,64
ipsec-saq 0 0K - 24 128
ipsec-reg 3 1K - 3 32
pfsync 2 2K - 893 32,256,1024
pf_temp 0 0K - 78 128
pf_hash 3 2880K - 3
pf_ifnet 36 11K - 9510 256,2048
pf_tag 7 1K - 7 128
pf_altq 5 2K - 125 256
pf_rule 964 904K - 17500 128,1024
pf_osfp 1130 115K - 28250 64,128
pf_table 49 98K - 948 2048
crypto 37 11K - 1072 64,128,256,512,1024
xform 7 1K - 1530156 16,32,64,128,256
rpc 12 20K - 304 64,128,512,1024,8192
audit_evclass 187 6K - 231 32
ufs_dirhash 93 18K - 93 16,32,64,128,256,512
ufs_quota 1 1024K - 1
ufs_mount 3 13K - 3 512,4096,8192
vm_pgdata 2 513K - 2 128
UMAHash 5 6K - 10 512,1024,2048
CAM SIM 6 2K - 6 256
CAM XPT 30 3K - 1850 16,32,64,128,256,512,1024,2048,65536
CAM DEV 9 18K - 16 2048
fpukern_ctx 3 6K - 3 2048
memdesc 1 4K - 1 4096
USB 23 33K - 24 16,128,256,512,1024,2048,4096
DEVFS3 136 34K - 2027 256
DEVFS1 108 54K - 594 512
apmdev 1 1K - 1 128
madt_table 0 0K - 1 4096
DEVFS_RULE 55 26K - 55 64,512
DEVFS 12 1K - 13 16,128
DEVFSP 22 2K - 167 64
io_apic 1 2K - 1 2048
isadev 8 1K - 8 128
MCA 15 2K - 15 32,128
msi 30 4K - 30 128
nexusdev 5 1K - 5 16
USBdev 21 8K - 21 32,64,128,256,512,1024,4096
NFSD V4client 1 1K - 1 256
cdev 5 2K - 5 256
cxgbe 41 956K - 44 128,256,512,1024,2048,4096,8192,16384
ipmi 0 0K - 20155 128,2048
htcp data 127 4K - 13675 32
aesni_data 3 3K - 3 1024
solaris 142 12302K - 3189 16,32,64,128,512,1024,8192
kstat_data 6 1K - 6 64
TCP States:

Josh
(8728 rep)
Sep 28, 2017, 02:38 AM
• Last activity: Jan 14, 2024, 04:29 PM
24
votes
5
answers
74917
views
Python programs suddenly get killed
I'm running some python programs that are quite heavy. I've been running this script for several weeks now, but in the past couple of days, the program gets killed with the message:
Killed
I tried creating a new swap file with 8 GB, but it kept happening.
I also tried using:
dmesg -T| grep -E -i -B100 'killed process'
which listed out the error:
[Sat Oct 17 02:08:41 2020] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service,task=python,pid=56849,uid=1000
[Sat Oct 17 02:08:41 2020] Out of memory: Killed process 56849 (python) total-vm:21719376kB, anon-rss:14311012kB, file-rss:0kB, shmem-rss:4kB, UID:1000 pgtables:40572kB oom_score_adj:0
I have a strong machine, and I also tried not running anything else (PyCharm or a terminal) while the script runs, but it keeps happening.
Specs:
* Ubuntu 20.04 LTS (64bit)
* 15.4 GiB RAM
* Intel Core i7-10510U CPU @ 1.80GHz × 8
When running free -h:
total used free shared buff/cache available
Mem: 15Gi 2.4Gi 10Gi 313Mi 2.0Gi 12Gi
Swap: 8.0Gi 1.0Gi 7.0Gi
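The dmesg lines above already show the culprit: the Python process itself reached ~14GB of anonymous RSS (anon-rss:14311012kB) on a 15.4GiB machine, so this is the script's own working set growing, not another process. A sketch for watching it live and for capping it so it fails predictably (systemd-run properties are standard; the 14G value and myscript.py are illustrative placeholders):
```
# Watch the script's real usage while it runs:
watch -n 5 'ps -C python -o pid,rss,vsz,cmd --sort=-rss | head'

# Run it under an explicit memory cap (myscript.py is a placeholder name):
systemd-run --user --scope -p MemoryMax=14G python3 myscript.py
```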
yovel cohen
(341 rep)
Oct 17, 2020, 11:32 AM
• Last activity: Jan 11, 2024, 02:13 PM
0
votes
1
answers
125
views
How does kmemleak in Linux detect unreferenced memory?
I am working on an embedded Linux system (kernel-5.10.24), and I am trying to understand how kmemleak works.
According to the documentation, kmemleak scans memory to check whether there is unreferenced memory. The kernel code is as follows, from
kmemleak_scan():
	/*
	 * Struct page scanning for each node.
	 */
	get_online_mems();
	for_each_populated_zone(zone) {
		unsigned long start_pfn = zone->zone_start_pfn;
		unsigned long end_pfn = zone_end_pfn(zone);
		unsigned long pfn;

		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
			struct page *page = pfn_to_online_page(pfn);

			if (!page)
				continue;
			/* only scan pages belonging to this zone */
			if (page_zone(page) != zone)
				continue;
			/* only scan if page is in use */
			if (page_count(page) == 0)
				continue;
			scan_block(page, page + 1, NULL);
		}
	}
and scan_block(), where each scanned word is dereferenced and checked:
	pointer = *ptr;
	if (pointer < min_addr || pointer >= max_addr)
		continue;
	/*
	 * No need for get_object() here since we hold kmemleak_lock.
	 * object->use_count cannot be dropped to 0 while the object
	 * is still present in object_tree_root and object_list
	 * (with updates protected by kmemleak_lock).
	 */
	object = lookup_object(pointer, 1);
The pointer to struct page is cast to unsigned long *, and dereferencing that unsigned long * yields the pointer value used as the memory address to check.
My puzzlement comes from this dereference of the pointer to struct page, which is a structure describing the PFN. Why does dereferencing it produce a memory address to scan, rather than the struct page itself?
Also, in my system the size of struct page is 32 bytes, so page + 1 is only page + 0x20, instead of being increased by the page size (0x1000).
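Separate from the code-reading question: for actually exercising kmemleak on a target, the control interface is debugfs; a quick usage sketch (standard interface when CONFIG_DEBUG_KMEMLEAK is enabled):
```
# Make sure debugfs is mounted, then trigger an immediate scan:
mount -t debugfs nodev /sys/kernel/debug 2>/dev/null
echo scan > /sys/kernel/debug/kmemleak

# Read back the suspected leaks (each with an allocation backtrace):
cat /sys/kernel/debug/kmemleak

# Clear the current list so only new leaks show up on the next scan:
echo clear > /sys/kernel/debug/kmemleak
```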
wangt13
(631 rep)
Jan 5, 2024, 12:12 AM
• Last activity: Jan 5, 2024, 06:37 AM
4
votes
1
answers
1281
views
Does QEMU on Linux Ubuntu 20.04.1 x86_64 have a memory leak?
We have a testbed for an OSv project that runs the same instances many times (on 5.15.0-72-generic, 20.04.1-Ubuntu, x86_64). The script for the execution of a single run is very simple and follows:
while [ $x -le $t ]
do
./scripts/capstan_run.sh "$delay"
now="$(date +'%d%m%Y-%H%M%S')"
./scripts/stats.sh > stats/"$x"_"$delay"_stats_"$now".txt & PID=$!
sleep "$delay" #sleep delay mills for the execution
kill $PID ; wait $PID 2>/dev/null
echo "Delay $delay before fetches"
sleep "$delay" #sleep delay mills before fetch files
./scripts/fetch_files.sh "$delay"
./scripts/shutdown_vm.sh
((x++))
done
capstan_run.sh initiates the simulation with containers executing on the QEMU virtualization layer. The loop then sleeps and retrieves files from the instances, and shutdown_vm.sh terminates the qemu-system-x86_64 process.
**We observe between runs an increment of used memory. It is constant and never decreases.** The server has 126G of RAM and 24 CPUs.
For example, we observe that used memory starts from 8% and arrives at 12%, with increments of about 0.1%:
Date Memory Disk CPU
...
07/04/2023-163242 12.03% 27% 15.00%
07/04/2023-163247 12.03% 27% 16.00%
07/04/2023-163252 12.03% 27% 16.00%
07/04/2023-163257 12.03% 27% 16.00%
07/04/2023-163303 12.03% 27% 16.00%
07/04/2023-163308 12.04% 27% 16.00%
07/04/2023-163313 12.03% 27% 16.00%
07/04/2023-163318 12.04% 27% 15.00%
07/04/2023-163323 12.04% 27% 16.00%
07/04/2023-163328 12.04% 27% 16.00%
07/04/2023-163334 12.04% 27% 16.00%
07/04/2023-163339 12.04% 27% 16.00%
07/04/2023-163344 12.06% 27% 16.00%
07/04/2023-163349 12.08% 27% 16.00%
07/04/2023-163354 12.09% 27% 16.00%
07/04/2023-163359 12.09% 27% 15.00%
07/04/2023-163405 12.09% 27% 15.00%
07/04/2023-163410 12.09% 27% 15.00%
07/04/2023-163415 12.09% 27% 15.00%
Is there any memory leak in QEMU?
=== UPDATE ===
stats.sh computes the used memory percentage in this manner:
free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }'
So I think there is an error, because it is used/total*100 and does not take the cache into account. Is my evaluation correct?
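The evaluation looks right as far as it goes: on recent procps, free's "used" already excludes buff/cache, but either way the "available" column is the more direct measure of pressure, and page cache and slab growth across repeated QEMU runs can explain a slow creep in "used" without any leak. A sketch of the alternative computation (column 7 is "available" in procps-ng ≥ 3.3.10):
```
# Percent of memory actually unavailable, based on free's "available" column:
free -m | awk 'NR==2 {printf "%.2f%%\n", ($2-$7)*100/$2}'
```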
robob
(604 rep)
May 22, 2023, 05:15 AM
• Last activity: Aug 2, 2023, 08:16 AM
0
votes
0
answers
101
views
Kernel memory leak when using a Linux system as router
I am working on two embedded Linux systems (kernel-6.2.10), both based on the Toradex Colibri iMX6ULL SoM. The first one (system A) is configured to work as a wifi access point (with hostapd), and the second (system B) is connected to this access point (with wpa_supplicant).
When I do an FTP data transfer from system B to my PC through system A, I observe a memory leak on system A. This only happens in this configuration; transferring data from system A to my PC does not show any leak.
It seems to be reproducible with any kind of network traffic, but it is easiest to reproduce with an FTP data transfer.
Using kmemleak, it seems that the leak comes from the Marvell mwifiex driver:
unreferenced object 0x83a2f540 (size 184):
comm "kworker/0:2", pid 43, jiffies 4294947832 (age 162.950s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[] kmem_cache_alloc+0x188/0x2c4
[] __netdev_alloc_skb+0xe8/0x194
[] ieee80211_amsdu_to_8023s+0x1b0/0x498 [cfg80211]
[] mwifiex_11n_dispatch_pkt+0x7c/0x174 [mwifiex]
[] mwifiex_11n_rx_reorder_pkt+0x388/0x3dc [mwifiex]
[] mwifiex_process_uap_rx_packet+0xc0/0x200 [mwifiex]
[] mwifiex_decode_rx_packet+0x1d4/0x224 [mwifiex_sdio]
[] mwifiex_process_int_status+0x850/0xd70 [mwifiex_sdio]
[] mwifiex_main_process+0x124/0xa30 [mwifiex]
[] process_sdio_pending_irqs+0xe4/0x1d8
[] sdio_irq_work+0x3c/0x64
[] process_one_work+0x1d8/0x3e4
[] worker_thread+0x58/0x54c
[] kthread+0xcc/0xe8
[] ret_from_fork+0x14/0x2c
unreferenced object 0x82fa2a40 (size 184):
comm "kworker/0:2", pid 43, jiffies 4294947833 (age 162.940s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[] kmem_cache_alloc_node+0x198/0x2d8
[] __alloc_skb+0x10c/0x168
[] __netdev_alloc_skb+0x3c/0x194
[] mwifiex_alloc_dma_align_buf+0x14/0x40 [mwifiex]
[] mwifiex_process_int_status+0x7f0/0xd70 [mwifiex_sdio]
[] mwifiex_main_process+0x124/0xa30 [mwifiex]
[] process_sdio_pending_irqs+0xe4/0x1d8
[] sdio_irq_work+0x3c/0x64
[] process_one_work+0x1d8/0x3e4
[] worker_thread+0x58/0x54c
[] kthread+0xcc/0xe8
[] ret_from_fork+0x14/0x2c
Before going deeper into the kernel driver code, which I am not expert in, I would like to be sure that this is the **only** way to fix my issue.
Any kind of help would be appreciated!
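One way to corroborate the kmemleak reports before touching driver code (a sketch; skbuff_head_cache is the slab backing the sk_buffs in these backtraces): watch the slab grow during a transfer and check whether it stays high afterwards:
```
# Sample the sk_buff slab while the FTP transfer runs:
watch -n 10 "grep skbuff_head_cache /proc/slabinfo"

# Re-scan kmemleak after traffic stops; a still-growing count confirms the leak:
echo scan > /sys/kernel/debug/kmemleak
grep -c "unreferenced object" /sys/kernel/debug/kmemleak
```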
willy martin
(1 rep)
May 3, 2023, 10:25 AM
36
votes
5
answers
153741
views
How can I find a memory leak of a running process?
Is there a way I can find the memory leak of a running process? I can use Valgrind to find memory leaks before a process starts, and I can use GDB to attach to a running process. But how can I debug a memory leak of a process that is already running?
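A minimal from-the-outside sketch (standard /proc tooling plus gcore from the gdb package; the PID and intervals are illustrative): confirm the growth first, then snapshot the process without killing it:
```
# Confirm steady growth from outside (PID is illustrative):
pid=1234
while sleep 60; do grep VmRSS "/proc/$pid/status"; done

# Snapshot the live process for offline inspection (it keeps running):
gcore -o /tmp/leaky "$pid"     # produces /tmp/leaky.<pid>
strings "/tmp/leaky.$pid" | sort | uniq -c | sort -rn | head
```
Heavily repeated strings or structures in the core often reveal which allocation is leaking.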
howtechstuffworks
(469 rep)
Apr 15, 2012, 02:12 AM
• Last activity: Feb 20, 2023, 02:03 PM
0
votes
1
answers
406
views
Weird behaviour of OpenSSH server slowly eating all the available memory
I have a machine running a pgbouncer server and an OpenSSH server, so clients can establish a tunnel to the server and connect to the database.
All the clients keep the connection alive for around 3 minutes, then the connection is closed. Running the command
ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
multiple times shows that the number of sshd processes decreases over a short period of time:
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
158 158 790
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
150 150 750
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
146 146 730
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
140 140 700
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
140 140 700
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
140 140 700
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
140 140 700
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
136 136 680
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
132 132 660
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
132 132 660
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
132 132 660
sosepe@pgbouncer:~$ netstat -nt | grep :22
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
298 298 1490
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
324 324 1620
**But not over a long period of time.**
What I mean is that after some hours, running that command again shows that the number of processes actually rises, because some of them never get closed.
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
324 324 1620
And after some hours this is the result:
sosepe@pgbouncer:~$ ps -o pid,user,%mem,command ax | sort -b -k3 -r | grep -o sshd | wc
1926 1926 9630
And these are the consequences:
Private + Shared = RAM used Program
4.0 KiB + 22.0 KiB = 26.0 KiB agetty
132.0 KiB + 27.5 KiB = 159.5 KiB vnstatd
128.0 KiB + 45.5 KiB = 173.5 KiB systemd-udevd
136.0 KiB + 65.0 KiB = 201.0 KiB cron
216.0 KiB + 33.0 KiB = 249.0 KiB tail
112.0 KiB + 239.5 KiB = 351.5 KiB systemd-timesyncd
504.0 KiB + 50.0 KiB = 554.0 KiB lvmetad
192.0 KiB + 481.5 KiB = 673.5 KiB vsftpd
908.0 KiB + 92.0 KiB = 1.0 MiB sudo
916.0 KiB + 103.0 KiB = 1.0 MiB pgbouncer
644.0 KiB + 604.0 KiB = 1.2 MiB vmtoolsd
1.3 MiB + 47.0 KiB = 1.3 MiB rsyslogd
1.2 MiB + 247.0 KiB = 1.5 MiB CloudEndure_Age (4)
1.4 MiB + 81.0 KiB = 1.5 MiB dbus-daemon
1.6 MiB + 331.0 KiB = 1.9 MiB su (4)
1.2 MiB + 892.0 KiB = 2.1 MiB VGAuthService
1.7 MiB + 582.0 KiB = 2.2 MiB collectd
2.7 MiB + 268.0 KiB = 3.0 MiB systemd-logind
2.9 MiB + 63.5 KiB = 3.0 MiB bash
4.3 MiB + 231.5 KiB = 4.5 MiB systemd-journald
2.7 MiB + 3.6 MiB = 6.2 MiB php-fpm7.3 (3)
4.8 MiB + 1.8 MiB = 6.6 MiB (sd-pam) (3)
6.5 MiB + 3.3 MiB = 9.8 MiB systemd (4)
5.8 MiB + 4.1 MiB = 9.9 MiB php-fpm7.2 (3)
5.7 MiB + 6.5 MiB = 12.2 MiB php-fpm5.6 (3)
16.1 MiB + 153.0 KiB = 16.3 MiB run_linux_migration_scripts_periodically (2)
17.2 MiB + 166.5 KiB = 17.3 MiB update_onprem_volumes (2)
18.5 MiB + 145.5 KiB = 18.6 MiB tailer (2)
11.8 MiB + 13.1 MiB = 24.9 MiB apache2 (11)
159.3 MiB + 180.0 KiB = 159.5 MiB java
741.6 MiB + 2.1 GiB = 2.8 GiB sshd (4469)
---------------------------------
3.1 GiB
=================================
SSHD ate almost 3 GB of memory, and this goes on until the machine is rebooted.
Any clue about where the problem could be?
Thanks!
P.S.
This is the conf file:
Port 10110
Protocol 1,2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
UsePrivilegeSeparation yes
SyslogFacility AUTHPRIV
LogLevel INFO
PermitRootLogin no
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521
PermitEmptyPasswords no
PasswordAuthentication yes
ChallengeResponseAuthentication no
UsePAM yes
GatewayPorts yes
ClientAliveInterval 600
PermitTunnel yes
MaxSessions 50
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/libexec/openssh/sftp-server
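Given the config above (port 10110, ClientAliveInterval 600 but no ClientAliveCountMax shown), a sketch for spotting the sessions that never die: list sshd processes by age and cross-check them against live TCP connections (the cut-off for "suspicious" is arbitrary):
```
# sshd processes sorted by age; very old ones with no matching established
# connection below are candidates for never-closed sessions:
ps -C sshd -o pid,etimes,user,args --sort=-etimes | head -20

# Established connections on the sshd port (10110 per the config above):
ss -tnp state established '( sport = :10110 )'
```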
Federico Loro
(1 rep)
Nov 24, 2021, 03:11 PM
• Last activity: Oct 25, 2022, 05:24 AM
0
votes
2
answers
6189
views
OOM Killer triggered when available memory is high
I have been getting random kswapd0 activity and OOM kills even though ~100MB of RAM is available. I've already gone through many other similar issues, but I could not work out why the OOM killer triggered in my case. Hoping someone with knowledge can share some insight and set a direction for me to look into.
EDIT:
From top, I get this output when the OOM killer triggered. I am also wondering why kswapd triggered even though ~100MB is available? Our application needs only ~90MB max, and it already had ~50MB allocated, so it was asking for only ~40MB when this happened.
top - 09:19:06 up 23:57, 0 users, load average: 4.50, 2.61, 1.87
Tasks: 101 total, 2 running, 99 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.1 us, 62.5 sy, 0.0 ni, 0.0 id, 25.9 wa, 0.0 hi, 4.5 si, 0.0 st
KiB Mem : 507008 total, 99320 free, 355096 used, 52592 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 102308 avail Mem
top - 09:19:09 up 23:57, 0 users, load average: 4.50, 2.61, 1.87
Tasks: 100 total, 1 running, 98 sleeping, 0 stopped, 1 zombie
%Cpu(s): 35.8 us, 45.4 sy, 0.0 ni, 0.0 id, 17.4 wa, 0.0 hi, 1.4 si, 0.0 st
KiB Mem : 507008 total, 162280 free, 288952 used, 55776 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 168376 avail Mem
Following is the backtrace I have.
2022-07-29T09:19:09.117931Z,BTL200072600123,3,0,kernel:,[86254.933997] Out of memory: Kill process 25402 (application) score 181 or sacrifice child
2022-07-29T09:19:09.117941Z,BTL200072600123,3,0,kernel:,[86254.934006] Killed process 25402 (application) total-vm:159852kB, anon-rss:75664kB, file-rss:16020kB
2022-07-29T09:19:09.095963Z,BTL200072600123,4,1,kernel:, [86254.932989] acquisition invoked oom-killer: gfp_mask=0x2084d0, order=0, oom_score_adj=0
2022-07-29T09:19:09.096076Z,BTL200072600123,4,1,kernel:, [86254.933012] CPU: 0 PID: 17939 Comm: acquisition Tainted: G O 4.1.46 #5
2022-07-29T09:19:09.096142Z,BTL200072600123,4,1,kernel:, [86254.933019] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
2022-07-29T09:19:09.096206Z,BTL200072600123,4,1,kernel:, [86254.933025] Backtrace:
2022-07-29T09:19:09.096270Z,BTL200072600123,4,1,kernel:, [86254.933054] [] (dump_backtrace) from [] (show_stack+0x18/0x1c)
2022-07-29T09:19:09.096334Z,BTL200072600123,4,1,kernel:, [86254.933060] r7:00000000 r6:80a83c70 r5:600f0113 r4:00000000
202
2022-07-29T09:19:09.098354Z,BTL200072600123,4,1,kernel:, [86254.933467] Mem-Info:
2022-07-29T09:19:09.098411Z,BTL200072600123,4,1,kernel:, [86254.933485] active_anon:50599 inactive_anon:6859 isolated_anon:0
2022-07-29T09:19:09.098472Z,BTL200072600123,4,1,kernel:, [86254.933485] active_file:136 inactive_file:159 isolated_file:0
2022-07-29T09:19:09.098530Z,BTL200072600123,4,1,kernel:, [86254.933485] unevictable:16 dirty:0 writeback:0 unstable:0
2022-07-29T09:19:09.098589Z,BTL200072600123,4,1,kernel:, [86254.933485] slab_reclaimable:1089 slab_unreclaimable:2343
2022-07-29T09:19:09.098648Z,BTL200072600123,4,1,kernel:, [86254.933485] mapped:5971 shmem:8154 pagetables:534 bounce:0
2022-07-29T09:19:09.098704Z,BTL200072600123,4,1,kernel:, [86254.933485] free:23627 free_pcp:0 free_cma:23127
2022-07-29T09:19:09.098765Z,BTL200072600123,4,1,kernel:, [86254.933518] Normal free:94380kB min:1972kB low:2464kB high:2956kB active_anon:201792kB inactive_anon:27364kB active_file:476kB inactive_file:560kB unevictable:64kB isolated(anon):0kB isolated(file):0kB present:522240kB managed:505984kB mlocked:64kB dirty:0kB writeback:0kB mapped:23728kB shmem:32540kB slab_reclaimable:4356kB slab_unreclaimable:9372kB kernel_stack:1032kB pagetables:2136kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:92508kB writeback_tmp:0kB pages_scanned:6648 all_unreclaimable? yes
2022-07-29T09:19:09.098829Z,BTL200072600123,4,1,kernel:, [86254.933523] lowmem_reserve[]: 0 8 8
2022-07-29T09:19:09.098890Z,BTL200072600123,4,1,kernel:, [86254.933549] HighMem free:128kB min:128kB low:128kB high:132kB active_anon:604kB inactive_anon:72kB active_file:68kB inactive_file:76kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1024kB managed:1024kB mlocked:0kB dirty:0kB writeback:0kB mapped:156kB shmem:76kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:32 all_unreclaimable? no
2022-07-29T09:19:09.098950Z,BTL200072600123,4,1,kernel:, [86254.933555] lowmem_reserve[]: 0 0 0
2022-07-29T09:19:09.099011Z,BTL200072600123,4,1,kernel:, [86254.933564] Normal: 268*4kB (UEMRC) 128*8kB (UERC) 80*16kB (UERC) 8*32kB (RC) 0*64kB 1*128kB (C) 0*256kB 1*512kB (C) 0*1024kB 4*2048kB (C) 20*4096kB (C) = 94384kB
2022-07-29T09:19:09.099068Z,BTL200072600123,4,1,kernel:, [86254.933608] HighMem: 32*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 128kB
2022-07-29T09:19:09.099126Z,BTL200072600123,4,1,kernel:, [86254.933638] 8449 total pagecache pages
2022-07-29T09:19:09.099183Z,BTL200072600123,4,1,kernel:, [86254.933646] 0 pages in swap cache
2022-07-29T09:19:09.099240Z,BTL200072600123,4,1,kernel:, [86254.933652] Swap cache stats: add 0, delete 0, find 0/0
2022-07-29T09:19:09.099297Z,BTL200072600123,4,1,kernel:, [86254.933656] Free swap = 0kB
2022-07-29T09:19:09.099353Z,BTL200072600123,4,1,kernel:, [86254.933661] Total swap = 0kB
2022-07-29T09:19:09.099408Z,BTL200072600123,4,1,kernel:, [86254.933665] 130816 pages RAM
2022-07-29T09:19:09.099464Z,BTL200072600123,4,1,kernel:, [86254.933670] 256 pages HighMem/MovableOnly
2022-07-29T09:19:09.099521Z,BTL200072600123,4,1,kernel:, [86254.933675] 4294905824 pages reserved
2022-07-29T09:19:09.099578Z,BTL200072600123,4,1,kernel:, [86254.933680] 65536 pages cma reserved
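One detail worth checking in the dump above (an observation, not a confirmed diagnosis): of the 23627 free pages, 23127 are CMA pages (free_cma:23127), and the Normal zone reports all_unreclaimable? yes, so the allocator may have had almost no usable non-CMA memory despite ~94MB showing as "free". A sketch for watching that split over time (assumes watch is available on the target):
```
# How much of "free" is CMA-reserved vs. generally usable:
watch -n 5 "grep -E 'MemFree|CmaTotal|CmaFree' /proc/meminfo && grep -E 'nr_free_pages|nr_free_cma' /proc/vmstat"
```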
Dinesh Kumar Govinda
(1 rep)
Aug 15, 2022, 07:23 PM
• Last activity: Aug 16, 2022, 07:48 AM
1
votes
0
answers
202
views
Linux (Mint) eats all my RAM
I know, there are plenty of "Linux eat my RAM" threads all over the internet, but they can't help me to solve my problem. (I had a try with this question on askubuntu but I'm off-chart with Mint) At home my workstation is a Mint19/Ubuntu18.04/Cinnamon box, used for Java/BigData development. Some tim...
I know there are plenty of "Linux ate my RAM" threads all over the internet, but they can't help me solve my problem. (I had a try with this question on Ask Ubuntu, but I'm off-chart with Mint.)
At home my workstation is a Mint 19 / Ubuntu 18.04 / Cinnamon box, used for Java/BigData development.
Sometimes, not every day, after some hours of work, my IntelliJ IDE becomes laggy due to missing RAM. If I look at top, I see that only a few MB of the 16G of RAM are available and that the system is swapping. I can't understand what the 10+ GB of RAM are being used for.
Some time ago, when this happened, I switched with Ctrl-F1 to a non-graphical session to get a better understanding, then stopped the LightDM X server. So all the RAM-eating graphical apps (Skype, Slack, Chrome etc.) were off, and only the system daemons were alive. After this purge, free gives me:
              total        used        free      shared  buff/cache   available
Mem:       16130044    11507836     3615496         704     1006712     4287260
Swap:      15625212      541820    15083392
So the X server stop made me recover ~4G, but 11G is still missing. /proc/meminfo looks like:
MemTotal: 16130044 kB
MemFree: 3613360 kB
MemAvailable: 4285680 kB
Buffers: 109512 kB
Cached: 744668 kB
SwapCached: 31984 kB
Active: 433228 kB
Inactive: 492328 kB
Active(anon): 39192 kB
Inactive(anon): 32920 kB
Active(file): 394036 kB
Inactive(file): 459408 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 15625212 kB
SwapFree: 15083392 kB
Dirty: 220 kB
Writeback: 0 kB
AnonPages: 66456 kB
Mapped: 75056 kB
Shmem: 704 kB
Slab: 583976 kB
SReclaimable: 153108 kB
SUnreclaim: 430868 kB
KernelStack: 8624 kB
PageTables: 16852 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 23690232 kB
Committed_AS: 3990300 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 8152580 kB
DirectMap2M: 8331264 kB
DirectMap1G: 1048576 kB
The missing RAM is not in SLAB (there are some threads about that).
I tried several things:
- various versions of sync; echo 3 > /proc/sys/vm/drop_caches
- kernel upgrade (currently on 4.15.0-173)
- rootkit analysis...
But nothing helped.
Any idea?
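The meminfo above is notable: AnonPages + Cached + Buffers + Slab + SwapCached come to well under 2GB, yet ~11GB is in use, which points at memory the kernel does not itemize in meminfo (e.g. pages allocated directly by a driver via alloc_pages). A rough accounting sketch under that assumption:
```
# Sum the major meminfo consumers and compare against MemTotal - MemFree;
# a large remainder is memory meminfo does not itemize (often driver pages):
awk '/MemTotal|MemFree|Buffers|^Cached|AnonPages|^Slab|PageTables|KernelStack|SwapCached/ {a[$1]=$2}
     END {used = a["MemTotal:"] - a["MemFree:"];
          acct = a["Buffers:"] + a["Cached:"] + a["AnonPages:"] + a["Slab:"] \
               + a["PageTables:"] + a["KernelStack:"] + a["SwapCached:"];
          printf "used: %d kB, accounted: %d kB, unaccounted: %d kB\n", used, acct, used - acct}' /proc/meminfo
```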
Franck Lefebure
(11 rep)
Apr 26, 2022, 11:44 AM
4
votes
1
answers
3864
views
Xorg consuming 1.1GB, is it a leak?
I am facing a problem wherein Xorg starts to consume more and more memory and finally eats up the whole swap space. As shown below, Xorg's virtual memory is about 1.1GB. My system runs only one GTK application "main_app" and I do not have Gnome, I just have IceWM installed. When this scenario happen...
I am facing a problem wherein Xorg starts to consume more and more memory and finally eats up the whole swap space. As shown below, Xorg's virtual memory is about 1.1GB. My system runs only one GTK application, "main_app", and I do not have GNOME; I just have IceWM installed. When this scenario happens, the system crawls and a reboot is the only way to recover.
top - 00:01:09 up 24 days, 6:51, 6 users, load average: 6.89, 3.63, 2.76
Tasks: 126 total, 1 running, 123 sleeping, 2 stopped, 0 zombie
Cpu(s): 1.0%us, 3.4%sy, 0.0%ni, 0.0%id, 95.6%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 505644k total, 442536k used, 63108k free, 1424k buffers
Swap: 2095096k total, 1246372k used, 848724k free, 16400k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1598 root 19 -1 1145m 39m 1308 S 0 8.0 375:06.84 Xorg
2293 root 20 0 100m 6876 3932 S 2 1.4 747:08.62 main_app
514 root 20 0 53460 324 140 S 0 0.1 70:38.16 net.agent
1998 root 20 0 53460 368 140 S 0 0.1 70:40.18 net.agent
23787 root 20 0 53460 9980 196 D 1 2.0 0:00.21 net.agent
23801 root 20 0 53460 9248 196 D 1 1.8 0:00.19 net.agent
1343 root 20 0 28472 804 564 S 0 0.2 0:03.88 rsyslogd
3179 root 20 0 23712 180 136 S 0 0.0 0:15.82 MSPAgent
As seen below, /proc/<pid>/smaps shows that Xorg's heap accounts for almost all of the 1.1GB (most of it swapped out).
08231000-4da43000 rw-p 00000000 00:00 0 [heap]
Size: 1138760 kB
Rss: 35444 kB
Pss: 35444 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 1476 kB
Private_Dirty: 33968 kB
Referenced: 26436 kB
Swap: 1103276 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
I ran xrestop, but I see that "main_app" is not the culprit.
xrestop - Display: :0.0
Monitoring 9 clients. XErrors: 0
Pixmaps: 1465K total, Other: 35K total, All: 1500K total
res-base Wins GCs Fnts Pxms Misc Pxm mem Other Total PID Identifier
0e00000 7 30 2 8 27 1378K 3K 1381K 2293 main_app
0c00000 67 8 1 38 840 87K 22K 109K ?
0800000 2 7 6 1 22 0B 6K 6K 1647 uxterm
0000000 1 0 2 0 36 0B 2K 2K ?
0a00000 2 1 0 0 1 0B 96B 96B ?
1000000 1 1 0 0 0 0B 48B 48B ? xrestop
0400000 1 1 0 0 0 0B 48B 48B ?
0600000 0 1 0 0 0 0B 24B 24B ?
0200000 0 1 0 0 0 0B 24B 24B ?
I am not sure why Xorg keeps growing. Please give me some pointers on where and what to look for.
I am on Debian Linux:
debian:~# uname -a
Linux debian 2.6.32-5-686 #1 SMP Tue Mar 8 21:36:00 UTC 2011 i686 GNU/Linux
debian:~#
Intel(R) Atom(TM) CPU N270 @ 1.60GHz
Following is the pmap output for the Xorg process. Interestingly, the total is 1.1GB; however, the individual entries do not add up to it.
debian:~# pmap -x 1598 | more
1598: /usr/bin/X :0 -br -nocursor -auth /tmp/serverauth.O1gWpWvWuP
Address Kbytes RSS Dirty Mode Mapping
08048000 0 404 0 r-x-- Xorg
081e3000 0 24 12 rw--- Xorg
081ef000 0 40 24 rw--- [ anon ]
08231000 0 36188 35232 rw--- [ anon ]
b5422000 0 1892 1888 rw--- [ anon ]
b59a1000 0 384 0 rw-s- [ shmid=0x520000 ]
b5bd6000 0 0 0 rw--- [ anon ]
b5e85000 0 0 0 r-x-- libexpat.so.1.5.2
b5ea9000 0 0 0 rw--- libexpat.so.1.5.2
b5eab000 0 0 0 r-x-- evdev_drv.so
b5eb3000 0 0 0 rw--- evdev_drv.so
b5eb4000 0 0 0 r-x-- swrast_dri.so
b60c8000 0 0 0 rw--- swrast_dri.so
b60cd000 0 1876 1876 rw--- [ anon ]
b6f83000 0 3072 0 rw-s- fb0
b7283000 0 8 0 r-x-- libshadow.so
b7288000 0 4 4 rw--- libshadow.so
b7289000 0 56 0 r-x-- libfb.so
b72a6000 0 4 4 rw--- libfb.so
b72a7000 0 0 0 r-x-- libfbdevhw.so
b72ab000 0 0 0 rw--- libfbdevhw.so
b72ac000 0 4 0 r-x-- fbdev_drv.so
b72b0000 0 4 0 rw--- fbdev_drv.so
b72b1000 0 0 0 r-x-- librecord.so
b72b7000 0 0 0 rw--- librecord.so
b72b8000 0 12 0 r-x-- libglx.so
b7307000 0 8 0 rw--- libglx.so
b730a000 0 12 0 r-x-- libselinux.so.1
b7323000 0 0 0 r---- libselinux.so.1
b7324000 0 0 0 rw--- libselinux.so.1
b7325000 0 0 0 r-x-- libextmod.so
b7341000 0 4 4 rw--- libextmod.so
b7343000 0 0 0 r-x-- libdrm.so.2.4.0
b734c000 0 0 0 rw--- libdrm.so.2.4.0
b734d000 0 0 0 r-x-- libdri.so
b7355000 0 0 0 rw--- libdri.so
b7356000 0 0 0 r-x-- libgcc_s.so.1
b7373000 0 0 0 rw--- libgcc_s.so.1
b7374000 0 4 4 rw--- [ anon ]
b7376000 0 0 0 r-x-- libgpg-error.so.0.4.0
b7379000 0 0 0 rw--- libgpg-error.so.0.4.0
b737a000 0 0 0 r-x-- libfontenc.so.1.0.0
b737f000 0 0 0 rw--- libfontenc.so.1.0.0
b7380000 0 0 0 r-x-- libbz2.so.1.0.4
b7390000 0 0 0 rw--- libbz2.so.1.0.4
b7391000 0 0 0 r-x-- libfreetype.so.6.6.0
b7404000 0 0 0 rw--- libfreetype.so.6.6.0
b7408000 0 0 0 r-x-- libz.so.1.2.3.4
b741b000 0 0 0 rw--- libz.so.1.2.3.4
b741c000 0 0 0 rw--- [ anon ]
b741d000 0 128 0 r-x-- libc-2.11.2.so
b755d000 0 4 0 r---- libc-2.11.2.so
b755f000 0 4 0 rw--- libc-2.11.2.so
b7560000 0 8 4 rw--- [ anon ]
b7563000 0 8 0 r-x-- librt-2.11.2.so
b756a000 0 4 0 r---- librt-2.11.2.so
b756b000 0 0 0 rw--- librt-2.11.2.so
b756c000 0 4 0 r-x-- libm-2.11.2.so
b7590000 0 0 0 r---- libm-2.11.2.so
b7591000 0 0 0 rw--- libm-2.11.2.so
b7592000 0 0 0 r-x-- libaudit.so.0.0.0
b75a9000 0 4 0 r---- libaudit.so.0.0.0
b75aa000 0 0 0 rw--- libaudit.so.0.0.0
b75ab000 0 0 0 r-x-- libgcrypt.so.11.5.3
b761c000 0 8 4 rw--- libgcrypt.so.11.5.3
b761f000 0 0 0 r-x-- libXdmcp.so.6.0.0
b7623000 0 0 0 rw--- libXdmcp.so.6.0.0
b7624000 0 0 0 rw--- [ anon ]
b7625000 0 72 0 r-x-- libpixman-1.so.0.16.4
b767c000 0 8 0 rw--- libpixman-1.so.0.16.4
b767e000 0 0 0 r-x-- libXau.so.6.0.0
b7680000 0 0 0 rw--- libXau.so.6.0.0
b7681000 0 8 0 r-x-- libXfont.so.1.4.1
b76b5000 0 0 0 rw--- libXfont.so.1.4.1
b76b7000 0 12 0 r-x-- libpthread-2.11.2.so
b76cc000 0 4 0 r---- libpthread-2.11.2.so
b76cd000 0 0 0 rw--- libpthread-2.11.2.so
b76ce000 0 0 0 rw--- [ anon ]
b76d0000 0 4 0 r-x-- libdl-2.11.2.so
b76d2000 0 4 0 r---- libdl-2.11.2.so
b76d3000 0 0 0 rw--- libdl-2.11.2.so
b76d4000 0 4 0 rw--- [ anon ]
b76d5000 0 0 0 r-x-- libpciaccess.so.0.10.8
b76dc000 0 0 0 rw--- libpciaccess.so.0.10.8
b76dd000 0 4 0 r-x-- libudev.so.0.9.3
b76e9000 0 0 0 r---- libudev.so.0.9.3
b76ea000 0 0 0 rw--- libudev.so.0.9.3
b76eb000 0 0 0 r-x-- libdri2.so
b76ed000 0 0 0 rw--- libdri2.so
b76ee000 0 16 0 r-x-- libdbe.so
b76f2000 0 4 0 rw--- libdbe.so
b76f3000 0 4 0 rw--- [ anon ]
b76f6000 0 4 0 r-x-- [ anon ]
b76f7000 0 8 0 r-x-- ld-2.11.2.so
b7712000 0 0 0 r---- ld-2.11.2.so
b7713000 0 0 0 rw--- ld-2.11.2.so
bfcb7000 0 16 16 rw--- [ stack ]
-------- ------- ------- ------- -------
total kB 1197560 - - -
debian:~#
pmap -d shows the 1.1GB mapped against an anonymous map.
debian:~# pmap -d 1598 | more
1598: /usr/bin/X :0 -br -nocursor -auth /tmp/serverauth.O1gWpWvWuP
Address Kbytes Mode Offset Device Mapping
08048000 1644 r-x-- 0000000000000000 008:00001 Xorg
081e3000 48 rw--- 000000000019b000 008:00001 Xorg
081ef000 44 rw--- 0000000000000000 000:00000 [ anon ]
08231000 1164236 rw--- 0000000000000000 000:00000 [ anon ]
b5422000 3752 rw--- 0000000000000000 000:00000 [ anon ]
b59a1000 384 rw-s- 0000000000000000 000:00004 [ shmid=0x520000 ]
I need a way now to identify the owner of address 08231000.
The graphics controller is given below:
debian:~# lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GME Express Integrated Graphics Controller (rev 03)
debian:~#
debian:~#
Modules loaded by Xorg are as below..
debian:~# grep -i "Loading" /var/log/Xorg.0.log
(II) Loading /usr/lib/xorg/modules/extensions/libdbe.so
(II) Loading extension DOUBLE-BUFFER
(II) Loading /usr/lib/xorg/modules/extensions/libdri2.so
(II) Loading extension DRI2
(II) Loading /usr/lib/xorg/modules/extensions/libextmod.so
(II) Loading extension SELinux
(II) Loading extension MIT-SCREEN-SAVER
(II) Loading extension XFree86-VidModeExtension
(II) Loading extension XFree86-DGA
(II) Loading extension DPMS
(II) Loading extension XVideo
(II) Loading extension XVideo-MotionCompensation
(II) Loading extension X-Resource
(II) Loading /usr/lib/xorg/modules/extensions/libdri.so
(II) Loading extension XFree86-DRI
(II) Loading /usr/lib/xorg/modules/extensions/libglx.so
(II) Loading extension GLX
(II) Loading /usr/lib/xorg/modules/extensions/librecord.so
(II) Loading extension RECORD
(II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
(II) Loading sub module "fbdevhw"
(II) Loading /usr/lib/xorg/modules/linux/libfbdevhw.so
(II) Loading sub module "fb"
(II) Loading /usr/lib/xorg/modules/libfb.so
(II) Loading sub module "shadow"
(II) Loading /usr/lib/xorg/modules/libshadow.so
(II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
debian:~#
debian:~# /usr/bin/Xorg -version
X.Org X Server 1.7.7
Release Date: 2010-05-04
X Protocol Version 11, Revision 0
Build Operating System: Linux 2.6.32.29-dsa-ia32 i686 Debian
Current Operating System: Linux debian 2.6.32-5-686 #1 SMP Tue Mar 8 21:36:00 UTC 2011 i686
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-5-686 root=/dev/sda1 nomodeset
Build Date: 19 February 2011 02:37:36PM
xorg-server 2:1.7.7-13 (Cyril Brulebois )
Current version of pixman: 0.16.4
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
debian:~#
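To identify the owner of the anonymous region at 08231000, one option is to dump it from the live process and look for repeating patterns; a sketch with stock gdb (PID and addresses taken from the pmap output above; note the region is largely swapped out, so the dump will page it back in):
```
# Attach, dump the heap region, detach (Xorg is paused while gdb is attached):
gdb -p 1598 --batch \
    -ex 'dump binary memory /tmp/xorg-heap.bin 0x08231000 0x4da43000' \
    -ex detach

# Repeating strings or structures often point at the leaking allocator:
strings /tmp/xorg-heap.bin | sort | uniq -c | sort -rn | head
```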
ReddyGB
(71 rep)
Dec 4, 2014, 06:37 PM
• Last activity: Feb 18, 2022, 02:04 PM
8
votes
1
answers
2214
views
How can I trigger Firefox memory cleanup from the terminal?
Does anyone know how to initiate the garbage collection and memory reduction in Firefox (`about:memory` > Free memory > GC/CC/Minimize memory usage) from the terminal? This browser is using way to much RAM and I found that clicking on "Minimize memory usage" actually cuts the load by about 20-30%. U...
Does anyone know how to initiate the garbage collection and memory reduction in Firefox (about:memory > Free memory > GC/CC/Minimize memory usage) from the terminal? This browser is using way too much RAM, and I found that clicking on "Minimize memory usage" actually cuts the load by about 20-30%. Unfortunately, this doesn't last very long, but my idea is to create a Bash script and cron it.
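There is no official terminal hook for the about:memory buttons, so a cron job cannot click "Minimize memory usage" directly. One alternative under that constraint (a different technique, not the GC trigger itself): start Firefox with a soft cgroup memory cap, so the kernel applies reclaim pressure continuously instead of a script doing it periodically:
```
# Soft-cap Firefox at 4G (illustrative value); above this threshold the
# kernel reclaims from Firefox's cgroup instead of letting RSS grow freely:
systemd-run --user --scope -p MemoryHigh=4G firefox
```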
david
(359 rep)
Aug 9, 2020, 09:52 PM
• Last activity: Jan 15, 2022, 06:18 PM
11
votes
2
answers
12610
views
rsyslogd eating up 20+ GB (!) of RAM - what evidence to gather?
I have a Ubuntu 14.04.3 box running kernel 3.13.0-74 with 32GB RAM, which features a rsyslogd process gone mad:
$ ps -auxww | grep rsyslog
syslog 16212 0.7 64.0 27966168 21070336 ? Ssl Jan04 180:31 rsyslogd -c 5 -x
$ free -m
total used free shared buffers cached
Mem: 32142 31863 278 228 9 363
-/+ buffers/cache: 31490 651
Swap: 16383 11937 4446
I know ps's output cannot be fully relied on, etc., but surely that's a bit high! I also have two sibling machines with the same s/w (running since the same time), and on both siblings rsyslogd is behaving better (it's still using about 3.5GB on each).
This is rsyslogd 7.4.4, and I understand that a memory leak was fixed in a newer version.
**My question:** before I rush to upgrade, I'd like to gather some evidence showing that I've indeed hit that leak, if possible. I've left rsyslogd running for now, but it won't be long until it churns through all the swap, so I need to act reasonably soon...
One thing I have been collecting evidence with is atop. This clearly shows when the leak started occurring (and I don't recall doing anything special to the box at that time). What's interesting is that at the same time as memory starts to grow, disk write activity plummets, though it doesn't stop completely. The filesystem is fine capacity-wise.
$ atop -r atop_20160117 | grep rsyslogd
PID SYSCPU USRCPU VGROW RGROW RDDSK WRDSK ST EXC S CPU CMD
16212 0.03s 0.06s 0K 0K 0K 96K -- - S 0% rsyslogd
16212 0.11s 0.22s 0K 0K 0K 1844K -- - S 2% rsyslogd
16212 0.03s 0.12s 0K 0K 0K 564K -- - S 1% rsyslogd
16212 0.04s 0.06s 0K 0K 0K 96K -- - S 1% rsyslogd
16212 0.08s 0.19s 0K 0K 0K 1808K -- - S 1% rsyslogd
16212 0.04s 0.11s 0K 0K 0K 608K -- - S 1% rsyslogd
16212 0.02s 0.07s 0K 0K 0K 116K -- - S 0% rsyslogd
16212 0.06s 0.04s 0K 2640K 0K 144K -- - S 1% rsyslogd
16212 0.02s 0.02s 0K 1056K 0K 0K -- - S 0% rsyslogd
16212 0.01s 0.01s 0K 264K 0K 0K -- - S 0% rsyslogd
16212 0.02s 0.04s 0K 2904K 0K 0K -- - S 0% rsyslogd
16212 0.02s 0.02s 0K 1056K 0K 0K -- - S 0% rsyslogd
16212 0.02s 0.00s 0K 264K 0K 0K -- - S 0% rsyslogd
16212 0.06s 0.09s 75868K 3532K 208K 0K -- - S 1% rsyslogd
16212 0.02s 0.02s 0K 792K 0K 0K -- - S 0% rsyslogd
16212 0.01s 0.01s 0K 264K 0K 0K -- - S 0% rsyslogd
16212 0.05s 0.03s 0K 3168K 0K 0K -- - S 0% rsyslogd
16212 0.02s 0.02s 0K 1056K 0K 0K -- - S 0% rsyslogd
16212 0.00s 0.01s 0K 264K 0K 0K -- - S 0% rsyslogd
16212 0.03s 0.10s 0K 2904K 0K 0K -- - S 1% rsyslogd
16212 0.02s 0.02s 0K 792K 0K 0K -- - S 0% rsyslogd
16212 0.00s 0.02s 0K 264K 0K 0K -- - S 0% rsyslogd
16212 0.04s 0.03s 0K 2904K 0K 160K -- - S 0% rsyslogd
16212 0.02s 0.02s 0K 792K 0K 0K -- - S 0% rsyslogd
**edit:** here's the free memory graph from Zabbix for that box; the start of the decline at about 9:30 on 17-Jan coincides with atop's output above.
**final edit:** I had to restart that rsyslogd; it freed up a whopping 20 GB, confirming (if there was any doubt) that it was the culprit:
free -m
total used free shared buffers cached
Mem: 32142 11325 20817 282 56 473
-/+ buffers/cache: 10795 21347
Swap: 16383 5638 10745
Alas, after running only 12 hours, it's now back to over 4GB. Clearly something's not right; I'll have to try the upgrade path...
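For the evidence gathering itself, rsyslog ships a statistics module, impstats, which periodically emits per-queue and per-action counters; queue sizes growing in step with the RSS are exactly the footprint worth capturing before upgrading. A minimal sketch (rsyslog 7.x config syntax; the file path and interval are illustrative):
```
# Drop in an impstats config and restart rsyslogd:
cat > /etc/rsyslog.d/00-impstats.conf <<'EOF'
module(load="impstats" interval="60" severity="7"
       log.syslog="off" log.file="/var/log/rsyslog-stats.log")
EOF
service rsyslog restart
```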
sxc731
(461 rep)
Jan 21, 2016, 04:16 PM
• Last activity: Apr 21, 2021, 11:03 AM
3
votes
0
answers
219
views
Opening powershell remote sessions in a loop in linux leaks memory
I am stuck with the following problem: we have an app that runs a continuous loop opening remote connections via PowerShell (performing some actions and then closing). This works fine on a Windows machine but not on Linux ones (tested on Ubuntu 16.0.4).
Use the script below to reproduce the problem:
$i = 0
while ($i -le 200)
{
    $password = "myPassword"
    $domain = "domain\computerName"     # user name in DOMAIN\user form
    $computerName = "xxx.xxx.xxx.xxx"
    # Build a credential object from the plain-text password
    $pwSession = ConvertTo-SecureString -AsPlainText -Force $password
    $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $domain,$pwSession
    # Open a remote session, then tear it down immediately
    $session = New-PSSession -ComputerName $computerName -Credential $cred -Authentication Negotiate
    Remove-PSSession -Session $session
    Get-PSSession    # confirm no sessions are left behind
    Start-Sleep 3
    $i               # print the iteration counter
    $i++
}
1. enter a PowerShell context by running pwsh
2. run the script above (copy + paste)
3. run top -p {process id} on the pwsh process id (in a single step: top -p $(pgrep -o pwsh), which is more reliable than the original ps | grep | cut pipeline, since cut splits on single spaces and the PID column shifts)
You will see the pwsh process in top, and you will notice its memory consumption keeps growing (by 0.1 percent at each iteration), indefinitely.
Google does return a couple of posts mentioning memory leaks when opening new sessions with PowerShell, but I could not find in any of them an explanation of why the simple script above would create such an issue - again, on Linux only.
Any ideas on how to tackle this issue?
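To capture that growth numerically instead of eyeballing top, a minimal sketch from a second terminal (assuming a single pwsh process; pwsh_rss.log is just an illustrative file name):
# Log pwsh's resident set size (in kB) every 3 seconds with a timestamp;
# pgrep -o picks the oldest matching process if several are running.
while sleep 3; do
    printf '%s %s kB\n' "$(date +%T)" "$(ps -o rss= -p "$(pgrep -o pwsh)")"
done | tee pwsh_rss.log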
Veverke
(378 rep)
Apr 12, 2021, 07:18 AM
1
votes
1
answers
3564
views
Identify the high-memory-allocating / leaking process in Linux causing OOM (out of memory) kills of other processes
I have come across plenty of info on OOM in general but not much on identifying the root cause of the issue. The OOM killer kills processes based on its scoring, but the process it kills need not be the one that hogs the memory. In my embedded system there is just a journal log I can rely on for this hard to re...
I have come across plenty of info on OOM in general but not much on identifying the root cause of the issue. The OOM killer kills processes based on its scoring, but the process it kills need not be the one that hogs the memory. In my embedded system there is just a journal log I can rely on for this hard-to-reproduce issue. How am I to deduce the memory-hogging process from it? How do I make sense of the OOM killer's log dump?
Jan 16 14:30:41 Esystem kernel: steaming_device_driver invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
Jan 16 14:30:41 Esystem kernel: CPU: 0 PID: 386 Comm: steaming_device_driver Tainted: G O 5.4.47-1.3.0-511_02111356_a81cba7f+DEBUG+g5ec03d06f54e #1
Jan 16 14:30:41 Esystem kernel: Hardware name: i.MX8MNano DDR4 board (DT)
Jan 16 14:30:41 Esystem kernel: Call trace:
Jan 16 14:30:41 Esystem kernel: dump_backtrace+0x0/0x140
Jan 16 14:30:41 Esystem kernel: show_stack+0x14/0x20
Jan 16 14:30:41 Esystem kernel: dump_stack+0xb4/0xf8
Jan 16 14:30:41 Esystem kernel: dump_header+0x44/0x1ec
Jan 16 14:30:41 Esystem kernel: oom_kill_process+0x1d4/0x1d8
Jan 16 14:30:41 Esystem kernel: out_of_memory+0x170/0x4e0
Jan 16 14:30:41 Esystem kernel: __alloc_pages_slowpath+0x954/0x9f8
Jan 16 14:30:41 Esystem kernel: __alloc_pages_nodemask+0x21c/0x280
Jan 16 14:30:41 Esystem kernel: alloc_pages_current+0x7c/0xe8
Jan 16 14:30:41 Esystem kernel: __page_cache_alloc+0x80/0xa8
Jan 16 14:30:41 Esystem kernel: pagecache_get_page+0x150/0x300
Jan 16 14:30:41 Esystem kernel: filemap_fault+0x544/0x950
Jan 16 14:30:41 Esystem kernel: ext4_filemap_fault+0x30/0x8b8
Jan 16 14:30:41 Esystem kernel: __do_fault+0x4c/0x188
Jan 16 14:30:41 Esystem kernel: __handle_mm_fault+0xb5c/0x10a0
Jan 16 14:30:41 Esystem kernel: handle_mm_fault+0xdc/0x1a8
Jan 16 14:30:41 Esystem kernel: do_page_fault+0x130/0x460
Jan 16 14:30:41 Esystem kernel: do_translation_fault+0x5c/0x78
Jan 16 14:30:41 Esystem kernel: do_mem_abort+0x3c/0x98
Jan 16 14:30:41 Esystem kernel: do_el0_ia_bp_hardening+0x38/0xb8
Jan 16 14:30:41 Esystem kernel: el0_ia+0x18/0x1c
Jan 16 14:30:41 Esystem kernel: Mem-Info:
Jan 16 14:30:41 Esystem kernel: active_anon:95298 inactive_anon:96 isolated_anon:0
active_file:141 inactive_file:467 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
slab_reclaimable:2677 slab_unreclaimable:8348
mapped:112 shmem:205 pagetables:846 bounce:0
free:1314 free_pcp:0 free_cma:0
Jan 16 14:30:41 Esystem kernel: Node 0 active_anon:381192kB inactive_anon:384kB active_file:564kB inactive_file:1868kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:448kB dirty:0kB writeback:0kB shmem:820kB shmem_thp: 0kB shmem_pmdmapp
Jan 16 14:30:41 Esystem kernel: Node 0 DMA32 free:5256kB min:7092kB low:7636kB high:8180kB active_anon:381192kB inactive_anon:384kB active_file:564kB inactive_file:1868kB unevictable:0kB writepending:0kB present:491520kB managed:454112kB mlocked:0kB
Jan 16 14:30:41 Esystem kernel: lowmem_reserve[]: 0 0 0
Jan 16 14:30:41 Esystem kernel: Node 0 DMA32: 112*4kB (UMEC) 184*8kB (UE) 117*16kB (UE) 46*32kB (UE) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB 0*32768kB = 5264kB
Jan 16 14:30:41 Esystem kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jan 16 14:30:41 Esystem kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=32768kB
Jan 16 14:30:41 Esystem kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jan 16 14:30:41 Esystem kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=64kB
Jan 16 14:30:41 Esystem kernel: 812 total pagecache pages
Jan 16 14:30:41 Esystem kernel: 0 pages in swap cache
Jan 16 14:30:41 Esystem kernel: Swap cache stats: add 0, delete 0, find 0/0
Jan 16 14:30:41 Esystem kernel: Free swap = 0kB
Jan 16 14:30:41 Esystem kernel: Total swap = 0kB
Jan 16 14:30:41 Esystem kernel: 122880 pages RAM
Jan 16 14:30:41 Esystem kernel: 0 pages HighMem/MovableOnly
Jan 16 14:30:41 Esystem kernel: 9352 pages reserved
Jan 16 14:30:41 Esystem kernel: 32768 pages cma reserved
Jan 16 14:30:41 Esystem kernel: 0 pages hwpoisoned
Jan 16 14:30:41 Esystem kernel: Tasks state (memory values in pages):
Jan 16 14:30:41 Esystem kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Jan 16 14:30:41 Esystem kernel: [ 292] 0 292 9543 157 98304 0 -250 systemd-journal
Jan 16 14:30:41 Esystem kernel: [ 306] 0 306 3159 191 57344 0 -1000 systemd-udevd
Jan 16 14:30:41 Esystem kernel: [ 317] 993 317 4079 123 57344 0 0 systemd-network
Jan 16 14:30:41 Esystem kernel: [ 331] 0 331 19289 37 45056 0 0 rngd
Jan 16 14:30:41 Esystem kernel: [ 332] 992 332 1926 99 53248 0 0 systemd-resolve
Jan 16 14:30:41 Esystem kernel: [ 333] 991 333 20408 110 65536 0 0 systemd-timesyn
Jan 16 14:30:41 Esystem kernel: [ 339] 0 339 880 65 49152 0 0 auto-update.sh
Jan 16 14:30:41 Esystem kernel: [ 340] 995 340 1210 86 45056 0 0 avahi-daemon
Jan 16 14:30:41 Esystem kernel: [ 345] 0 345 799 31 40960 0 0 klogd
Jan 16 14:30:41 Esystem kernel: [ 347] 995 347 1179 63 45056 0 0 avahi-daemon
Jan 16 14:30:41 Esystem kernel: [ 348] 0 348 799 26 45056 0 0 syslogd
Jan 16 14:30:41 Esystem kernel: [ 350] 996 350 1128 149 49152 0 -900 dbus-daemon
Jan 16 14:30:41 Esystem kernel: [ 352] 0 352 2151 174 61440 0 0 ofonod
Jan 16 14:30:41 Esystem kernel: [ 365] 998 365 926 65 45056 0 0 rpcbind
Jan 16 14:30:41 Esystem kernel: [ 372] 0 372 37478 22 53248 0 0 tee-supplicant
Jan 16 14:30:41 Esystem kernel: [ 381] 0 381 1924 122 57344 0 0 systemd-logind
Jan 16 14:30:41 Esystem kernel: [ 385] 0 385 1037 59 49152 0 0 xsystrack
Jan 16 14:30:41 Esystem kernel: [ 386] 0 386 98589 172 131072 0 0 steaming_device_driver
Jan 16 14:30:41 Esystem kernel: [ 387] 0 387 131366 145 151552 0 0 user_psu_daemon
Jan 16 14:30:41 Esystem kernel: [ 389] 0 389 203209 3492 274432 0 0 xproc_manager.py
Jan 16 14:30:41 Esystem kernel: [ 431] 0 431 511 37 45056 0 0 hciattach
Jan 16 14:30:41 Esystem kernel: [ 434] 0 434 1696 150 57344 0 0 bluetoothd
Jan 16 14:30:41 Esystem kernel: [ 456] 0 456 2524 190 53248 0 0 wpa_supplicant
Jan 16 14:30:41 Esystem kernel: [ 457] 997 457 791 145 45056 0 0 rpc.statd
Jan 16 14:30:41 Esystem kernel: [ 460] 0 460 9884 162 69632 0 0 tcf-agent
Jan 16 14:30:41 Esystem kernel: [ 461] 0 461 648 24 40960 0 0 xinetd
Jan 16 14:30:41 Esystem kernel: [ 466] 0 466 1252 32 45056 0 0 agetty
Jan 16 14:30:41 Esystem kernel: [ 467] 0 467 509 26 45056 0 0 agetty
Jan 16 14:30:41 Esystem kernel: [ 477] 0 477 509 26 40960 0 0 agetty
Jan 16 14:30:41 Esystem kernel: [ 489] 0 489 4640 18 45056 0 0 umtprd
Jan 16 14:30:41 Esystem kernel: [ 541] 0 541 206089 20731 385024 0 0 application1
Jan 16 14:30:41 Esystem kernel: [ 574] 0 574 309918 60424 1114112 0 0 python3
Jan 16 14:30:41 Esystem kernel: [ 658] 0 658 71768 2840 143360 0 0 updater
Jan 16 14:30:41 Esystem kernel: [ 102299] 0 102299 1935 155 57344 0 0 sshd
Jan 16 14:30:41 Esystem kernel: [ 102302] 0 102302 2320 242 65536 0 0 systemd
Jan 16 14:30:41 Esystem kernel: [ 102303] 0 102303 23402 523 69632 0 0 (sd-pam)
Jan 16 14:30:41 Esystem kernel: [ 102312] 0 102312 940 128 45056 0 0 sh
Jan 16 14:30:41 Esystem kernel: [ 102342] 0 102342 26140 94 241664 0 0 journalctl
Jan 16 14:30:41 Esystem kernel: [ 120859] 0 120859 108920 3034 155648 0 0 sch_manage
Jan 16 14:30:41 Esystem kernel: [ 120885] 0 120885 855 51 49152 0 0 sh
Jan 16 14:30:41 Esystem kernel: [ 120886] 0 120886 880 65 49152 0 0 auto-update.sh
Jan 16 14:30:41 Esystem kernel: [ 120888] 0 120888 499 23 36864 0 0 ls
Jan 16 14:30:41 Esystem kernel: [ 120889] 0 120889 480 19 45056 0 0 head
Jan 16 14:30:41 Esystem kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/nw.service,task=python3,pid=574,uid=0
Jan 16 14:30:41 Esystem kernel: Out of memory: Killed process 574 (python3) total-vm:1239672kB, anon-rss:241684kB, file-rss:0kB, shmem-rss:12kB, UID:0 pgtables:1088kB oom_score_adj:0
Jan 16 14:30:41 Esystem kernel: sched: RT throttling activated
Jan 16 14:30:41 Esystem kernel: oom_reaper: reaped process 574 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:4kB
Jan 16 14:30:40 Esystem kernel: [53284.724775] steaming_device_driver invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
How do I interpret this? Is there any documentation on it?
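As a worked reading of the dump above: the "Tasks state" table reports total_vm and rss in pages, so with 4 KiB pages the killed python3 process (pid 574) held about 60424 × 4 KiB ≈ 236 MiB resident, by far the largest entry in the table, which matches the kernel's anon-rss:241684kB line. A sketch of a one-liner to rank the table automatically, assuming the dump is saved in oom.log:
# Convert the rss column (in pages) of the tasks table to MiB and rank it;
# fields are counted from the end because the log's timestamp prefix varies.
awk '/\[ *[0-9]+\]/ { printf "%8.1f MiB  %s\n", $(NF-4)*4/1024, $NF }' oom.log | sort -rn | head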
preetam
(127 rep)
Mar 15, 2021, 09:34 PM
• Last activity: Mar 17, 2021, 05:41 AM
1
votes
2
answers
959
views
Clean RAM after some days of PC usage
I have 8GB RAM, and use my PC (Debian 10, KDE plasma 5.14.5) "normally" but with many programs running in parallel: - Firefox (≈ 250 Tabs) - Chromium (10 Tabs) - Thunderbird - 10x Okular - 2x Pycharm - 5x Konsole - Dolphin - Kile - Element, Telegram, Wikidpad, ... After a fresh restart RAM consumpti...
I have 8GB RAM, and use my PC (Debian 10, KDE plasma 5.14.5) "normally" but with many programs running in parallel:
- Firefox (≈ 250 Tabs)
- Chromium (10 Tabs)
- Thunderbird
- 10x Okular
- 2x Pycharm
- 5x Konsole
- Dolphin
- Kile
- Element, Telegram, Wikidpad, ...
After a fresh restart RAM consumption of my system is at about 4GB. Everything runs smoothly and fast. After several days (with suspend over night) RAM consumption is at about 7.5GB and it takes e.g. 10s to switch from Firefox to Dolphin.
I already tried ([source](https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system))
# echo 1 > /proc/sys/vm/drop_caches
# echo 2 > /proc/sys/vm/drop_caches
# echo 3 > /proc/sys/vm/drop_caches
but it had no significant effect.
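For reference, the usual form of that procedure flushes dirty pages first, and it only releases kernel page/dentry/inode caches; it can never reclaim the anonymous memory that applications themselves hold. A sketch, run as root:
# Write dirty pages back to disk, then drop page cache, dentries and inodes.
# This frees kernel caches only, not application (anonymous) memory.
sync
echo 3 > /proc/sys/vm/drop_caches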
This is the output of free -m:
total used free shared buff/cache available
Mem: 7754 5163 950 588 1641 1708
Swap: 19071 704 18367
swapon -s gives:
Filename Type Size Used Priority
/dev/dm-1 partition 19529724 720896 -2
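To see which processes the used RAM and the ~700 MB of swap actually belong to, a tool like smem can rank them by proportional (PSS) or swapped usage; a sketch, assuming the smem package is installed:
# Largest swap users first:
smem -s swap -r | head
# Largest RAM (PSS) consumers first:
smem -s pss -r | head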
**Question**: How can I "clean" the RAM to get back the situation after restart (+ automatic program launches) but without doing an actual restart?
Disclaimer: This question got some comments on [askubuntu.com/...](https://askubuntu.com/questions/1321011/clean-ram-after-some-days-of-pc-usage) but was regarded as offtopic for that site.
cknoll
(130 rep)
Mar 4, 2021, 09:55 AM
• Last activity: Mar 8, 2021, 10:42 AM