Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
48
views
Obtain memory values in SAR the same way as in FREE
I'd like to know if I can get memory values from `sar` the same way I get them from `free`. Currently `free` shows me a memory usage of 47.06% ((16956/36027)*100, used/total x 100), whereas `sar` is showing me a usage of almost 100%. The benefit of `sar` is that on my system I can read values for the last 30 days, whereas `free` only covers the current moment.
Is there any way to see in `sar` values similar to those shown by `free`?
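A hedged note, not a verified recipe: `sar`'s %memused (at least on older sysstat versions) counts buffers and page cache as used, which is why it sits near 100% while `free` reports 47%. One way to derive a `free`-style figure from the same sysstat data is to subtract buffers and cache yourself, as in this sketch (field positions assume an older `sar -r` layout of kbmemfree, kbmemused, %memused, kbbuffers, kbcached after the timestamp, and a 24-hour locale; newer sysstat inserts a kbavail column, so adjust the indices):
```bash
# Sketch: free(1)-style "used %" from sar -r, excluding buffers and cache.
sar -r | awk '$2 ~ /^[0-9]+$/ {
    total = $2 + $3              # kbmemfree + kbmemused
    used  = $3 - $5 - $6         # kbmemused - kbbuffers - kbcached
    printf "%s used%%=%.2f\n", $1, used * 100 / total
}'
```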

Álvaro
(111 rep)
Jun 7, 2025, 09:01 AM
• Last activity: Jun 8, 2025, 05:14 AM
-3
votes
2
answers
77
views
Error parsing memory value in bash script: expected integer expression
I have the following script to check free memory,
#!/bin/bash
THRESHOLD="500"
FREE_MEM=$(free -mh | awk '/^Mem:/{print $4}')
if [ "$FREE_MEM" -lt "$THRESHOLD" ]; then
echo "insufficient storage. Available memory is ${FREE_MEM} MB"
fi
However, I am getting this error:
./memory_monitor.sh: line 6: [: 135Mi: integer expression expected
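The error comes from the human-readable suffix: in `free -mh` the `-h` flag wins, so awk captures a value like `135Mi`, which `[ ... -lt ... ]` cannot compare as an integer. A minimal sketch of one possible fix (assuming a procps `free` that has an "available" column; comparing that column is usually more meaningful than "free"):
```bash
#!/bin/bash
# Drop -h so free prints plain integers (MiB) that the [ ... -lt ... ] test can compare.
THRESHOLD=500
FREE_MEM=$(free -m | awk '/^Mem:/{print $7}')   # $7 = available, $4 = free
if [ "$FREE_MEM" -lt "$THRESHOLD" ]; then
    echo "insufficient memory: only ${FREE_MEM} MiB available"
fi
```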
Be_developer
(1 rep)
Dec 17, 2024, 09:28 AM
• Last activity: Dec 17, 2024, 11:35 AM
0
votes
0
answers
58
views
What would the cached value include besides file pages?
Environment: a standby/idle physical system running RHEL 6.9
Symptom: We ran some dd tests in order to observe how the buffers/cached sizes in the `free -m` command change. After we had cleaned up the buffers & cache, the buffers were simply empty whereas the cached value still showed 111MB. The cached value then increased by nearly 60MB while we ran dd against the block devices.
Notes: We know that some storage metadata, e.g. superblocks, is stored in the buffers rather than in the cache.
So my two sub-questions are the following:
1. Why does the cached column still show 111MB after cleaning up the buffers & cache?
2. Why does the cached column size "obviously" increase during the dd runs against the block devices?
#
#
# free -m
total used free shared buffers cached
Mem: 96646 23673 72973 0 21233 759
-/+ buffers/cache: 1680 94966
Swap: 16383 0 16383
#
#
#
# echo 3 > /proc/sys/vm/drop_caches
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 1098 95547 0 0 111
-/+ buffers/cache: 986 95659
Swap: 16383 0 16383
#
#
# vmstat 1 6
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 340 97844368 872 114688 1 1 61 22 0 0 1 0 99 0 0
0 0 340 97844496 872 114748 0 0 8 0 1046 282 0 0 100 0 0
0 0 340 97844496 872 114748 0 0 8 1 1041 245 0 0 100 0 0
0 0 340 97844656 872 114748 0 0 8 0 1043 248 0 0 100 0 0
0 0 340 97844656 872 114748 0 0 8 1 1038 255 0 0 100 0 0
0 1 340 97844536 900 114728 0 0 0 88 1038 312 0 0 94 6 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 1097 95548 0 0 111
-/+ buffers/cache: 984 95661
Swap: 16383 0 16383
#
#
#
# vmstat 1 6
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 340 97844032 924 114724 1 1 61 22 0 0 1 0 99 0 0
0 0 340 97844256 924 114744 0 0 8 0 1051 240 0 0 100 0 0
0 0 340 97844320 924 114748 0 0 8 1 1040 262 0 0 100 0 0
0 0 340 97844192 924 114748 0 0 8 0 1085 387 0 0 100 0 0
0 0 340 97844320 924 114748 0 0 8 1 1042 278 0 0 100 0 0
0 0 340 97844336 948 114724 0 0 8 88 1050 382 0 0 100 0 0
#
#
#
#
# nohup dd if=/dev/mapper/vgroot-Lvopenv of=/dev/mapper/vgroot-test_d1 skip=1000 bs=512k &
31265
# nohup: appending output to `nohup.out'
#
#
# vmstat 1 6
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 340 95727896 2003476 175072 1 1 61 22 0 0 1 0 99 0 0
0 1 340 95206576 2511252 175320 0 0 253704 1 2037 2228 0 7 86 7 0
0 1 340 94687136 3017104 175248 0 0 252932 0 2028 2249 0 7 87 5 0
1 0 340 94162088 3528332 175184 0 0 255748 13 2769 3694 0 7 88 5 0
1 1 340 93647632 4029624 175316 0 0 250792 32 2090 2361 0 7 87 6 0
0 1 340 93156560 4507884 175300 0 0 239060 53 3246 4805 0 7 84 9 0
#
#
#
# vmstat 1 16
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 340 91163136 6448656 175376 1 1 61 22 0 0 1 0 99 0 0
0 1 340 90679296 6918568 175328 0 0 235016 84 3089 4673 0 7 82 10 0
1 0 340 90181824 7403024 175372 0 0 242312 1 1992 2139 0 7 81 12 0
0 1 340 89652680 7918120 175352 0 0 257416 0 2052 2285 0 7 88 5 0
0 1 340 89132208 8424744 174992 0 0 253448 1 2075 2370 0 7 87 5 0
1 0 340 88643328 8900780 175212 0 0 238088 0 1988 2108 0 7 88 6 0
0 1 340 88139840 9391552 175028 0 0 245132 109 3326 4918 0 7 85 8 0
0 1 340 87653032 9865536 175032 0 0 236928 28 3286 4714 0 7 84 9 0
1 0 340 87162032 10343672 175088 0 0 239168 1 1996 2135 0 7 88 6 0
1 1 340 86628672 10862840 175244 0 0 259464 0 2062 2269 0 7 87 5 0
0 1 340 86111256 11366648 175280 0 0 251912 1 2027 2203 0 7 88 5 0
0 1 340 85647328 11818128 175116 0 0 225672 76 3178 4722 0 7 84 9 0
1 0 340 85119456 12331920 175352 0 0 257036 1 2045 2266 0 7 88 5 0
1 1 340 84605016 12832912 175108 0 0 250628 0 2023 2223 0 7 87 5 0
0 1 340 84103872 13320768 175124 0 0 243976 1 1996 2143 0 7 87 6 0
1 0 340 83574232 13836432 175068 0 0 257800 0 2055 2257 0 7 88 5 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 18136 78509 0 16535 171
-/+ buffers/cache: 1430 95215
Swap: 16383 0 16383
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 19348 77297 0 17714 171
-/+ buffers/cache: 1463 95182
Swap: 16383 0 16383
#
#
#
# vmstat 1 8
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 340 76894032 20335952 175340 1 1 61 22 0 0 1 0 99 0 0
0 1 340 76765144 20462952 175320 0 0 63504 62141 2936 3832 0 3 83 13 0
0 2 340 76542304 20675944 174984 0 0 106504 100072 4100 6171 0 5 86 9 0
0 1 340 76384016 20831592 174540 0 0 77832 86633 3442 4367 0 5 74 21 0
1 2 340 76203216 21006304 174852 0 0 87560 63616 3575 5025 0 4 80 16 0
2 0 340 75999744 21205200 174716 0 0 99464 109757 4353 4825 0 6 69 25 0
0 1 340 75887632 21317888 174624 0 0 56072 66444 2902 3058 0 4 67 30 0
1 0 340 75776488 21427168 174860 0 0 54832 53877 2462 2719 0 3 86 11 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 23456 73190 0 21710 171
-/+ buffers/cache: 1574 95071
Swap: 16383 0 16383
#
#
#
# vmstat 1 8
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 2 340 73209824 23919680 174788 1 1 61 22 0 0 1 0 99 0 0
0 3 340 73024896 24100928 175080 0 0 90640 58949 3819 4529 0 4 70 26 0
1 2 340 72831408 24288564 174660 0 0 93972 116933 4019 5393 0 5 71 23 0
0 5 340 72720272 24396836 175212 0 0 53980 48204 2674 3136 0 3 67 30 0
0 1 340 72571424 24543292 174976 0 0 73224 114093 3475 3974 0 5 66 29 0
0 1 340 72448096 24663100 174936 0 0 59912 62160 2826 3558 0 3 87 10 0
2 0 340 72315760 24791336 174712 0 0 64264 64189 2990 3924 0 3 87 10 0
1 0 340 72151496 24952988 175168 0 0 80784 80852 3604 4461 0 5 78 17 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 26734 69911 0 24901 170
-/+ buffers/cache: 1662 94983
Swap: 16383 0 16383
#
#
#
# vmstat 1 8
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 340 64158176 32734544 175056 1 1 61 22 0 0 1 0 99 0 0
1 1 340 63928852 32954576 175112 0 0 110216 91508 4238 6208 0 5 79 16 0
2 2 340 63731188 33147344 174416 0 0 96392 65021 4082 5205 0 5 69 27 0
0 2 340 63559684 33315152 175000 0 0 83720 57344 3594 4304 0 4 73 23 0
2 4 340 63375412 33494368 174696 0 0 89736 53849 3797 4833 0 4 60 37 0
0 2 340 63287548 33580396 175388 0 0 42888 155148 2723 2397 0 5 74 21 0
1 1 340 63205204 33662312 175368 0 0 41096 41445 2235 2548 0 2 75 23 0
0 1 340 63015216 33847664 175244 0 0 92552 94760 3939 5725 0 5 78 18 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 36143 60502 0 34060 170
-/+ buffers/cache: 1912 94733
Swap: 16383 0 16383
#
#
#
# vmstat 1 8
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 3 340 58571068 38169084 175056 1 1 61 22 0 0 1 0 99 0 0
0 1 340 58416972 38323708 174736 0 0 77320 99065 3467 3472 0 5 78 17 0
0 3 340 58350100 38389268 175332 0 0 32776 29044 1946 2020 0 2 62 36 0
0 1 340 58184636 38550036 175160 0 0 80392 78693 3303 4483 0 4 86 10 0
0 1 340 58022008 38707732 175244 0 0 78856 82880 3281 4339 0 4 87 10 0
0 1 340 57873524 38853140 175500 0 0 72712 70449 3134 4027 0 4 87 10 0
0 1 340 57742212 38981140 175248 0 0 64008 62160 2847 3613 0 3 87 10 0
0 1 340 57556304 39161388 174828 0 0 90120 91193 3542 4991 0 4 78 17 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 42214 54431 0 39966 171
-/+ buffers/cache: 2077 94568
Swap: 16383 0 16383
#
#
#
+ Done nohup dd if=/dev/mapper/vgroot-Lvopenv of=/dev/mapper/vgroot-test_d1 skip=1000 bs=512k
#
#
#
# vmstat 1 8
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 340 76560704 20466772 175368 1 1 61 22 0 0 1 0 99 0 0
0 0 340 76560704 20466772 175368 0 0 8 1 1049 249 0 0 93 7 0
0 0 340 76560688 20466772 175368 0 0 8 0 1086 393 0 0 100 0 0
0 0 340 76560432 20466796 175356 0 0 8 85 1053 387 0 0 100 0 0
0 0 340 76560720 20466796 175368 0 0 8 104 1060 256 0 0 100 0 0
0 0 340 76560720 20466796 175368 0 0 8 1 1052 249 0 0 100 0 0
0 0 340 76560856 20466796 175368 0 0 8 0 1042 242 0 0 100 0 0
0 0 340 76560856 20466796 175368 0 0 8 1 1045 280 0 0 100 0 0
#
#
#
# free -m
total used free shared buffers cached
Mem: 96646 21879 74766 0 19987 171
-/+ buffers/cache: 1721 94925
Swap: 16383 0 16383
#
#
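One way to dig into what the residual 111MB of "cached" actually holds (a hedged sketch, not a definitive diagnosis): on RHEL 6 the cached column is essentially `Cached` from /proc/meminfo, which also counts tmpfs/shmem pages and pages mapped by running processes, and those cannot be evicted by drop_caches:
```bash
# Sketch: break the "cached" figure down and list tmpfs mounts, whose
# contents live in the page cache but are not droppable.
grep -E '^(Cached|Buffers|Shmem|Mapped|Dirty):' /proc/meminfo
df -h -t tmpfs
```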
lylklb
(285 rep)
Oct 30, 2024, 11:51 PM
• Last activity: Nov 13, 2024, 08:40 AM
0
votes
2
answers
263
views
Why is Linux not using RAM but only Swap?
How can such an output of `free -m` be explained?
total used free shared buff/cache available
Mem: 32036 1012 225 3 8400 31024
Swap: 32767 24138 8629
I understand that low `free` memory is no cause for alarm, as Linux uses unused memory for buffers and file system caches (`buff/cache`). What's important is to have enough `available` memory.
But why is the kernel not swapping back in? Nearly all memory is `available`.
I took this output from a continuous log-to-disk job I set up as an "every minute" cronjob. At that point in time the system was so unresponsive that I could not even log in locally anymore. After slowly typing the username and password, there was a timeout (`Login timed out after 60 seconds.`), so I could not reach a shell and had to power-cycle the server to recover.
The journal is full of "take too long", "timeout" and "broken pipe" messages, as everything on the system is crawling and therefore malfunctioning.
I played around with `vm.swappiness`, reducing it from the default value of 60 to 10 (to push the kernel more towards "only swap if it's really necessary"), but I get similar results.
I was hesitant to try a `swapoff && swapon` to bring the `available` memory back into play. Does the `oom-killer` take over if not everything fits into RAM? Or does the system crash then?
---
A little more background information about the concrete case:
I have a Proxmox setup and am evaluating how stably everything runs. I really stress the machine, having allocated more RAM to the VMs in total than I have. To my understanding, this should still work, at the small price of using swap space and slowing things down.
I noticed that everything works as stably as I expected. I play around with suspending VMs to disk, then starting other VMs. Swap gets used if needed, and when VMs are suspended, swap is freed again.
But lately I added backups to my evaluation, and this really crashes the machine. Overnight, when the PVE backup starts, more and more RAM becomes `available` while swap is consumed. Backup speed falls from "1% per few seconds" to "1% per several hours" and eventually to no progress at all. The machine becomes unresponsive with that memory picture. The VMs are still running, but their applications are also malfunctioning, as their systems get errors like `interrupt took 2.2s`, `Watchdog timeout (limit 3min)!`, `CPU stuck for 23s!`. In the morning I find an unresponsive host.
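On the `swapoff && swapon` idea the post hesitates about: a minimal sketch, only intended for the case where `MemAvailable` comfortably exceeds the swap currently in use (otherwise the OOM killer can indeed be invoked while pages are pulled back in):
```bash
#!/bin/bash
# Sketch: force swapped-out pages back into RAM only if they would fit.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
if [ "$swap_used_kb" -lt "$avail_kb" ]; then
    swapoff -a && swapon -a
else
    echo "not enough available RAM to absorb ${swap_used_kb} kB of swap" >&2
fi
```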
theHacker
(181 rep)
May 16, 2024, 08:04 PM
• Last activity: Sep 20, 2024, 03:26 PM
0
votes
0
answers
54
views
Why does "vmstat" disagree with "free" after a while?
> This is not a duplicate of https://unix.stackexchange.com/q/550846/23450
# There is probably some bug or incompatibility in `vmstat`
When running `vmstat` for a while it starts to disagree with `free`. This increases over time until the affected values in the `vmstat` output are completely unusable.
## This leads to a combination of the following questions:
- Can anybody else confirm this?
- Where to find more information about this and what causes it?
- How to repair properly?
> Note that I report this here early, so before I have more information.
> And I do not need help because it is just an annoyance.
## Infos
$ uname -a
Linux (redacted) 6.1.0-20-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
$ dpkg -l procps | cat
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============-=================================
ii procps 2:4.0.2-3 amd64 /proc file system utilities
# dpkg --verify procps
??5?????? c /etc/sysctl.conf
#
## The problem
When started, `vmstat` and `free` agree, but there can already be some slight bias:
$ free; vmstat 3 2; free
total used free shared buff/cache available
Mem: 32650568 26608268 7577824 411648 699696 6042300
Swap: 31457276 3994720 27462556
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
30 6 3994464 7575240 56804 643064 179 189 4028 1874 2317 22945 64 24 7 5 0
5 5 3994464 7575240 56804 643064 19 0 249612 24786 12070 57996 21 31 25 23 0
total used free shared buff/cache available
Mem: 32650568 26661436 7517940 411696 708000 5989132
Swap: 31457276 3994208 27463068
This might be normal on a busy machine, but I think `vmstat` already miscalculates these values, such that after a while the errors add up until the values become completely out of bounds:
$ vmstat 3
[after an hour or so]
3 5 5745880 3422064 35576 220220 19 0 226691 9334 24105 61618 22 30 29 19 0
4 5 5745880 3422064 35576 220220 16 0 155941 38951 16270 54930 19 24 31 26 0
10 5 5745880 3422064 35576 220220 1 21 253594 11063 14066 50767 19 29 28 24 0
4 2 5745880 3422064 35576 220220 0 0 128184 34534 24439 51341 18 20 32 30 0
While in another terminal (the following ran in parallel with the lines above):
$ free; vmstat 3 2
total used free shared buff/cache available
Mem: 32650568 28221708 5778984 401352 950904 4428860
Swap: 31457276 3952224 27505052
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
5 3 3952224 5777424 234584 716932 179 189 4032 1874 2316 22942 64 24 7 5 0
2 3 3952224 5777424 234584 716932 17 0 230215 8339 17784 55440 21 29 29 20 0
Compare the two sets of values:
5745880 3422064 35576 220220 (utterly wrong)
3952224 5777424 234584 716932 (probably real)
As you can clearly see, after a while the numbers no longer bear any connection to reality.
> I tried to figure this out myself, but in my limited time I failed to find even a hint of what might cause this (or how to fix it). So I am reporting it here until I am able to answer it myself, unless somebody else nukes my question first, which happens all too often. However, given all the current (global) crises, it is very unlikely that I will find enough spare time to dive in deeply enough to find the root cause.
## As a mitigation my recommendation is
- Do not trust the 4 values `swpd`, `free`, `buff` and `cache` in `vmstat` at all
- Run a `while sleep` loop around the `free` command to gather the real values (a sketch follows below)
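A minimal sketch of such a logging loop (interval and file name are arbitrary choices, not from the report above):
```bash
# Log timestamped free(1) output every 3 seconds as a stand-in for the
# swpd/free/buff/cache columns of vmstat.
while sleep 3; do
    date '+%F %T'
    free -k | awk '/^Mem:|^Swap:/'
done >> memory.log
```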
## Notes
- I am not opening a bug on Debian because I have so far been unable to reproduce this on another machine (I haven't tried, due to lack of time) and so cannot make up a good report
- It is unknown when this effect started, but I think it is relatively new
- I have been tracking this for 4 weeks now
- Perhaps it is due to the kernel? (it is 2 security patches old)
- The machine only boots into a newer kernel when something else needs a reboot!
- Perhaps it needs a very busy server, perhaps even with a certain workload
- This one runs ZFS, CouchDB and VMs (and more, flawlessly)
- There are no known anomalies or hardware defects currently
- Except for the `vmstat` anomaly
- And some of the usual badly written standard programs, of course, which are unlikely to be able to cause this effect
Tino
(1287 rep)
Aug 2, 2024, 11:27 AM
0
votes
1
answers
495
views
Free RAM memory from free command and Mobaxterm
I have several servers running Debian 10 or 11.
I use MobaXterm (link) to SSH into the servers to check the status of the services and the RAM usage.
I notice that `free --mega -t` shows an amount of free RAM that differs from the one provided by MobaXterm (used RAM/total RAM, in the red rectangle) in the bar at the bottom of the SSH window.
This is the situation:
Why does MobaXterm show 0.64 GB of RAM used while `free` shows 170 MB used?
Which is the best way to check the used and the free RAM?
Why is there this difference between the two methods?
Thank you for the help!
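One plausible explanation (an assumption; MobaXterm's exact formula is not documented here): the status bar appears to count used as total minus free, which includes buffers/cache, while `free` subtracts them. A sketch to compare both readings on the server:
```bash
# "used" as free(1) reports it vs. "used" as total - free (cache counted).
free --mega | awk '/^Mem:/ {
    printf "used (free cmd)    : %d MB\n", $3
    printf "used (total - free): %d MB\n", $2 - $4
    printf "buff/cache         : %d MB\n", $6
}'
```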

Federico
(129 rep)
Apr 3, 2024, 01:24 PM
• Last activity: Apr 3, 2024, 01:53 PM
0
votes
1
answers
43
views
Logging sum of Mem and Swap from free command output
In relation to this:
https://unix.stackexchange.com/a/754252/582781
Solution 1:
free -g -s2 | sed -u -n 's/^Mem:\s\+[0-9]\+\s\+\([0-9]\+\)\s.\+/\1/p' >> memory.log
Is there a way to add Swap to this, so that I would log the sum of used Mem and Swap?
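One possible way (a sketch, assuming an awk that supports fflush() so the log is written continuously, mirroring the role of `sed -u` in the original command): capture the used column of the Mem: line and add the used column of the Swap: line that follows it.
```bash
# Log (used Mem + used Swap), in GiB, every 2 seconds.
free -g -s2 | awk '
    /^Mem:/  { mem = $3 }
    /^Swap:/ { print mem + $3; fflush() }' >> memory.log
```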
Aleksander
(5 rep)
Oct 28, 2023, 07:10 AM
• Last activity: Oct 28, 2023, 10:49 AM
0
votes
1
answers
101
views
Low memory vs total memory in "free" command
I am trying to understand the output of the `free` command on an AWS Linux server. For example, `free -h` gives:
total used free shared buff/cache available
Mem: 15G 2.2G 4.0G 16M 9.0G 12G
Swap: 0B 0B 0B
Whereas `free -hl` gives:
total used free shared buff/cache available
Mem: 15G 2.2G 4.0G 16M 9.0G 12G
Low: 15G 11G 4.0G
High: 0B 0B 0B
Swap: 0B 0B 0B
How can the "Low" used memory (11G) exceed the total used memory (2.2G)? From everything I'm reading, `-l` is supposed to split the totals into totals for "low" and "high" memory, which isn't even a distinction that's supposed to exist on 64-bit systems. What does the 11G actually mean then?
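A plausible reading (hedged, based on how procps appears to derive the per-zone rows): the Low "used" is simply total minus free for that zone, with no buffers/cache subtracted, whereas the Mem "used" does subtract them. Here: Low used ≈ 15G - 4.0G ≈ 11G, while Mem used ≈ 15G - 4.0G - 9.0G ≈ 2.2G.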
EugeneO
(103 rep)
Oct 23, 2023, 06:56 PM
• Last activity: Oct 23, 2023, 07:47 PM
4
votes
2
answers
5468
views
Why does swappiness not work?
We have a RHEL 7 machine, with only 2G of available RAM:
free -g
total used free shared buff/cache available
Mem: 31 28 0 0 1 2
Swap: 15 9 5
so we decided to increase the swappiness to the maximum with `vm.swappiness = 100` in `/etc/sysctl.conf` instead of 10, and used `sysctl -p` to apply the setting.
After some time we checked the status again:
free -g
total used free shared buff/cache available
Mem: 31 28 0 0 2 2
Swap: 15 9 5
As we can see, despite the new swappiness setting, `free -g` shows that the available RAM stays at 2G. Why? What is wrong here?
We expected to see 15G of **used** swap.
We also checked:
cat /proc/sys/vm/swappiness
100
so everything should work according to the new settings, BUT `free` shows the same situation. What is going on here?
yael
(13936 rep)
Jan 9, 2019, 11:00 AM
• Last activity: Sep 8, 2023, 11:13 AM
0
votes
2
answers
537
views
Getting only used memory from free command every few seconds
It was explained e.g. here: https://unix.stackexchange.com/questions/68526/get-separate-used-memory-info-from-free-m-command how to cut the output of the `free` command. But I want to do this every few seconds and log it to a file. So I tried:
free -g -s2 | sed -n 's/^Mem:\s\+[0-9]\+\s\+\([0-9]\+\)\s.\+/\1/p' >> memory2.log
but the file stays empty. Why is that and how to fix it?
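The likely cause (hedged): when its output goes to a pipe or file rather than a terminal, sed block-buffers, so nothing reaches memory2.log until the buffer fills or the command ends. With GNU sed the `-u` (unbuffered) flag fixes this, exactly the variant used in the related logging question earlier on this page:
```bash
# -u makes GNU sed flush each line immediately instead of block-buffering.
free -g -s2 | sed -u -n 's/^Mem:\s\+[0-9]\+\s\+\([0-9]\+\)\s.\+/\1/p' >> memory2.log
```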
Aleksander
(5 rep)
Aug 17, 2023, 09:54 AM
• Last activity: Aug 17, 2023, 12:23 PM
1
votes
0
answers
156
views
DELL R730XD with 256GB of memory, but `free -h` and `htop` on Linux show only 251GB. Why?
A DELL R730XD server with 32GiB * 8, totaling 256GB of memory, is correctly recognized as 256GB in the BIOS. When checking the memory information using `dmidecode -t memory` in Linux, the total of 256GB is also reported correctly. However, the `free -h` command consistently shows a total of 251GB on both CentOS and Arch Linux; the same goes for the `htop` command. Why?
Some told me that this is because of the difference between GB and GiB. That's wrong: 256GB != 251GiB.
Here is some information about it:
memory info on bios/iDRAC
memory info on windows server 2019



BreakingGood
(21 rep)
Aug 13, 2023, 09:42 AM
• Last activity: Aug 13, 2023, 02:45 PM
0
votes
0
answers
4271
views
linux drop_cache using "echo 3 > /proc/sys/vm/drop_caches" not working as expected
In our production environment we run the drop-cache command `echo 3 > /proc/sys/vm/drop_caches` to free RAM. But what I found is that dropping caches is not good practice, and it also shouldn't help much, because when a process requires more memory and that much memory is not free, the OS will clear the cache and allocate memory accordingly.
Also, technically, the value of **available** memory before and after running the `drop_caches` command should be the same.
But when I ran a few tests, I found that dropping caches really did get me some free RAM.
`free -h` - **Before dropping cache**
total used free shared buff/cache available
Mem: 7.6G 4.2G 524M 306M 2.9G 2.8G
Swap: 4.0G 439M 3.6G
`free -h` - **After dropping cache**
total used free shared buff/cache available
Mem: 7.6G 3.3G 3.8G 306M 452M 3.8G
Swap: 4.0G 439M 3.6G
As we can see, **available** memory increased by 1GB; what is more interesting is that **used** memory also dropped by almost 1GB. This is where I'm very confused: shouldn't the `drop_caches` command have cleared only cached memory? Why is it also clearing memory used by running processes (i.e. **used** memory)?
Can someone please explain
1. how the `echo 3 > /proc/sys/vm/drop_caches` command works?
2. how available memory is calculated in the free command (breakdown of available memory)?
----------
----------
## After edit ##
One of my questions is still not answered:
>Also, technically, the value of available memory before and after running the drop_caches command should be the same
As we discussed, **available** memory includes reclaimable slab objects:
>Right, since reclaimable slab objects aren't included in either free, buffers, or cache, that means they end up in used.
>The definition of "available" explicitly includes it, so it is included there too.
But when I ran `echo 2 > /proc/sys/vm/drop_caches` - to free reclaimable slab objects (which includes dentries and inodes):
**Before dropping cache**
total used free shared buff/cache available
Mem: 7.6G 4.2G 397M 331M 3.0G 2.7G
Swap: 4.0G 429M 3.6G
**After dropping cache**
total used free shared buff/cache available
Mem: 7.6G 3.3G 3.8G 331M 483M 3.8G
Swap: 4.0G 429M 3.6G
As we can see here, **available** memory has increased after dropping the cache, but since I used `echo 2`, it only frees reclaimable slab objects. As **available** memory always includes reclaimable slab objects, running `echo 2 > /proc/sys/vm/drop_caches` shouldn't have changed the value of **available** memory.
But that is not true in my case.
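On question 2, a hedged sketch of how "available" is derived: procps reads `MemAvailable` from /proc/meminfo when the kernel provides it, and the kernel's estimate is roughly free memory plus reclaimable page cache plus reclaimable slab, each reduced by watermarks. The comparison below is only a rough cross-check, not the exact kernel formula:
```bash
# Compare the kernel's MemAvailable with a crude manual estimate (kB).
awk '/^(MemFree|MemAvailable|Active\(file\)|Inactive\(file\)|SReclaimable):/ {m[$1]=$2}
     END {
        est = m["MemFree:"] + m["Active(file):"] + m["Inactive(file):"] + m["SReclaimable:"]
        printf "MemAvailable (kernel): %d kB\nrough estimate       : %d kB\n", m["MemAvailable:"], est
     }' /proc/meminfo
```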
Swastik
(103 rep)
Dec 5, 2022, 04:00 PM
• Last activity: Dec 6, 2022, 07:24 AM
0
votes
0
answers
377
views
Why don't the numbers from "free" add up?
This is a side question that came up while trying to figure out why my Ubuntu 18.04 server keeps using up memory and mysql gets killed from time to time.
total used free shared buff/cache available
Mem: 3.9G 1.2G 142M 2.0G 2.5G 346M
Swap: 511M 510M 2.0M
The above is the result of running the `free` command.
I've been trying to figure out how the numbers add up to the total (3.9G), to no avail.
Can someone explain, please?
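A worked check under the usual procps accounting (an illustration, not a diagnosis of this particular host): total ≈ used + free + buff/cache, the shared figure (mostly tmpfs) is already counted inside buff/cache, and available is an estimate of reclaimable memory rather than another component of the sum. Here: 1.2G + 0.142G + 2.5G ≈ 3.84G, which rounds to the 3.9G total.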

shenkwen
(143 rep)
Sep 21, 2022, 07:41 PM
• Last activity: Sep 23, 2022, 12:58 PM
0
votes
1
answers
3805
views
What is the difference between the buff/cache displayed by the free command and the available memory?
We have 463 RHEL 7.6 machines in the cluster; most of them are HDFS machines (datanodes).
From the `free -g` command we can see that **buff/cache** is usually around 30-50 when total memory is 256G.
As I know, a buffer is an area of memory used to temporarily store data while it is being moved from one place to another,
but the available memory is also memory that can be used by applications,
so I am a little confused: what is the difference between **buff/cache** and **available** memory?
yael
(13936 rep)
Jul 11, 2022, 04:47 PM
• Last activity: Jul 14, 2022, 07:58 AM
2
votes
2
answers
2567
views
Zero free swap but 56GiB free memory?
I have a CentOS7.9 system showing 56GiB free RAM (free -m) but only 1MiB free swap, and it's been in this state for three days. The original problem report was that a large (EE simulation) app keeps crashing.
Can anyone help me understand what could put the memory into this condition?
David C.
(21 rep)
Jun 13, 2022, 03:58 PM
• Last activity: Jun 13, 2022, 04:19 PM
0
votes
1
answers
1009
views
Need RAM & swap memory monitoring thresholds
I wish to set up monitoring alerts based on used memory percentage.
I had set up 0-80% used RAM as Green (good),
81-90% as Yellow (acceptable),
91-95% as Orange (warning),
96%+ as Red (critical).
However, I see that my current RAM usage is 99% yet everything seems to be working smoothly, and that makes everyone feel that 96%+ as Red (critical) is not the right criterion for a critical alert.
I noticed that despite 99% RAM usage, the swap memory was 100% free.
$ free -m
total used free shared buff/cache available
Mem: 15883 1672 273 57 13938 13766
Swap: 2047 0 2047
Thus my query is: should I also check swap memory (or only swap memory) when sending out alerts, and what would appropriate thresholds be for RAM as well as swap?
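A sketch of an alternative criterion (my suggestion, with illustrative thresholds, not something from the post): alert on the "available" column rather than on total minus free, so that page cache does not trigger false criticals, and watch swap separately:
```bash
#!/bin/bash
# Alert when <10% of RAM is available or >50% of swap is in use.
read -r mem_total mem_avail < <(free -m | awk '/^Mem:/ {print $2, $7}')
read -r swap_total swap_used < <(free -m | awk '/^Swap:/ {print $2, $3}')
if [ "$((mem_avail * 100 / mem_total))" -lt 10 ]; then
    echo "CRITICAL: only ${mem_avail} MiB of ${mem_total} MiB available"
fi
if [ "$swap_total" -gt 0 ] && [ "$((swap_used * 100 / swap_total))" -gt 50 ]; then
    echo "WARNING: swap usage at ${swap_used}/${swap_total} MiB"
fi
```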
Ashar
(527 rep)
Oct 7, 2021, 09:52 AM
• Last activity: Oct 7, 2021, 12:22 PM
0
votes
2
answers
88
views
When to upgrade RAM based on free output
I have a Java application that runs on a Linux server with 12GB of physical memory (RAM) allocated, where the normal utilization over a period of time looks like this:
sys> free -h
total used free shared buff/cache available
Mem: 11G 7.8G 1.6G 9.0M 2.2G 3.5G
Swap: 0B 0B 0B
Recently, after increasing the load on the application, I could see that RAM utilization is almost full and available memory is very low; I face some slowness, but the application continues to work fine.
sys> free -h
total used free shared buff/cache available
Mem: 11G 11G 134M 17M 411M 240M
Swap: 0B 0B 0B
sys> free -h
total used free shared buff/cache available
Mem: 11G 11G 145M 25M 373M 204M
Swap: 0B 0B 0B
I referred to https://www.linuxatemyram.com/ which suggests the following points.
**Warning signs** of a genuine low memory situation that you may want to look into:
- available memory (or "free + buffers/cache") is close to zero
- swap used increases or fluctuates
- dmesg | grep oom-killer shows the OutOfMemory-killer at work
From the above points, I don't see any OOM issue at the application level, and swap is disabled, so I am neglecting those two points. The one point which troubles me is "available memory is close to zero", where I need clarification.
**Questions:**
In case available is close to 0, will it end up in a system crash?
Does it mean I need to upgrade the RAM when available memory gets low?
On what basis should the RAM be allocated/increased?
Are there any official recommendations/guidelines that need to be followed for RAM allocation?
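A small sketch for checking the three warning signs from linuxatemyram.com on a host like this (field choices and labels are mine, purely illustrative):
```bash
#!/bin/bash
# The three "genuine low memory" warning signs in one place.
free -m | awk '/^Mem:/  {print "available MiB :", $7}
               /^Swap:/ {print "swap used MiB :", $3}'
echo "oom-killer hits: $(dmesg | grep -ci 'oom-killer')"
```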
ragul rangarajan
(143 rep)
Sep 27, 2021, 07:29 AM
• Last activity: Sep 28, 2021, 08:11 AM
0
votes
0
answers
238
views
htop command displays 996GB instead of 1024GB for RAM
I have a little issue that I can't understand.
I have 8x128GB sticks of RAM, which makes 1024GB of RAM.
But the command `htop` displays `996G` and not `1024G` as expected.
You can see the illustration below:
How do I interpret this result?
**UPDATE:** This is a strange result, since on my macOS MacBook I officially have 64GB of RAM, and when I type the htop command I get the following result:
As you can see, it displays the full amount of official memory, i.e. 64GB of RAM, unlike the Debian 10 Linux machine. I don't understand this difference.
Could anyone explain the reason?
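One direction to investigate (hedged): `htop` and `free` show MemTotal, i.e. physical RAM minus whatever the firmware and kernel reserve at boot (kernel image, reserved regions, memory-mapped I/O holes), so the figure normally sits somewhat below the installed amount. A quick way to see the gap:
```bash
# Firmware view vs. kernel view of the installed memory.
sudo dmidecode -t memory | grep -i '^\s*Size:'   # per-DIMM sizes
grep MemTotal /proc/meminfo                      # what free/htop display
sudo dmesg | grep -i 'Memory:'                   # usable vs. reserved at boot
```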


youpilat13
(1 rep)
Jul 23, 2021, 01:20 AM
• Last activity: Jul 23, 2021, 01:53 AM
2
votes
1
answers
2120
views
Why the difference in neofetch and free (RAM) output?
wadewayne@Cheetah:~$ neofetch
(neofetch ASCII logo omitted)
wadewayne@Cheetah
-----------------
OS: Linux Mint 19.3 Tricia x86_64
Host: Inspiron 15-3567
Kernel: 5.4.0-77-generic
Uptime: 9 mins
Packages: 3130
Shell: bash 4.4.20
Resolution: 1366x768
WM: i3
Theme: Arc-Dark [GTK2/3]
Icons: Pop [GTK2/3]
Terminal: terminator
CPU: Intel i3-6006U (4) @ 2.000GHz
GPU: Intel HD Graphics 520
Memory: 367MiB / 3801MiB
wadewayne@Cheetah:~$ free -h
total used free shared buff/cache available
Mem: 3.7G 325M 2.3G 39M 1.1G 3.1G
Swap: 521M 0B 521M
Hi. I don't understand why `neofetch` and `free -h` output different results for RAM usage. Which one is more accurate?
Wade Wayne
(121 rep)
Jul 3, 2021, 11:32 AM
• Last activity: Jul 3, 2021, 12:27 PM
0
votes
1
answers
239
views
The reason why the "buffers" value in the "free -h" command output increases
I did two experiments.
**The first experiment (Ubuntu 20.04, ext4 filesystem):**
1. Run command `free -h -w`:
$ free -h -w
total used free shared buffers cache available
Mem: 30Gi 2,6Gi 25Gi 106Mi 126Mi 2,1Gi 27Gi
2. Run command sudo find / | grep something
3. Run command `free -h -w` again and observe a significant increase (about 1G) in the "buffers" column, and an increase in the "cache" column as well (about 500M):
$ free -h -w
total used free shared buffers cache available
Mem: 30Gi 2,6Gi 24Gi 106Mi 1,2Gi 2,6Gi 27Gi
**The second experiment (same PC):**
1. Run command `free -h -w`:
$ free -h -w
total used free shared buffers cache available
Mem: 30Gi 2,6Gi 24Gi 106Mi 1,2Gi 2,6Gi 27Gi
2. Run command `dd if=/dev/nvme0n1p2 of=/dev/null bs=1M count=500` (your disk will be different here)
3. Run command `free -h -w` again and observe a 500M increase in buffers:
$ free -h -w
total used free shared buffers cache available
Mem: 30Gi 2,6Gi 24Gi 115Mi 1,7Gi 2,6Gi 27Gi
So the question is: why did the `buffers` column increase in the first case, and why in the second? I've read https://unix.stackexchange.com/questions/34422/what-is-the-buffers-column-in-the-output-from-free but the answers there do not work for me.
They say the "buffers column contains metadata about files" - but that is wrong, because it is the "cache" column that counts slabs for inodes, dentries and buffer_heads (which actually are file metadata). `man free` also tells us that the `cache` column contains `SReclaimable`.
They also say the "buffers column contains a cache of blocks from block devices" - and that looks more like the truth; it explains why `buffers` increased when I ran `dd`, but it does not explain why the `buffers` column increased when I ran the `find` command. And even in the case of `dd` - why do we need it if we already have the file cache? Nobody reads/writes directly from/to block devices, except for DVD disks.
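A way to watch this live (a sketch; the reading that `find` pulls directory blocks and filesystem metadata in through the block device mapping, which is what the Buffers counter tracks, is my interpretation rather than something established above):
```bash
# Print Buffers and Cached once per second; run `sudo find / >/dev/null`
# or the dd command in another terminal and watch which counter grows.
while sleep 1; do
    awk '/^(Buffers|Cached):/ {printf "%s %s kB  ", $1, $2} END {print ""}' /proc/meminfo
done
```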
OSintegrator
(1 rep)
Jun 18, 2021, 05:13 AM
• Last activity: Jun 20, 2021, 10:49 PM