Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
9
votes
1
answers
739
views
When suspending to disk, does the memory cache get dumped to disk?
My system has 64GiB of memory. I noticed it usually uses about 20GiB for cache. I wonder if I do "suspend to disk", does the cached part get dumped to disk as well, or is it written to disk before the system is suspended to disk?
I want to reduce the actual total used memory before suspending to disk so that the system can wake up faster.
Thanks
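One way to shrink what has to be written out and read back, as a minimal hedged sketch (requires root; `drop_caches` only discards clean cache, dirty pages are flushed by the preceding `sync`):
    sync                                  # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches     # drop clean page cache, dentries and inodes
    systemctl hibernate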
David S.
(5823 rep)
Jul 2, 2025, 10:32 AM
• Last activity: Jul 3, 2025, 09:05 AM
3
votes
1
answers
70
views
Is it a good idea to have inode size of 2048 bytes + inline_data on an ext4 filesystem?
I only recently "discovered" the `inline_data` feature of ext4, although it seems to have been around for 10+ years.
I ran a few statistics on several of my systems (desktop/notebook + server), specifically on the root filesystems, and found that:
- Around 5% of all files are < 60 bytes in size. The 60 byte threshold is relevant, because that's how much inline data you can fit in a standard 256 byte inode
- Another ~20-25% of files are between 60 and 896 bytes in size. Again, the "magic number" 896 is how much you fit in a 1KB inode
- A further 20% are in the 896-1920 byte range (you guessed it - 1920 is what you fit into a 2KB inode)
- That percentage is even more stunning for directories - 30-35% are below 60 bytes, and a further 60% are below 1920 bytes.
This means that with an inode size of 2048 bytes you can ***inline roughly half of all files and 95% of all directories on an average root filesystem***! This came as quite a shocker to me...
Now, of course, since inodes are preallocated and fixed for the lifetime of a filesystem, large inodes lead to a lot of "wasted" space if you have a lot of them (i.e. a low `inode_ratio` setting). But then again, allocating a 4KB block for a 5 byte file is also a waste of space. And according to the above statistics, half of all files on the filesystem and virtually all directories can't even fill half of a 4KB block, so that wasted space is not insignificant either. The only difference between wasting that space in the inode table and in the data blocks is that you have one more level of indirection, plus potential for fragmentation, etc.
The advantages I see in that setup are:
- When the kernel loads an inode, it reads at least one page size (4KB) from disk, no matter if the inode is 128 bytes or 2KB, so you have zero overhead in terms of raw disk IO...
- ... but you have the data preloaded as soon as you `stat` the file; no additional IO is needed to read the contents
- The kernel caches inodes more aggressively than data blocks, so inlined data is more likely to stay longer in cache
- Inodes are stored in a fixed, contiguous region of the partition, so you can't ever have fragmentation there
- Inlining is especially useful for directories, a) since such a high portion of them are small, and b) because you're very likely to need the contents of the directory, so having it preloaded makes a lot of sense
What do you think about this setup? Am I missing something here, and are there some potential risks I don't see?
I stress again that I'm talking about a root filesystem, hosting basically the operating system, config files, and some caches and logs. Obviously the picture would be quite different for a `/home` partition hosting user directories, and even more different for a fileserver, webserver, mailserver, etc.
(I know there are a few threads describing some corner cases where `inline_data` does not play well with journaling, but those are 5+ years old, so I hope those issues have been sorted out.)
**EDIT**: Since doubts were expressed in the comments about whether directory inlining works - it does. I have already implemented the setup described here, and the machine I'm writing on right now is actually running on a root filesystem with 2KB inodes with inlining. Here's what `/usr` looks like in `ls`:
# ls -l /usr
total 160
drwxr-xr-x 2 root root 36864 Jul 1 00:35 bin
drwxr-xr-x 2 root root 60 Mar 4 13:20 games
drwxr-xr-x 4 root root 1920 Jun 16 21:32 include
drwxr-xr-x 64 root root 1920 Jun 25 21:16 lib
drwxr-xr-x 2 root root 1920 Jun 9 01:48 lib64
drwxr-xr-x 16 root root 4096 Jun 22 02:58 libexec
drwxr-xr-x 11 root root 1920 Jun 9 00:10 local
drwxr-xr-x 2 root root 12288 Jun 26 20:22 sbin
drwxr-xr-x 191 root root 4096 Jun 26 20:22 share
drwxr-xr-x 2 root root 60 Mar 4 13:20 src
And if you dive even deeper and use `debugfs` to examine those directories, the ones with a size of 60 or 1920 bytes have 0 allocated data blocks, while those with 4096 bytes or more do have data blocks.
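For anyone wanting to reproduce this, a minimal sketch of creating such a filesystem (the device name is a placeholder; the inode size is fixed at mkfs time and cannot be changed later):
    mkfs.ext4 -I 2048 -O inline_data /dev/sdXN
    tune2fs -l /dev/sdXN | grep -E 'Inode size|inline'   # verify the inode size and feature flag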
Mike
(477 rep)
Jul 1, 2025, 02:05 PM
• Last activity: Jul 2, 2025, 04:28 PM
130
votes
7
answers
239691
views
Can I safely remove /var/cache?
I am running out of disk space and noted that I have a large `/var/cache` directory. Can I safely remove this? (using Arch Linux, BTW).
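For the Arch case specifically, a hedged sketch (assuming most of the space is pacman's package cache and that pacman-contrib is installed for `paccache`):
    du -sh /var/cache/* | sort -h   # see what is actually using the space
    paccache -rk1                   # keep only the newest cached version of each package
    # or, more drastically: pacman -Scc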
user11780
Oct 23, 2011, 07:34 PM
• Last activity: Jul 1, 2025, 05:41 PM
5
votes
1
answers
2441
views
Disable writeback cache throttling - tuning vm.dirty_ratio
I have a workload with extremely high write burst rates for short periods of time. The target disks are rather slow, but I have plenty of RAM and am very tolerant of instantaneous data loss.
I've tried tuning vm.dirty_ratio to maximize the amount of free RAM that can be used for dirty pages.
# free -g
total used free shared buff/cache available
Mem: 251 7 213 3 30 239
Swap: 0 0 0
# sysctl -a | grep -i dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 90000
vm.dirty_ratio = 90
However, it seems I'm still encountering some writeback throttling based on the underlying disk speed. How can I disable this?
# dd if=/dev/zero of=/home/me/foo.txt bs=4K count=100000 oflag=nonblock
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 10.2175 s, 40.1 MB/s
As long as there is free memory and the dirty ratio has not yet been exceeded - I'd like to write at full speed to the page cache.
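One knob worth checking, as a hedged sketch (values illustrative): background writeback starts at vm.dirty_background_ratio (5% here), and the point at which writers begin to be throttled is derived from both limits, so raising the background threshold as well may delay throttling; it will not remove per-device (BDI) throttling entirely:
    sysctl -w vm.dirty_background_ratio=80
    sysctl -w vm.dirty_ratio=90
    sysctl -w vm.dirty_expire_centisecs=90000
    sysctl -w vm.dirty_writeback_centisecs=90000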
Linux Questions
(51 rep)
Nov 29, 2018, 04:55 AM
• Last activity: Jun 25, 2025, 04:07 PM
2
votes
1
answers
2450
views
How to run a web page with cache refresh using chromium-browser and command line?
I can start a web page from the command line like this: `chromium-browser http://some-page.com`.
How can I start a web page from the command line with a hard cache-refresh option (the equivalent of `CTRL-F5` if I opened it manually in Chromium)? I think there must be an argument/option for that, but I couldn't find it.
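There doesn't seem to be an exact command-line equivalent of CTRL-F5, but a hedged workaround sketch is to make the cache unusable for that launch, either by pointing the disk cache at /dev/null or by starting with a throwaway profile:
    chromium-browser --disk-cache-dir=/dev/null http://some-page.com
    # or with a fresh temporary profile (nothing cached at all):
    chromium-browser --user-data-dir="$(mktemp -d)" http://some-page.com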
Mykhailo Seniutovych
(121 rep)
Oct 25, 2018, 06:03 AM
• Last activity: Jun 20, 2025, 02:07 PM
1
votes
1
answers
4841
views
How to clear the DNS cache on Fedora or any other Linux distro
I've just changed the hosting for my domain. The name has propagated (24 hours have passed), and I see the new page (without SSL, because I haven't added it yet on the new hosting) on my Android phone. But when I open the page in Chromium on Fedora, I see the old redirect to https.
How can I flush/clear my local DNS so I'll see the new page and can do something with the new site?
Both my phone and my laptop use the same WiFi, so it's not a cache in the router.
In this question https://unix.stackexchange.com/questions/387292/how-to-flush-the-dns-cache-in-debian the first answer doesn't work and the second is for a server running BIND; I don't have BIND, and this is not a server.
My `/etc/resolv.conf` looks like this:
# Generated by NetworkManager
nameserver 91.239.248.21
nameserver 8.8.4.4
nameserver fe80::1%wlp3s0
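For the DNS part, a hedged sketch of the usual caches to flush, depending on what is actually running (note that an old redirect to https is often kept by the browser's own redirect/HSTS cache rather than by DNS):
    resolvectl flush-caches                 # if systemd-resolved is in use
    sudo systemctl restart nscd             # if nscd is installed and running
    sudo systemctl restart NetworkManager   # otherwise, re-reads resolv.conf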
jcubic
(10310 rep)
Aug 7, 2019, 11:02 AM
• Last activity: Jun 12, 2025, 11:07 PM
9
votes
1
answers
2607
views
How can I find out what is making my SLAB unreclaimable memory grow without bound?
My SLAB unreclaimable memory (SUnreclaim) grows without bounds and this appears to be the reason why my system eventually runs out of RAM and starts trying to swap until it dies.
Here's a graph of my SUnreclaim over a couple of days. My typical RAM usage is about 5GB on a 16GB server. When the SUnreclaim gets to about 10.xGB the endless swapping starts.
These graphs show it growing endlessly and me rebooting it to free up the RAM, in these 2 cases, before it causes my system to swap itself to death.
Here's a partial slabtop just before the 2nd reboot.
---------------------------------- 20180730164416 ----------------------------------
Active / Total Objects (% used) : 34014938 / 35150125 (96.8%)
Active / Total Slabs (% used) : 1098114 / 1098114 (100.0%)
Active / Total Caches (% used) : 120 / 147 (81.6%)
Active / Total Size (% used) : 7332279.93K / 7831039.90K (93.6%)
Minimum / Average / Maximum Object : 0.01K / 0.22K / 22.88K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
8433792 8349318 98% 0.06K 131778 64 527112K pid
4253942 4250995 99% 0.09K 92477 46 369908K anon_vma
3011640 2929311 97% 0.20K 150582 20 602328K vm_area_struct
2994831 2908345 97% 0.19K 142611 21 570444K dentry
2068096 2033715 98% 0.03K 16157 128 64628K kmalloc-32
1953024 1932838 98% 0.02K 7629 256 30516K kmalloc-16
1820128 1618465 88% 0.25K 113758 16 455032K filp
1149438 1149438 100% 0.04K 11269 102 45076K pde_opener
1014336 891822 87% 0.06K 15849 64 63396K kmalloc-64
954051 953969 99% 0.19K 45431 21 181724K cred_jar
757224 752612 99% 0.10K 19416 39 77664K buffer_head
627368 627368 100% 0.07K 11203 56 44812K eventpoll_pwq
564900 535453 94% 0.09K 13450 42 53800K kmalloc-96
372690 336229 90% 0.13K 12423 30 49692K kernfs_node_cache
362528 362365 99% 0.12K 11329 32 45316K seq_file
329937 327195 99% 1.06K 21455 30 686560K signal_cache
The task_struct cache is also normally very high, typically about 1.5GB before I have to reboot.
A couple of questions:
1) How do I work out which SLAB caches contain the unreclaimable RAM?
2) Is there anything else I can do to work out why the RAM is unreclaimable?
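For question 1, a hedged starting point (assuming the SLUB allocator, which exposes per-cache attributes under /sys/kernel/slab): caches whose `reclaim_account` attribute is 0 are the ones counted in SUnreclaim, so listing them and cross-checking against slabtop narrows down the culprits. For question 2, kmemleak can help, if the kernel was built with it.
    cd /sys/kernel/slab
    for c in */; do
        [ "$(cat "$c/reclaim_account" 2>/dev/null)" = "0" ] && echo "${c%/}"
    done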

user3421823
(131 rep)
Jul 30, 2018, 11:09 AM
• Last activity: Jun 10, 2025, 07:03 AM
2
votes
0
answers
121
views
How to get the stats of an lvm cache?
I am using an lvm cache in the most usual combination (small, fast ssd before a huge, slow hdd). It is simply awesome.
However, I have not found a way to know how many blocks are actually cached, and in what state. What is particularly interesting:
1. Size of the read cache. These are the blocks on the ssd which are the same as the corresponding hdd block.
2. Size of the write cache. These blocks are the result of a write operation to the merged device; they differ between the SSD and the HDD, and will eventually (once there are resources for it) need to be synced out.
My research found that lvm-cache uses device mapper underneath, more exactly the dm-cache driver. A `table` command is enough to get some numbers, but there is no way to know exactly which number means what. I think there should exist some LVM-level solution for the task.
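A hedged sketch of the LVM-level reporting fields (available in reasonably recent lvm2; `vg/cachedlv` is a placeholder): dirty blocks are the ones not yet written back to the HDD, used blocks are everything currently held on the SSD:
    lvs -o name,cache_total_blocks,cache_used_blocks,cache_dirty_blocks,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses vg/cachedlv
    dmsetup status vg-cachedlv    # the raw dm-cache status line the numbers come from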
peterh
(10448 rep)
May 25, 2025, 02:24 PM
0
votes
1
answers
36
views
Why does the program that reads files still run very fast after clearing the page cache?
Test environment: Virtual Machine(Windows 11, VMware Workstation Pro), Ubuntu 22.04, mechanical hard drive.
Use the following command to generate 1GB of test data:
dd if=/dev/urandom of=test.data count=1M bs=1024
Clear the cache:
sync; echo 3 > /proc/sys/vm/drop_caches
Run:
time cat test.data > /dev/null
Execution time:
real 0m14.814s
user 0m0.011s
sys 0m4.034s
Clear the cache again:
sync; echo 3 > /proc/sys/vm/drop_caches
Run:
time cat test.data > /dev/null
Execution time:
real 0m1.761s
user 0m0.020s
sys 0m1.679s
Run immediately again (without clearing cache):
time cat test.data > /dev/null
Execution time:
real 0m0.227s
user 0m0.009s
sys 0m0.218s
After clearing the cache, I checked /proc/meminfo to confirm that the cache was indeed cleared.
Why is the second run still significantly faster even after clearing the cache?
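One hedged way to separate the layers: bypass the guest page cache entirely with direct I/O. If a direct read is still fast right after drop_caches, the data is being served from a cache below the guest, e.g. the VMware host caching the virtual disk file:
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=test.data of=/dev/null bs=1M iflag=direct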
沈小伟
(1 rep)
May 22, 2025, 02:42 AM
• Last activity: May 23, 2025, 04:02 PM
3
votes
1
answers
5227
views
device-mapper: reload ioctl on cache1 failed: Device or resource busy
When I run the below command while setting up dm-cache on my CentOS system, I receive the error:
device-mapper: reload ioctl on cache1 failed: Device or resource busy
Command failed
Command is:
dmsetup create 'cache1' --table '0 195309568 cache /dev/sdb /dev/sda 512 1 writethrough default 0'
Does anyone have an idea about this error, or has anyone faced it while setting up dm-cache?
My dmesg output is:
[1907480.058991] device-mapper: table: 253:3: cache: Error opening metadata device
[1907480.058996] device-mapper: ioctl: error adding target to table
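For reference, a hedged sketch of the table layout dm-cache expects; the metadata, cache and origin devices are three separate arguments, each of which must be openable and not in use elsewhere (the partition names below are placeholders):
    # 0 <length> cache <metadata dev> <cache dev> <origin dev> <block size> <#features> <features> <policy> <#policy args>
    dmsetup create cache1 --table "0 195309568 cache /dev/sdb1 /dev/sdb2 /dev/sda 512 1 writethrough default 0"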
arpit joshi
(445 rep)
Apr 18, 2016, 11:08 PM
• Last activity: May 17, 2025, 06:06 PM
0
votes
1
answers
2620
views
How to increase cache disk space in CentOS?
I have been facing an insufficient space issue in the `/var/cache/yum` directory. Due to this problem, my CentOS graphical user interface crashes. I want to increase the size of the cache directory and also find out the actual size of the cache directory on the servers.
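A hedged sketch for measuring and reclaiming the space, which is usually simpler than enlarging the filesystem:
    du -sh /var/cache/yum   # actual size of the yum cache
    yum clean all           # drop cached packages and metadata
    df -h /var/cache        # free space on the filesystem it lives on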
Asad Abdullah
(1 rep)
Nov 2, 2015, 06:27 AM
• Last activity: May 14, 2025, 10:08 PM
0
votes
1
answers
2179
views
htcacheclean with "-A" flag only lists 64 cache entries, where are the rest?
I have a webserver running Apache. It has caching enabled via `cache_disk_module`.
CacheRoot "/var/cache/httpd/mod_cache"
CacheDirLevels 1
CacheDirLength 1
I would like to list the URLs of the objects in the cache. If I use the bundled `htcacheclean` command with the `-A` flag to query the cache, it only returns 64 objects:
htcacheclean -A -p/var/cache/httpd/mod_cache
Output: 64 lines, each looking like this example:
http:// 823 102014 200 0 1603846099818215 1603849699818215 1603846099807137 1603846099818215 1 0
The entries which **do** get output, look correct, and contain the expected URLs.
However, if I run a `find` command to count up the number of `.header` files, I get far more than 64:
# find /var/cache/httpd/mod_cache -name '*.header' | wc -l
30440
#
Apache version is the one currently provided with the CentOS 7.8 distro: version 2.4.6 with various patches backported.
From the man page:
-A List the URLs currently stored in the cache, along with their
attributes in the following order: url, header size, body size, status,
entity version, date, expiry, request time, response time, body
present, head request.
...
LISTING URLS IN THE CACHE
By passing the -a or -A options to htcacheclean, the URLs within the
cache will be listed as they are found, one URL per line. The -A option
dumps the full cache entry after the URL, with fields in the following
order:
...
Can anyone provide clues as to what is happening? How can I dump the full list of cached object URLs using `htcacheclean`?
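As a rough, hedged workaround until the 64-entry limit is explained: the URL key is stored in readable form inside each `.header` file, so the full list can usually be pulled straight out of the cache tree (output details are not guaranteed across Apache versions):
    find /var/cache/httpd/mod_cache -name '*.header' -exec strings {} \; | grep '^http' | sort -u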
Ralph Wahrlich
(146 rep)
Oct 28, 2020, 01:51 AM
• Last activity: May 13, 2025, 11:07 AM
4
votes
3
answers
3703
views
How to force thumbnail refresh in PCmanFM
How can I force `pcmanfm` to refresh its thumbnails? I have a directory of photos in JPG format (taken with an iPhone). I have rotated some of these using Ubuntu Image Viewer. When I rotate an image the thumbnail does not update. How can I force it to update?
I have tried deleting all thumbnails from `~/.cache/thumbnails` and selecting "reload folder" in `pcmanfm`, but no joy. Any suggestions? Where are the thumbnails actually stored?
Using pcmanfm 1.2.4 on Ubuntu 16.04.
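A hedged sketch based on the freedesktop.org thumbnail spec (which pcmanfm follows): each thumbnail is named after the MD5 of the file's URI, so you can remove just the stale entries and bump the file's mtime so the old thumbnail is no longer considered valid. The image path is a placeholder:
    f=/home/ron/Photos/IMG_0001.JPG
    md5=$(printf 'file://%s' "$f" | md5sum | cut -d' ' -f1)
    rm -f ~/.cache/thumbnails/*/"$md5".png
    touch "$f"   # newer mtime than any remaining cached thumbnail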
Ron
(141 rep)
Dec 29, 2017, 03:30 PM
• Last activity: May 2, 2025, 12:02 AM
4
votes
2
answers
2442
views
HTTP response header for Cache-Control not working in Apache httpd
I have set Cache-Control in Apache to 1 week for my JS files, but when I check in the browser, Cache-Control shows no-cache. Where am I missing the configuration?
Below is my configuration in Apache:
Header set Cache-Control "max-age=604800, public"
Request Header in Browser
Request URL:http://test.com/Script.js?buildInfo=1.1.200
Request Method:GET
Status Code:200 OK
Request Headersview source
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
**Cache-Control:no-cache**
Connection:keep-alive
Host:test.com
Pragma:no-cache
Referer:http://test.com/home.jsp
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/37.0.2062.120 Safari/537.36
Query String Parametersview sourceview URL encoded
buildInfo:1.1.200
Response Headersview source
Cache-Control:max-age=2592000
Connection:keep-alive
Content-Encoding:gzip
Content-Type:text/javascript
Date:Sun, 12 Oct 2014 16:17:46 GMT
Expires:Tue, 11 Nov 2014 16:17:46 GMT
Last-Modified:Tue, 07 Oct 2014 13:28:08 GMT
Server:Apache
Transfer-Encoding:chunked
Vary:Accept-Encoding
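Two things worth separating here, as a hedged reading: the `Cache-Control:no-cache` above is a *request* header (the browser sends it on a forced reload), while the *response* carries `max-age=2592000`, i.e. 30 days rather than the configured 604800, which suggests another directive is setting it. A hedged sketch of scoping the rule to JS files so it is not overridden elsewhere (pattern illustrative):
    <FilesMatch "\.js$">
        Header set Cache-Control "max-age=604800, public"
    </FilesMatch>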
Huzefa
(89 rep)
Oct 12, 2014, 04:32 PM
• Last activity: Apr 27, 2025, 08:03 PM
8
votes
2
answers
609
views
fork() Causes DMA Buffer in Physical Memory to Retain Stale Data on Subsequent Writes
I'm working on a C++ application on Ubuntu 20.04 that uses PCIe DMA to transfer data from a user-space buffer to hardware. The buffer is mapped to a fixed 1K physical memory region via a custom library (plib->copyDataToBuffer). After calling fork() and running a child process (which just calls an external program and exits), I notice that subsequent writes to the buffer by the parent process are not reflected in physical memory; the kernel still sees the old data from before the fork.
**Key Details:**
The 1K buffer is mapped specifically for DMA; it’s pinned and mapped to a known physical address.
Before the fork(), a call to plib->copyDataToBuffer correctly updates physical memory.
After the fork(), the parent process calls plib->copyDataToBuffer again with new data, and msync returns success, but the physical memory contents remain unchanged.
The child process does not touch the buffer; it only runs an unrelated command via execvp.
**My Assumptions & Concerns:**
fork() causes COW (copy-on-write), but since only the parent writes to the buffer, I expected the updated contents to be reflected in physical memory.
Could the COW behavior or memory remapping post-fork be interfering with DMA-mapped memory regions?
I confirmed that plib->copyDataToBuffer performs the write correctly from a software perspective, but the actual physical memory contents (verified from kernel space) remain stale.
**Question:**
Why does the physical memory backing my DMA buffer retain stale data after a fork() + exec in a child process, even though the parent writes new data afterward?
What are the best practices to ensure consistent physical memory updates for DMA buffers across fork() calls?
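A hedged sketch of a common mitigation for pinned DMA regions (this is the general practice for DMA/RDMA buffers across fork(), not necessarily the fix for this specific library): mark the buffer MADV_DONTFORK before forking, so the child never shares those pages and copy-on-write cannot re-map them under the parent. `dma_buf` and `len` are placeholders for however the custom library exposes the mapping.
    #define _GNU_SOURCE
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Call once after the DMA buffer is mapped and pinned, before any fork(). */
    static void protect_dma_mapping(void *dma_buf, size_t len)
    {
        /* Children will not inherit this mapping, so COW never re-maps the
         * pinned pages underneath the parent. */
        if (madvise(dma_buf, len, MADV_DONTFORK) != 0)
            perror("madvise(MADV_DONTFORK)");
    }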
Nungesser Mcmindes
(83 rep)
Apr 18, 2025, 01:28 AM
• Last activity: Apr 18, 2025, 01:55 PM
2
votes
1
answers
52
views
How to prioritize SSD page cache eviction over that of a slower HDD?
I have a large slow HDD and a small fast SSD. This is about reads, not [RAID](https://unix.stackexchange.com/q/471551/524752). My desktop grinds to a near-halt when switching back to Firefox or man pages after (re/un)-loading 12+ GiB of Linux kernel build trees and 39 GiB total of different LLMs on the SSD, while I only have 31 GiB of RAM:
$ free -h
total used free shared buff/cache available
Mem: 31Gi 10Gi 2.4Gi 1.0Gi 19Gi 20Gi
Swap: 0B 0B 0B
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1.7G 0 part /boot
└─sda3 8:3 0 1.8T 0 part
└─sda3_crypt 254:0 0 1.8T 0 crypt
├─vgubuntu-root 254:1 0 1.8T 0 lvm /
└─vgubuntu-swap_1 254:2 0 1.9G 0 lvm
nvme0n1 259:0 0 953.9G 0 disk
└─nvme0n1p1 259:1 0 100G 0 part
└─luks-... 254:3 0 100G 0 crypt /media/...
$ sysctl vm.swappiness
vm.swappiness = 60
The SSD is fast, so I'd rather Linux evict the SSD's page-cached files first. Its uncached read time is seconds anyway. What should stop is eviction of any file under `/usr` or `/home`. My `man bash` and `dpkg -S bin/bash` return instantly from the page cache, but uncached they take half a minute after exiting the LLMs. More severely, Firefox needs my `~/.mozilla` folder for history and cache; with it uncached, waiting for the address bar to work takes minutes.
I am looking for a configuration option. `systemd-run` could set MemoryMax for `ktorrent`, but I frequently restart `llama-server` to switch between the ~6 GiB LLMs, and I don't want a separate daemon to keep the cgroup alive. The `man` and `dpkg` problems will be fixed when my `/` moves to the SSD once I sort out `fscrypt` fears; in the meantime, `/usr` on `tmpfs` would leave insufficient available RAM and `overlayfs` is too much complexity. The LLM workload could, but shouldn't, remount the SSD as a workaround. That leaves the `nice`d kernel build workload still evicting my web browsing one's cache.
I looked in `/sys/block` but couldn't find the right config. [Cgroups v2](https://docs.kernel.org/admin-guide/cgroup-v2.html) has per-device options but only for parallel write workloads (`io.max`), not for controlling how sequential workloads affect the cache. A [2011 patch](https://lore.kernel.org/lkml/4DFE987E.1070900@jp.fujitsu.com/T/) and a [2023 question](https://unix.stackexchange.com/q/755527/524752) don't see any userspace interface. Which setting can be used to force the SSD's page cache to be evicted before that of the HDD?
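Not the per-device setting asked for, but a hedged per-workload sketch: a transient scope dies with the process (no long-lived daemon needed), and MemoryHigh makes the kernel reclaim that cgroup's page cache first when memory gets tight. The limit and model name are illustrative:
    systemd-run --user --scope -p MemoryHigh=10G llama-server -m some-model.gguf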
Daniel T
(195 rep)
Apr 13, 2025, 10:44 PM
• Last activity: Apr 14, 2025, 02:07 PM
0
votes
1
answers
41
views
What is the difference and relation between the "--default-cache-ttl" and "--max-cache-ttl" options?
Regarding GPG, `gpg-agent` is mentioned, and I read the following answer:
* [gpg does not ask for password](https://unix.stackexchange.com/a/395876/383045)
where the `--default-cache-ttl` and `--max-cache-ttl` options are mentioned. So I found this official source:
* [man - GPG-AGENT(1)](https://www.gnupg.org/documentation/manuals/gnupg24/gpg-agent.1.html)
--default-cache-ttl n
Set the time a cache entry is valid to n seconds. The default is 600 seconds.
Each time a cache entry is accessed, the entry's timer is reset.
To set an entry's maximum lifetime, use max-cache-ttl
Note that a cached passphrase may not be evicted immediately from memory if
no client requests a cache operation. This is due to an internal housekeeping
function which is only run every few seconds.
--max-cache-ttl n
Set the maximum time a cache entry is valid to n seconds.
After this time a cache entry will be expired even if it
has been accessed recently or has been set using gpg-preset-passphrase.
The default is 2 hours (7200 seconds).
Therefore consider the **main question** as follows:
* What is the difference and relation between the `--default-cache-ttl` and `--max-cache-ttl` options?
And as secondary questions the following:
* What exactly is a cache entry?
* What criteria does `gpg-agent` use to know when to consider/apply the `--default-cache-ttl` and `--max-cache-ttl` options?
Therefore I want to clearly understand the points/scenarios/criteria for when and why the 600 seconds (10 minutes) and 7200 seconds (2 hrs) apply, according to each option.
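As a hedged illustration of how the two timers interact (the values are just the documented defaults): default-cache-ttl is an idle timeout, reset every time the cached entry is used, while max-cache-ttl is an absolute ceiling counted from when the passphrase was cached, regardless of use:
    # ~/.gnupg/gpg-agent.conf
    default-cache-ttl 600    # forget a passphrase 10 minutes after its last use
    max-cache-ttl 7200       # ...but never keep it longer than 2 hours in total
    # apply without restarting the session:
    gpg-connect-agent reloadagent /bye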
Manuel Jordan
(2108 rep)
Apr 7, 2025, 01:05 AM
• Last activity: Apr 7, 2025, 07:09 AM
5
votes
2
answers
2373
views
/var/cache on temporary filesystem
Due to flash degradation concerns, I would like to lower the amount of unnecessary disk writes on a headless light-duty 24/7 system, as much as sensible. In case it matters this is a Debian-flavored system, but I think the issue might be of relevance for a wider audience.
In order to achieve this, I am already using **tmpfs** for `/tmp` and `/var/log` in addition to the defaults. At this point, by monitoring idle IO activity with various tools like *fatrace*, I see that after long periods one of the most prominent directories in number of write accesses is `/var/cache`, especially `/var/cache/man` related to *man-db*. Note that I don't have automatic package updates on this system, so I don't get any writes to `/var/cache/apt`, but for others that might be relevant too.
The question is, could it cause any trouble if **tmpfs** were used for `/var/cache`? On startup I would populate it with data from disk, and possibly *rsync* it back from time to time.
Of course the elevated RAM usage might be an issue on some systems, but it would be interesting to hear your opinions whether it would be problematic for some of the common systems using the cache, to have data absent in the early boot process, or generally be in a slightly outdated state (after a crash for example)?
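A hedged sketch of one way to wire this up (the fstab entry, the size, and the on-disk copy under /var/cache.disk are illustrative):
    # /etc/fstab
    tmpfs  /var/cache  tmpfs  defaults,size=512M,mode=0755  0  0
    # at boot (e.g. from a oneshot unit), repopulate from the on-disk copy:
    rsync -a /var/cache.disk/ /var/cache/
    # periodically and at shutdown, write it back:
    rsync -a --delete /var/cache/ /var/cache.disk/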
rockfort
(85 rep)
Aug 24, 2020, 03:15 PM
• Last activity: Mar 18, 2025, 08:29 PM
1
votes
1
answers
737
views
Is it safe to add LVM cache device to existing LVM volume group while filesystem is mounted?
I have an existing LVM volume group with a 10 TB logical volume mounted as an ext4 filesystem which is actively in use.
**Is it safe to run the command `lvconvert --type cache --cachepool storage/lvmcache-data storage/data` while the ext4 filesystem is already mounted on `storage/data`?** (The `storage/lvmcache-data` has been previously configured with `lvconvert --type cache-pool --cachemode writeback --poolmetadata storage/lvmcache-metadata storage/lvmcache-data`, in case it makes a difference.)
I would assume yes, it's safe to add a cache on-the-fly to an online volume with a mounted filesystem, but I couldn't find documentation either way.
Mikko Rantalainen
(4399 rep)
Mar 7, 2024, 02:55 PM
• Last activity: Feb 12, 2025, 05:44 PM
1
votes
0
answers
68
views
Alternate way to get the CPU cache size
I have a project where I need to get the size of the cache on my Linux machine. I don't know the Linux distro, and the /etc/os-release file does not exist.
I only know the kernel and architecture:
Kernel: Linux 6.1.20-rt8
Architecture: arm64
I cannot install libraries onto the machine.
To get the size of the cache, I tried the following methods, all of which failed:
$ lscpu | grep cache
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
$ cat /proc/cpuinfo | grep cache
$
$ ls /sys/devices/system/cpu/cpu*/cache/index*/size
ls: /sys/devices/system/cpu/cpu*/cache/index*/size: No such file or directory
$ dmesg | grep -i cache
[ 0.000000] Detected VIPT I-cache on CPU0
[ 0.000000] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 0.000000] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[ 0.405602] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
[ 0.405623] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
[ 0.490323] Detected VIPT I-cache on CPU1
[ 0.508655] Detected VIPT I-cache on CPU2
[ 0.526850] Detected VIPT I-cache on CPU3
[ 0.547628] Detected PIPT I-cache on CPU4
[ 0.591655] Detected PIPT I-cache on CPU5
[ 0.762359] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 22.216417] Detected PIPT I-cache on CPU4
[ 22.412738] Detected PIPT I-cache on CPU5
The /proc/cpuinfo file:
$ cat /proc/cpuinfo
processor : 0
BogoMIPS : 16.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
processor : 1
BogoMIPS : 16.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
processor : 2
BogoMIPS : 16.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
processor : 3
BogoMIPS : 16.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
processor : 4
BogoMIPS : 16.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 2
processor : 5
BogoMIPS : 16.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 2
The full lscpu:
$ lscpu
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: ARM
Model: 4
Model name: Cortex-A53
Stepping: r0p4
BogoMIPS: 16.00
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
NUMA node0 CPU(s): 0-5
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
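One more hedged thing to try without installing anything: ask the C library, which reports the same values through sysconf(); on ARM boards it may also come back as 0 or empty when the firmware/devicetree does not describe the caches, which is the usual reason lscpu shows "unknown size" here:
    getconf -a | grep -i CACHE        # LEVEL1_DCACHE_SIZE, LEVEL2_CACHE_SIZE, ...
    getconf LEVEL1_DCACHE_LINESIZE    # query a single value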
Plat00n
(111 rep)
Feb 6, 2025, 08:48 AM
• Last activity: Feb 6, 2025, 12:28 PM