Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
862
views
gnome-disks write caching
In **RHEL/CentOS 7.9** anyway, when running `gnome-disks` (which is under the Applications-Utilities-Disks menu), for a recognized SSD it offers the enabling of **write-cache**. [![enter image description here][1]][1] [1]: https://i.sstatic.net/pM5lX.png
I would like to know what technically is happening when turning this on that wasn't already happening.
*I was under the impression, whether it was an SSD or a conventional spinning hard disk, that Linux inherently does **disk caching**. This impression mainly comes from reading the www.linuxatemyram.com page years ago.*
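For context, the toggle appears to control the drive's own volatile write cache, which is separate from the kernel page cache described on linuxatemyram.com. A minimal sketch of inspecting and toggling that drive-level cache from the command line, assuming the SSD is /dev/sda:

    # Query the drive's write-cache setting (the "write-caching" line)
    sudo hdparm -W /dev/sda
    # Enable the on-drive write cache
    sudo hdparm -W1 /dev/sda
    # Disable it again
    sudo hdparm -W0 /dev/sda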

ron
(8647 rep)
Feb 6, 2022, 03:15 AM
• Last activity: May 8, 2025, 09:01 PM
4
votes
2
answers
2442
views
HTTP response header for Cache-Control not working in Apache httpd
I have set Cache-Control in Apache to one week for my JS files, but when I check in the browser, Cache-Control shows no-cache. Where am I missing the configuration?
Below is my configuration in Apache:
Header set Cache-Control "max-age=604800, public"
Request Header in Browser
Request URL:http://test.com/Script.js?buildInfo=1.1.200
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
**Cache-Control:no-cache**
Connection:keep-alive
Host:test.com
Pragma:no-cache
Referer:http://test.com/home.jsp
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/37.0.2062.120 Safari/537.36
Query String Parameters
buildInfo:1.1.200
Response Headers
Cache-Control:max-age=2592000
Connection:keep-alive
Content-Encoding:gzip
Content-Type:text/javascript
Date:Sun, 12 Oct 2014 16:17:46 GMT
Expires:Tue, 11 Nov 2014 16:17:46 GMT
Last-Modified:Tue, 07 Oct 2014 13:28:08 GMT
Server:Apache
Transfer-Encoding:chunked
Vary:Accept-Encoding
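Note that the bolded `Cache-Control:no-cache` above is a *request* header, i.e. something the browser sends (typically on a forced reload); the server's policy is the one in the response headers. A quick way to see what Apache actually sends, bypassing the browser cache entirely (a sketch using curl):

    # Fetch only the response headers for the JS file and show the caching-related ones
    curl -sI 'http://test.com/Script.js?buildInfo=1.1.200' | grep -iE 'cache-control|expires'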
Huzefa
(89 rep)
Oct 12, 2014, 04:32 PM
• Last activity: Apr 27, 2025, 08:03 PM
2
votes
1
answers
211
views
Temporarily disable bcache read caching to run a scrub
I use btrfs atop of bcache. It caches small reads and that works great.
When running a scrub of the btrfs however, I'd like those reads to go through to the backing devices because their contents are the ground truth; what I want to verify.
One way to achieve that would be to detach the individual bcache devices from the cache set, but doing so necessarily also discards the cached state, which means it gets lost. The cache would have to be rebuilt every time I run a scrub.
Is it possible to somehow achieve that without losing the cache? I want bcache to keep doing all of its regular state tracking; updating and invalidating the cache. When it comes to utilising the cache though, it should simply read directly from the backing drives and ignore any cached data.
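For reference, a sketch of the sysfs knobs involved, assuming the device is bcache0 and that temporarily switching `cache_mode` (rather than detaching) is acceptable; whether mode `none` still serves reads from already-cached data is essentially the open question here, and /mnt/data is a placeholder mount point:

    # Show the current mode (the active one is in brackets)
    cat /sys/block/bcache0/bcache/cache_mode
    # Temporarily stop caching new I/O without detaching the cache set
    echo none > /sys/block/bcache0/bcache/cache_mode
    btrfs scrub start -B /mnt/data
    # Restore whichever mode was in use before
    echo writeback > /sys/block/bcache0/bcache/cache_mode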
Atemu
(857 rep)
May 8, 2024, 01:47 AM
• Last activity: Oct 16, 2024, 11:41 AM
3
votes
1
answers
832
views
PKCS#11 provider in OpenSSH: Is it possible to cache PIN?
I use a RSA key on a smartcard with an OpenSSH client. The smartcard is read by a smartcard reader with a pinpad. The key is protected with a PIN.
Is it possible to cache the PIN somehow? I don't really like having to type the PIN on the card reader keypad every time I use ssh... It's not only annoying, it also creates, IMHO, too many opportunities for other people's eyes.
My setup is Debian/Devuan + OpenSC + the typical "PKCS11Provider /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so" in .ssh/config.
I tried adding the following lines to the `framework pkcs15` section of opensc.conf, but with no effect:
use_pin_caching = true;
pin_cache_counter = 64;
pin_cache_ignore_user_consent = true;
I use the same configuration on OpenBSD, and it's the same.
As a smart card I use Aventra MyEID 4.5.5. As I am trying to learn as much as possible before using the technology in production, I have different card readers I can try: Cherry, Gemalto (now Thales) and SCM/Identiv.
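One option worth noting (a sketch, assuming ssh-agent is running): ssh-agent can hold keys from a PKCS#11 provider, so the PIN is entered once when the token is added rather than on every connection; with a pinpad reader the entry may still happen on the pad, but only once per agent session.

    # Add all keys from the OpenSC PKCS#11 module to the running agent
    ssh-add -s /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
    # Verify the key is loaded
    ssh-add -l
    # Remove the token's keys again when done
    ssh-add -e /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so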
d.c.
(907 rep)
Mar 20, 2023, 09:51 PM
• Last activity: Feb 12, 2024, 06:49 PM
1
votes
1
answers
398
views
Is it possible to avoid trashing hard disk while using /tmp as a RAM device?
# Origin
I need to implement some features in my GDB helper scripts, but I have to stick with an older version of GDB (5.3, in this case). Since older versions lack so many features, I work around the missing ones by redirecting some strings to a file and then `source`-ing them.
# Problem
I don't want to trash my hard disk with the large number of temporary files written to /tmp.
# Assumption
Since my /tmp folder is mounted with tmpfs, I assume that it actually lives in RAM and is swapped out to the swap area only when memory fills up:
$ mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noatime,size=524288k)
# Question
Can I guarantee that any small file (strings at most 30 characters long) I write into /tmp (and delete almost immediately) will live only in RAM for its whole lifetime and never go to the hard disk, even if the frequency is as high as 100 writes per second?
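A quick empirical check of the assumption (a sketch; the disk name sda and the file count are examples, and any unrelated disk activity will also show up in the counter): hammer /tmp with small files while watching whether the block device records any written sectors.

    # Sectors written to sda before the test (field 10 of /proc/diskstats)
    before=$(awk '$3 == "sda" {print $10}' /proc/diskstats)
    # Create and delete 1000 tiny files on the tmpfs-backed /tmp
    for i in $(seq 1 1000); do echo "0123456789012345678901234567" > /tmp/scratch.$i; done
    rm -f /tmp/scratch.*
    after=$(awk '$3 == "sda" {print $10}' /proc/diskstats)
    echo "sectors written to sda during the test: $((after - before))"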
ceremcem
(2451 rep)
Oct 8, 2020, 09:04 AM
• Last activity: Sep 11, 2023, 02:51 PM
2
votes
2
answers
9126
views
systemctl enable tmp.mount
Consider the practice of mounting the /tmp directory on a tmpfs memory based filesystem, as can be done with:
systemctl enable tmp.mount
And consider the following:
> **one justification:** The use of separate file systems for different paths can protect the system from failures resulting from a file system becoming full or failing.
-
> **another justification:** Some applications writing files in the /tmp directory can see huge improvements when memory is used instead of disk.
Is disk caching always in effect? By that I mean: when you write to **any** folder (not just /tmp) you are probably writing to RAM anyway, until such time as it gets flushed to disk... the kernel handles all this under the hood, and my opinion is that I don't need to go meddling and tweaking things. So does doing `systemctl enable tmp.mount` have any real value, and if so, what?
Also (on CentOS 7.6) I am testing this to try to understand what's happening. What I am experiencing:
- CentOS 7.6 installed on one 500GB SSD with simple disk partitioning as
  - 1GB /dev/sda1 as /boot
  - 100MB /dev/sda2 as /boot/efi
  - 475GB /dev/sda3 as /
- PC has 8GB of DDR-4 RAM
- if I do just `systemctl enable tmp.mount` I then get
  - 3.9GB tmpfs as /tmp
How is this tmpfs /tmp at 3.9GB any better than the default way, which would (a) first have up to ~8GB of RAM available thanks to disk caching and (b) then, when the disk cache is at capacity, still have > 400GB of disk available to use?
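As an aside, the 3.9GB figure matches the tmpfs default of half of RAM; if tmp.mount is used, the size can be adjusted with a systemd drop-in. A sketch, where the 6G value is an arbitrary example:

    sudo mkdir -p /etc/systemd/system/tmp.mount.d
    printf '[Mount]\nOptions=mode=1777,strictatime,nosuid,nodev,size=6G\n' | \
        sudo tee /etc/systemd/system/tmp.mount.d/size.conf
    sudo systemctl daemon-reload
    sudo systemctl restart tmp.mount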
ron
(8647 rep)
Apr 12, 2019, 04:34 PM
• Last activity: May 5, 2023, 06:18 PM
1
votes
0
answers
74
views
Is there a seamless way to cache the file table of an HDD to prevent unnecessary spin ups?
I'd like to be able to browse the file system of my external HDD without having it spin up. Is there an easy way to cache its file table in a way that allows me to use normal tools like Caja/Gnome Files/`ls`/`cd` to navigate folders and list files without it spinning up?
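A partial approach (a sketch with heavy caveats; device and mount point are examples): walking the tree once while the disk is awake pulls directory entries and inodes into the VFS caches, and with a noatime mount, later `ls`/`cd` may then be answered from RAM, but nothing pins that cache and it can be evicted under memory pressure.

    # Mount with noatime so browsing does not generate metadata writes
    sudo mount -o noatime /dev/sdb1 /mnt/external
    # Walk the tree once while the disk is spun up to load dentries/inodes into RAM
    find /mnt/external -xdev > /dev/null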
Dan
(11 rep)
Apr 23, 2023, 08:19 PM
0
votes
1
answers
149
views
Does cygserver improve Cygwin performance? If so, with what tasks?
I've read that cygserver [can improve performance in some circumstances](https://superuser.com/a/1183283) but I'm not really clear on how - or on how to know whether it's applicable to our use case.
Can anyone provide any insights?
Slbox
(313 rep)
Jan 1, 2023, 07:19 PM
• Last activity: Mar 8, 2023, 02:15 PM
0
votes
1
answers
87
views
Is there any OS-level caching for bare block devices? If so, how do I bypass it?
If I read and write directly to a block device (e.g. /dev/sda1), is there any OS-level caching involved on Linux? If so, how do I bypass it? Is opening with O_DIRECT enough?
I'm writing a simple benchmark script to characterize the behavior of a shingled magnetic recording (SMR) drive I have, so I don't want to bypass any drive-level caching or reordering, only anything the OS is doing.
Searching for the relevant terms gives lots of results that do not address this specific question, though I did learn that Solaris and FreeBSD have both block and character devices for disks, with the block devices being buffered. On my Linux I only see block devices for disks.
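A sketch of one way to keep the kernel page cache out of the picture while leaving the drive's own cache and reordering alone (device, sizes and counts are examples, and the write test overwrites data on the target partition):

    # Raw sequential write, bypassing the page cache with O_DIRECT (destroys data on sda1!)
    sudo dd if=/dev/zero of=/dev/sda1 bs=1M count=1024 oflag=direct status=progress
    # Raw sequential read, likewise uncached on the host side
    sudo dd if=/dev/sda1 of=/dev/null bs=1M count=1024 iflag=direct status=progress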
JanKanis
(1421 rep)
Nov 18, 2022, 10:29 PM
• Last activity: Nov 19, 2022, 07:27 AM
1
votes
1
answers
826
views
Is it possible to disable page caching on Ubuntu?
I am running an application on Ubuntu 20.04 which requires me to clear the page cache every time I run it. Currently, I just run `echo 1 | sudo tee /proc/sys/vm/drop_caches` every time before I run the application. I'll need to run this application multiple times. Is there any way to disable page caching on my system?
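As far as I know there is no global switch to turn the page cache off, but the current workaround can at least be automated with a small wrapper; a sketch, with ./myapp standing in for the real program:

    #!/bin/sh
    # run-cold.sh: drop the page cache, then start the application with a cold cache
    sync                                            # flush dirty pages so dropping is effective
    echo 1 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    exec ./myapp "$@"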
Sathvik Swaminathan
(111 rep)
Sep 24, 2022, 04:11 AM
• Last activity: Sep 24, 2022, 10:51 AM
1
votes
1
answers
4121
views
What exactly is being cached when opening/querying a SQLite database
I was asked to improve existing code to query SQLite databases. The original code made a lot of separate calls to the database and filtered the results in Python. Instead, I opted to re-write the database creation and put the filtering logic in the SQL query.
After running benchmarks on databases of different sizes and comparing with the original implementation, I found that the average query time (n=3) was a lot faster in the new implementation (3 s vs. 46 **minutes**). I suspected that this was a caching issue, but I wasn't sure of its origin. Between every query I closed the database connection, deleted any lingering Python variables, and ran `gc`, but the out-of-this-world difference persisted. Then I found that it was likely the system that was caching something. Indeed, when I clear the system's cache after every iteration with `echo 3 > /proc/sys/vm/drop_caches`, the performance is much more in line with what I expected (a 2-5x speed increase compared to an 80,000x speed increase).
The almost philosophical issue that I have now is what I should report as an improvement: the cached performance (as-is) or the non-cached performance (explicitly dropping the cache before queries). (I'll likely report both, but I am still curious about what is being cached.) I think it comes down to the question of what is actually being cached. In other words: does the caching represent a real-world scenario or not?
I would think that if the database or its indices are cached, then the fast default performance is a good representation of the real world as it would be applicable to new, unseen queries. However, if specific queries are cached instead, then the cached performance does not reflect on unseen queries.
Note: this might be an unimportant detail but I have found that the impact of this caching is especially noticeable when using fts5 virtual tables!
Tl;dr: when the system is caching queries to SQLite, what exactly is it caching, and does that positively impact new, unseen queries?
If it matters: Ubuntu 20.04 with sqlite3.
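Regarding what the OS caches: the kernel page cache holds pages of the database file itself (including index and fts5 structures), not individual query results, so a warm cache can also speed up new, unseen queries that touch the same pages; SQLite's own in-process page cache disappears when the connection is closed. A sketch of timing the same query warm and cold (db.sqlite, mytable and the query are placeholders):

    # Warm-cache timing: the database pages stay in the kernel page cache between runs
    time sqlite3 db.sqlite 'SELECT count(*) FROM mytable;'
    # Cold-cache timing: flush dirty pages and drop the page cache first
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    time sqlite3 db.sqlite 'SELECT count(*) FROM mytable;'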
Bram Vanroy
(183 rep)
Aug 7, 2022, 12:02 PM
• Last activity: Aug 7, 2022, 12:30 PM
0
votes
1
answers
70
views
Using excess video memory for disk caching?
On newer Ryzen 5000-based laptops, the integrated graphics controller always gets at least 1GB of shared video memory. In the BIOS settings, 1GB is the minimum. For people who don't need 1GB for video stuff because they use 2D graphics mainly, is there a way to grab some of that shared video memory...
On newer Ryzen 5000-based laptops, the integrated graphics controller always gets at least 1GB of shared video memory. In the BIOS settings, 1GB is the minimum. For people who don't need 1GB for video stuff because they use 2D graphics mainly, is there a way to grab some of that shared video memory and let the Linux or BSD kernel use it for disk file caching?
rubunu
(1 rep)
Jun 25, 2022, 03:30 AM
• Last activity: Jun 25, 2022, 08:32 AM
2
votes
1
answers
874
views
Optimize adding extra packages to the USB-media for alpine linux offline install?
We would like to make an automated installer for Alpine Linux for running our own application on an embedded x86 pc. Our application setup requires packages not present on the downloadable media and we need it to run self-contained and offline. I have implemented the functionality we need, but that requires "main" and "community" repositories present on the USB-stick.
I have solved this so far by burning the ISO image to the USB stick using Rufus in ISO mode (making it writable), then essentially rsync'ing a mirror to the USB stick (to /media/usb/alpine) and manually adding this directory to /etc/apk/repositories as needed. This works well.
Unfortunately this is an almost 20 GB download, meaning that the manual step of copying to the USB stick takes a very long time on the USB sticks I have available right now (2 hours at the moment). An SSD USB disk takes about 20 minutes.
I have therefore been looking at `setup-apkcache` and found that we only need less than 100 MB of packages, but it appears from my experiments that `setup-disk` installing to the local hard disk in "sys" mode (which runs `lbu package -` under the covers) does not use the packages in the cache, but expects all packages to be found in one of the repositories listed in /etc/apk/repositories.
Is using an apk cache the way to go? Or am I barking up the wrong tree?
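One possible direction (a sketch, assuming a pruned local repository is acceptable; the package names and paths are examples): instead of mirroring the full ~20 GB, fetch only the needed packages plus their dependencies and build a small repository on the stick.

    # On a machine with network access and the target Alpine release's repositories enabled
    mkdir -p /media/usb/localrepo/x86_64
    apk fetch -R -o /media/usb/localrepo/x86_64 openssh chrony rsync
    # Build the repository index (sign it with abuild-sign, or install with --allow-untrusted)
    apk index -o /media/usb/localrepo/x86_64/APKINDEX.tar.gz /media/usb/localrepo/x86_64/*.apk
    # Make the installer use it
    echo /media/usb/localrepo >> /etc/apk/repositories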
Thorbjørn Ravn Andersen
(1064 rep)
Sep 13, 2021, 09:05 AM
• Last activity: Feb 27, 2022, 01:37 PM
2
votes
0
answers
631
views
Why are RAM cached NFS file contents corrupted?
We have an NFS server and several clients (Ubuntu 18.04). Sometimes (rarely) a client sees corrupted data after a file was changed from another client. The server and all other clients see the file correctly.
The context is a simple everyday developer workflow situation: I edit the Python source code of some program in my IDE (PyCharm) on my client machine, then in a terminal window I SSH to a different (more performant) computer to execute the program. The symptom: null bytes are read at the end of the file, or the file appears truncated (and results in a SyntaxError in case of Python code execution.)
I suspect this is due to a bug in caching. Specifically, the affected file contents are still incorrectly seen as the old version, but **all metadata is seen correctly** as the new version.
Therefore, if the file was made smaller by the other client, it will be seen by our faulty client as if it had the old content, but abruptly stopped at the new size. If it was made larger by the other client, the faulty client sees the old content, but with some garbage content appended at the end, mostly zero bytes.
I have read the NFS man page on cache consistency, but the problem is not with attribute caching, and there is not much in the man page about content caching, perhaps because that is handled not by NFS itself but by the generic file system caching layer.
The **attributes (inode number, size, modification times) are all correctly seen, but the content is not.**
Note that this is NOT simply about the configurable attribute caching time limits. **The bad file content remains there indefinitely and the client never notices that anything is wrong. I repeat, the client gets stuck with the bad content (e.g. containing spurious null bytes) for *DAYS* (and more) unless the cache is manually dropped.**
What could be the reason for it? Is there a workaround? I know it can be temporarily fixed by dropping the caches, but it's a manual intervention and the failure can even be silent if the size happens to stay the same, just some bytes are changed on another client.
Several years on, the problem persists and there seems to be no solution anywhere (I've read FAQs, book chapters on NFS, dug into mailing lists, read that [post](https://about.gitlab.com/blog/2018/11/14/how-we-spent-two-weeks-hunting-an-nfs-bug/) about how GitLab hunted down some caching bug in two weeks of debugging etc., but none of these actually address my issues.)
The most relevant info on the web comes from [this mailing list discussion from 2015](https://www.spinics.net/lists/linux-nfs/msg49775.html) but ultimately the developers claim there is no problem at all, which seems baffling to me. If this is the case, how can anyone use NFS in HPC cluster setups where one edits the code on a login node and submits it for execution on a compute node? You would regularly get issues that the compute node sees a corrupted version of the source code. More importantly, regardless of the philosophy behind a possible WONTFIX/NOBUG claim from the NFS people, what is an actionable fix in my scenario?
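As a narrower workaround than dropping all caches (a sketch; the path is a placeholder): the cached pages of just the affected file can be discarded on the faulty client with GNU dd's nocache hint, which avoids throwing away the rest of the page cache.

    # Advise the kernel to drop the cached pages of one file, without reading it
    dd if=/nfs/project/script.py iflag=nocache count=0 status=none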
isarandi
(541 rep)
Nov 7, 2019, 01:03 PM
• Last activity: Jan 28, 2022, 04:31 PM
1
votes
0
answers
133
views
how to get Linux to automatically release old pagecache pages?
Ever since upgrading to Linux kernel 5.8 and later, I've been having problems with my system freezing up from running out of RAM, and it's all going to the pagecache.
I have a program that reorganizes the data from the OpenStreetMap planet_latest.osm.pbf file into a structure that's more efficient for my usage. However, because this file is larger than the amount of RAM on my system (60GB file versus 48GB RAM), the page cache fills up. Before kernel 5.8, the cache would reach full, and then keep chugging along (with an increase in disk thrashing). Since 5.8, the system freezes because it won't ever automatically release a page from the page cache (such as 30GB earlier in my sequential read of the planet_latest.osm.pbf file). I don't need to use my reorganizing program to hang the system; I found the following unprivileged command would do it:
cat planet_latest.osm.pbf >/dev/null
I have tried using the fadvise64() system call to manually force releases of pages of the planet file I have already passed; it helps, but doesn't entirely solve the problem with the various output files my program creates (especially when those temporary output files are randomly read back later).
So, what does it take to get the 5.8 through 5.10 Linux kernel to actually automatically release old pages from the page cache when system RAM gets low?
To work around the problem, I have been using a script to monitor cache size and write to /proc/sys/vm/drop_caches when the cache gets too large, but of course that also releases new pages I am currently using along with obsolete pages.
while true ; do
    H=$(free | head -2 | tail -1 | awk '{print $6}')
    if [ "$H" -gt 35000000 ]; then
        echo -n "$H @ " ; date
        echo 1 > /proc/sys/vm/drop_caches
        sensors | grep '°C'
        H=$(free | head -2 | tail -1 | awk '{print $6}')
        echo -n "$H @ " ; date
    fi
    sleep 30
done
(the sensors stuff is to watch out for CPU overheating in stages of my program that are multi-threaded CPU-intensive rather than disk-intensive).
I have filed a bug report at kernel.org, but they haven't looked at it yet.
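A less blunt workaround than drop_caches (a sketch; it assumes the `nocache` utility from the distribution package of the same name is installed, and ./reorganize is a stand-in for the real program): run the streaming job under an LD_PRELOAD wrapper that fadvises pages away as they are consumed.

    # Read the planet file without letting it monopolise the page cache
    nocache cat planet_latest.osm.pbf > /dev/null
    # Or wrap the whole reorganising program the same way
    nocache ./reorganize planet_latest.osm.pbf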
Andrew
(11 rep)
Mar 5, 2021, 09:12 PM
0
votes
1
answers
1584
views
Enable sssd caching and linux to honor that cache on CentOS 5
I wanted to enable sssd caching on CentOS rather than use nscd caching, because sssd itself offers that option. However, authconfig does not offer the --enablecachecreds option even in the highest version of authconfig available on CentOS 5.8. Is caching enabled via PAM? If so, where can I look for more details, or how can I do it?
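A sketch of the direct route, bypassing authconfig (the domain name is a placeholder): credential caching is a per-domain switch in sssd.conf.

    # /etc/sssd/sssd.conf fragment:
    #   [domain/example.com]
    #   cache_credentials = True
    #   entry_cache_timeout = 5400
    # Apply the change (CentOS 5 uses SysV init):
    service sssd restart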
nohup
(431 rep)
Oct 27, 2015, 11:37 AM
• Last activity: Oct 21, 2020, 02:03 PM
1
votes
0
answers
333
views
How can I combine LVM cache and writecache?
Can I attach both a cache (in writethrough mode) and a writecache to a logical volume in LVM? If yes, how?
I am able to attach a "normal" cache without trouble, but when I also try to add a writecache, I fail.
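For reference, a sketch of the two operations being combined, with a recent LVM; vg0, data and the cache LV names are examples, and whether both conversions can coexist on the same LV is exactly what is being asked:

    # Attach a read cache (dm-cache) in writethrough mode, using an existing fast LV
    lvconvert --type cache --cachevol fast_cache --cachemode writethrough vg0/data
    # Attach a write cache (dm-writecache), likewise from an existing fast LV
    lvconvert --type writecache --cachevol fast_wcache vg0/data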
ingli
(2029 rep)
Aug 16, 2020, 11:09 PM
• Last activity: Sep 1, 2020, 10:51 PM
1
votes
0
answers
185
views
Apache configuration so images are cached for a long time
I'm asking here since, no matter what I try, I keep noticing that images get reloaded after a while. Basically I want images to be cached for 11 months, but if anything they seem to be cached for minutes or hours. This is a node.js application with static files served by Apache. The images are inside the public/static dir. This is what I have:
NameVirtualHost *:443
ServerName app.site.com
ProxyPreserveHost On
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
RequestHeader set "X-Forwarded-SSL" expr=%{HTTPS}
Alias "/static/" "/home/node/app/public/static/"
Options FollowSymLinks
AllowOverride None
Require all granted
ExpiresActive On
ExpiresByType image/jpeg "access plus 11 months"
ExpiresByType image/jpg "access plus 11 months"
ExpiresByType image/png "access plus 11 months"
ExpiresByType image/gif "access plus 11 months"
RewriteEngine On
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule /(.*) ws://localhost:3210/$1 [P,L]
RewriteCond %{HTTP:Upgrade} !=websocket [NC]
RewriteRule ^/(?!static/)(.*)$ http://localhost:3210/$1 [P,L]
SSLCertificateFile /root/.lego/certificates/_.site.com.crt
SSLCertificateKeyFile /root/.lego/certificates/_.site.com.key
Having the console open and reloading to catch network activity shows this on a random image:
200 OK
Response header:
HTTP/1.1 200 OK
Date: Wed, 12 Aug 2020 10:27:50 GMT
Server: Apache
Last-Modified: Mon, 01 Jun 2020 19:03:16 GMT
ETag: "3313-5a70a727f2500"
Accept-Ranges: bytes
Content-Length: 13075
Cache-Control: max-age=28512000
Expires: Thu, 08 Jul 2021 10:27:50 GMT
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: image/png
Request header:
GET /static/img/profile_5a914158141c9367e38c4d8a.png?ver=74 HTTP/1.1
Host: hue.merkoba.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0
Accept: image/webp,*/*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Referer: https://app.site.com/
Cookie: connect.sid=s%3AUZb7Zopuy6LId0iMeMYVaKDtfuppIIxL.8F7Hx%2FeqrodOsS2fHPZjOqRDKTBM4RoBwcKiUu0PUUg; io=fbMMZAHDTJzbnoVvAAAo
Cache-Control: max-age=0
If-Modified-Since: Mon, 01 Jun 2020 19:03:16 GMT
If-None-Match: "3313-5a70a727f2500"
Some curl test:
curl -vso /dev/null
* Trying 196.83.125.124:443...
* TCP_NODELAY set
* Connected to app.site.com (192.81.135.159) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [112 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [2391 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [147 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=*.site.com
* start date: May 14 20:51:05 2020 GMT
* expire date: Aug 12 20:51:05 2020 GMT
* subjectAltName: host "app.site.com" matched cert's "*.site.com"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
} [5 bytes data]
> GET /static/img/profile_5f192733805c646004ecf4c8.png?ver=1 HTTP/1.1
> Host: app.site.com
> User-Agent: curl/7.68.0
> Accept: */*
>
{ [5 bytes data]
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Wed, 12 Aug 2020 10:30:40 GMT
< Server: Apache
< Last-Modified: Thu, 23 Jul 2020 06:01:07 GMT
< ETag: "16f27-5ab15950eeec0"
< Accept-Ranges: bytes
< Content-Length: 93991
< Cache-Control: max-age=28512000
< Expires: Thu, 08 Jul 2021 10:30:40 GMT
< Content-Type: image/png
<
{ [5 bytes data]
* Connection #0 to host app.site.com left intact
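One extra check worth doing (a sketch; the URL is the same one as above): a conditional request should come back as 304 Not Modified, which is what a browser reload triggers even when the image is still cached.

    # Send the validator from the earlier response and confirm a 304 is returned
    curl -sI 'https://app.site.com/static/img/profile_5f192733805c646004ecf4c8.png?ver=1' \
         -H 'If-None-Match: "16f27-5ab15950eeec0"' | head -1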

madprops
(200 rep)
Aug 12, 2020, 10:35 AM
• Last activity: Aug 12, 2020, 03:23 PM
2
votes
0
answers
80
views
Automatically store based on file size - linux - bcache?
Hoping to get answers to this on **Debian** based distros:
When saving a file, from **any** program, I'd wish to know if it is possible for the OS to automatically choose where to save based on file size. For example, imagine a system with a "big" HDD and a "small" SSD (the OS being installed on another SSD): if a create, copy or move command is issued on a file larger than, say, 100 MB, save that file to the HDD, otherwise to the SSD.
I know there are caching solutions that place frequently used files to the cache storage, but that's not what I want. I want all files smaller than some amount to be stored in the "small" SSD **without user intervention**, no matter from what program the file is created, as long as that file is created by the user.
**Can this be done in Bcache** (or any other solution)?
P.S. This requirement is only for **user data** files (documents, photos, movies, downloaded files). I'm not interested in OS, program or system generated files... let those decide their own location in the system mount drive. Also, I would like the solution **not** to "make a copy" on the cache, I do not want to have duplicates. A separate backing solution would do this.
J. Ramos
(21 rep)
Feb 28, 2020, 09:00 PM
• Last activity: Feb 28, 2020, 09:14 PM
1
votes
1
answers
1096
views
Testing bcache with raspberry pi 4 on ubuntu
I was testing bcache on a Raspberry Pi 4 with Ubuntu. The reason I chose Ubuntu is that I found standard Raspbian had some issues with bcache (the kernel module was not loaded properly). I tried to troubleshoot it a bit, but then I moved to Ubuntu and it worked straight away.
My setup is like this.
1 x 1TB HGST 5400RPM 2.5 laptop hard disk
1 x 256GB WD Green 2.5 SSD
Raspberry pi 4 4GB model with large heat-sink for cooling and 4A power.
I hooked up both the HDD and the SSD to the Raspberry Pi (both externally powered) using the USB 3.0 ports and booted into Ubuntu. First I checked for under-voltage errors and found everything normal.
SSD -> /dev/sda
HDD -> /dev/sdb
Then I created one partition on each drive and created the bcache as follows:
make-bcache -B /dev/sdb1
make-bcache -C /dev/sda1
Then I mounted /dev/bcache0 on /datastore and attached the cache device as follows:
echo MYUUID > /sys/block/bcache0/bcache/attach
Then I enabled write-back cache
echo writeback > /sys/block/bcache0/bcache/cache_mode
Then I installed the vsftpd server, made the FTP root directory my bcache0 mount point, and started testing. In the first few tests I could upload files at 113 MB/s, and I noticed most of the files were written directly to the backing device even though the cache was attached.
When I checked the status using the bcache-status script (https://gist.github.com/damoxc/6267899), I saw that most of the writes missed the cache and went directly to the backing device, and that the 113 MB/s was coming directly from the mechanical hard drive :-O ?
Then I started to fine-tune, as suggested in the "Troubleshooting performance" part of the https://www.kernel.org/doc/Documentation/bcache.txt document.
First I set sequential_cutoff to zero by executing this command:
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
After this I could instantly see that cache hits on the SSD increased. At the same time I was running iostat continuously, and I could see from iostat that the SSD was being accessed directly. But after a few minutes my FileZilla client hung and I could not restart the FTP upload stream. And when I tried to access the bcache0 mount, it was really slow. The cache status was showing as "dirty".
Then I restarted the Pi, attached the device again, and set the settings below:
echo 0 > /sys/fs/bcache/MYUUID/congested_read_threshold_us
echo 0 > /sys/fs/bcache/MYUUID/congested_write_threshold_us
According to the https://www.kernel.org/doc/Documentation/bcache.txt article, this is to stop bcache from tracking backing-device latency. But even with this option my FTP upload stream kept crashing. Then I set everything back to default; still, with a large number of file uploads, it crashes.
And I noticed that during the tests the Pi's CPU was not fully utilized.
The maximum throughput I can get over the Pi 4's 1 Gbps Ethernet is 930 Mbps, which is extremely good. The HGST drive, when I tested it with CrystalDiskMark on NTFS, was able to write up to 90 MB/s. It seems I can get 113 MB/s on the Pi since the file system is ext4.
If I can get more than 80 MB/s FTP upload speed I'm OK with that. My questions are:
Why does the FTP stream keep crashing when used with bcache, and why does the bcache mount get slow over time?
Why is there very low cache usage even with sequential_cutoff set to 0?
Has anyone tested bcache with a Raspberry Pi 4 before? If yes, how can I use the SSD for caching properly?
And finally, can someone explain more about how bcache actually works when it is in writeback mode? I only use this for archival data and I don't need a "hot data on SSD" kind of setup.
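For diagnosing this kind of setup, a sketch of the sysfs counters (paths as documented in bcache.txt, with bcache0 assumed) that show whether writes are actually landing on the SSD:

    # Hit/miss counters and hit ratio since the cache was created
    cat /sys/block/bcache0/bcache/stats_total/cache_hits
    cat /sys/block/bcache0/bcache/stats_total/cache_misses
    cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio
    # Dirty data waiting to be written back from the SSD to the HDD
    cat /sys/block/bcache0/bcache/dirty_data
    # Current cache mode and sequential cutoff
    cat /sys/block/bcache0/bcache/cache_mode
    cat /sys/block/bcache0/bcache/sequential_cutoff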
sameera
(304 rep)
Dec 9, 2019, 04:33 AM
• Last activity: Dec 10, 2019, 03:59 AM
Showing page 1 of 20 total questions