Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3
votes
2
answers
4231
views
Cannot access HDD files through Linux Mint Live USB
I have valuable info on my Ubuntu partition, but it crashed, and I tried to get to it through a Live USB with Mint 14, but it says it's read-only. Can I make it writable too, so I can put the data on my flash drive?
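A minimal sketch of the usual rescue flow from the live session, assuming the Ubuntu root is ext4 on /dev/sda1 (the device name, USER, and USBSTICK are placeholders):
```
lsblk -f                            # identify the Ubuntu partition first
sudo fsck -y /dev/sda1              # an unclean journal often forces read-only mounts
sudo mkdir -p /mnt/rescue
sudo mount /dev/sda1 /mnt/rescue    # a plain mount is read-write by default
cp -r /mnt/rescue/home/USER/Documents /media/mint/USBSTICK/
```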
Brandon laizure
(31 rep)
Mar 22, 2013, 01:42 AM
• Last activity: Aug 7, 2025, 05:01 AM
0
votes
2
answers
2793
views
How do I mount a disk on the /var/log directory even if processes are writing to it?
I would like to mount a disk on /var/log; the thing is, there are some processes/services writing into it, such as openvpn or the system logs. Is there a way to mount a filesystem there without having to restart the machine or stop the services?
Many thanks
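One hedged approach that avoids a reboot: mount the new disk elsewhere, copy the logs, then shadow the old directory and nudge the writers to reopen their files (/dev/sdb1 and the service names are assumptions):
```
sudo mkdir /mnt/newlog
sudo mount /dev/sdb1 /mnt/newlog
sudo rsync -aX /var/log/ /mnt/newlog/
sudo mount --bind /mnt/newlog /var/log    # the new fs now shadows the old directory
# Writers holding open descriptors keep appending to the hidden originals
# until they reopen their log files:
sudo systemctl kill -s HUP rsyslog
sudo systemctl restart openvpn
```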
LinuxEnthusiast
(1 rep)
Aug 10, 2020, 10:10 AM
• Last activity: Aug 1, 2025, 11:02 PM
1
votes
2
answers
5195
views
Trying to install Fedora Silverblue, media check keeps failing at around 4%, what am I doing wrong?
I am trying to boot Fedora Silverblue onto my laptop via a USB drive and I keep getting:
```
[FAILED] Failed to start checkisomd5@dev-sdb.service - Media check on /dev/sdb.
dracut-initqueue: Job for checkisomd5@dev-sdb.service failed because the control process exited with error code.
dracut-initqueue: See "systemctl status checkisomd5@dev-sdb.service" and "journalctl -xeu checkisomd5@dev-sdb.service" for details
dracut: FATAL: CD check failed!
dracut: Refusing to continue
```
I tried running the ISO I downloaded directly on a VM through VBox and it worked fine, any clue why I keep getting this error?
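A repeated failure at the same early percentage usually points at the stick or the way it was written rather than the ISO itself (which the VM test supports). Two hedged checks; filenames and /dev/sdX are placeholders:
```
# Verify the download against Fedora's published CHECKSUM file:
sha256sum -c Fedora-Silverblue-CHECKSUM
# Rewrite the USB stick as a raw image copy:
sudo dd if=Fedora-Silverblue.iso of=/dev/sdX bs=4M status=progress conv=fsync
```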
YungSheldon
(11 rep)
Nov 30, 2022, 10:11 AM
• Last activity: Jul 31, 2025, 07:06 AM
377
votes
11
answers
1033497
views
How can I monitor disk io?
I'd like to do some general disk I/O monitoring on a Debian Linux server. What are the tools I should know about that monitor disk I/O, so I can see if a disk's performance is maxed out or spikes at certain times throughout the day?
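A hedged starter set on Debian: `iostat` for per-device saturation, `iotop` for per-process usage, and `sar` for history across the day:
```
sudo apt install sysstat iotop
iostat -dxm 2      # per-device stats every 2s; %util shows saturation
sudo iotop -o      # per-process I/O, only processes currently doing I/O
sar -d             # historical per-device stats (needs sysstat collection enabled)
```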
camomileCase
(3995 rep)
Nov 8, 2012, 06:42 PM
• Last activity: Jul 10, 2025, 05:27 AM
5
votes
1
answers
2442
views
Disable writeback cache throttling - tuning vm.dirty_ratio
I have a workload with extremely high write burst rates for short periods of time. The target disks are rather slow, but I have plenty of RAM and am very tolerant of instantaneous data loss.
I've tried tuning vm.dirty_ratio to maximize the use of free RAM space to be used for dirty pages.
# free -g
total used free shared buff/cache available
Mem: 251 7 213 3 30 239
Swap: 0 0 0
# sysctl -a | grep -i dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 90000
vm.dirty_ratio = 90
However, it seems I'm still encountering some writeback throttling based on the underlying disk speed. How can I disable this?
# dd if=/dev/zero of=/home/me/foo.txt bs=4K count=100000 oflag=nonblock
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 10.2175 s, 40.1 MB/s
As long as there is free memory and the dirty ratio has not yet been exceeded - I'd like to write at full speed to the page cache.
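As I understand the writeback code, throttling ramps up between `vm.dirty_background_ratio` and `vm.dirty_ratio`, so with background at 5 a burst is slowed long before 90% is reached. One hedged experiment (values are illustrative, not a recommendation):
```
# Raise the background threshold so the burst can land in RAM first:
sudo sysctl vm.dirty_background_ratio=70 vm.dirty_ratio=90
# Re-run the burst and watch dirty pages actually accumulate:
dd if=/dev/zero of=/home/me/foo.txt bs=4K count=100000 &
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```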
Linux Questions
(51 rep)
Nov 29, 2018, 04:55 AM
• Last activity: Jun 25, 2025, 04:07 PM
55
votes
1
answers
65075
views
Why is most of the disk IO attributed to jbd2 and not to the process that is actually using the IO?
When monitoring disk IO, most of the IO is attributed to jbd2, while the original process that caused the high IO is attributed a much lower IO percentage. Why?
Here's `iotop`'s example output (other processes with I/O < 1% omitted):
(iotop screenshot omitted; jbd2 appears at the top of the list)
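jbd2 is the ext4 journal thread committing metadata on behalf of other processes, so its I/O is really theirs. A hedged way to find the true writer:
```
sudo iotop -ao     # accumulate totals so the real writer stands out over time
# Or watch the jbd2 tracepoints directly (trace-cmd package):
sudo trace-cmd record -e jbd2 sleep 10
sudo trace-cmd report | head -n 20
```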
Sparkler
(1109 rep)
Feb 8, 2017, 11:24 PM
• Last activity: Jun 24, 2025, 08:01 AM
1
votes
1
answers
744
views
Constant hdd write but iotop shows nothing
The disk activity monitor widget in KDE (Debian) shows constant HDD write around 12 MiB/s, but when I run `iotop`, there is nothing that would be constantly using the HDD. When I run `atop`, at first `PAG` is red and blinking but disappears after about 3 seconds. When I run `free -h`, I get:
total used free shared buff/cache available
Mem: 7.7Gi 2.2Gi 3.0Gi 1.1Gi 2.5Gi 4.2Gi
Swap: 7.9Gi 0.0Ki 7.9Gi
Any idea what could be causing this or how to find out?
Also, I tried to clear the cache; it dropped to 1.5 Gi but was back to 2.5 Gi (as shown above) after less than 5 minutes. I also think Debian is using quite a lot of memory, given that only Firefox with the Stack Exchange window is open.
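Two hedged ways to catch a writer that a live iotop view misses (fatrace is a separate Debian package):
```
sudo iotop -aoP    # -a accumulates since start, so bursty writers show up
sudo fatrace -f W  # live list of which files are being written, system-wide
```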
atapaka
(675 rep)
Sep 3, 2022, 02:56 PM
• Last activity: Jun 9, 2025, 05:57 PM
1
votes
1
answers
2956
views
How to run fio verify reliably in Linux
I am using `fio` over disks exposed through iSCSI. I am giving these params to `fio`:
```
fio --name=randwrite --ioengine=libaio --iodepth=64 --rw=randrw --rwmixread=50 --bs=4k-2M --direct=1 -filename=data --numjobs=1 --runtime 36000 --verify=md5 --verify_async=4 --verify_backlog=100000 --verify_dump=1 --verify_fatal=1 --time_based --group_reporting
```
With the above parameters, can `fio` send overlapping concurrent writes larger than the page size? If yes, then how does `fio` verify the checksum, given that atomicity of I/O is not guaranteed across the page size?
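fio itself does not make overlapping async writes atomic; its `serialize_overlap` option exists for this verify problem. A hedged variant of the job above:
```
fio --name=verify-run --ioengine=libaio --iodepth=64 --rw=randrw --rwmixread=50 \
    --bs=4k-2M --direct=1 --filename=data --size=10G \
    --serialize_overlap=1 --verify=md5 --verify_fatal=1
```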
rishabh mittal
(31 rep)
Jul 27, 2019, 04:52 AM
• Last activity: Jun 5, 2025, 02:06 AM
0
votes
0
answers
37
views
RAID6 Array Inactive, Multiple Missing Drives
Alright, hopefully someone is able to help me out here:
My RAID6 array in mdadm just failed today. Sequence of events as follows:
1) PLEX server is not functioning, though it appeared that everything else was working fine... seemed to be a network issue, so I restarted the server/computer.
2) While restarting, the computer seemed to hang up on boot up... I let it run, went out of the room and when I came back, my kid had turned off the power, said it looked "frozen"... 6 year olds...
3) Restarted again, booted up fine, PLEX server connected, everything seemed fine. Fast forward several hours, in use, no issues.
4) Problem with Plex again, server not finding files. I look, and >80% of files are missing now (Plex itself can still connect to devices, so it seems the original issue may be unrelated to the RAID problem).
5) Stupidly shut down and attempt reboot, during shut down a bunch of error messages pop up, but before I can take a picture or read them clearly, computer completes shutdown and screen goes black.
6) Restart computer and the RAID6 array is gone.
My guess is this is not directly related to the earlier issues, other than maybe that the "freeze" and hard shutdown might have finished off a drive that was already on the edge.
What I have been able to ascertain at this point:
1) All 7 drives in the array show up under lsblk, ran smartctl and they all seem okay (though definitely old).
2) On running `cat /proc/mdstat` I find two arrays: one is my old array, which is functioning fine, and the other is called md127, which is an incorrect number. The correct one should be md111 (I believe).
3) I can find under md127 that it is a 7 drive array and only 4 devices are connected, which are 4 of the drives from the lost array.
I did check cable connections (do not have an extra known good set unfortunately), but on rebooting, the 4 listed drives connected to MD127 have changed to other drives in the array (E C B G instead of C A D B)
Lastly, I can see that there was something that happened this evening around 17:10. Using mdadm --examine, the Update Time for 1 drive (sdc) is back in February, for two other drives (sde sdg) at 17:10:41, and then at 17:10:56 for the last 4 drives (sdb, sdd, sdf, sdh). The sdc has 9184 events, sde and sdg have 64641, and the other 4 all have 64644 events.
Sorry for the wall of text, but I will freely admit that I am utterly lost at this point. Any help or direction would be greatly appreciated. The only lead I have been able to find is to attempt to run or re-create the array, but I am not sure if that would work, and I am concerned about data loss (which I realize may already be a foregone conclusion, but I am grasping at straws). I suspect that I need to add the missing drives back to the array, but again I am not sure how to do so (especially since I am not clear on what exact order they should be added in).
Thank you all again for any help.
Update:
On another reboot trying to problem solve, the md127 array is now showing all 7 disks as part of it:
```
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sdi sdo sdl sdm sdj sdk sdn
      19534430720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
md127 : inactive sdc(S) sdh(S) sdg(S) sdd(S) sdf(S) sde(S) sdb(S)
      54697261416 blocks super 1.2
unused devices: <none>
```
The other one, md0, is an unrelated array and is working fine. Not sure where to go from here. I believe the (S) after each drive means it is being treated as a spare? I also tried the following:
```
sudo mdadm --run /dev/md127
mdadm: failed to start array /dev/md/snothplex:111: Input/output error
```
Edit #2... Fixed-ish? Here is the output of --detail and --examine:
```
sudo mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 level=raid6 num-devices=7 metadata=1.2 name=snothplex:111 UUID=58f4414e:13ba3568:b83070b2:b3f561d2
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
ARRAY /dev/md/snothplex:0 level=raid6 num-devices=7 metadata=1.2 name=snothplex:0 UUID=1b0a0747:27bc69f4:f298f7ae:591d022e
   devices=/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdm,/dev/sdn,/dev/sdo

sudo mdadm --examine --scan --verbose
ARRAY /dev/md/111 level=raid6 metadata=1.2 num-devices=7 UUID=58f4414e:13ba3568:b83070b2:b3f561d2 name=snothplex:111
   devices=/dev/sdh,/dev/sdf,/dev/sdb,/dev/sdd,/dev/sdc,/dev/sde,/dev/sdg
ARRAY /dev/md/0 level=raid6 metadata=1.2 num-devices=7 UUID=1b0a0747:27bc69f4:f298f7ae:591d022e name=snothplex:0
   devices=/dev/sdo,/dev/sdn,/dev/sdk,/dev/sdj,/dev/sdi,/dev/sdl,/dev/sdm
```
I attempted to do --assemble --force:
```
sudo mdadm --assemble --force /dev/md111
mdadm: /dev/md111 not identified in config file.

sudo mdadm --assemble --force /dev/md127
mdadm: Found some drive for an array that is already active: /dev/md/snothplex:111
mdadm: giving up.
```
I then stopped the array (again referencing the incorrect md127):
```
samuel3940@snothplex:~$ sudo mdadm --stop /dev/md127
mdadm: stopped /dev/md127
```
And then tried assemble again:
```
samuel3940@snothplex:~$ sudo mdadm --assemble --force /dev/md127
mdadm: Fail create md127 when using /sys/module/md_mod/parameters/new_array
mdadm: forcing event count in /dev/sdf(1) from 64641 upto 64644
mdadm: forcing event count in /dev/sdg(3) from 64641 upto 64644
mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdf
mdadm: clearing FAULTY flag for device 6 in /dev/md127 for /dev/sdg
mdadm: Marking array /dev/md127 as 'clean'
mdadm: /dev/md127 has been started with 6 drives (out of 7).
```
And it works. Sorta. Obviously the oldest failed drive is not initialized, but the files are all back. So currently I am pulling off any crucial data. It has also gone into a resync, unsurprisingly, but I figure doing reads is fine (just no writes). Otherwise, I suppose it is time to get a new drive or two, wait for the resync to finish, and cross my fingers it doesn't fail again before I can get an alternative setup. Thank you again; I will update if anything changes and on how the resync goes.
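A hedged follow-up once the resync completes, so the array reassembles under a stable name at boot (Debian-style paths are assumptions):
```
# Record the current array definitions:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so early boot sees the updated config:
sudo update-initramfs -u
```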
Samuel Nothnagel
(1 rep)
Jun 2, 2025, 03:17 AM
• Last activity: Jun 2, 2025, 03:25 PM
-5
votes
1
answers
68
views
Why does the Use% shown by the df command not reflect the correct value?
On our RHEL 7.9 systems, we have noticed some strange behavior.
Each machine has 4 disks, each with an 8TB capacity, but we are only using about 1.9TB on a partition.
What doesn't make sense is that the Use% value appears incorrect. As shown below, the usage of `/data/sde` is reported as 15%, but the actual used space is only 267MB, while the `/dev/sdd1` partition size is 1.9TB.
Filesystem Size Used Avail Use% Mounted on
/dev/sdd1 1.9G 267M 1.6G 15% /data/sde
Could this be related to how the partition was created, or is there something else causing this discrepancy?
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdg1 1.9G 5.7M 1.8G 1% /data/sdh
/dev/sdf1 1.9G 5.7M 1.8G 1% /data/sdg
/dev/sdb1 1.9G 44M 1.8G 3% /data/sdc
/dev/sdd1 1.9G 267M 1.6G 15% /data/sde
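df reports the size of the filesystem, not the disk, so a 1.9G filesystem on an 8TB drive simply means the partition or filesystem was created small. A hedged way to confirm:
```
lsblk /dev/sdd                             # partition size vs. the whole disk
sudo parted /dev/sdd unit GB print free    # shows any unallocated space
```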
yael
(13936 rep)
May 27, 2025, 12:28 PM
• Last activity: May 27, 2025, 12:56 PM
0
votes
2
answers
5055
views
Does FSCK repair / mark bad sectors as it scans, and is it possible to resume a scan from an offset?
**System**: macOS 10.14.6
**Overview**:
One of the HDDs in the system was giving issues and I suspected the old disk was dying. I wanted to check for bad sectors on it. It uses the Mac OS Extended (Journaled) filesystem. So I started a scan of the disk with fsck_hfs:
bash-3.2# fsck_hfs -S -E /dev/disk0
But even after more than 12-13 hours overnight it had only scanned around 66% of the 1TB drive:
** /dev/rdisk0 (NO WRITE)
Scanning entire disk for bad blocks
Scanning offset 6615812001408 of 1000204886016 (66%)
and I had to interrupt it as the system was needed.
**Questions**:
1. Does FSCK mark the bad sectors as it scans (or does it only do this after the scan is complete)?
2. If the first case is true, is there any option to resume scanning from the offset specified in the status message (i.e. from block 6615812001408)?
3. Are there any better system tools to scan disks for bad sectors that support resuming if the operation has to be interrupted?
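For question 2, one hedged way to resume a read-only scan at an offset is dd on the raw device (BSD dd on macOS; the skip value is a placeholder computed as resume offset divided by block size):
```
# bs=1m, so skip = offset_bytes / 1048576; read errors are reported and skipped:
sudo dd if=/dev/rdisk0 of=/dev/null bs=1m skip=630935 conv=noerror
```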
sfxedit
(113 rep)
Apr 21, 2020, 08:36 AM
• Last activity: May 23, 2025, 07:05 PM
2
votes
1
answers
3498
views
How can I extend the /home partition size?
On my work computer my **/home** folder keeps filling up. There is enough space on the SSD but I don't know how to use it to make my **/home** partition bigger.
Can I take space from root and give it to home?
And there is **200 GB of free space**, but I cannot extend **Partition 2** because of the **EFI** partition in the middle:
(partition layout screenshot omitted)
How can I add it to home?
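If the layout blocks a straight resize, one hedged workaround avoids moving the EFI partition: format the 200 GB of free space as its own partition and mount it inside /home (the partition name and directory are assumptions):
```
sudo mkfs.ext4 /dev/nvme0n1p4
sudo mkdir -p /home/USER/bigdata
sudo mount /dev/nvme0n1p4 /home/USER/bigdata
echo '/dev/nvme0n1p4 /home/USER/bigdata ext4 defaults 0 2' | sudo tee -a /etc/fstab
```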
user553770
Dec 20, 2022, 08:43 AM
• Last activity: May 19, 2025, 02:21 PM
5
votes
1
answers
3282
views
How to recover data from EBS volume showing no partition or filesystem?
I restored an EBS volume and attached it to a new EC2 instance. When I run `lsblk`, I can see it under the name `/dev/nvme1n1`. More specifically, the output of `lsblk` is:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 25M 1 loop /snap/amazon-ssm-agent/4046
loop1 7:1 0 55.4M 1 loop /snap/core18/2128
loop2 7:2 0 61.9M 1 loop /snap/core20/1169
loop3 7:3 0 67.3M 1 loop /snap/lxd/21545
loop4 7:4 0 32.5M 1 loop /snap/snapd/13640
loop5 7:5 0 55.5M 1 loop /snap/core18/2246
loop6 7:6 0 67.2M 1 loop /snap/lxd/21835
nvme0n1 259:0 0 8G 0 disk
└─nvme0n1p1 259:1 0 8G 0 part /
nvme1n1 259:2 0 100G 0 disk
As you can see, `nvme1n1` has no partitions. As a result, when I try to mount it on a folder with:
sudo mkdir mount_point
sudo mount /dev/nvme1n1 mount_point/
I get
mount: /home/ubuntu/mount_point: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
The volume has data inside:
ubuntu@ubuntu:~$ sudo file -s /dev/nvme1n1
/dev/nvme1n1: data
Using `sudo mkfs -t xfs /dev/nvme1n1` to create a filesystem is not an option, as Amazon states that:
> **Warning**
> Do not use this command if you're mounting a volume that already has data on it (for example, a volume that was created from a snapshot). Otherwise, you'll format the volume and delete the existing data.
Indeed, I tried it with a second dummy EBS snapshot that I recovered, and all I got was a dummy `lost+found` Linux folder.
This recovered EBS snapshot has useful data inside. How can I mount it without destroying it?
---
# parted -l /dev/nvme1n1 print
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme0n1: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 8590MB 8589MB primary ext4 boot
Error: /dev/nvme1n1: unrecognised disk label
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme1n1: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
dmesg | grep nvme1n1
[ 68.475368] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
[ 96.604971] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
[ 254.674651] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
[ 256.438712] EXT4-fs (nvme1n1): VFS: Can't find ext4 filesystem
$ sudo fsck /dev/nvme1n1
fsck from util-linux 2.34
e2fsck 1.45.5 (07-Jan-2020)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/nvme1n1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
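Before any repair attempt, a few read-only probes can reveal whether the volume carries a non-ext signature (LVM, LUKS, XFS) or whether the data starts at an offset:
```
sudo blkid -p /dev/nvme1n1    # low-level superblock probe
sudo wipefs /dev/nvme1n1      # lists detected signatures without modifying anything
sudo dd if=/dev/nvme1n1 bs=512 count=8 2>/dev/null | hexdump -C | head -n 20
```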
HelloWorld
(1785 rep)
Nov 8, 2021, 12:08 PM
• Last activity: May 19, 2025, 08:41 AM
29
votes
5
answers
52811
views
Is there a good drive torture test tool?
I have been having odd and rare filesystem corruption lately that I suspect is the fault of my SSD. I am looking for a good drive torture test tool. Something that can write to the whole disk, then go back and read it looking for flying writes, corrupted blocks, blocks reverted to older revisions, and other errors. This would be much more than what `badblocks` does. Is there such a tool?
Note I am *not* looking for a performance benchmark and already checked the SMART status; says healthy and no bad blocks reported.
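One hedged candidate beyond badblocks' fixed patterns: fio's verify mode stores a checksum and offset header per block, so misdirected and reverted writes are caught on read-back. Destructive to all data; /dev/sdX is a placeholder:
```
# Write the whole device with verifiable data, then verify it in the same run:
sudo fio --name=torture --filename=/dev/sdX --direct=1 --rw=write --bs=1M \
         --verify=crc32c
# Re-check later (e.g. after a power cycle) without writing again:
sudo fio --name=torture --filename=/dev/sdX --direct=1 --rw=write --bs=1M \
         --verify=crc32c --verify_only
```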
psusi
(17622 rep)
Apr 15, 2013, 11:43 PM
• Last activity: May 16, 2025, 04:07 PM
1
votes
1
answers
862
views
gnome-disks write caching
in **RHEL/CentOS 7.9** anyway, when running `gnome-disks`, which is under the Applications-Utilities-Disks menu, for a recognized SSD it offers the enabling of **write-cache**.
(screenshot of the Disks write-cache switch omitted)
I would like to know what technically is happening when turning this on that wasn't already happening.
*I was under the impression, whether it was an SSD or a conventional spinning hard disk, that Linux inherently does **disk caching**. This impression mainly comes from reading that www.linuxatemyram.com page years ago.*
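For a rough picture of what the switch controls, a hedged sketch: it toggles the drive's own volatile write cache, distinct from the kernel's page cache (which is always on); on ATA devices this is the same setting hdparm manipulates:
```
sudo hdparm -W /dev/sda     # query the drive-level write cache (device is an assumption)
sudo hdparm -W1 /dev/sda    # enable it; -W0 disables
```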
ron
(8647 rep)
Feb 6, 2022, 03:15 AM
• Last activity: May 8, 2025, 09:01 PM
2
votes
1
answers
1930
views
How to free space in /tmp and then move that free space to /usr?
When I installed Fedora, I was sure the space I gave to `/usr` was more than enough, but in the end, after I installed some development packages, I have used about 65%, which is unexpectedly high.
When I run `df -h -T`:
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs tmpfs 3.9G 183M 3.7G 5% /dev/shm
tmpfs tmpfs 3.9G 1.9M 3.9G 1% /run
/dev/mapper/fedora_localhost--live-root ext4 24G 2.0G 21G 9% /
/dev/mapper/fedora_localhost--live-usr ext4 11G 6.7G 3.7G 65% /usr
/dev/mapper/fedora_localhost--live-var ext4 11G 961M 9.1G 10% /var
/dev/mapper/fedora_localhost--live-tmp ext4 5.9G 27M 5.5G 1% /tmp
/dev/mapper/fedora_localhost--live-home ext4 23G 4.1G 18G 20% /home
/dev/sda5 ext4 1.1G 283M 722M 29% /boot
tmpfs tmpfs 791M 256K 790M 1% /run/user/1000
I was having problems with `/tmp` filling up in the past, so I increased it a lot, but now it seems to be used very little. So I want to take the unused disk space on `/tmp` and move it to `/usr`. Is that possible?
Here is the output when I run `vgdisplay fedora_localhost-live`:
VG Name fedora_localhost-live
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size <84.46 GiB
PE Size 4.00 MiB
Total PE 21621
Alloc PE / Size 21621 / <84.46 GiB
Free PE / Size 0 / 0
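Since the VG shows Free PE 0, space has to come out of the tmp LV first. A hedged sketch, assuming the LV names match the df output above; note that shrinking ext4 requires it to be unmounted:
```
sudo umount /tmp
sudo lvreduce -L 2G -r fedora_localhost-live/tmp    # -r shrinks the ext4 fs with the LV
sudo mount /tmp
sudo lvextend -l +100%FREE -r fedora_localhost-live/usr    # ext4 can grow while mounted
```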
Pocket
(121 rep)
Sep 28, 2020, 08:35 AM
• Last activity: Apr 27, 2025, 08:01 AM
0
votes
1
answers
161
views
Journaling block device (jbd2) hammering my SSD
Ubuntu MATE 24.04 LTS
System monitor in panel shows something like
(system monitor screenshot of spiky disk activity omitted)
all day long. Hovering the mouse over the image shows disk usage hitting 83.3% at those peaks.
When I run `sudo iotop`, an item that is regularly at the top is `[jbd2/dm-1-8]`, which would appear to be the *Journaling block device* doing its thing with my system disk `/` (which is a LUKS encrypted SSD with LVM).
According to The Linux Kernel – Journal (jbd2)
> …the ext4 filesystem employs a journal to protect the filesystem against metadata inconsistencies in the case of a system crash. Up to 10,240,000 file system blocks can be reserved inside the filesystem as a place to land “important” data writes on-disk as quickly as possible …The effect of this is to guarantee that the filesystem does not become stuck midway through a metadata update.
So it's important.
A comment from the questioner on the answer to 'Jdb2 constantly writes to HDD' suggests that jbd2 can be 'calmed down' (my term) by disabling and re-enabling journaling: `sudo tune2fs -O ^has_journal` followed by `sudo tune2fs -O has_journal`.
Is this a good idea?
Is there a bug in jbd2?
Is there a better fix?
Or is this just to be expected on a system using ext4?
---
**Update following @ComputerDruid's comment**
>It may be helpful to include the output of findmnt / to show the mount options you are using
$ findmnt /
TARGET SOURCE                            FSTYPE OPTIONS
/      /dev/mapper/ubuntu--vg-ubuntu--lv ext4   rw,relatime
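A gentler knob than toggling has_journal: ext4's commit interval batches journal flushes. A hedged sketch for the mount shown above; the trade-off is that up to 60 seconds of recent changes can be lost on a crash, while the filesystem itself stays consistent:
```
sudo mount -o remount,commit=60 /
# To persist it, add commit=60 to the options column in /etc/fstab:
# /dev/mapper/ubuntu--vg-ubuntu--lv  /  ext4  rw,relatime,commit=60  0  1
```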
AlanQ
(97 rep)
Apr 15, 2025, 01:20 PM
• Last activity: Apr 16, 2025, 03:31 AM
2
votes
1
answers
52
views
How to prioritize SSD page cache eviction over HDD with slower speed?
I have a large slow HDD and a small fast SSD. This is about reads, not [RAID](https://unix.stackexchange.com/q/471551/524752). My desktop grinds to a near-halt when switching back to Firefox or man pages after (re/un)-loading 12+ GiB of Linux kernel build trees and 39 GiB total of different LLMs on the SSD, while I only have 31 GiB of RAM:
$ free -h
total used free shared buff/cache available
Mem: 31Gi 10Gi 2.4Gi 1.0Gi 19Gi 20Gi
Swap: 0B 0B 0B
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 1.7G 0 part /boot
└─sda3 8:3 0 1.8T 0 part
└─sda3_crypt 254:0 0 1.8T 0 crypt
├─vgubuntu-root 254:1 0 1.8T 0 lvm /
└─vgubuntu-swap_1 254:2 0 1.9G 0 lvm
nvme0n1 259:0 0 953.9G 0 disk
└─nvme0n1p1 259:1 0 100G 0 part
└─luks-... 254:3 0 100G 0 crypt /media/...
$ sysctl vm.swappiness
vm.swappiness = 60
The SSD is fast, so I'd rather Linux evict the SSD's page-cached files first. Its uncached read time is seconds anyway. What should stop is eviction of any file under `/usr` or `/home`. My `man bash` and `dpkg -S bin/bash` return instantly from the page cache, but uncached they take half a minute after exiting the LLMs. More severely, Firefox needs my `~/.mozilla` folder for history and cache; with it uncached, waiting for the address bar to work takes minutes.
I am looking for a configuration option. `systemd-run` could set MemoryMax for `ktorrent`, but I frequently restart `llama-server` to switch between the ~6 GiB LLMs, and I don't want a separate daemon to keep the cgroup alive. The `man` and `dpkg` problems will be fixed when my `/` moves to the SSD once I sort out `fscrypt` fears; in the meantime, `/usr` on `tmpfs` would leave insufficient available RAM and `overlayfs` is too much complexity. The LLM workload could, but shouldn't, remount the SSD as a workaround. That leaves the `nice`d kernel build workload still evicting my web browsing one's cache.
I looked in `/sys/block` but couldn't find the right config. [Cgroups v2](https://docs.kernel.org/admin-guide/cgroup-v2.html) has per-device options, but only for parallel write workloads (`io.max`), not for controlling how sequential workloads affect the cache. A [2011 patch](https://lore.kernel.org/lkml/4DFE987E.1070900@jp.fujitsu.com/T/) and a [2023 question](https://unix.stackexchange.com/q/755527/524752) don't see any userspace interface. Which setting can be used to force the SSD's page cache to be evicted before that of the HDD?
Daniel T
(195 rep)
Apr 13, 2025, 10:44 PM
• Last activity: Apr 14, 2025, 02:07 PM
9
votes
4
answers
2048
views
How to compress disk swap
I am looking for a way to compress swap on disk. *I am not looking for wider discussion of alternative solutions. See discussion at the end.*
I have tried...
Using a compressed zfs zvol for swap is NOT WORKING. The setup of it works, swapon works, swapping does happen somewhat, so I guess one could argue it's *technically* a working solution, but exactly how "working" is your working solution if it's slower than floppy disk access and causes your system to completely freeze forever 10 times out of 10? Tried it several times -- as soon as the system enters memory pressure conditions, everything just freezes. I tried to use it indirectly as well, using losetup, and even tried using a zfs zvol as a backing device for zram. No difference, always the same results -- incredibly slow write/read rates, and the system inevitably dies under pressure.
BTRFS. Only supports *uncompressed* swapfiles. Apparently it only supports uncompressed loop images as well, because I tried dd-ing an empty file, formatting it with regular ext2, compressing it, mounting it as a loop device, and creating a swapfile inside of it. Didn't work even when I mounted btrfs with forced compression enabled -- compsize showed an ext2 image compression ratio of exactly 1.00.
Zswap -- it's just a buffer between RAM and regular disk swap. The regular disk swap keeps on being the regular disk swap; zswap uncompresses pages before writing them on there.
Zram -- has had a backing device option since its inception as compcache, and one would think it is a perfect candidate to have had compressed disk swap for years. No such luck. While you can write back compressed in-ram pages to disk at will, the pages get decompressed before they're written. Unlike zswap, it doesn't write same- and zero-filled pages though, which both saves I/O, slightly improves throughput, and warrants the use of loop-mounted sparse files as backing_dev. So far, this is the best option I found for swap optimization on low-end devices, despite it still lacking disk compression.
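For reference, a sketch of that zram-plus-writeback setup (the backing partition is an assumption; backing_dev and comp_algorithm must be set before disksize, and idle writeback needs CONFIG_ZRAM_WRITEBACK and a reasonably recent kernel):
```
sudo modprobe zram
echo /dev/sdb2 | sudo tee /sys/block/zram0/backing_dev
echo zstd     | sudo tee /sys/block/zram0/comp_algorithm
echo 8G       | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0 && sudo swapon -p 100 /dev/zram0
# Later, mark pages idle and push them to the backing device
# (they are written out decompressed, as noted above):
echo all  | sudo tee /sys/block/zram0/idle
echo idle | sudo tee /sys/block/zram0/writeback
```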
----
Any ideas what else I can try? Maybe there's some compressed block device layer, that I don't know of, that can compress anything written to it, no filesystem required? Maybe there's some compressed overlay I could make use of? Not done in FUSE though, as FUSE itself is subject to swapping, unless you know a way to prevent it from being swapped out.
Since i don't see this being explored much, you're welcome to suggest any madness you like. Please, let's throw stuff at the wall and see what sticks.
For experts -- if any of you have read, or even written, any part of the Linux source code that relates to this problem, please describe in as much detail as possible why you think this hasn't been implemented yet, and how you think it *could* be implemented, if you have any idea. And obviously, please do implement it if you can; that'll be awesome.
----------
*Discussion*
Before you mark it as a duplicate -- I'm aware there have been a few questions like this around Stack Exchange, but none I saw had a working answer, and few had any further feedback. So I'll attempt to describe details, sort of aggregate everything, here, in hopes that someone smarter than me can figure this out. I'm not a programmer, just a user and a script kiddie, so that should be a pretty low bar to jump over.
>just buy more ram, it's cheap
>get an ssd
>swap is bad
>compression is slow anyway, why bother
If all you have to say is any of the above quotes -- go away. Because the argument is **optimization**. However cheap RAM is these days, it's not free. Swap is always needed; the fact that it's good for the system to have it has been established for years now. And compression is nothing; even "heavy" algorithms perform stupidly fast on any processor made in the last decade. And lastly, sure, compression might actually become a bottleneck if you're using an SSD, but not everyone prioritizes speed over disk space usage, and HDDs, which DO benefit grossly from disk compression, are still too popular and plentiful to dismiss.
linuxlover69
(119 rep)
Mar 20, 2023, 08:34 AM
• Last activity: Apr 11, 2025, 03:31 PM
-1
votes
1
answers
103
views
Root filesystem is completely full
Hello everyone.
I have a completely filled root filesystem, but I can't figure out what is filling it.
sudo df -h /
/dev/nvme0n1p2 49G 49G 0 100% /
sudo du -h --max-depth=1 --exclude="media" --exclude="proc" --exclude="run" --exclude="home" / | sort -rh | head -10
25G /
Help me figure it out.
Thank you all for your answers, but I'll rephrase the question.
We have an SSD. In gparted, we see that the root partition is 50 GB in size and it is all used. But du shows that only 21 GB is occupied in /. Where did the other 29 GB go, and what is it occupied by?
(gparted screenshot showing the fully used 50 GB root partition omitted)
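Two classic causes of a df/du gap worth checking: deleted files still held open by a process, and files hidden underneath mount points:
```
# Deleted-but-open files hold space that du can no longer see:
sudo lsof +L1
# Bind-mount / elsewhere to du files shadowed by mounts on top of them:
sudo mkdir -p /mnt/rootonly
sudo mount --bind / /mnt/rootonly
sudo du -shx /mnt/rootonly
sudo umount /mnt/rootonly
```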
Raalgepis
(21 rep)
Apr 7, 2025, 05:23 PM
• Last activity: Apr 9, 2025, 03:16 PM