
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

3 votes
2 answers
2284 views
Btrfs, checksum corruption
I have Btrfs set up on 3 disks with metadata and data in RAID1. But now I have a checksum error which it cannot recover. The checksum is the same on both copies and only differs from the expected checksum by one flipped bit. Therefore I suspect there was a bitflip in the checksum before it was written to the disks (the computer does not have ECC RAM). I have a copy of the actual file on another computer from before it was written to this filesystem, but as shown below I cannot read out the data due to an I/O error from the filesystem, so I cannot compare them. How should I proceed to fix this error? Some details:

$ uname -a
Linux stan 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ btrfs --version
btrfs-progs v4.15.1
$ sudo btrfs fi usage /media/btrfs/
Overall:
    Device size:           7.28TiB
    Device allocated:      3.91TiB
    Device unallocated:    3.36TiB
    Device missing:          0.00B
    Used:                  3.83TiB
    Free (estimated):      1.72TiB  (min: 1.72TiB)
    Data ratio:               2.00
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)

Data,RAID1: Size:1.95TiB, Used:1.91TiB
    /dev/sdb       1.95TiB
    /dev/sdc     998.00GiB
    /dev/sdd    1001.00GiB

Metadata,RAID1: Size:4.00GiB, Used:2.63GiB
    /dev/sdb       4.00GiB
    /dev/sdc       3.00GiB
    /dev/sdd       1.00GiB

System,RAID1: Size:64.00MiB, Used:304.00KiB
    /dev/sdb      64.00MiB
    /dev/sdc      64.00MiB

Unallocated:
    /dev/sdb       1.68TiB
    /dev/sdc     861.95GiB
    /dev/sdd     861.02GiB

Scrub:

$ sudo btrfs scrub status /media/btrfs/
scrub status for xxxxxx
    scrub started at Mon Aug 24 11:23:27 2020 and finished after 03:41:54
    total bytes scrubbed: 3.81TiB with 2 errors
    error details: csum=2
    corrected errors: 0, uncorrectable errors: 2, unverified errors: 0

Dmesg errors after the scrub:

$ dmesg
...
[196755.786038] BTRFS warning (device sdb): checksum error at logical 3099310968832 on dev /dev/sdb, physical 1300730499072, root 5223, inode 6521311, offset 7614464, length 4096, links 1 (path: users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2)
[196755.786168] BTRFS warning (device sdb): checksum error at logical 3099310968832 on dev /dev/sdb, physical 1300730499072, root 5303, inode 6521311, offset 7614464, length 4096, links 1 (path: users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2)
[196755.786245] BTRFS warning (device sdb): checksum error at logical 3099310968832 on dev /dev/sdb, physical 1300730499072, root 5302, inode 6521311, offset 7614464, length 4096, links 1 (path: users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2)
...
[196755.788274] BTRFS error (device sdb): bdev /dev/sdb errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
[196755.814044] BTRFS error (device sdb): unable to fixup (regular) error at logical 3099310968832 on dev /dev/sdb

Inspect-internal on the block:

$ sudo btrfs inspect-internal logical-resolve -v 3099310968832 /media/btrfs/
ioctl ret=0, total_size=4096, bytes_left=3456, bytes_missing=0, cnt=78, missed=0
ioctl ret=0, bytes_left=4023, bytes_missing=0, cnt=1, missed=0
/media/btrfs//snapshots/stansafe.20200601T032501+0200/users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2
ioctl ret=0, bytes_left=4023, bytes_missing=0, cnt=1, missed=0
/media/btrfs//snapshots/stansafe.20200910T032501+0200/users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2
ioctl ret=0, bytes_left=4023, bytes_missing=0, cnt=1, missed=0
/media/btrfs//snapshots/stansafe.20200909T032502+0200/users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2
...

Trying to verify the file:

$ sha256sum /media/btrfs//stansafe/users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2
sha256sum: /media/btrfs//stansafe/users/joachim/Bilder/Canon/270CANON/IMG_7003.CR2: Input/output error
$ dmesg
...
[1642985.509498] BTRFS warning (device sdb): csum failed root 259 ino 6521311 off 7614464 csum 0x151ad4ce expected csum 0x150ad4ce mirror 1
[1642985.509942] BTRFS warning (device sdb): csum failed root 259 ino 6521311 off 7614464 csum 0x151ad4ce expected csum 0x150ad4ce mirror 2
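For comparison with the intact copy, one could extract just the failing 4 KiB block from it (a sketch; /backup/IMG_7003.CR2 is a placeholder path for the known-good copy on the other computer):

dd if=/backup/IMG_7003.CR2 of=/tmp/good-block.bin bs=4096 skip=1859 count=1   # 7614464 / 4096 = block 1859
sha256sum /tmp/good-block.bin

If the on-disk data is in fact intact and only the stored checksum carries the bitflip, salvaging the file with btrfs restore (a salvage tool, which does not verify data checksums) and comparing it against the backup copy would confirm the theory.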
jlublin (31 rep)
Sep 10, 2020, 05:59 AM • Last activity: Aug 6, 2025, 02:06 AM
7 votes
2 answers
467 views
Btrfs read-only file system and corruption errors
# Goal

I am trying to figure out why my file system has become read-only, so I can address any potential hardware or security issues (my main concern) and maybe fix the issue without having to reinstall everything and migrate my files from backup (I might lose some data, but probably not much). According to the manual of btrfs check:

> Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some other software or hardware bugs can fatally damage a volume.

I am thinking of trying the --repair option or btrfs scrub, but want input from a more experienced user.

# What I’ve tried

I first noticed a read-only file system when trying to update my system in the terminal. I was told:

Cannot open log file: (30) - Read-only file system [/var/log/dnf5.log]

I have run basic checks (using at least 3 different programs) on my SSD without finding anything obviously wrong. The SSD and everything else in my computer is about 6 and a half years old, so maybe something is failing. Here is the SMART Data section of the output from sudo smartctl -a /dev/nvme0n1:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0x1)
Critical Warning: 0x00
Temperature: 31 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 33,860,547 [17.3 TB]
Data Units Written: 31,419,841 [16.0 TB]
Host Read Commands: 365,150,063
Host Write Commands: 460,825,882
Controller Busy Time: 1,664
Power Cycles: 8,158
Power On Hours: 1,896
Unsafe Shutdowns: 407
Media and Data Integrity Errors: 0
Error Information Log Entries: 4,286
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 31 Celsius
Temperature Sensor 2: 30 Celsius

Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

Self-test Log (NVMe Log 0x06, NSID 0xffffffff)
Self-test status: No self-test in progress
No Self-tests Logged
I tried the following, I think from a live disk: sudo mount -o remount,rw /mount/point, but that output an error such as "cannot complete, read-only file system". sudo btrfs device stats /home **and** sudo btrfs device stats / output:
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].write_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].read_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].flush_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].corruption_errs 14
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].generation_errs 0
**This seems to suggest that corruption is only in the /home directory.** **However, sudo btrfs check /dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d stops at [5/8] checking fs roots; the end of its output was posted as a screenshot.** **Some of these files may be in the / directory, but I’m not sure without looking into it further.** The output of sudo btrfs fi usage / was also posted as a screenshot. **I think that Data,single, Metadata,DUP, and System,DUP might be saying I can repair the corruption if it’s only in metadata or system, but not if it’s the actual file data. Might be something to explore more.** The contents of /etc/fstab and the output of sudo dmesg | grep -i "btrfs" were likewise posted as screenshots. The file system is indeed unstable. Once, I wasn’t able to list any files in my /home directory, but I haven't run into this issue again across several reboots.

# What I think might be causing this

I suspect that recently changing my username, hostname, and display name (shown on the login screen) may have caused problems, because my file system became read-only about a week to a week and a half after doing so. I followed some tutorials online, but I noticed that many of my files still had the group and possibly user belonging to the old username. So I created a symbolic link at the top of my home directory pointing the old username to the new one, and it seemed like everything was fine until the read-only issue. There may have been more I did, but I don’t remember exactly, as it’s been a few weeks now. I have a history of most or all of the commands I ran, if it might be helpful. I think it may be something hardware-related, something I did, software bugs (maybe introduced by a recent update; I have a picture of the packages affected in my most recent dnf upgrade transaction, but I was unable to roll back or undo the upgrade because of the read-only file system), improper shutdowns (I may have done this while making changes to the username, hostname, and display name), or a security issue.
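A low-risk first step (a sketch, not from the question): the kernel logs the reason just before it forces a btrfs filesystem read-only, so the earliest BTRFS error of the affected boot is usually the informative one:

journalctl -k -b -1 | grep -iE 'btrfs.*(error|warning|forced readonly)' | head -n 20   # -b -1: previous boot; use -b for the current one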
Growing My Roots (351 rep)
Aug 1, 2025, 10:38 PM • Last activity: Aug 3, 2025, 02:37 AM
7 votes
1 answers
2083 views
Btrfs/ZFS Network Replication
Is it possible to replicate a ZFS or Btrfs raid volume in real-time (or as close to as possible, network specs aside) over a network? ZFS and Btrfs are ideal because of their CoW properties. I'm thinking of something similar to DRBD, but DRBD won't work because it requires a single block device, and we're ruling out the option of exporting each disk as a DRBD device because that would get messy. I don't want to use send/receive because they would be too slow, even if scripted. Ideally, I'd like something relatively simple to avoid unnecessary complexity.
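For reference, a minimal sketch of the scripted send/receive baseline being ruled out here; backuphost, /data and the snapshot names are placeholders, and an initial full send is assumed to have happened already:

OLD=/data/.snap-previous                       # last snapshot that already exists on the receiver
NEW=/data/.snap-$(date +%Y%m%dT%H%M%S)
btrfs subvolume snapshot -r /data "$NEW"       # read-only snapshot, required for send
btrfs send -p "$OLD" "$NEW" | ssh backuphost btrfs receive /backup

Looping this gets replication latency down only to the snapshot interval, which is exactly the delay the question wants to avoid.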
DevinM (171 rep)
Nov 10, 2015, 12:23 AM • Last activity: Jul 30, 2025, 04:05 PM
3 votes
1 answers
1930 views
Why is DM-Integrity so slow compared to BTRFS?
I want to detect silent corruption of block devices similar to how BTRFS does that for files. I'd even like to do that below BTRFS (and disable BTRFS's native checksumming) so that I can tweak more parameters than BTRFS allows. DM-Integrity seems like the best choice and in principle it must be doing the same thing as BTRFS. The problem is that it's incredibly, unusably slow. While sequential writes on BTRFS are 170+ MiB/s (with compression disabled), on DM-Integrity they're 8-12 MiB/s. I tried to match DM-Integrity parameters with BTRFS (sector size, hashing algorithm, etc) and I tried lots of combinations of other parameters (data interleaving, bitmapping, native vs generic hashing drivers, etc). The writes were asynchronous, but the speed was calculated based on the time it took for writes to be committed (so I don't think the difference was due to memory caching). Everything was on top of a writethrough Bcache, which should be reordering writes (so I don't think it could be BTRFS reordering writes). I can't think of any other reason that could explain this drastic performance difference. I'm using Debian 11 with a self-compiled 6.0.12 Linux kernel and sha256 as my hashing algorithm. My block layers are (dm-integrity or btrfs)/lvm/dm-crypt/bcache/dm-raid. **Is there a flaw in my testing? Or some other explanation for this huge performance difference? Is there some parameter I can change with DM-Integrity to achieve comparable performance to BTRFS?**
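For comparison, this is roughly how the dm-integrity side of such a test might be set up (a sketch; /dev/bcache0 and the mount point are placeholders standing in for the layering described above):

integritysetup format /dev/bcache0 --integrity sha256 --sector-size 4096
integritysetup open /dev/bcache0 integ --integrity sha256
mkfs.btrfs /dev/mapper/integ
mount -o nodatasum /dev/mapper/integ /mnt/test   # leave checksumming to dm-integrity

One structural difference worth noting: btrfs stores checksums in its CoW metadata trees and batches them with ordinary metadata writes, while dm-integrity by default journals both data and integrity tags, writing every block twice, which by itself can cut throughput sharply unless the journal is disabled.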
ATLief (328 rep)
Dec 30, 2022, 12:56 PM • Last activity: Jul 5, 2025, 07:08 PM
2 votes
3 answers
114 views
How to recognize theme files with btrfs subvolumes in grub
Recently, I modified my whole ext4 system to become a btrfs one, which was successful. Then, I modified the subvolumes to look like the following:
(btrfs partition block device):
├─@rootfs (subvolume for /)
├─@home (subvolume for /home)
├─@log (subvolume for /var/log)
├─@libvirt (subvolume for /var/lib/libvirt)
├─@opt (subvolume for /opt)
I was able to mount everything properly when booting after using Rescuezilla's live environment for the configuration (of /etc/fstab and grub-install), and I added root=UUID=[btrfs partition UUID] rootflags=rw,subvol=@rootfs to the grub kernel parameters. Now, I have the theme files in the /@rootfs/boot/grub directory, but Grub does not seem to recognize the location and falls back to the default theme whenever I boot into Grub. How can I make Grub recognize theme files in a btrfs subvolume of the root partition? Edit: I have created a separate luks1-encrypted boot partition to try to solve this. Everything still works except detecting the theme. The error message (which is thankfully inert) is about a nonexistent file somewhere in /usr/share/grub. I installed grub2 with grub-install --uefi-secure-boot --boot-directory=/boot --efi-directory=/boot/efi --directory=/usr/lib/grub/x86_64-efi --themes=vimix --target=x86_64-efi /dev/nvme0n1p1 The grub error message when booting is:
error: file `/usr/share/desktop-base/ceratopsian-theme/grub/grub-16x9.png' not found.
The file does exist, though, at least in the @rootfs subvolume. I read this post, which seems to describe the same issue, but its solution of sticking to the default theme is not helpful: https://forum.endeavouros.com/t/grub-error-no-server-is-specified/42389 Edit 2: Now I have luks2-encrypted the root partition and boot partition (converting luks1 ⇾ luks2, keeping pbkdf2). A new error appears stating that the cryptodisk module cannot be found and no server is specified. It is harmless, but it prevents the theme directory from even appearing if the root partition cannot be unlocked while using Grub. Post-boot unlocking works fine. The link mentioned previously has these conditions. Significant parts of /etc/default/grub:
GRUB_CMDLINE_LINUX="cryptdevice=UUID=[uuid of encrypted partition]:debian_crypt crypto=sha512:aes-xts-plain64:512:0: root=UUID=[uuid of mapped btrfs root device from encrypted partition at /dev/mapper/debian_crypt] rd.luks.name=[uuid of mapped btrfs root device from encrypted partition at /dev/mapper/debian_crypt]=debian_crypt rd.luks.options=[uuid of mapped btrfs root device from encrypted partition at /dev/mapper/debian_crypt]=tpm2-device=auto,password-echo=no,tries=1 rootflags=rw,subvol=@rootfs intel_iommu=on iommu=pt efi=runtime i915.enable_guc=2"

# If your computer has multiple operating systems installed, then you
# probably want to run os-prober. However, if your computer is a host
# for guest OSes installed via LVM or raw disk devices, running
# os-prober can cause damage to those guest OSes as it mounts
# filesystems to look for things.
GRUB_DISABLE_OS_PROBER=false

...

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
GRUB_GFXMODE=1920x1200,auto

...

GRUB_THEME="/usr/share/grub/themes/vimix/theme.txt"
GRUB_BACKGROUND="/usr/share/desktop-base/ceratopsian-theme/grub/grub-16x9.png"

# Enable cryptodisk in case it is needed
GRUB_ENABLE_CRYPTODISK=y

GRUB_PRELOAD_MODULES="part_gpt btrfs cryptodisk"
update-grub output:
Generating grub configuration file ...
Found theme: /usr/share/grub/themes/vimix/theme.txt
Found background image: .background_cache.png
Found linux image: /boot/vmlinuz-6.15-amd64
Found initrd image: /boot/initrd.img-6.15-amd64
Found memtest86+ 64bit EFI image: /boot/memtest86+x64.efi
Found memtest86+ 32bit EFI image: /boot/memtest86+ia32.efi
Found memtest86+ 64bit image: /boot/memtest86+x64.bin
Found memtest86+ 32bit image: /boot/memtest86+ia32.bin
Warning: os-prober will be executed to detect other bootable partitions.
Its output will be used to detect bootable binaries on them and create new boot entries.
Adding boot menu entry for UEFI Firmware Settings ...
done
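One way to test the suspicion from GRUB itself (a sketch; the UUID placeholder follows the question's own notation): press c at the menu and check whether the file is only reachable under the subvolume prefix, since GRUB resolves paths from the filesystem root, not from the mounted subvolume:

insmod btrfs
search --fs-uuid --set=root [btrfs partition UUID]
ls /usr/share/desktop-base/ceratopsian-theme/grub/            # expected to fail
ls /@rootfs/usr/share/desktop-base/ceratopsian-theme/grub/    # expected to list grub-16x9.png

If only the prefixed path works, pointing GRUB_THEME and GRUB_BACKGROUND at copies that live on the unprefixed boot partition, or patching the paths in the generated grub.cfg, would be a direction to explore.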
horsey_guy (421 rep)
Jun 26, 2025, 02:11 PM • Last activity: Jul 1, 2025, 01:59 AM
4 votes
1 answers
5177 views
BTRFS: Adding new hard drive as /home after installation
My SSD is only 110 GB in size, so moving the old /home (btrfs) to a new /home (also btrfs) on a bigger HDD is likely a good idea. Is it possible to combine btrfs subvolumes as separate subvolumes on separate partitions (even on separate devices) but as children of the top-level subvolume (ID 5)? Does this procedure enable snapshots of the new /home? This is my current entry for the old /home on the SSD in fstab: UUID=23cef669-f46c-4f5b-8476-ba548256e754 /home btrfs rw,noatime,compress=lzo,ssd,space_cache,subvolid=258,subvol=/@home,subvol=@home 0 0 As far as I know the procedure to move /home is as follows: >a) create a mountpoint for the new /home (e.g. /mnt/home) >b) adjust the fstab entry of /home: UUID > mountpoint > btrfs > mount options >c) copy all files from the old to the new /home via a live system (e.g. cp -ar /oldhome/* /newhome) But I'm not sure what to do with the mount options: can I use the old subvolume options? subvolid=258,subvol=/@home,subvol=@home Should be harmless as long as the old entry is going to be deleted?! If yes, the new fstab entry on the HDD for /home would look like this: UUID=7ad83a78-4e19-45df-9c6e-1d931a9f999c /mnt/home btrfs noatime,compress=lzo,subvolid=258,subvol=/@home,subvol=@home 0 2 What did I forget? Any comments, hints or suggestions for improvement?
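A rough sketch of preparing the new HDD before the copy, assuming it is already formatted as btrfs and temporarily mounted at /mnt/newdisk (a placeholder):

mount UUID=7ad83a78-4e19-45df-9c6e-1d931a9f999c /mnt/newdisk -o subvolid=5   # mount the top-level subvolume
btrfs subvolume create /mnt/newdisk/@home
cp -ar /home/. /mnt/newdisk/@home/
btrfs subvolume list /mnt/newdisk                                            # note the new @home's ID

One caveat: the new @home gets its own subvolume ID, so copying subvolid=258 from the old fstab entry into the new one would point at the wrong (or a nonexistent) subvolume; subvol=@home alone, or the freshly listed ID, is safer.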
kinoe (41 rep)
Jun 13, 2017, 05:19 PM • Last activity: Jun 27, 2025, 01:03 PM
0 votes
1 answers
1907 views
Grub unlock luks encrypted btrfs raid0
The goal is to have grub unlock /dev/nvme0n1p3, which contains a keyfile to unlock the 2 luks-encrypted btrfs raid0 drives. If I can get it working, I'll create a tool that can accompany Linux installers to make this easier. I keep getting dropped into the grub rescue prompt with:
No such device: 2d6983f7-c10e-4b1a-b182-24d6f2b2a6c0
error: unknown filesystem.
So, it's not unlocking my LUKS. That's the UUID of /dev/mapper/cryptroot and /dev/mapper/cryptroot2 (they share it since it's raid0). I don't know why it's showing up as the first thing grub tries to open, though. The first thing I want grub to unlock is 0df41a34-e267-491a-ac02-25758c26ec65, aka /dev/nvme0n1p3 (cryptkeys), in order to unlock the raid0 drives. Here's what I did...

## Setup

- 2 NVMe drives.
- Garuda Linux (Arch-based).
- GRUB 2.06 (supports LUKS2).
- blkid output:
/dev/loop1: TYPE="squashfs"
/dev/mapper/cryptroot2: UUID="2d6983f7-c10e-4b1a-b182-24d6f2b2a6c0" UUID_SUB="b2ee9dad-c9cb-4ec4-ae38-d28af19eb183" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme0n1p3: UUID="0df41a34-e267-491a-ac02-25758c26ec65" TYPE="crypto_LUKS" PARTUUID="a49f7cdb-cbb6-44cd-b1e4-00b61dd1f00d"
/dev/nvme0n1p1: LABEL_FATBOOT="NO_LABEL" LABEL="NO_LABEL" UUID="A5AC-81DA" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b0def085-1288-b746-9d7d-961354131dbc"
/dev/nvme0n1p2: UUID="802edb34-f481-4adf-9f98-3a80028d7cec" TYPE="crypto_LUKS" PARTLABEL="root" PARTUUID="9b945709-b51b-1c46-8ee3-6f3ba74c5a5b"
/dev/sdb2: SEC_TYPE="msdos" LABEL_FATBOOT="MISO_EFI" LABEL="MISO_EFI" UUID="EFD7-7387" BLOCK_SIZE="512" TYPE="vfat"
/dev/sdb1: BLOCK_SIZE="2048" UUID="2021-08-09-16-03-00-00" LABEL="GARUDA_GNOME_SOARING_" TYPE="iso9660"
/dev/loop2: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"
/dev/mapper/cryptroot: UUID="2d6983f7-c10e-4b1a-b182-24d6f2b2a6c0" UUID_SUB="ef6be59d-a4be-4d00-93c2-0084530bf929" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme1n1: UUID="53517d3d-a638-48b9-af4f-125114e4f0c6" TYPE="crypto_LUKS"
/dev/zram0: LABEL="zram0" UUID="aa36a4d8-690e-4f2a-bfc9-e2fad1db8efb" TYPE="swap"
/dev/loop3: TYPE="squashfs"
## Procedures

1. Installed Garuda Linux to /dev/nvme0n1, which gave me the following partition layout on the first drive. I then created an ext4 partition (cryptkeys) in a LUKS container for storing keys, and a LUKS container spanning the entire nvme1n1 for the btrfs raid:
NAME               FSTYPE          FLAGS
nvme0n1
├─nvme0n1p1        fat32           boot,esp
├─nvme0n1p2        crypto_LUKS
│ └─cryptroot      btrfs
└─nvme0n1p3        crypto_LUKS
  └─cryptkeys      ext4
nvme1n1            crypto_LUKS
└─         
  └─cryptroot2     btrfs
2. Unlocked nvme0n1p2 and nvme1n1, mounting to /mnt/cryptroot.
3. To convert to raid0 spanning 2 drives, ran:
btrfs device add /dev/mapper/cryptroot2 /mnt/cryptroot
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cryptroot
4. Created a new keyfile for luks and added it to all luks containers except the one I named "cryptkeys" which is /dev/nvme0n1p3. All luks containers can also be unlocked via the same password. nvme0n1p3 was mounted to /mnt/cryptkeys and the keyfile copied to it:
dd bs=512 count=4 if=/dev/random of=/mnt/cryptroot/crypto_keyfile.bin
chmod 600 /mnt/cryptkeys/crypto_keyfile.bin

cryptsetup luksAddKey /dev/nvme0n1p2 cryptkeys/crypto_keyfile.bin
cryptsetup luksAddKey /dev/nvme1n1 cryptkeys/crypto_keyfile.bin
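(A sanity check one might add at this point, not part of the original steps: verify the keyfile actually opens each container without activating anything:)

cryptsetup open --test-passphrase --key-file /mnt/cryptkeys/crypto_keyfile.bin /dev/nvme0n1p2 && echo nvme0n1p2 OK
cryptsetup open --test-passphrase --key-file /mnt/cryptkeys/crypto_keyfile.bin /dev/nvme1n1 && echo nvme1n1 OK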
5. With the btrfs raid0 now mounted, chrooted into the new Garuda install via:
mkdir /mnt/newroot
mount -o subvol=@,compress=zstd /dev/mapper/cryptroot newroot
for i in /dev /dev/pts /proc /sys /run; do sudo mount --bind $i /mnt/newroot$i; done
mount /dev/nvme0n1p1 newroot/boot/efi
mount --bind /sys/firmware/efi/efivars newroot/sys/firmware/efi/efivars 
chroot /mnt/newroot
6. Edited /etc/default/grub to be:
# GRUB boot loader configuration

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Garuda"
GRUB_CMDLINE_LINUX_DEFAULT="quiet cryptdevice2=/dev/disk/by-uuid/0df41a34-e267-491a-ac02-25758c26ec65:cryptkeys:allow-discards cryptdevice3=/dev/disk/by-uuid/802edb34-f481-4adf-9f98-3a80028d7cec:cryptroot:allow-discards cryptdevice=/dev/disk/by-uuid/53517d3d-a638-48b9-af4f-125114e4f0c6:cryptroot2:allow-discards root=/dev/mapper/cryptroot splash rd.udev.log_priority=3 vt.global_cursor_default=0 systemd.unified_cgroup_hierarchy=1 loglevel=3"
GRUB_CMDLINE_LINUX=""

# Preload both GPT and MBR modules so that they are not missed
GRUB_PRELOAD_MODULES="part_gpt part_msdos"

# Uncomment to enable booting from LUKS encrypted devices
#GRUB_ENABLE_CRYPTODISK=y

# Set to 'countdown' or 'hidden' to change timeout behavior,
# press ESC key to display menu.
GRUB_TIMEOUT_STYLE=menu

# Uncomment to use basic console
GRUB_TERMINAL_INPUT=console

# Uncomment to disable graphical terminal
#GRUB_TERMINAL_OUTPUT=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
GRUB_GFXMODE=auto

# Uncomment to allow the kernel use the same resolution used by grub
GRUB_GFXPAYLOAD_LINUX=keep

# Uncomment if you want GRUB to pass to the Linux kernel the old parameter
# format "root=/dev/xxx" instead of "root=/dev/disk/by-uuid/xxx"
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
GRUB_DISABLE_RECOVERY=true

# Uncomment and set to the desired menu colors.  Used by normal and wallpaper
# modes only.  Entries specified as foreground/background.
#GRUB_COLOR_NORMAL="light-blue/black"
#GRUB_COLOR_HIGHLIGHT="light-cyan/blue"

# Uncomment one of them for the gfx desired, a image background or a gfxtheme
#GRUB_BACKGROUND="/path/to/wallpaper"
GRUB_THEME="/usr/share/grub/themes/garuda/theme.txt"

# Uncomment to get a beep at GRUB start
#GRUB_INIT_TUNE="480 440 1"

# Uncomment to make GRUB remember the last selection. This requires
# setting 'GRUB_DEFAULT=saved' above.
#GRUB_SAVEDEFAULT=true

# Uncomment to disable submenus in boot menu
#GRUB_DISABLE_SUBMENU=y

GRUB_DISABLE_OS_PROBER=false
GRUB_DISABLE_OS_PROBER=false
GRUB_ENABLE_CRYPTODISK=y
7. Copied hooks as:
# copy the original hook
cp /usr/lib/initcpio/install/encrypt /etc/initcpio/install/encrypt2
cp /usr/lib/initcpio/install/encrypt /etc/initcpio/install/encrypt3
cp /usr/lib/initcpio/hooks/encrypt  /etc/initcpio/hooks/encrypt2
cp /usr/lib/initcpio/hooks/encrypt  /etc/initcpio/hooks/encrypt3
# adapt the new hook to use different names and to NOT delete the keyfile
sed -i "s/cryptdevice/cryptdevice2/" /etc/initcpio/hooks/encrypt2
sed -i "s/cryptdevice/cryptdevice3/" /etc/initcpio/hooks/encrypt3
sed -i "s/cryptkey/cryptkey2/" /etc/initcpio/hooks/encrypt2
sed -i "s/cryptkey/cryptkey3/" /etc/initcpio/hooks/encrypt3
sed -i "s/rm -f \${ckeyfile}//" /etc/initcpio/hooks/encrypt2
sed -i "s/rm -f \${ckeyfile}//" /etc/initcpio/hooks/encrypt3
8. Added encrypt2 and encrypt3 to /etc/mkinitcpio.conf before encrypt hook. Also specified keyfile. mkinitcpio.conf is now:
# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run.  Advanced users may wish to specify all system modules
# in this array.  For instance:
#     MODULES=(intel_agp i915 amdgpu radeon nouveau)
MODULES=(intel_agp i915 amdgpu radeon nouveau)

# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image.  This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=()

# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way.  This is useful for config files.
FILES="/crypto_keyfile.bin"

# HOOKS
# This is the most important setting in this file.  The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added.  Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
##   This setup specifies all modules in the MODULES setting above.
##   No raid, lvm2, or encrypted root is needed.
#    HOOKS=(base)
#
##   This setup will autodetect all modules for your system and should
##   work as a sane default
#    HOOKS=(base udev autodetect block filesystems)
#
##   This setup will generate a 'full' image which supports most systems.
##   No autodetection is done.
#    HOOKS=(base udev block filesystems)
#
##   This setup assembles a pata mdadm array with an encrypted root FS.
##   Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
#    HOOKS=(base udev block mdadm encrypt filesystems)
#
##   This setup loads an lvm2 volume group on a usb device.
#    HOOKS=(base udev block lvm2 filesystems)
#
##   NOTE: If you have /usr on a separate partition, you MUST include the
#    usr, fsck and shutdown hooks.
HOOKS="base udev encrypt autodetect modconf block keyboard keymap consolefont plymouth encrypt2 encrypt3 encrypt filesystems"

# COMPRESSION
# Use this to compress the initramfs image. By default, zstd compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="zstd"
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
#COMPRESSION="lz4"

# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=()
9. Ran:
mkinitcpio -p linux-zen
# initramfs includes the key, so only root should be able to read it
chmod 600 /boot/initramfs-linux-fallback.img
chmod 600 /boot/initramfs-linux.img
10. Changed /etc/crypttab to:
# /etc/crypttab: mappings for encrypted partitions.
#
# Each mapped device will be created in /dev/mapper, so your /etc/fstab
# should use the /dev/mapper/ paths for encrypted devices.
#
# See crypttab(5) for the supported syntax.
#
# NOTE: Do not list your root (/) partition here, it must be set up
#       beforehand by the initramfs (/etc/mkinitcpio.conf). The same applies
#       to encrypted swap, which should be set up with mkinitcpio-openswap
#       for resume support.
#
# <name>               <device>                                     <password>              <options>
cryptkeys             UUID=0df41a34-e267-491a-ac02-25758c26ec65     /crypto_keyfile.bin luks,discard,nofail
11. Changed /etc/fstab to:
# <file system>        <mount point>  <type>  <options>  <dump>  <pass>
UUID=A5AC-81DA        /boot/efi      vfat    umask=0077 0 2
/dev/mapper/cryptroot /              btrfs   subvol=/@,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /home          btrfs   subvol=/@home,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /root          btrfs   subvol=/@root,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /srv           btrfs   subvol=/@srv,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /var/cache     btrfs   subvol=/@cache,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /var/log       btrfs   subvol=/@log,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /var/tmp       btrfs   subvol=/@tmp,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
12. Finally, ran:
grub-mkconfig -o /boot/grub/grub.cfg
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Garuda --recheck
exit
reboot
**An aside:** A few of the times that I ran grub-install, the value of --bootloader-id was arch-grub before I changed it to Garuda. I don't think it matters much, except that I now have extra boot menu entries, as I don't know how to get rid of them. It probably doesn't matter, though; I get the error even when selecting the Garuda entry from the EFI boot menu. **Note:** These procedures were adapted from this blog post. What's different is that there is no luks-encrypted boot partition, and the cryptkeys partition was added instead.
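A hypothetical way to probe the failure from the grub rescue prompt (this assumes grub-install left the modules reachable on the ESP): load the crypto stack by hand, try unlocking the key partition by its LUKS UUID with the dashes removed, and see whether the btrfs appears:

insmod cryptodisk
insmod luks2
cryptomount -u 0df41a34e267491aac0225758c26ec65
insmod btrfs
ls (crypto0)/

One thing worth checking first: GRUB 2.06 can only open LUKS2 volumes that use the PBKDF2 key-derivation function, while cryptsetup defaults LUKS2 to argon2id. cryptsetup luksDump /dev/nvme0n1p3 shows which KDF each keyslot uses, and cryptsetup luksConvertKey --pbkdf pbkdf2 can convert a slot.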
xendi (613 rep)
Aug 31, 2021, 12:56 AM • Last activity: Jun 25, 2025, 05:07 AM
-5 votes
1 answers
169 views
How to protect against both bit rot and device failure with Btrfs
How can you protect simultaneously against bit rot and device failure with Btrfs, given that btrfs only checks data integrity on files when it reads them? The only solution I can think of is using two drives, each of which holds two instances of the same data, for a total of four instances of the same data. If one drive fails, the remaining one still has duplicated data. But this profile seems not to exist. Another, impractical solution is to use RAID1 and execute a scrub after every write operation. Yet another option is to have two drives, each partitioned in half, and use a RAID profile to get the four copies. But this is probably not good, for the same reasons why dup is preferable to RAID1 on two partitions of the same drive. I don't know those reasons, but there must be some; otherwise people wouldn't have come up with dup as an alternative to using RAID1 on two partitions of the same drive. Please make the effort to understand the question before posting an answer which doesn't help. Because data integrity is only verified when files are read, it seems that raid-1 alone provides no mitigation against data corruption since the last time the file was read or scrub was run. = As of 16 June I still have no satisfactory answer =
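To make the "scrub after every write" idea concrete, a sketch (the mount point is a placeholder):

btrfs scrub start -Bd /mnt/data   # -B: run in the foreground until done, -d: per-device statistics
btrfs scrub status /mnt/data

Run periodically rather than literally after every write, this bounds how long a corrupted, unread copy can linger before being detected and repaired from its RAID1 mirror.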
user324831 (113 rep)
Jun 12, 2025, 02:58 PM • Last activity: Jun 16, 2025, 03:37 PM
1 votes
1 answers
57 views
What does "BTRFS warning (device nvme0n1p7): devid 1 physical 0 len 4194304 inside the reserved space" mean?
On mount, I see the following warning in dmesg:
[    0.913159] BTRFS warning (device nvme0n1p7): devid 1 physical 0 len 4194304 inside the reserved space
More log context:
[    0.909896] BTRFS: device fsid 82d6bc91-46ab-4255-ab7e-bf98fb0bd4a9 devid 1 transid 2503602 /dev/root (259:7) scanned by swapper/0 (1)
[    0.910467] BTRFS info (device nvme0n1p7): first mount of filesystem 82d6bc91-46ab-4255-ab7e-bf98fb0bd4a9
[    0.910880] BTRFS info (device nvme0n1p7): using crc32c (crc32c-generic) checksum algorithm
[    0.911292] BTRFS info (device nvme0n1p7): using free-space-tree
[    0.913159] BTRFS warning (device nvme0n1p7): devid 1 physical 0 len 4194304 inside the reserved space
[    0.948949] VFS: Mounted root (btrfs filesystem) readonly on device 0:19.
What does this mean? How can I move the relevant data out of the reserved space?
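A hypothetical approach (not guaranteed, since balance chooses the new location itself) is to relocate only the chunks overlapping the first 4 MiB of devid 1, using balance's devid and drange filters:

sudo btrfs balance start -ddevid=1,drange=0..4194304 -mdevid=1,drange=0..4194304 /
sudo btrfs balance status /

If the offending chunk is a system chunk, an -sdevid=1,drange=... filter (which additionally requires --force) would be needed instead.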
ComputerDruid (410 rep)
Apr 14, 2025, 07:48 PM • Last activity: Jun 16, 2025, 02:18 AM
6 votes
2 answers
468 views
Can I use grep or strings to find the previous atime of a file still present on my btrfs?
The metadata of this file, which resides on my HDD, is written via CoW; can I therefore look for it just by using grep or strings and the filename?
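As a sketch of the idea (device and filename are placeholders; search an unmounted or read-only-mounted filesystem so the metadata is not rewritten underneath you):

sudo strings -t d /dev/sdX1 | grep -F 'myfile.jpg'
OFF=123456789     # a byte offset printed by the line above
sudo dd if=/dev/sdX1 bs=4096 skip=$((OFF / 4096)) count=2 status=none | hexdump -C | less

A caveat: the filename lives in directory items and inode refs, while the timestamps live in the inode item, a separate binary structure, so a hit only locates a candidate metadata block to decode, not the atime itself.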
user324831 (113 rep)
Jun 1, 2025, 10:00 AM • Last activity: Jun 11, 2025, 08:04 PM
0 votes
1 answers
47 views
Btrfs can see only half of the disk, bug or feature?
I think I have found a bug in btrfs, but I am not sure. At least the following script can, very likely, demonstrate it. The script creates a 1 TB file in /tmp/btt, formats it as btrfs and mounts it, using the flags I found best after carefully reading the btrfs docs. What I got was that I could see only half of my hard disk:

truncate -s $[1<<40] /tmp/btt
losetup /dev/loop4 /tmp/btt
mkfs.btrfs --nodesize 4096 --sectorsize 4096 \
    --label btrfsbugexample \
    --features mixed-bg,extref,free-space-tree,block-group-tree \
    --verbose /dev/loop4
mount /dev/loop4 /mnt/tmp -o autodefrag,compress=zstd:15,datacow,datasum,discard=async,max_inline=4096,nossd
df -h /mnt/tmp

The fun thing is that the problem can actually be fixed, *even without a remount*, by the command

btrfs balance start -f -v -dconvert=single -mconvert=single -sconvert=single /mnt/tmp

I googled a lot until I found this, and it seems to work well. What is not clear is: now what? Have I found a bug, or have I misinterpreted something? My digging has also revealed that btrfs has numerous really impressive extra features: in-fs raid1 support, data duplication, data de-duplication, and mixed data vs. metadata blocks. These look awesome, and maybe, indirectly, it was my mistake or a surprising interaction of the FS flags I used. I do not know. Have I found a bug or a feature?
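A quick check of whether this is the DUP profile rather than a bug (an interpretation consistent with the fix above: mixed block groups force data and metadata to share one profile, and mkfs defaults metadata to DUP on a single rotational device, so everything is written twice):

btrfs filesystem df /mnt/tmp        # look for "Data+Metadata, DUP"
btrfs filesystem usage -T /mnt/tmp  # a ratio of 2.00 means every byte is stored twice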
peterh (10448 rep)
May 31, 2025, 08:33 PM • Last activity: Jun 6, 2025, 08:41 AM
0 votes
2 answers
3666 views
How to convert BTRFS to bcachefs on Linux Mint and Linux Mint Debian Edition LMDE?
**2023-10-31, Bcachefs Merged Into The Linux 6.7 Kernel:**
* https://www.phoronix.com/news/Bcachefs-Merged-Linux-6.7
* https://web.archive.org/web/20231103095158/https://www.phoronix.com/news/Bcachefs-Merged-Linux-6.7

**Manuals:**
* https://manpages.ubuntu.com/manpages/impish/man8/bcachefs.8.html
* https://web.archive.org/web/20230205131951/https://manpages.ubuntu.com/manpages/impish

**Mailing list:**
* http://vger.kernel.org/vger-lists.html#linux-bcachefs

The bcachefs management software bcachefs-tools is available through the Application Management of Linux Mint 21 and the current Debian version, and contains the following related information:

bcachefs migrate [options] device
    Migrate an existing filesystem to bcachefs
    -f fs              Root of filesystem to migrate
    --encrypted        Enable whole filesystem encryption (chacha20/poly1305)
    --no_passphrase    Don't encrypt master encryption key
    -F                 Force, even if metadata file already exists

bcachefs migrate-superblock [options] device
    Create default superblock after migrating
    -d device          Device to create superblock for
    -o offset          Offset of existing superblock

**Source:**
* https://web.archive.org/web/20230205130327/https://bcachefs.org/bcachefs-principles-of-operation.pdf

**Remark:** I am now looking for an answer which uses bcachefs-tools, gparted or comparable tools. The existing answer, to copy the data from an old ext4 partition to a new bcachefs partition, is not what I am looking for.
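Following the synopsis quoted above literally, a migration attempt might look like the sketch below (untested; the device is a placeholder, the offset must be taken from migrate's own output, and a full backup beforehand is essential; whether the filesystem must be mounted at -f's path or merely identified by it is not clear from the help text alone):

bcachefs migrate -f /mnt/oldfs /dev/sdXN
bcachefs migrate-superblock -d /dev/sdXN -o [offset printed by migrate]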
Alfred.37 (129 rep)
Feb 9, 2023, 11:43 AM • Last activity: Jun 1, 2025, 01:07 PM
0 votes
1 answers
45 views
Can I use grep or strings to seach for deleted file name on btrfs?
Months ago, one of my systemd journal files was purged from my btrfs HDD. Because the format is binary rather than text, I couldn't use file carving to check whether the content is still on the HDD, but I can at least look for the metadata of this file (its inode, is that the term?). For this, can I just run grep or strings over my sda1 and search for the filename? If I find something, I will see which blocks it points to and try to recover the actual content of the journal file... I know there are btrfs-undelete scripts, but I don't see the point of using them, at least in this case.
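A sketch of that search (the match pattern is a generic journal-name fragment; do it with the filesystem unmounted or mounted read-only):

sudo grep -aob -F 'system@' /dev/sda1 | head   # -a: binary as text, -o: print only the match, -b: byte offset

Offsets landing in btrfs metadata blocks are candidates for the file's directory entry or inode ref; the extent pointers to follow live in nearby inode and extent items, which btrfs inspect-internal dump-tree can decode.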
user324831 (113 rep)
May 31, 2025, 09:22 PM • Last activity: Jun 1, 2025, 07:40 AM
5 votes
4 answers
1217 views
How can I limit the number of data stripes in a btrfs RAID-0 profile in order to better utilize the total available space?
### Problem background

One great thing about btrfs is its ability to effectively use drives of different sizes, but I just learnt that this is not true for its default RAID-0 (striped) profile. I wanted to use RAID-0 with three drives: 8, 4, and 4 TB, and was hoping to get a total of 16 TB: the first half of the 8 TB can be striped with the first 4 TB drive, and the second half can be striped with the second 4 TB drive. However, according to the (very useful) [btrfs disk usage calculator](https://carfax.org.uk/btrfs-usage/) I would only get 12 TB: each chunk would be striped on all three drives, and that leaves 4 TB unused on my 8 TB drive. This is also mentioned in an answer to the question https://unix.stackexchange.com/q/185686/145426 .

### Illustration

This is what I expected to happen: [diagram: btrfs striping for RAID-0 if it works like I want it to] …and this is what is actually happening: [diagram: btrfs default striping for RAID-0]

### What does the manual say?

After perusing [mkfs.btrfs](https://btrfs.wiki.kernel.org/index.php/Manpage/mkfs.btrfs#PROFILES) I figure that this is because the default RAID-0 profile sets a _minimum_ number of devices to 2, but does not have an upper limit. This means that a data block will be striped on as many devices as it can find in the pool. This can of course be a reasonable option and it makes complete sense. While playing with the btrfs disk usage calculator I found that I can get what I want by setting the _maximum_ number of devices to 2. This would still stripe my data over two drives to get some extra speed, but limit the striping to two devices so that it can use a lot more of the available space. To me this is a very beneficial trade-off, and I assume I am not alone in thinking so.

### The question

I did not find a way to _change_ the maximum number of devices when creating a filesystem.
- Is this even possible?
- If so, how can I change it?
- If I _do_ change it, will the other tools understand the layout?
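The 12 TB figure is easy to reproduce at small scale with loop devices (a sketch; 8/4/4 TB scaled down to GiB, paths are placeholders):

truncate -s 8G /tmp/d0; truncate -s 4G /tmp/d1; truncate -s 4G /tmp/d2
L0=$(sudo losetup -f --show /tmp/d0); L1=$(sudo losetup -f --show /tmp/d1); L2=$(sudo losetup -f --show /tmp/d2)
sudo mkfs.btrfs -d raid0 -m raid1 "$L0" "$L1" "$L2"
mkdir -p /mnt/test && sudo mount "$L0" /mnt/test
sudo btrfs filesystem usage /mnt/test   # Free (estimated) comes out near 12G, not 16G

Once the two small devices fill, the space remaining on the large one cannot satisfy RAID-0's two-device minimum, so it stays unallocatable.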
pipe (893 rep)
Oct 14, 2020, 01:22 PM • Last activity: May 31, 2025, 09:48 AM
1 votes
1 answers
55 views
Forensics to recover the second-to-last access timestamp of a file on btrfs on HDD
I searched online, to no avail. Is there some way to recover the access timestamp my file on BTRFS had before the one that appears currently? I'm using an HDD (not an SSD). Please let me know. Is this question better suited for Super User? I made no snapshots (willingly), I'm using Fedora, and the metadata change dates back some two weeks... In fact, to be precise, I'm interested in the timestamp from two accesses ago; the two accesses happened in rapid succession.
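One long-shot avenue (a sketch; the block number is a placeholder, and success depends on the two-week-old metadata not having been overwritten yet): because of CoW, superseded metadata tree roots sometimes survive on disk, and they can be enumerated and dumped read-only from an unmounted filesystem:

sudo btrfs-find-root /dev/sdX1                                                     # list candidate old tree roots
sudo btrfs inspect-internal dump-tree -b 123456789 /dev/sdX1 | grep -A12 'INODE_ITEM'

In the dump, each INODE_ITEM prints its atime/ctime/mtime/otime lines, so finding the file's inode under an older root would recover the earlier atime.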
user324831 (113 rep)
May 23, 2025, 06:15 PM • Last activity: May 23, 2025, 08:35 PM
21 votes
5 answers
56016 views
Clear all Snapper snapshots
OpenSUSE (among other distributions) uses **snapper** to take snapshots of *btrfs* partitions. Some people think the default snapshot intervals take up too much space too quickly, but whether or not you believe that, there are times when you want to clear space on your filesystem and often find that the *btrfs* snapshots are taking a significant amount of space. Or, in other cases you may want to clear the filesystem of all excess data before moving it to/from a VM or changing the storage medium or something along those lines. But, I can't seem to find a command to quickly wipe all of the snapshots **snapper** has taken, either via snapper or another tool. How would I do this?
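For what it's worth, snapper accepts number ranges, so a blunt sketch (the config name root is an assumption, and snapshot 0 is the running system, which cannot be deleted) is:

sudo snapper -c root list
sudo snapper -c root delete 1-99999   # removes every existing snapshot ID in the range

The space is returned only after btrfs drops the underlying subvolumes, which can lag a little behind the delete.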
palswim (5597 rep)
Jan 28, 2015, 06:39 PM • Last activity: May 21, 2025, 01:38 PM
0 votes
0 answers
29 views
Recover deleted txt file which was stored to btrfs right before another one that I still have
I used my terminal emulator to run a command and redirect output to a text file on my btrfs. Right after, I did the same with another command. I since deleted the first text file and not the second. I'd like to recover part of the first text: photorec seems overkill but I'm familiar with the tool, using strings seems more suitable but I'm not familiar with this method. How about looking at the actual blocks on the fs: maybe the two files were written sequentially?
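A sketch of the sequential-allocation idea (NEEDLE, HIT and the device are placeholders; work on an unmounted or read-only-mounted device): locate the surviving file's content on the raw device, then scan the neighbourhood just before it for readable text:

NEEDLE='a distinctive line from the surviving file'
sudo grep -aob -F "$NEEDLE" /dev/sdX1 | head                                              # byte offsets of hits
HIT=123456789                                                                             # one offset from above
sudo dd if=/dev/sdX1 bs=4096 skip=$((HIT/4096 - 512)) count=1024 status=none | strings -n 8 | less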
user324831 (113 rep)
May 20, 2025, 06:50 PM • Last activity: May 20, 2025, 06:55 PM
7 votes
2 answers
6056 views
How to resize at the beginning/move a btrfs partition on the command line?
I have a Linux/Windows dual-boot setup on my notebook, in which I used to keep most of the data on the Windows partition to be able to access it from both systems. Since I almost never use Windows, I shrunk the NTFS partition and plan to move the data to the Linux partition, which is formatted as btrfs. Beforehand the btrfs partition needs to be expanded at the beginning, where the now-free space is. fdisk can move the beginning of a partition but leaves the filesystem untouched. parted cannot handle the filesystem either, since version 3.0. One solution to the problem would be to create a partition in the free space and add it as a second device to the btrfs, then removing the original partition from the btrfs (using btrfs device) and from the partition table, and after that expanding the remaining btrfs+partition to the end of the drive. The problems here are that the new free space must be big enough to hold all the files from the btrfs and that all the data has to be moved. So my question is: is there some other, preferably more elegant and generally applicable, way to expand a btrfs at the beginning? ***Edit: (Solution)*** Even if GParted might be able to resize at the beginning by automatically moving the filesystem, I tried the way described above since I have the free space. As it took ages (perhaps because of many subvolumes), used a lot of CPU and I/O resources and then aborted with an I/O error, I used btrfs replace instead, which worked just fine: it took a few hours, during which the computer was perfectly usable.
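To make the replace-based solution concrete, a sketch (device names are placeholders; btrfs device usage shows the real devid):

btrfs device usage /mnt                    # find the devid of the old partition (often 1)
btrfs replace start 1 /dev/sdXnew /mnt     # copy devid 1 onto the new partition, online
btrfs replace status /mnt
btrfs filesystem resize 1:max /mnt         # afterwards, grow into the larger partition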
jorsn (183 rep)
Oct 31, 2017, 10:04 AM • Last activity: May 16, 2025, 09:03 PM
0 votes
2 answers
91 views
How to defragment compressed btrfs files?
If I defragment files on btrfs with the command

btrfs filesystem defrag --step 1G file

everything is fine. A filefrag -v file clearly shows that the extent count significantly decreased. Things are very different when I deal with compressed files. First, filefrag reports a huge number of extents:

Filesystem type is: 9123683e
File size of file is 85942272 (20982 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..      31:     607198..    607229:     32:             encoded
   1:       32..      63:     609302..    609333:     32:     607230: encoded
   2:       64..      95:     609314..    609345:     32:     609334: encoded
   3:       96..     127:     609326..    609357:     32:     609346: encoded
...
 648:    20928..   20959:     704298..    704329:     32:     704299: encoded
 649:    20960..   20981:     691987..    692008:     22:     704330: last,encoded,eof
file: 650 extents found

Second, btrfs filesystem defragment commands return on the spot, without any error report, and with an unchanged filefrag output. My impression is that fragmentation of compressed files is not considered an issue on btrfs filesystems at all. However, my ears (the HDD seek noise) clearly tell me: yes, it is an issue for me. So, how do I defragment a compressed file on btrfs? And how could I even see whether the files are contiguous: not their encoded (== compressed) extents, but their compressed blocks on the HDD?
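Two things that may help (assumptions, not verified fixes). First, btrfs caps compressed extents at 128 KiB, so filefrag will always report many 32-block "encoded" extents for a compressed file; contiguity shows up as adjacent physical offsets, not as a low extent count. Second, defragmenting with explicit recompression forces the compressed extents to be rewritten rather than skipped:

btrfs filesystem defragment -v -czstd -t 1G file
filefrag -v file   # check whether the physical offsets of consecutive extents now run contiguously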
peterh (10448 rep)
May 12, 2025, 02:22 PM • Last activity: May 12, 2025, 06:29 PM
6 votes
1 answers
629 views
Why is convert=soft not default in btrfs balance?
Enabling soft when seems to be all upside with no downside: balance will skip chunks which are already in the right place. [man btrfs-balance]() says: > soft Takes no parameters. Only has meaning when converting between profiles. When doing convert from one profile to another and soft mode is on, chunks that already have the target profile are left untouched. This is useful e.g. when half of the filesystem was converted earlier but got cancelled. > The soft mode switch is (like every other filter) per-type. For example, this means that we can convert metadata chunks the "hard" way while converting data chunks selectively with soft switch. Are there any cases in which one would **not** wish to pass soft?
Tom Hale (32892 rep)
Jan 26, 2019, 05:48 AM • Last activity: May 2, 2025, 03:56 PM