
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
0 answers
42 views
Resize a LUKS-encrypted XFS-on-LVM partition to extend root with additional space
I have a ~200 GB LUKS-encrypted partition on a dual-boot setup, and I've just shrunk my Windows partition a bit so I can use the unallocated space on my root partition, which is XFS. How would I go about extending the LUKS partition and subsequently the voidvm/root one? (GParted screenshot omitted.)

```
$ lsblk -f
NAME              FSTYPE      FSVER    LABEL   UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1
├─nvme0n1p1       vfat        FAT32    SYSTEM  089A-0EBD                                             /boot/efi
├─nvme0n1p2
├─nvme0n1p3       ntfs                 Windows 18E6E384E6E3610C
├─nvme0n1p4       ntfs                         066C04116C03FA67
└─nvme0n1p5       crypto_LUKS 1                2ab65cad-808c-4168-8e51-0e081bd9d58b
  └─voidvm        LVM2_member LVM2 001         c4mDao-UZLC-znl1-efSm-SmPB-DrRU-ChSQ82
    ├─voidvm-root xfs                  root    2559b74d-53a8-437f-82e5-62b514f6987d      2.1G    91% /
    └─voidvm-home xfs                  home    60588d15-9846-43c9-996b-a4d09cea8b07     17.1G    90% /home
```

Physical volumes:

```
$ sudo pvs
  PV                 VG     Fmt  Attr PSize    PFree
  /dev/mapper/voidvm voidvm lvm2 a--  <195.31g    0
```

Logical volumes:

```
$ sudo lvs
  LV   VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home voidvm -wi-ao---- <171.31g
  root voidvm -wi-ao----   24.00g
```
peregrinator (1 rep)
Jul 17, 2025, 08:08 AM
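For the stack above, a minimal sketch of the usual bottom-up resize sequence. The mapping name voidvm comes from the lsblk output; that the freed space sits directly after partition 5 is an assumption, and a backup first is prudent:

```
sudo parted /dev/nvme0n1 resizepart 5 100%  # grow the partition into the freed space
sudo cryptsetup resize voidvm               # grow the opened LUKS mapping to fill it
sudo pvresize /dev/mapper/voidvm            # let LVM see the larger PV
sudo lvextend -l +100%FREE voidvm/root      # grow the root LV
sudo xfs_growfs /                           # XFS grows online while mounted
```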
1 vote
1 answer
1907 views
Mounting XFS partition image from xfs_copy
Used `xfs_copy` to copy a partition of the hard drive of a Fedora 27 server to a file; now, trying to mount the file on my Antergos desktop, I get:

```
mount: /mnt/server: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
```

with the command:

```
$ sudo mount -t xfs -o loop serverbackup.img /mnt/server
```

Not sure what I'm missing?
PoLoMoTo (11 rep)
Aug 2, 2018, 10:17 PM • Last activity: Jun 29, 2025, 02:01 AM
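Two things worth ruling out here, sketched under the assumption that the image itself is intact: an XFS copy keeps the source UUID (a problem if the original is also visible to the kernel), and the actual refusal reason lands in the kernel log:

```
sudo dmesg | tail                                               # the kernel usually says why the mount failed
sudo mount -t xfs -o loop,nouuid serverbackup.img /mnt/server   # nouuid sidesteps duplicate-UUID refusal
sudo xfs_repair -n serverbackup.img                             # dry-run check of the image, changes nothing
```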
0 votes
1 answer
1890 views
Unable to boot on Gentoo: GRUB2 - XFS - NVMe SSD
I am currently configuring Gentoo on a new machine using a 500 GB NVMe SSD. I reboot my computer, select the disk I want to boot off of, GRUB2 initializes, **and then** the error I get is the following:

```
!! Block device UUID="9a89bdb4-8f36-4aa6-a4c7-831943b0985c" is not a valid root device...
!! Could not find the root block device in UUID="9a89bdb4-8f36-4aa6-a4c7-831943b0985c"
   Please specify another value or:
   Press Enter for the same, type "shell" for a shell, or "q" to skip...
root block device() ::
```

Here is my current partition scheme:

```
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 465.8G  0 disk
├─nvme0n1p1 259:1    0     2M  0 part /boot/efi
├─nvme0n1p2 259:2    0   128M  0 part /boot
├─nvme0n1p3 259:3    0     5G  0 part [SWAP]
├─nvme0n1p4 259:4    0   200G  0 part /
└─nvme0n1p5 259:5    0 260.6G  0 part /home
```

Here is my blkid:

```
/dev/nvme0n1p1: SEC_TYPE="msdos" UUID="DC09-2FD7" TYPE="vfat" PARTLABEL="grub" PARTUUID="2d5991fd-18ac-1148-a4d8-deb02f744ecb"
/dev/nvme0n1p2: UUID="6070-07C6" TYPE="vfat" PARTLABEL="boot" PARTUUID="5dba49e5-03cc-744e-bd47-a7570e83b08c"
/dev/nvme0n1p3: UUID="db229aaf-ddb4-4a86-8075-e7f035bfbf19" TYPE="swap" PARTLABEL="swap" PARTUUID="fdc303cc-e54e-c049-899a-e26286b5ec47"
/dev/nvme0n1p4: UUID="9a89bdb4-8f36-4aa6-a4c7-831943b0985c" TYPE="xfs" PARTLABEL="root" PARTUUID="da6232eb-58ab-9948-a3f6-8a7f14eebde4"
/dev/nvme0n1p5: UUID="e3237966-1b71-44b3-9d96-1ed7cc6f4d84" TYPE="xfs" PARTLABEL="home" PARTUUID="5b294354-fc3b-3148-bba2-418acfbb32bc"
```

This is the relevant part of my config in /etc/default/grub:

```
GRUB_CMDLINE_LINUX="rootfstype=xfs init=/usr/lib/systemd/systemd"
```

And this is my /boot/grub/grub.cfg:

```
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
   font=unicode
else
   insmod part_gpt
   insmod xfs
   if [ x$feature_platform_search_hint = xy ]; then
     search --no-floppy --fs-uuid --set=root 9a89bdb4-8f36-4aa6-a4c7-831943b0985c
   else
     search --no-floppy --fs-uuid --set=root 9a89bdb4-8f36-4aa6-a4c7-831943b0985c
   fi
   font="/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_CA
  insmod gettext
fi
terminal_output gfxterm
if [ x$feature_timeout_style = xy ] ; then
  set timeout_style=menu
  set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
  set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
	load_video
	if [ "x$grub_platform" = xefi ]; then
		set gfxpayload=keep
	fi
	insmod gzio
	insmod part_gpt
	insmod fat
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root 6070-07C6
	else
	  search --no-floppy --fs-uuid --set=root 6070-07C6
	fi
	echo 'Loading Linux x86_64-4.19.44-gentoo ...'
	linux /kernel-genkernel-x86_64-4.19.44-gentoo root=/dev/nvme0n1p4 ro rootfstype=xfs init=/usr/lib/systemd/systemd
	echo 'Loading initial ramdisk ...'
	initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
}
submenu 'Advanced options for Gentoo GNU/Linux' $menuentry_id_option 'gnulinux-advanced-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
	menuentry 'Gentoo GNU/Linux, with Linux x86_64-4.19.44-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-x86_64-4.19.44-gentoo-advanced-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
		load_video
		if [ "x$grub_platform" = xefi ]; then
			set gfxpayload=keep
		fi
		insmod gzio
		insmod part_gpt
		insmod fat
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		else
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		fi
		echo 'Loading Linux x86_64-4.19.44-gentoo ...'
		linux /kernel-genkernel-x86_64-4.19.44-gentoo root=/dev/nvme0n1p4 ro rootfstype=xfs init=/usr/lib/systemd/systemd
		echo 'Loading initial ramdisk ...'
		initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
	}
	menuentry 'Gentoo GNU/Linux, with Linux x86_64-4.19.44-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-x86_64-4.19.44-gentoo-recovery-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
		load_video
		if [ "x$grub_platform" = xefi ]; then
			set gfxpayload=keep
		fi
		insmod gzio
		insmod part_gpt
		insmod fat
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		else
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		fi
		echo 'Loading Linux x86_64-4.19.44-gentoo ...'
		linux /kernel-genkernel-x86_64-4.19.44-gentoo root=/dev/nvme0n1p4 ro single rootfstype=xfs init=/usr/lib/systemd/systemd
		echo 'Loading initial ramdisk ...'
		initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
	}
	menuentry 'Gentoo GNU/Linux, with Linux 4.19.44-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.44-gentoo-advanced-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
		load_video
		if [ "x$grub_platform" = xefi ]; then
			set gfxpayload=keep
		fi
		insmod gzio
		insmod part_gpt
		insmod fat
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		else
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		fi
		echo 'Loading Linux 4.19.44-gentoo ...'
		linux /vmlinuz-4.19.44-gentoo root=/dev/nvme0n1p4 ro rootfstype=xfs init=/usr/lib/systemd/systemd
		echo 'Loading initial ramdisk ...'
		initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
	}
	menuentry 'Gentoo GNU/Linux, with Linux 4.19.44-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.44-gentoo-recovery-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
		load_video
		if [ "x$grub_platform" = xefi ]; then
			set gfxpayload=keep
		fi
		insmod gzio
		insmod part_gpt
		insmod fat
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		else
		  search --no-floppy --fs-uuid --set=root 6070-07C6
		fi
		echo 'Loading Linux 4.19.44-gentoo ...'
		linux /vmlinuz-4.19.44-gentoo root=/dev/nvme0n1p4 ro single rootfstype=xfs init=/usr/lib/systemd/systemd
		echo 'Loading initial ramdisk ...'
		initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
	}
}
### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
```

Finally, here is the content of my /etc/fstab:

```
# /etc/fstab: static file system information.
#
# noatime turns off atimes for increased performance (atimes normally aren't
# needed); notail increases performance of ReiserFS (at the expense of storage
# efficiency). It's safe to drop the noatime options if you want and to
# switch between notail / tail freely.
#
# The root filesystem should have a pass number of either 0 or 1.
# All other filesystems should have a pass number of 0 or greater than 1.
#
# See the manpage fstab(5) for more information.
#
# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
#
# NOTE: Even though we list ext4 as the type here, it will work with ext2/ext3
#       filesystems. This just tells the kernel to use the ext4 driver.
#
# NOTE: You can use full paths to devices like /dev/sda3, but it is often
#       more reliable to use filesystem labels or UUIDs. See your filesystem
#       documentation for details on setting a label. To obtain the UUID, use
#       the blkid(8) command.

#LABEL=boot   /boot       ext4  noauto,noatime  1 2
#UUID=58e72203-57d1-4497-81ad-97655bd56494  /  ext4  noatime  0 1
#LABEL=swap   none        swap  sw              0 0
#/dev/cdrom   /mnt/cdrom  auto  noauto,ro       0 0

# /dev/nvme0n1p4
UUID=9a89bdb4-8f36-4aa6-a4c7-831943b0985c  /          xfs   rw,relatime,attr2,inode64,noquota  0 1

# /dev/nvme0n1p2
UUID=6070-07C6                             /boot      vfat  rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro  0 2

# /dev/nvme0n1p1
UUID=DC09-2FD7                             /boot/efi  vfat  rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro  0 2

# /dev/nvme0n1p5
UUID=e3237966-1b71-44b3-9d96-1ed7cc6f4d84  /home      xfs   rw,relatime,attr2,inode64,noquota  0 2

# /dev/nvme0n1p3
UUID=3128bf96-71f7-4a95-a81c-f82788c37f4f  none       swap  defaults  0 0
```

I also did the following for troubleshooting:

- enable NVMe support in the kernel
- enable XFS filesystem support in the kernel
- load GRUB without rootfstype=xfs
- substitute the UUID with /dev/nvme0n1p4 in my fstab file
- drown my sorrows in liquor

[This issue](https://unix.stackexchange.com/questions/343056/could-not-find-the-root-block-device-in-gentoo) does not apply, as it was a USB driver problem. And [this one](https://forums.gentoo.org/viewtopic-t-919588-start-0.html) was not of any help either.
Benjamin Chausse (107 rep)
Jun 1, 2019, 09:15 AM • Last activity: Jun 28, 2025, 02:05 AM
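A few checks worth running from the prompt this error drops you into; the real_root= spelling is an assumption about stock genkernel initramfs behavior, so treat this as a sketch:

```
# In the initramfs rescue shell:
ls /dev/nvme*           # empty output means the kernel/initramfs lacks NVMe support
cat /proc/filesystems   # confirm xfs is listed
# genkernel's init traditionally reads real_root= rather than root=, e.g.:
#   linux /kernel-genkernel-x86_64-4.19.44-gentoo real_root=/dev/nvme0n1p4 rootfstype=xfs ...
```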
0 votes
1 answer
37 views
Booting in maintenance mode does not show all LVs (for fsck)
This is a follow-up to this question: https://unix.stackexchange.com/questions/797373/how-to-run-fsck-with-lvm

I have a RHEL 9 server which uses LVM on /dev/sda3. There are several LVs on a single VG. After booting in maintenance mode (rd.break kernel option at the GRUB boot line), I'd like to run fsck on all partitions. However, /dev/mapper/ and /dev/myvg/ only list the root and swap LVs, which are the only ones mentioned in rd.lvm.lv= in the kernel options. Running xfs_repair /dev/sda3 returns an error:

> Cannot open /dev/sda3: Device or resource busy

How can I find (and run a filesystem check on) the other LVs?
dr_ (32068 rep)
Jun 25, 2025, 12:52 PM • Last activity: Jun 26, 2025, 12:23 AM
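A minimal sketch for the situation above: activate the whole VG from the emergency shell, then check LVs individually rather than the PV. The LV name "data" is a placeholder; the lvm wrapper is usually present in the dracut environment, though xfs_repair may not be:

```
lvm vgscan
lvm vgchange -ay           # activate every LV, not just those named in rd.lvm.lv=
lvm lvs -o lv_path         # list the device paths that appeared
xfs_repair /dev/myvg/data  # per-LV check; /dev/sda3 is the PV and stays "busy"
```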
1 vote
2 answers
1924 views
XFS filesystem has 8TB free but fails with "No space left on device" for small files
I am using an XFS file system at work for storing image-processing data. Currently, it has around 8.8T of free space:

```
/dev/sdh1  106T  97T  8.8T  92%
```

While there are plans to move some of the data to tape and make room, it won't happen until next week. Currently, I keep running into the "No space left on device" error quite regularly. Usually the images transferred are around 128 MB in size, and there are around 100-500 of them at a time. Is there anything specific to the file system that is making these ~8 TB of free space unusable? On my end, I was able to verify that I can use at least 8 TB of this space by using fallocate to create really large files of around a TB. What am I missing? Are there any obvious file-system-level checks that I need to do? For your reference, here is the output of xfs_info for the filesystem:

```
meta-data=/dev/sdh1        isize=256    agcount=106, agsize=268435455 blks
         =                 sectsz=512   attr=2, projid32bit=0
data     =                 bsize=4096   blocks=28319810304, imaxpct=1
         =                 sunit=0      swidth=0 blks
naming   =version 2        bsize=4096   ascii-ci=0
log      =internal         bsize=4096   blocks=521728, version=2
         =                 sectsz=512   sunit=0 blks, lazy-count=1
realtime =none             extsz=4096   blocks=0, rtextents=0
```

In order to reproduce the error, I wrote a simple shell script that creates a large number (10k) of small files (1M in size), and it fails with the following error:

```
fallocate: temfile-7464: open failed: No space left on device
```

Here is the output of df -i before the script is run:

```
/dev/sdh1  4531169600  648793  4530520807  1% /jumbo/K2LEGINON
```

and after:

```
/dev/sdh1  4531169600  656256  4530513344  1% /jumbo/K2LEGINON
```

It failed after creating around ~7500 files, which amounts to ~7.3 GB.
feverDream (341 rep)
Aug 29, 2016, 04:58 AM • Last activity: Jun 15, 2025, 10:54 AM
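A diagnostic sketch for the symptom above: with the legacy inode32 allocator (plausible here given the 256-byte inodes and age of the filesystem), new inodes must be placed in the low allocation groups, so file creation can fail with ENOSPC while df still shows terabytes free:

```
xfs_db -r -c "freesp -s" /dev/sdh1   # free-space histogram; look for exhaustion of large extents
grep sdh1 /proc/mounts               # is inode64 among the mount options?
# If not, remounting with -o inode64 is the commonly suggested fix.
```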
0 votes
1 answer
10393 views
xfs_repair could not find valid secondary superblock
I had 3 bad tracks on my hard drive, so I used [Disk Genius](https://www.diskgenius.com/) to fix it; of course, the data near the bad tracks was wiped out. Then I booted from a Linux rescue disk and ran:

```
xfs_repair /dev/sda1
xfs_repair /dev/sda2
```

sda1 went through OK, but on sda2 at some point it says "Sorry, could not find valid secondary superblock" and can't go all the way through. What other method do I have to fix it?
shadow_wxh (191 rep)
Mar 31, 2018, 11:00 AM • Last activity: Jun 14, 2025, 02:00 AM
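Given physically damaged media, a commonly recommended sequence is to repair a rescued image rather than the failing disk itself; paths below are placeholders:

```
sudo ddrescue /dev/sda2 /mnt/backup/sda2.img /mnt/backup/sda2.map  # image what is still readable
xfs_repair -n /mnt/backup/sda2.img   # dry run against the image
xfs_repair /mnt/backup/sda2.img      # repair the image; the original stays untouched
```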
11 votes
1 answer
791 views
df shows 539G used on /apps, but du shows only 47G — unexplained disk usage discrepancy
I have a problem and I don't know what the problem is.

```
df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sda4                  5.0G  113M  4.9G   3% /
/dev/mapper/appsvg-lvapps  690G  539G  152G  79% /apps
```

When I try with the du command, it only shows 47G:

```
du -sh /apps
47G     /apps
```

I have:

- Checked for deleted files still held open by processes using lsof +L1; found some deleted files, but none accounting for the large disk usage -- it's not that.
- Checked for hidden or trash folders in /apps -- found none that explain the usage.
- Verified the filesystem mount point and type (xfs on /apps) -- no nested mounts inside /apps.

```
findmnt -R /apps

TARGET SOURCE                    FSTYPE OPTIONS
/apps  /dev/mapper/appsvg-lvapps xfs    rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota
```

- Checked inode usage; inodes are plentiful, so no inode exhaustion or excessive small files.
- Ran an xfs_quota report on /apps to check for quotas -- no output; quotas are confirmed disabled (noquota mount option).
- Found no bind mounts or hidden mount points inside /apps that might cause discrepancies.
- du output is consistent across normal and hidden files, so no hidden large directories. I ruled out metadata mismatch (du and find + du totals match).
- There are no snapshots.

I suspected that the high usage might be due to data written to the mount point before the /apps filesystem was mounted, meaning files were written to the underlying directory, but since / itself shows low usage, that's not the case either.

Does someone have an idea what this could be? I've already ruled out common causes like deleted files, loop devices, submounts, hidden files, etc. The discrepancy **is over 500G**, which suggests something beyond the usual scope. Or can someone explain to me how this is normal behavior? I understand that differences of several gigabytes can happen, but a discrepancy of nearly 500GB -- how is that normal?

UPDATE: Thanks everyone for the help; definitely an interesting issue. In the end, the problem was resolved when the application team restarted the application. I'll keep monitoring the situation and will report back with any updates.
Mina Krstic (119 rep)
May 22, 2025, 10:24 AM • Last activity: Jun 11, 2025, 09:37 AM
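Given the resolution (an application restart freed the space), the usual mechanism is deleted-but-still-open files, which df counts and du cannot see; lsof can miss processes running in other mount namespaces. A sketch of two ways to catch them, with the /apps path taken from the question:

```
sudo lsof +L1 /apps                                      # open files with zero remaining links
# A kernel-side check that does not depend on lsof:
sudo find /proc/[0-9]*/fd -lname '/apps/*(deleted)' 2>/dev/null
```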
0 votes
1 answer
1927 views
Proper way to extend /opt with LVM and an XFS filesystem
I am keen to know the proper way to extend /opt after adding an additional 5 GB hard disk, with LVM and an XFS filesystem. Kindly advise. Many thanks.
Nick eric adelee (49 rep)
May 24, 2022, 05:45 AM • Last activity: Jun 1, 2025, 12:07 AM
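A minimal sketch of the usual sequence, assuming the new disk appears as /dev/sdb and /opt lives on an LV named opt in VG vg0 (all names are placeholders):

```
sudo pvcreate /dev/sdb                     # initialize the new disk as a PV
sudo vgextend vg0 /dev/sdb                 # add it to the volume group
sudo lvextend -l +100%FREE /dev/vg0/opt    # grow the LV into the new space
sudo xfs_growfs /opt                       # grow XFS online; XFS can grow but never shrink
```

On recent LVM, `lvextend -r -l +100%FREE /dev/vg0/opt` runs the filesystem grow step for you.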
1 vote
1 answer
45 views
Is enlarging an XFS file system a heavy operation?
If there is some data in the file system, does it have to be moved?
jarno (738 rep)
May 24, 2025, 08:26 AM • Last activity: May 24, 2025, 10:59 AM
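For context: growing XFS appends new allocation groups at the end of the device and updates metadata; existing blocks are not relocated, so the operation is quick and runs online. A sketch, with /mnt/data as a placeholder mountpoint:

```
sudo xfs_growfs /mnt/data   # grow to fill the (already enlarged) block device
xfs_info /mnt/data          # agcount increases; existing data stays in place
```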
25 votes
5 answers
121598 views
How to correctly install GRUB on a soft RAID 1?
In my setup, I have two disks that are each formatted in the following way (GPT):

```
1) 1MB    BIOS_BOOT
2) 300MB  LINUX_RAID
3) *      LINUX_RAID
```

The boot partitions are mapped in /dev/md0, the rootfs in /dev/md1. md0 is formatted with ext2, md1 with XFS. (I understand that formatting has to be done on the md devices and not on sd; please tell me if this is wrong.)

How do I set up GRUB correctly so that if one drive fails, the other will still boot? And, by extension, so that a replacement drive will automatically include GRUB too? If this is even possible, of course.
vic (2302 rep)
Sep 17, 2015, 05:40 PM • Last activity: May 12, 2025, 02:31 PM
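The standard approach for BIOS-booting GPT disks with mdadm RAID1 is to install GRUB's boot code on every member disk, since the BIOS_BOOT partition sits outside the array and is not mirrored automatically; a sketch:

```
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo grub-mkconfig -o /boot/grub/grub.cfg
# After swapping in a replacement disk and re-adding it to the arrays,
# run grub-install on the new disk as well; mdadm does not copy boot code.
```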
10 votes
2 answers
1341 views
md raid5: translate md internal sector numbers to offsets
**TL;DR summary**: Translate an md sector number into offset(s) within the /dev/mdX device, and how to investigate it with xfs_db. The sector number is from sh->sector in linux/drivers/md/raid5.c:handle_parity_checks5(). I don't know MD internals, so I don't know exactly what to do with the output from the printk logging I added. Offsets into the component devices (for dd or a hex editor/viewer) would also be interesting. I suppose I should ask this on the linux-raid mailing list. Is it subscribers-only, or can I post without subscribing?

I have XFS directly on top of MD RAID5 of 4 disks in my desktop (no LVM). A recent scrub detected a non-zero mismatch_cnt (8 in fact, because md operates on 4kiB pages at a time). This is a RAID5, not RAID1/RAID10 where mismatch_cnt != 0 can happen during normal operation. (The other links at the bottom of this wiki page might be useful to some people.) I could just blindly repair, but then I'd have no idea which file to check for possible corruption, besides losing any chance to choose which way to reconstruct. Frostschutz's answer on a similar question is the only suggestion I found for tracking back to a difference in the filesystem. It's cumbersome and slow, and I'd rather use something better to narrow it down to a few files first.

### Kernel patch to add logging

Bizarrely, md's check feature doesn't report where an error was found. **I added a printk in md/raid5.c to log sh->sector in the if branch that increments mddev->resync_mismatches in handle_parity_checks5()** (tiny patch published on GitHub, originally based on 4.5-rc4 from kernel.org). For this to be OK for general use, it would probably need to avoid flooding the logs in repairs with a lot of mismatches (maybe only log if the new value of resync_mismatches is below some threshold?).

My questions:

- Is sh->sector * 512 a linear address inside /dev/md/t-r5 (aka /dev/md125)? Or is it a sector number within each component device (so it refers to three data and one parity sector)? I'm guessing the latter, since a parity mismatch in RAID5 means N-1 sectors of the md device are in peril, offset from each other by the stripe unit.
- Is sector 0 the very start of the component device, or is it the sector after the superblock or something?
- Was there more information in handle_parity_checks5() that I should have calculated / logged?

If I wanted to get just the mismatching blocks, is this correct?

```
dd if=/dev/sda6 of=mmblock.0 bs=512 count=8 skip=4294708224
dd if=/dev/sdb6 of=mmblock.1 bs=512 count=8 skip=4294708224
dd if=/dev/sdc6 of=mmblock.2 bs=512 count=8 skip=4294708224
dd if=/dev/sdd  of=mmblock.3 bs=512 count=8 skip=4294708224  ## not a typo: my 4th component is a smaller full-disk

# i.e. sec_block() { for dev in {a,b,c}6 d; do dd if=/dev/sd"$dev" of="sec$1.$dev" skip="$1" bs=512 count=8; done; }; sec_block 123456
```

I'm guessing not, because I get 4k of zeros from all four RAID components, and 0^0 == 0, so that should be the correct parity, right?

One other place I've seen mention of using sector addresses in md is for sync_min and sync_max (in sysfs). Neil Brown on the linux-raid list, in response to a question about a failed drive with sector numbers from hdrecover, used the full-disk sector number as an MD sector number. That's not right, is it? Wouldn't md sector numbers be relative to the component devices (partitions in that case), not the full device that the partition is a part of?

### linear sector to XFS filename:

Before realizing that the md sector number was probably for the components, not the RAID device, I tried using it in read-only xfs_db. Dave Chinner's very brief suggestion on how to find how XFS is using a given block didn't seem to work at all for me. (I would have expected some kind of result, for some sector, since the number shouldn't be beyond the end of the device even if it's not the mismatched sector.)

```
# xfs_db -r /dev/md/t-r5
xfs_db> convert daddr 4294708224 fsblock
0x29ad5e00 (699227648)
xfs_db> blockget -nv -b 699227648
xfs_db> blockuse -n        # with or without -c 8
must run blockget first
```

Huh? What am I doing wrong here? I guess this should be a separate question. I'll replace this with a link if/when I ask it or find an answer to this part somewhere else.

My RAID5 is essentially idle, with no write activity and minimal read (and noatime, so reads aren't producing writes).

### Extra stuff about my setup, nothing important here

Many of my files are video or other compressed data that give an effective way to tell whether the data is correct or not (either internal checksums in the file format, or just whether it decodes without errors). That would make this read-only loopback method viable, once I know which file to check. I didn't want to run a 4-way diff of every file in the filesystem to find the mismatch first, though, when the kernel has the necessary information while checking and could easily log it.

My /proc/mdstat for my bulk-data array:

```
md125 : active raid5 sdd[3] sda6[0] sdb6[1] sdc6[4]
      7325273088 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/19 pages [0KB], 65536KB chunk
```

It's on partitions on three Toshiba 3TB drives, and a non-partitioned WD25EZRS green-power (slow) drive which I'm replacing with another Toshiba. (Using [mdadm --replace](https://unix.stackexchange.com/a/104052/79808) to do it online with no gaps in redundancy.) I realized after one copy that I should check the RAID health before as well as after, to detect problems. That's when I detected the mismatch. It's possible it's been around for a long time, since I had some crashes almost a year ago, but I don't have old logs and mdadm doesn't seem to send mail about this by default (Ubuntu 15.10).

My other filesystems are on RAID10f2 devices made from earlier partitions on the three larger HDs (and RAID0 for /var/tmp). The RAID5 is just for bulk storage, not /home or /.

My drives are all fine: SMART error counts are 0, as are the bad-block counters on all drives, and short + long SMART self-tests passed.

near-duplicates of this question which don't have answers:

- https://unix.stackexchange.com/questions/256514/what-chunks-are-mismatched-in-a-linux-md-array
- http://www.spinics.net/lists/raid/msg49459.html
- MDADM mismatch_cnt > 0. Any way to identify which blocks are in disagreement?
- Other things already linked inline, but most notably frostschutz's read-only loopback idea.
- scrubbing on the Arch wiki RAID page
Peter Cordes (6645 rep)
Feb 29, 2016, 04:01 AM • Last activity: May 6, 2025, 03:40 PM
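A hedged sketch of the offset arithmetic, under the question's own guess that sh->sector counts sectors within each component device, starting after the member's data offset (DATA_OFFSET and SH_SECTOR are placeholders to fill in; verify against your own superblock version):

```
# Print each member's data offset (sectors reserved for the v1.2 superblock/bitmap):
mdadm -E /dev/sda6 | grep -i 'data offset'
# Byte offset into a member = (data_offset + sh_sector) * 512.
# Dump one 4 KiB page from a member for inspection:
dd if=/dev/sda6 bs=512 skip=$((DATA_OFFSET + SH_SECTOR)) count=8 | hexdump -C | head
```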
2 votes
0 answers
67 views
Adding a Label to XFS devlog partition
**Situation**

I am setting up an XFS filesystem with an external "devlog" partition. The reason I'm doing this is to save on costs a bit by using a large slow drive for data storage and a small fast drive for the log section. Created like so:

```
# -m: global_metadata_options; -f: force overwrite; -l: log_section_options; -n: naming_options; -L: label
# finobt=1: separate free inode btree: this allows things to run much faster as inodes are created and discarded.
# ftype=1 : overlayfs will not work unless you set ftype=1.
mkfs.xfs -m finobt=1 -l logdev="/dev/nvme2n1p1" -n ftype=1 -L "Backup_Data" "/dev/nvme1n1"
```

And resulting in:

```
# lsblk -o name,fstype,label
NAME        FSTYPE           LABEL
nvme1n1     xfs              Backup_Data
nvme2n1
└─nvme2n1p1 xfs_external_log
```

I am labeling the XFS file system so it can be more reliably mounted by fstab:

```
LABEL=Backup_Data /data xfs nodiratime,noatime,logdev=/dev/nvme2n1p1 0 2
```

**Problem**

You may see my concern here. I can label the main XFS drive (Backup_Data), but I cannot find any documentation (or much of anything, really) on whether I can label, and use that label to define, the logdev option (/dev/nvme2n1p1). So if for some reason my drive names change, the XFS data section could be found, but the external log would not be found, and the mount would fail.

**Wishful Solution**

I was hoping to do something along the lines of:

```
NAME        FSTYPE           LABEL
nvme1n1     xfs              Backup_Data
nvme2n1
└─nvme2n1p1 xfs_external_log Backup_Log
```

```
LABEL=Backup_Data /data xfs nodiratime,noatime,logdev=Backup_Log 0 2
```

But I've no clue if I can even label the logdev, since it isn't exactly a filesystem, let alone use labels in the logdev option.

**Research**

Beyond the man page for xfs and a couple of basic how-tos that amount to "you can use an external log!", I am not finding much on logdev. Likely on me for not knowing quite where to look, but as far as I can tell there isn't much out there.

> logdev=device and rtdev=device
>
> Use an external log (metadata journal) and/or real-time device. An XFS filesystem has up to three parts: a data section, a log section, and a real-time section. The real-time section is optional, and the log section can be separate from the data section or contained within it.

I'm guessing there isn't a good solution for this, but thought wiser minds than mine may know an answer. Thank you for your time either way!
Nathan Woehr (21 rep)
Apr 25, 2025, 08:53 PM
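The external log does not appear to carry a filesystem label, but a GPT partition name gives an equally stable device path under /dev/disk/by-partlabel; a sketch using sgdisk, with the partition number assumed to be 1:

```
sudo sgdisk -c 1:Backup_Log /dev/nvme2n1   # set the GPT partition name
# then in fstab:
# LABEL=Backup_Data /data xfs nodiratime,noatime,logdev=/dev/disk/by-partlabel/Backup_Log 0 2
```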
2 votes
2 answers
1033 views
"xfs_copy" equivalent for ext4?
I love xfs_copy's ability to clone an XFS file system from disk to disk. Is there an equivalent tool to clone an ext4 file system? I've tried dump/restore, but it requires the destination file system to be created and mounted. So it is not an equivalent to xfs_copy. What is the xfs_copy equivalent for ext4?
user3501019 (23 rep)
Jan 8, 2017, 08:20 AM • Last activity: Apr 12, 2025, 05:11 PM
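A commonly cited near-equivalent, assuming a reasonably recent e2fsprogs: e2image in raw mode copies only the blocks in use, straight from one block device to another, with no need to pre-create or mount the destination:

```
sudo e2image -ra -p /dev/sda1 /dev/sdb1  # raw clone of used blocks only, with progress
sudo tune2fs -U random /dev/sdb1         # the clone shares the UUID; regenerate it if both will be attached
```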
3 votes
0 answers
99 views
Slow Linux file access to /tmp
```
time touch /tmp/test.dat

real    0m1.03s
user    0m0.00s
sys     0m1.02s
```

A full second of sys-mode time to create a file in /tmp. That can become unbearable for ksh scripts that open dozens of files in /tmp for subshell handling. strace shows the time is spent in the openat call:

```
strace -tttT touch /tmp/test.dat
. . . [clip] . . .
1737560680.004656 close(3) = 0
1737560680.004750 openat(AT_FDCWD, "/tmp/test.dat", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3
1737560681.375253 dup2(3, 0) = 0
. . . [clip] . . .
```

The CPU is 70-80% idle during these tests. Plenty of memory. /tmp is only 1% used, though it does have a lot of empty directories beneath it (5,268). The server has been up 96 days. We had this problem on another server, which panicked for some reason and rebooted. After the reboot, the problem was gone; access to /tmp was fast again. So something over time is causing /tmp access to get slower and slower, and a reboot clears it.

OS version: 5.4.17-2136.322.6.4.el8uek.x86_64, built by Oracle (this is an Exadata compute node).

/tmp mount:

```
/dev/mapper/VGExaDb-LVDbTmp xfs 45G 45M 45G 1% /tmp
```

Oracle support threw up their hands (didn't really try, to be honest). Any Unix gurus out there have ideas on what I can look into? What kinds of things can cause this to be so slow?
Paul W (183 rep)
Jan 22, 2025, 03:54 PM • Last activity: Jan 22, 2025, 08:25 PM
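A profiling sketch to see where that second of system time actually goes (assumes the perf and audit tools are installed; neither is guaranteed on an Exadata node):

```
sudo perf trace -s touch /tmp/test.dat                       # per-syscall latency summary
sudo perf record -g touch /tmp/test.dat && sudo perf report  # kernel call stacks during the slow open
sudo auditctl -l | grep -i tmp                               # audit/security hooks on /tmp can add syscall cost
```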
1 vote
0 answers
107 views
SMB writes fail, system thinks a file is a directory
Got a new NAS server I'm testing. It's basically CentOS 7.5.1804 with Areca hardware RAID cards, lots of HDDs, and XFS volumes. Sporadic copying tests often succeed, but whenever I'm running large data copies from our SAN, they invariably eventually fail in the following manner:

```
rsync -ah --exclude="._*" --exclude=".DS_Store" --quiet --progress /Volumes/BEST* /Volumes/SAN_Backups/BEST/
rsync: writefd_unbuffered failed to write 32768 bytes [sender]: Broken pipe (32)
rsync: write failed on "/Volumes/NEXIS_Backups/BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/02_YG_MAR_v2.mov": Is a directory (21)
rsync: connection unexpectedly closed (4116 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at /AppleInternal/Library/BuildRoots/46298ee5-4a1c-11ef-a181-1aec23608739/Library/Caches/com.apple.xbs/Sources/rsync/rsync/io.c(453) [sender=2.6.9]
```

That is: writefd_unbuffered fails to write, broken pipe (32); the write fails with "Is a directory (21)" on a non-directory file; and the connection unexpectedly closes, rsync error code 12.

- I'm running the rsync on an Apple Silicon Mac, as you noticed. The Mac has our SAN mounted, as well as this CentOS server over SMB.
- Needless to say, "02_YG_MAR_v2.mov" is a regular file, not a directory.
- If I re-run the command, it will process the errored file correctly and fail on another file in the same manner. Rinse and repeat.
- Running the same rsync command, on the same source data, on the same system, to a different CentOS (ZFS) NAS server does not cause these errors. That server is quite different; it just supports my assertion that the problem is in this new NAS and not with the Mac client, the rsync code, the source data, or the SAN.
- So far we tried blowing away and recreating the XFS filesystem (including the volume group and logical volume), and checked the RAID controller for errors. No change.

Any insights? I haven't found any info online to go off of about file operations failing with the last file being mistaken for a directory…

Update: We gathered more verbose file I/O logs from the server end, and don't see any errors when the copying breaks:

```
editor closed file BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/02_YG_MAR_v1.mov (numopen=4) NT_STATUS_OK
unix_mode(BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/.02_YG_MAR_v2.mov.kNey0S) inherit mode 40777
editor opened file BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/.02_YG_MAR_v2.mov.kNey0S read=Yes write=Yes (numopen=4)
editor closed file BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/.02_YG_MAR_v2.mov.kNey0S (numopen=3) NT_STATUS_OK
unix_mode(BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/.02_YG_MAR_v2.mov.kNey0S) inherit mode 40777
editor opened file BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/.02_YG_MAR_v2.mov.kNey0S read=No write=No (numopen=4)
editor closed file BEST/BEST_Media/FROM CLIENT/091224_BONUS CLEAN/.02_YG_MAR_v2.mov.kNey0S (numopen=3) NT_STATUS_OK
```

So it appears the client gives up before even trying to rename the temp file to the final name. Which makes it look like a macOS problem, not the server, except that I've never seen this happen before; e.g., another CentOS NAS server we have doesn't exhibit such an issue.

Next, I realized the rsync bundled with macOS to this day is ancient (2.6.9, from 18 years ago!) and tried v3.3.0. But that one destroyed the computer every time… in the exact same manner that's been documented around fairly well. For example: https://apple.stackexchange.com/questions/421007/rsync-3-2-3-keeps-crashing-on-mac-mini-m1

The conclusion that SMB on macOS releases from the past 2 years is broken and unusable would be senseless too. So I'm just more stumped…
Drew (111 rep)
Nov 27, 2024, 07:45 PM • Last activity: Dec 19, 2024, 11:38 PM
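One way to narrow down which side breaks first, assuming Samba serves the share on the CentOS box (log file names depend on the "log file" setting in smb.conf):

```
sudo smbcontrol smbd debug 10       # raise the debug level on the running smbd
sudo tail -f /var/log/samba/log.*   # watch the create/rename sequence around a failure
sudo smbcontrol smbd debug 1        # drop it back afterwards; level 10 is extremely verbose
```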
-1 votes
1 answer
1516 views
LVM: lvcreate detected an XFS filesystem signature on the logical volume; how to force creation or wipe it automatically
The following warning occurs because lvcreate has detected an XFS filesystem signature on the logical volume lv_rocket_lvm, which means it may already contain data formatted with XFS. The lvcreate command is asking if we want to wipe that signature in order to create the new logical volume.

As one solution, we can force the creation of the logical volume and wipe any existing data (including the filesystem signature) by answering y (for yes) when prompted. However, because we are running this command in a script, we want to bypass the confirmation, so we use the --yes flag to force it (still not tested on my Linux OS).

Example from our RHEL 7.x server:

```
lvcreate -n lv_rocket_lvm --size 100g VGlinux
WARNING: xfs signature detected on /dev/VGlinux/lv_rocket at offset 0. Wipe it? [y/n]:
```

Example of my suggestion:

```
lvcreate -n lv_rocket_lvm --size 100G --yes VGlinux
```

The other option is to erase the signature separately, like:

```
wipefs --all --force /dev/VGlinux/lv_rocket
```

and then:

```
lvcreate -n lv_rocket_lvm --size 100g VGlinux
```

Besides my solution, I want to know if there are other options.

Note: lv_rocket_lvm does not appear in lvdisplay:

```
lvdisplay | grep "LV Path"
  LV Path    /dev/VGlinux/lvm_swap
  LV Path    /dev/VGlinux/lvm_var
  LV Path    /dev/VGlinux/lvm_root
```
yael (13936 rep)
Dec 7, 2024, 07:45 PM • Last activity: Dec 8, 2024, 02:19 AM
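For scripting, lvcreate has two relevant switches (see lvcreate(8)); a sketch against the VG from the question:

```
lvcreate --yes -n lv_rocket_lvm -L 100G VGlinux                  # auto-answer yes to all prompts
lvcreate --wipesignatures y -n lv_rocket_lvm -L 100G VGlinux     # explicitly wipe detected signatures
```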
2 votes
3 answers
7464 views
Filesystem with checksums?
I have a single hard drive. I want to use a filesystem that will give me less storage space, but as a tradeoff, give me checksums or any other method to help preserve data integrity. It is my understanding that something like ext4 or xfs will not do this, and thus you can suffer from silent data corruption, aka bitrot. zfs looks like an excellent choice, but everything I have read says you need more than one disk to use it. Why is this? I realize having only one disk will not tolerate a single disk failure, but that is what multiple backup schemes are for. What backups *won't* help with is something like bitrot. So can I use zfs on a single hard drive for the single purpose of preventing bitrot? If not, what do you recommend?
gfdjjrtiejo (43 rep)
May 29, 2022, 11:19 PM • Last activity: Nov 25, 2024, 10:03 PM
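One way to get what the question asks for on a single disk, sketched with a placeholder device: Btrfs checksums both data and metadata, and can keep duplicate copies on one device so a scrub can actually repair bitrot rather than merely detect it, at the cost of half the space for the duplicated profile:

```
sudo mkfs.btrfs -m dup -d dup /dev/sdX   # duplicate metadata and data on the single disk
sudo btrfs scrub start /mountpoint       # verify checksums; repairs from the second copy where possible
```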
0 votes
1 answer
1051 views
XFS superblock not found
I had an old drive beginning to fail and my backups were no good apparently.
I got a new drive larger than the failed one and was able to ddrescue it over, with bad/missing superblocks of course.
Through some fiddling around I was able to mount the new drive, but there were errors of course displayed on a directory listing. I began to copy what I could, and I'm assuming subdirectories had issues, because it errored mid-copy on the new drive and the partition stopped responding.
I cannot get xfs_repair to salvage the new drive; it says no secondary superblocks were found. What are my options here? I suppose I could ddrescue off the old drive again, which unfortunately took over a day last time, but even then, how do I proceed? Running testdisk/photorec would be a nightmare.
MaKR (181 rep)
Mar 2, 2023, 10:40 PM • Last activity: Oct 2, 2024, 08:25 AM
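Since the rescue went onto a larger disk, it's worth confirming the partition geometry still points at the filesystem before attempting deeper recovery; device names below are placeholders:

```
sudo fdisk -l /dev/sdX /dev/sdY                       # compare start sectors, old disk vs new
sudo xfs_db -r -c 'sb 0' -c 'p magicnum' /dev/sdY1    # 0x58465342 ("XFSB") means the primary superblock is where expected
```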
6 votes
2 answers
1664 views
Are ext4 and XFS only for usage with internal file systems?
I am attempting to store some large files, and I thought an encrypted ext4 partition would be excellent. However, the GNOME Disk Utility appears to describe ext4 as being for internal disks, and XFS as being only for Linux filesystems. Can I use them both safely with removable media? Is there one that is more utilitarian for this usage? What makes them different?

(Screenshots: GNOME Disks format dialogs for ext4 and XFS.)

Thank you so much for any information you can provide.
Kitty Cat (157 rep)
Sep 21, 2024, 07:40 PM • Last activity: Sep 22, 2024, 09:23 PM
0 votes
1 answer
106 views
reflink=always space usage
I've got a large directory structure (an email Maildir) with 705430 files and 1719 directories.

Using `cp -rP --reflink=always $source $destination` uses a fair chunk of space, approximately 2-5 GB. This is not a CoW change, because the mail volume is an order of magnitude less, and e.g. running `du -chs` reveals zero change between the start time and end time of the reflink copy.

So I have two questions:

1. Is this behaviour expected? I thought reflink was a zero-cost copy, irrespective of directory tree size?
2. Is there a more efficient way to do this? Should I consider LVM snapshots, for example?

This is XFS on Debian Bookworm.
Little Code (491 rep)
Aug 5, 2024, 07:30 PM • Last activity: Aug 6, 2024, 11:05 AM
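The observed usage is consistent with per-file metadata rather than data: reflinked copies share data extents, but each copy still needs its own inode, directory entry, and extent map. A back-of-envelope check, assuming each new file costs roughly one 4 KiB metadata block:

```
echo "705430 * 4096 / 2^30" | bc -l                # ≈ 2.69 GiB, in line with the observed 2-5 GB
filefrag -v "$destination/some/file" | head        # placeholder path; reflinked extents show the "shared" flag
```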