
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
0 answers
122 views
How to get the stats of an lvm cache?
I am using an lvm cache in the most usual combination (a small, fast SSD in front of a huge, slow HDD). It is simply awesome. However, I have not found a way to know how many blocks are actually cached, and how. What is particularly interesting:

1. Size of the read cache. These are the blocks on the SSD which are the same as the corresponding HDD blocks.
2. Size of the write cache. These blocks are the result of a write operation to the merged device; they differ between the SSD and the HDD, and will need to be synced out once there are resources for that.

My research found that lvm-cache uses device mapper below, more exactly the dm-cache driver. A `table` command is enough to get some numbers, but there is no way to know exactly which number means what. I think there should exist some lvm-level solution for the task.
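Until an LVM-level report turns up, the `dmsetup status` line can be decoded by hand against the kernel's dm-cache documentation: after the start/length/target prefix, the fields run metadata-block-size, used/total metadata blocks, cache-block-size (in 512-byte sectors), used/total cache blocks, read hits/misses, write hits/misses, demotions, promotions, and then the dirty-block count, which is exactly the "write cache still to be synced" number. A sketch over a made-up status line (field positions follow that documentation and should be re-checked against your kernel):

```shell
# Hypothetical output of `dmsetup status <cached-lv>`; on a real system,
# pipe the actual command in instead of this sample string.
sample='0 1048576000 cache 8 1018/4096 512 81920/131072 3491 2178 1906 530 0 0 1400 3 metadata2 writeback no_discard_passdown smq 0 rw'
report=$(printf '%s\n' "$sample" | awk '{
    split($7, c, "/")            # used/total cache blocks
    blk = $6                     # cache block size in 512-byte sectors
    printf "cached blocks: %d of %d (%d MiB of %d MiB)\n", c[1], c[2], c[1]*blk/2048, c[2]*blk/2048
    printf "dirty blocks:  %d (%d MiB awaiting writeback)\n", $14, $14*blk/2048
    printf "clean blocks:  %d\n", c[1] - $14
}')
printf '%s\n' "$report"
```

The dirty count answers point 2 directly; clean (cached minus dirty) approximates point 1.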
peterh (10448 rep)
May 25, 2025, 02:24 PM
3 votes
1 answer
5227 views
device-mapper: reload ioctl on cache1 failed: Device or resource busy
When I run the below command while setting up dm-cache on my CentOS system, I receive the error:

device-mapper: reload ioctl on cache1 failed: Device or resource busy
Command failed

The command is:

dmsetup create 'cache1' --table '0 195309568 cache /dev/sdb /dev/sda 512 1 writethrough default 0'

Does anyone have an idea about this error, or has anyone faced it while setting up dm-cache? My dmesg output is:

[1907480.058991] device-mapper: table: 253:3: cache: Error opening metadata device
[1907480.058996] device-mapper: ioctl: error adding target to table
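One plausible reading of the "Error opening metadata device" line is that the table is one device short: the cache target takes three devices (metadata, cache, origin), so with only /dev/sdb and /dev/sda listed, the "512" gets parsed as the origin device. A hedged sketch of a well-formed table, assuming the SSD is split into hypothetical metadata and cache partitions:

```shell
META=/dev/sdb1     # small metadata device (illustrative partitioning of the SSD)
CACHE=/dev/sdb2    # fast cache device
ORIGIN=/dev/sda    # slow origin device
ORIGIN_SECTORS=195309568
# cache target layout: <metadata> <cache> <origin> <block size> <#features> <features> <policy> <#policy args>
TABLE="0 $ORIGIN_SECTORS cache $META $CACHE $ORIGIN 512 1 writethrough default 0"
echo "$TABLE"
# With root, the actual step would then be:
#   dmsetup create cache1 --table "$TABLE"
```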
arpit joshi (445 rep)
Apr 18, 2016, 11:08 PM • Last activity: May 17, 2025, 06:06 PM
0 votes
0 answers
269 views
Systemd shutdown hanging for 90 seconds due to luks/btrfs
Ever since I set up my LUKS-encrypted BTRFS RAID1 between my two NVMe drives on Debian 12, the shutdown process has taken way too long, about a minute and a half. I had to get a video of the shutdown screen before it powers off, which led me to the culprit: the dm-crypt devices are not being unmounted. I realize that / being on the unlocked BTRFS file systems must complicate the unmounting process during shutdown, but waiting this long cannot be the only way. I wouldn't care too much, as other posts about this issue suggest, except that waiting this long for a shutdown is annoying, and waiting this long for a reboot is frustrating. It's also a laptop, so waiting this amount of time before putting it away in my bag is a problem. I am guessing that this is not supposed to happen; I wouldn't think people with LUKS and BTRFS are truly waiting this long for reboot/shutdown.

1. In my particular disk setup, is there any risk in these devices not being unmounted properly?
2. Is there any way to remove the huge delay in shutdown? I attempted to change systemd's watchdog timer to 3 seconds, but it did not affect the shutdown time.
3. Why doesn't systemd handle this already? On a normal system, "/" would be on an EXT4 filesystem that would also need to be unmounted at shutdown.

lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme0n1         259:0    0 931.5G  0 disk
├─nvme0n1p1     259:2    0   949M  0 part  /boot/efi
└─nvme0n1p2     259:3    0 930.6G  0 part
  └─crypt_nvme0 254:0    0 930.6G  0 crypt /
nvme1n1         259:1    0 931.5G  0 disk
├─nvme1n1p1     259:4    0   949M  0 part
└─nvme1n1p2     259:5    0 930.6G  0 part
  └─crypt_nvme1 254:1    0 930.6G  0 crypt
cat /etc/fstab
# MAIN
# Primary efi partition - secondary below
UUID=A490-28B5                            /boot/efi             vfat    umask=0077,noexec,nodev,nosuid  0       1

# BTRFS RAID 1 (UUID applies to both disks)
UUID=15767954-1ec3-44aa-b1e3-b890ca937277 /             btrfs   defaults,subvol=@,ssd,noatime,space_cache=v2,commit=120,compress=zstd 0 0
cat /etc/crypttab
crypt_nvme0 UUID=63780057-f91e-426d-9be3-84383fd9b534 none luks
crypt_nvme1 UUID=c434a425-6552-4036-ae34-4f8c1c728d9a none luks
journald logs before next boot:
Nov 17 15:35:36 debian systemd-cryptsetup: device-mapper: remove ioctl on crypt_nvme1  failed: Device or resource busy
Nov 17 15:35:36 debian systemd-cryptsetup: Device crypt_nvme0 is still in use.
Nov 17 15:35:36 debian systemd-cryptsetup: Failed to deactivate: Device or resource busy
Nov 17 15:35:36 debian systemd: systemd-cryptsetup@crypt_nvme0.service: Failed with result 'exit-code'.
Nov 17 15:35:36 debian systemd-cryptsetup: device-mapper: remove ioctl on crypt_nvme1  failed: Device or resource busy
[the previous message repeats 23 more times through 15:35:41]
Nov 17 15:35:41 debian systemd-cryptsetup: Device crypt_nvme1 is still in use.
Nov 17 15:35:41 debian systemd-cryptsetup: Failed to deactivate: Device or resource busy
Nov 17 15:35:41 debian systemd: systemd-cryptsetup@crypt_nvme1.service: Failed with result 'exit-code'.
Nov 17 15:35:41 debian kernel: watchdog: watchdog0: watchdog did not stop!
Last messages before power off:
systemd-shutdown: Could not detach DM /dev/dm-1: Device or resource busy
systemd-shutdown: Could not detach DM /dev/dm-0: Device or resource busy
watchdog: watchdog0: watchdog did not stop!
systemd-shutdown: Failed to finalize DM devices, ignoring.
ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Nov 17 15:36 control
lrwxrwxrwx 1 root root       7 Nov 17 15:36 crypt_nvme0 -> ../dm-0
lrwxrwxrwx 1 root root       7 Nov 17 15:36 crypt_nvme1 -> ../dm-1
After reboot systemd crypt services:
● blockdev@dev-mapper-crypt_nvme0.target - Block Device Preparation for /dev/mapper/crypt_nvme0
     Loaded: loaded (/lib/systemd/system/blockdev@.target; static)
     Active: active since Sun 2024-11-17 15:36:35 PST; 5min ago
       Docs: man:systemd.special(7)

● systemd-cryptsetup@crypt_nvme1.service - Cryptography Setup for crypt_nvme1
     Loaded: loaded (/etc/crypttab; generated)
     Active: active (exited) since Sun 2024-11-17 15:36:35 PST; 5min ago
       Docs: man:crypttab(5)
             man:systemd-cryptsetup-generator(8)
             man:systemd-cryptsetup@.service(8)
    Process: 896 ExecStart=/lib/systemd/systemd-cryptsetup attach crypt_nvme1 /dev/disk/by-uuid/c434a425-6552-4036-ae34-4f8c1c728d9a none luks (code=exited, status=0/SUCCESS)
   Main PID: 896 (code=exited, status=0/SUCCESS)
        CPU: 5ms

Nov 17 15:36:35 debian systemd-cryptsetup: Volume crypt_nvme1 already active.

● system-systemd\x2dcryptsetup.slice - Cryptsetup Units Slice
     Loaded: loaded (/lib/systemd/system/system-systemd\x2dcryptsetup.slice; static)
     Active: active since Sun 2024-11-17 15:36:34 PST; 5min ago
       Docs: man:systemd-cryptsetup@.service(8)
      Tasks: 0
     Memory: 828.0K
        CPU: 10ms
     CGroup: /system.slice/system-systemd\x2dcryptsetup.slice

Nov 17 15:36:35 debian systemd-cryptsetup: Volume crypt_nvme1 already active.
Nov 17 15:36:35 debian systemd-cryptsetup: Volume crypt_nvme0 already active.
Notice: journal has been rotated since unit was started, output may be incomplete.

● systemd-cryptsetup@crypt_nvme0.service - Cryptography Setup for crypt_nvme0
     Loaded: loaded (/etc/crypttab; generated)
     Active: active (exited) since Sun 2024-11-17 15:36:35 PST; 5min ago
       Docs: man:crypttab(5)
             man:systemd-cryptsetup-generator(8)
             man:systemd-cryptsetup@.service(8)
    Process: 895 ExecStart=/lib/systemd/systemd-cryptsetup attach crypt_nvme0 /dev/disk/by-uuid/63780057-f91e-426d-9be3-84383fd9b534 none luks (code=exited, status=0/SUCCESS)
   Main PID: 895 (code=exited, status=0/SUCCESS)
        CPU: 5ms

Nov 17 15:36:35 debian systemd-cryptsetup: Volume crypt_nvme0 already active.

● cryptsetup.target - Local Encrypted Volumes
     Loaded: loaded (/lib/systemd/system/cryptsetup.target; static)
     Active: active since Sun 2024-11-17 15:36:35 PST; 5min ago
       Docs: man:systemd.special(7)

● blockdev@dev-mapper-crypt_nvme1.target - Block Device Preparation for /dev/mapper/crypt_nvme1
     Loaded: loaded (/lib/systemd/system/blockdev@.target; static)
     Active: active since Sun 2024-11-17 15:36:35 PST; 5min ago
       Docs: man:systemd.special(7)
ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Nov 17 15:36 15767954-1ec3-44aa-b1e3-b890ca937277 -> ../../dm-0
lrwxrwxrwx 1 root root 15 Nov 17 15:36 63780057-f91e-426d-9be3-84383fd9b534 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Nov 17 15:36 A490-28B5 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Nov 17 15:36 c434a425-6552-4036-ae34-4f8c1c728d9a -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Nov 17 15:36 D2D5-D83C -> ../../nvme1n1p1
mount | grep nvme
/dev/mapper/crypt_nvme0 on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache=v2,commit=120,subvolid=256,subvol=/@)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,nosuid,nodev,noexec,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
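One way to chase the "Device or resource busy" at shutdown is to see what still holds the crypt devices while the system is up; a minimal sketch that only reads sysfs (device names and holders will differ per machine, and it degrades gracefully when no dm devices exist):

```shell
# List each device-mapper device and whatever block devices sit on top of it.
list_dm_holders() {
    for dev in /sys/block/dm-*; do
        # Unmatched glob stays literal; bail out cleanly on machines without dm.
        [ -e "$dev" ] || { echo "no device-mapper devices present"; return 0; }
        name=$(cat "$dev/dm/name" 2>/dev/null)
        holders=$(ls "$dev/holders" 2>/dev/null | tr '\n' ' ')
        echo "${dev##*/} (${name:-?}) held by: ${holders:-nothing}"
    done
}
list_dm_holders
```

A crypt device whose holder is another dm device (or whose filesystem is still mounted as /) cannot be detached until that user goes away, which is consistent with the log above.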
bdrun33 (1 rep)
Nov 18, 2024, 12:14 AM • Last activity: Apr 8, 2025, 09:43 AM
0 votes
0 answers
41 views
Use device mapper snapshots to create multiple versions of the same machine and choose at boot
I already use LVM to create snapshots of volumes, and I recently discovered that device mapper can manage snapshots "manually" without LVM. It would be useful to be able to create a snapshot of the current system, and then create multiple versions from that base and choose which version to use. With virtual machines it is quite easy, but I did not find how to do it on a bare metal machine (i.e. a laptop).

Even more useful would be the ability to choose at boot which version to boot, e.g. in the Grub menu. Then one could boot the COW device and make all the changes to the base system (install packages, configure things...), which would be saved on the COW device but not on the base snapshot. At the next boot, one could choose another COW device to make other changes.

I know it can be done (not easily) with Btrfs, but I'd like it at the device level, so one could even use dm-crypt to create independent versions of the same encrypted device (although it would probably be useless to change the passphrase, since the key would remain the same). Another option would be to use overlayfs, but it is quite difficult to set up correctly (e.g. in case of a kernel update and initramfs update it becomes a mess).
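For the record, the device-mapper tables involved are short; below is a hedged sketch that only builds the table strings (device names and sizes are placeholders, and actually creating the devices, plus teaching GRUB or the initramfs to pick one, is the hard part the question asks about):

```shell
BASE=/dev/sdX2      # illustrative read-only base device
COW_A=/dev/sdY1     # illustrative COW device holding "version A"
SECTORS=41943040    # size of $BASE in 512-byte sectors (illustrative)

# Protect the base: routing writes through snapshot-origin keeps snapshots valid.
ORIGIN_TABLE="0 $SECTORS snapshot-origin $BASE"
# One bootable "version": a persistent (P) snapshot with 8-sector chunks.
SNAP_A_TABLE="0 $SECTORS snapshot $BASE $COW_A P 8"
printf '%s\n' "$ORIGIN_TABLE" "$SNAP_A_TABLE"
# With root, these would become devices via:
#   dmsetup create base-origin --table "$ORIGIN_TABLE"
#   dmsetup create version-a   --table "$SNAP_A_TABLE"
```

Each additional version is just another COW device with its own `snapshot` table over the same base.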
Kzar (111 rep)
Apr 7, 2025, 05:57 PM
2 votes
1 answer
96 views
Is it safe to use a loop device to circumvent EBUSY on a block device underlying device-mapper?
I’m trying to include the beginning of a disk block device (the GPT area) read-only in a device-mapper linear mapping. This block device also contains my root filesystem, so the partition housing it would be in use concurrently with the mapping’s existence. This is so I can pass through parts (but not all) of the same disk to Windows 10 running under QEMU, for use there. I don’t intend to ever have any sections of the disk read-write in parallel across kernels, but Windows’ partition as well as the ESP (not mounted in Linux) would be. To visualize (partition numbers given): the beginning of the disk as well as partitions one and three I want passed through; maybe I’ll need the end of the disk as well. Write access should only be allowed to the partitions, not to the outer disk parts.

I learnt about device-mapper some years ago on another occasion and revisited its docs. Encouraged, I went on my merry way and constructed the mapping, starting from the beginning, whereby I eventually hit the current roadblock: when attempting to create the mapper device, dmsetup passed on an EBUSY error. I suspect this happened because one of the disk block device’s sub-devices, the partition block device with my Linux installation, is currently in use. For troubleshooting purposes, I recreated the situation on a RAM disk (does brd count as a Linux arcanum by now?), which (from what I recall) made operations fail in the same manner in which they had failed on the true disk:
#!/bin/sh
#This will obviously need root, or maybe CAPs.
[ -b /dev/ram0 ] && exit 1
   #In this script, I rely on brd not being in use and /dev/ram0 being created.
modprobe brd rd_nr=1 rd_size=261120 max_part=2 && [ -b /dev/ram0 ] || exit 2
sgdisk -a=8 -n=0:0:+100M -t=0:ef00 -n=0 -t=0:8304 /dev/ram0
#There’s no filesystem necessary on p1 for demonstration purposes.
mkfs.ext4 -i 6144 -O ext_attr,flex_bg,inline_data,metadata_csum,sparse_super2 /dev/ram0p2
   #Just some example filesystem, taken from my shell history grab-bag.
mkdir /tmp/ExampleMountpoint || exit 3
mount /dev/ram0p2 /tmp/ExampleMountpoint || exit 4
dmsetup create --readonly Example1 --table '0 33 linear /dev/ram0 0' #Output:
#device-mapper: reload ioctl on Example1 (253:0) failed: Device or resource busy
#Command failed.
   #The exit code is 1, but the error message matches EBUSY.
   #To no surprise, adding -f doesn’t help.
umount /tmp/ExampleMountpoint
dmsetup create --readonly Example2 --table '0 33 linear /dev/ram0 0'
   #This one works, but I need it to work with the partition mounted.
ls -l /dev/mapper/Example2
dmsetup remove Example2
#rm /tmp/ExampleMountpoint
On another StackExchange question I found when I started out (I searched for it twenty days ago when I asked about this on some Linux forum, in vain, and again today, but I really haven't been able to dig it up a second time), there was an answer revealing that I could get around the EBUSY status by employing a loop device as an intermediary before the real disk device:
mount /dev/ram0p2 /tmp/ExampleMountpoint #Remount.
LoopDev=$(losetup --show -f /dev/ram0)
dmsetup create --readonly Example2 --table '0 68 linear /dev/loop0 0'
   #Side note: Less than 68s makes sgdisk err out instead of printing the table.
   #This might be a bug; it should, by my reckoning, go down to 1+32+32+1 or even just 33s.
sgdisk -Pp "$LoopDev" #This should print some warnings, then the partition table created above.
losetup -d "$LoopDev"
umount /tmp/ExampleMountpoint
rm /tmp/ExampleMountpoint
So, when redirecting the disk block device through a loop device, dmsetup will comply instead of erring out. Caring much about my data (though I have pulled an image onto a separate disk), I now wonder whether this is actually safe to do and gives the expected results in the greater scheme of things (among them, preventing corruption of partition 4 and the GPT, as well as allowing write access to partitions 1 and 3 through the VM). Are there any, I don't know, additional I/O alignment gotchas to watch out for?
2C7GHz (21 rep)
Jan 24, 2025, 09:24 AM • Last activity: Feb 9, 2025, 10:53 PM
1 vote
0 answers
68 views
IO wait/failure timeout on iscsi device with multipath enablement
- I'm accessing a remote iSCSI-based SAN using multipath.
- The network on the server side has known intermittent issues, such that there are session failures and path failures/IO failures. I'm not trying to beat this problem, as it's already a WIP.
- Now, the issue I have is: let's say I'm trying to format or partition the device via a process/service; the parted/mkfs command gets hung, eventually causing a kernel panic. This timeout value is set to 240 secs.
- What I want to avoid is the kernel panic; I want the parted/mkfs command to fail and return rather than cause a kernel panic.
- I have searched and tried changing various parameters (iscsid, sysfs, multipath) to no avail.

This is my iscsid config:
iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
node.startup = automatic
node.leading_login = No
node.session.timeo.replacement_timeout = 30
node.conn.timeo.login_timeout = 30
node.conn.timeo.logout_timeout = 15
node.conn.timeo.noop_out_interval = 5
node.conn.timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 2
node.session.xmit_thread_priority = -20
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 262144
node.conn.iscsi.MaxRecvDataSegmentLength = 262144
node.conn.iscsi.MaxXmitDataSegmentLength = 262144
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.conn.iscsi.HeaderDigest = CRC32C
node.conn.iscsi.DataDigest = CRC32C
node.session.nr_sessions = 1
node.session.reopen_max = 0
node.session.iscsi.FastAbort = Yes
node.session.scan = auto
multipath conf
defaults {
        path_checker none
        user_friendly_names yes          # To create ‘mpathn’ names for multipath devices
        path_grouping_policy multibus    # To place all the paths in one priority group
        path_selector "round-robin 0"    # To use round robin algorithm to determine path for next I/O operation
        failback immediate               # For immediate failback to highest priority path group with active paths
        no_path_retry 1                  # To disable I/O queueing after retrying once when all paths are down
    }
And I've set all sysfs timeout values of all slave paths to 30 seconds. But parted/mkfs still never fail and return when there's a (simulated) network issue. What am I missing? My multipath version is a tad old, but I can't upgrade as this is the supported version on Rocky 8:

multipath-tools v0.8.4 (05/04, 2020)
iscsid version 6.2.1.4-1
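If the panic is coming from the kernel's hung-task detector (a plausible but unconfirmed assumption, given the 240-second figure), it can be told not to escalate; these are standard kernel sysctls:

```
# /etc/sysctl.d/90-no-hung-task-panic.conf (sketch)
# Keep the hung-task detector's warning but stop it escalating to a panic.
kernel.hung_task_panic = 0
# How long a task may sit in uninterruptible sleep before the warning fires.
kernel.hung_task_timeout_secs = 240
```

On the multipath side, `no_path_retry fail` (instead of `1`) makes I/O error out immediately when all paths are down, with no retry interval at all; how 0.8.4 merges that with the rest of the configuration can be checked with `multipath -t`.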
Neetz (111 rep)
Jan 21, 2025, 09:38 PM
0 votes
1 answer
64 views
udev triggered service not able to run sh script
I am working on a service to capture device-mapper dm-verity uevents. However, I am not able to figure out why the service is triggered but fails to run my logger script. I have the following rules (/etc/udev/rules.d/dm-verity.rules), based on [this discussion](https://github.com/systemd/systemd/issues/15855#issue-621269603) (newlines added for readability):
KERNEL=="dm-[0-9]*", \
ACTION=="change", \
SUBSYSTEM=="block", \
ENV{DM_VERITY_ERR_BLOCK_NR}!="", \
TAG+="systemd", \
ENV{SYSTEMD_WANTS}+="dm-verity.service"
I have the following service (/etc/systemd/system/dm-verity.service):
[Unit]
Description=dm-verity uevent logger

[Service]
Type=forking
ExecStart=/usr/bin/dm-verity_log_error.sh
I have the following script (/usr/bin/dm-verity_log_error.sh):
#!/bin/bash

echo "dm-verity occurs" | my-logger-adaptor
exit 0
I can confirm from the journal that dm-verity.rules and dm-verity.service are triggered; however, it fails when it tries to execute dm-verity_log_error.sh. Journal log snippet as follows.
systemd: dm-verity.service: About to execute: /bin/bash -c /usr/bin/dm-verity_log_error.sh
systemd: dm-verity.service: Forked /bin/bash as 19823
systemd: dm-verity.service: Kernel keyring not supported, ignoring.
systemd: dm-verity.service: Executing: /bin/bash -c /usr/bin/dm-verity_log_error.sh
systemd: dm-verity.service: Changed dead -> start
systemd: Starting dm-verity uevent logger
systemd: dm-verity.service: cgroup is empty
audit: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=19823 comm="bash" exe="/bin/bash.bash" sig=11 res=1
systemd: Received SIGCHLD from PID 19823 (bash).
systemd: Child 19823 (bash) died (code=killed, status=11/SEGV)
systemd: dm-verity.service: Child 19823 belongs to dm-verity.service.
systemd: dm-verity.service: Main process exited, code=killed, status=11/SEGV
systemd: dm-verity.service: Failed with result 'signal'.
systemd: dm-verity.service: Changed start -> failed
systemd: dm-verity.service: Job dm-verity.service/start finished, result=failed
systemd: Failed to start dm-verity uevent logger
From the logs, it seems like there is an error when executing the bash shell to run the script (code=killed, status=11/SEGV), but I am not quite sure of the cause. Any pointers to documents or possible solutions for the problem? My thoughts on what might be the cause:

1. Maybe it is because systemd.exec executes the command in a sandbox environment, so that the bash shell does not have all of the necessary environment ([reference 1](https://unix.stackexchange.com/a/642714/676447)).
2. Maybe I invoke dm-verity_log_error.sh incorrectly in the ExecStart= attribute field. Starting the service with systemctl start dm-verity.service is successful. I did some other tests with ExecStart=/bin/ls and it can also run successfully when triggered by the rules.
3. Maybe dm-verity_log_error.sh did run, but ended too quickly. However, I am not sure if the main process in the journal refers to the dm-verity.service that finished or to the dm-verity_log_error.sh script itself.

I have tried to search for a similar problem online but have not found one that quite matches mine. Maybe I am using the wrong keywords. I have read the following references so far:

- [udev rules to trigger systemd service](https://unix.stackexchange.com/questions/550279/udev-rule-to-trigger-systemd-service)
- [how udev works](https://unix.stackexchange.com/a/551047/676447)
- [how to start service with udev event](https://blog.fraggod.net/2012/06/16/proper-ish-way-to-start-long-running-systemd-service-on-udev-event-device-hotplug.html)

I am using a development board running Yocto 2.6, and systemd 239.
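Not an explanation for the SIGSEGV itself, but one mismatch stands out: the script runs in the foreground and exits, while the unit declares Type=forking, which makes systemd expect the started process to fork and the parent to exit. A hedged variant of the unit (same paths as above) that matches the script's actual behavior:

```
[Unit]
Description=dm-verity uevent logger

[Service]
# The script runs once and exits without forking, which is what oneshot models;
# Type=forking expects the initial process to daemonize.
Type=oneshot
ExecStart=/usr/bin/dm-verity_log_error.sh
```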
semicolon (1 rep)
Nov 19, 2024, 03:13 PM • Last activity: Jan 7, 2025, 08:57 AM
0 votes
1 answer
47 views
Does dm-crypt waste device space?
That is, when a device-mapping is created manually with the dm-crypt target, is the resulting device smaller than the backing device? What is the missing space used for? Will the answer change depending on which crypto mode/algorithm is used?
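A plain dm-crypt mapping is 1:1: the target itself reserves nothing, so the mapped device is exactly as large as the region mapped through, regardless of cipher mode. It is the LUKS on-disk format, not dm-crypt, that reserves header space. A small sketch of the arithmetic (the 16 MiB LUKS2 header size is the current cryptsetup default and is an assumption here, since it is configurable):

```shell
DEV_SECTORS=2097152                 # illustrative 1 GiB backing device
PLAIN_SECTORS=$DEV_SECTORS          # plain dm-crypt maps the whole device 1:1
LUKS2_HEADER_SECTORS=32768          # 16 MiB default LUKS2 header + keyslot area
LUKS2_SECTORS=$((DEV_SECTORS - LUKS2_HEADER_SECTORS))
echo "plain dm-crypt payload: $PLAIN_SECTORS sectors"
echo "LUKS2 payload:          $LUKS2_SECTORS sectors"
```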
melonfsck - she her (150 rep)
Nov 14, 2024, 05:50 PM • Last activity: Nov 14, 2024, 07:30 PM
0 votes
0 answers
82 views
Dynamic memory allocations (kmalloc, vmalloc, ...) in the I/O path of known block drivers
I was thinking about proper *swap* solutions for a Linux system, and a very important question arised regarding all of them. As known and confirmed by my experience, a swap device *will* eventually be accessed under an extreme memory pressure situation. In such cases, an OOM kill is hypothetically n...
I was thinking about proper *swap* solutions for a Linux system, and a very important question arised regarding all of them. As known and confirmed by my experience, a swap device *will* eventually be accessed under an extreme memory pressure situation. In such cases, an OOM kill is hypothetically near, but actually, since overcommit_memory=2 and enough swap space is still *available* in system — the OOM path is unreachable. What happens instead - is that kernel thinks it can swap memory out when needed, but "when needed" is too late. Swap I/O gets submitted to the blockdev driver when the memory pools are already absolutely exhausted. Let the block driver attempt any allocation which is more than some absolute minimum, and the system will be dead locked on memory. Unfortunately, it doesn't even get detected, and the System deadlocked on memory panic path is never taken. Instead the system freezes forever. I attempted many experiments to try to track the problem down. I have put code to print stacktrace where block I/O is happening, to find which alloc-family functions are in the path for popular devices (dm-*, loop+nfs, and other configurations), and how much they allocate. Yet it didn't help me. I experience the issues **most often on the "loop dev+ some FS" setups**, but eventually I have also hit it with dm-vdo (which is a deduplication and compression device), and few else, when using them for swap. I thought that the problem could be solved by increasing swappines and min_free_kbytes values in vm.\* sysctls, but I still hit the described problems. I had an idea of replacing all the alloc-like calls with on-stack allocations in the needed I/O path code (since I have 128kb kernel stacks), but that seemed too unclean solution (think, resolving conflicts on pulls from upstream), and not applicable everywhere. My question is: which of these device drivers rely on (heavy?) dynamic memory allocations in the I/O path? **Which are known to work fine in the Swap role? 
What considerations should be in mind to set up a reliable swap device? Can these devices be stacked on top of each other as much as needed?** - dm-crypt - dm-verity - dm-linear - loop (--direct-io) + ext4/fat32/? - loop + nfs/fuse - loop (--direct-io) + btrfs/zfs/bcachefs/? You can mention personal experience, of course.
melonfsck - she her (150 rep)
Nov 12, 2024, 10:14 PM • Last activity: Nov 12, 2024, 10:21 PM
0 votes
1 answer
89 views
Passing an unlocked LUKS partition context from GRUB to Linux?
### Question

In GRUB one can use the [cryptomount](https://www.gnu.org/software/grub/manual/grub/grub.html#cryptomount) command to mount a LUKS partition. Is there a way to pass this decrypted partition to Linux such that it appears as a device mapper (/dev/mapper/xxx) entry, without having to run [cryptsetup luksOpen](https://man7.org/linux/man-pages/man8/cryptsetup-open.8.html)? If it's not possible with GRUB, are there other bootloaders that support this?

### Notes

Some distributions support [cryptdevice/cryptkey](https://wiki.archlinux.org/title/Dm-crypt/System_configuration#Using_encrypt_hook) parameters; however, this [doesn't appear to be a standard linux kernel parameter](https://www.kernel.org/doc/html/latest/search.html?q=cryptdevice) (and isn't supported by the distribution I use). There is also the [dm-mod.create](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-init.html) kernel parameter, but it appears that it only supports a cleartext passphrase (viewable from /proc/cmdline) or a Linux keyring entry. Both of these methods would need to decrypt the partition again in order for Linux to mount it though, right? Or else why would they need the key?
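For reference, the dm-mod.create syntax looks like the fragment below (a linear-target example in the style of the kernel's dm-init documentation; as the question notes, a crypt target would still need key material on the command line or in a keyring, so this does not remove the second unlock):

```
# Kernel command line (sketch): assemble a dm device named "lroot" at boot
# from two linear segments, then use it as the root device.
dm-mod.create="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0
```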
Daniel (701 rep)
Nov 6, 2024, 03:51 AM • Last activity: Nov 6, 2024, 05:43 AM
0 votes
1 answer
36 views
Device Mapper Snapshot Stored In An Expanding File
Is there a way to create a block device that is backed by a file that grows in size as the block device is written to? I'm looking to use device mapper snapshots (i.e. the dmsetup create command that specifies snapshot as the type of item to create), and according to the documentation the snapshot needs to be stored on a block device. One way to create this block device is by using losetup to turn a file into a block device. However, that requires a fixed size file. I'm trying to figure out if there is any way to not need a fixed size file, but instead have the file that backs the block device that backs the snapshot grow as the snapshot grows.
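One commonly used trick is to back the loop device with a sparse file: its apparent size is fixed (which the loop device needs), but it only allocates disk blocks as they are written, so on-disk usage grows with the snapshot. Whether this fully fits the COW use case is an assumption; the sketch below demonstrates just the sparse-file part, which needs no root:

```shell
img=$(mktemp)                       # scratch file for the demonstration
truncate -s 1G "$img"               # apparent size 1 GiB, no blocks allocated yet
apparent=$(stat -c %s "$img")       # logical size in bytes
used_kb=$(du -k "$img" | cut -f1)   # blocks actually allocated on disk
echo "apparent=$apparent used_kb=$used_kb"
rm -f "$img"
```

With root, `losetup -f --show "$img"` would then expose the file as a fixed-size block device whose backing storage stays proportional to what the snapshot has actually written.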
Harry Muscle (2697 rep)
Sep 21, 2024, 02:17 AM • Last activity: Sep 21, 2024, 11:56 AM
4 votes
2 answers
491 views
How can I format a partition in a mapper device?
I created a mapper device with `dmsetup` and created a partition table with `parted`: ``` $ fdisk -l /dev/mapper/vdisk Disk /dev/mapper/vdisk: 511.57 GiB, 549295737344 bytes, 1072843237 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (mini...
I created a mapper device with dmsetup and created a partition table with parted:
$ fdisk -l /dev/mapper/vdisk
Disk /dev/mapper/vdisk: 511.57 GiB, 549295737344 bytes, 1072843237 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: OMITTED

Device                       Start        End    Sectors   Size Type
/dev/mapper/vdisk-part1   2048     204799     202752    99M EFI System
/dev/mapper/vdisk-part2 204800 1072841188 1072636389 511.5G Microsoft basic data
Now how do I manipulate the partitions, say, format the first partition as FAT? Neither /dev/mapper/vdisk-part1 nor /dev/mapper/vdisk1 seems to exist. PS: I do remember that /dev/mapper/vdisk1 or something similar appeared after creating the partition table with parted, but it disappeared after a reboot.
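One common approach (a sketch assuming the vdisk name above; kpartx ships with multipath-tools, and the exact partition naming can vary with the kpartx/udev version) is to ask device mapper to (re)create the partition mappings from the table:

```shell
# Create /dev/mapper/vdisk1, /dev/mapper/vdisk2, ... from the partition
# table on the mapped device (requires root):
sudo kpartx -av /dev/mapper/vdisk

# The first partition can then be formatted as FAT32:
sudo mkfs.vfat -F 32 /dev/mapper/vdisk1
```

Because these mappings are created at runtime, they disappear on reboot unless something (udev rules, an init script) re-runs kpartx, which would also explain the device vanishing after a reboot.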
chienyan (43 rep)
Sep 11, 2024, 10:58 AM • Last activity: Sep 11, 2024, 11:30 AM
0 votes
1 answers
120 views
How to reinstate an invalidated DM snapshot?
I'm using device mapper snapshots. Let's assume that `/dev/sda` is the read-only origin device, and `/dev/sdb` is the COW device. I created a persistent snapshot this way: ``` # cat /dev/zero > /dev/sdb # dmsetup create mysnap 0 1000000000 snapshot /dev/sda /dev/sdb P 16 ^D # ls /dev/mapper/ control...
I'm using device mapper snapshots. Let's assume that /dev/sda is the read-only origin device, and /dev/sdb is the COW device. I created a persistent snapshot this way:
# cat /dev/zero > /dev/sdb
# dmsetup create mysnap
0 1000000000 snapshot /dev/sda /dev/sdb P 16
^D
# ls /dev/mapper/
control    mysnap
#
It worked fine for a while. After every boot, to re-attach my persistent snapshot, I was running the same command:
dmsetup create mysnap
0 1000000000 snapshot /dev/sda /dev/sdb P 16
But one day I accidentally disconnected the read-only origin device during operation (the COW device was still there). There was a kernel message like this:
device-mapper: snapshots: Invalidating snapshot: error reading/writing
After that happened, any attempt to attach the snapshot (on any machine) results in an error:
device-mapper: snapshots: Snapshot is marked invalid
The mysnap device gets created, but it refuses any reads/writes with "Input/output error". Is it possible to clear the "invalid" status on the DM snapshot and bring it up, or at least to recover the data?
I believe that this "invalid" status is purely artificial because, in my experience, persistent DM snapshots have survived total system crashes.
melonfsck - she her (150 rep)
Jun 30, 2024, 12:16 PM • Last activity: Jun 30, 2024, 02:27 PM
5 votes
3 answers
2817 views
dm-integrity standalone mapper device lost after reboot
I currently try to use dm-integrity to run in standalone mode. For that I install a plain ubuntu server 20.04 in a virtual box VM. In the next steps I create the dm-integrity device, a ext4 filesystem and mount it: integritysetup format /dev/sdb integritysetup open /dev/sdb hdd-int mkfs.ext4 /dev/ma...
I am currently trying to use dm-integrity in standalone mode. For that I installed a plain Ubuntu Server 20.04 in a VirtualBox VM. In the next steps I create the dm-integrity device and an ext4 filesystem, and mount it: integritysetup format /dev/sdb integritysetup open /dev/sdb hdd-int mkfs.ext4 /dev/mapper/hdd-int mkdir /data mount /dev/mapper/hdd-int /data echo "/dev/mapper/hdd-int /data ext4 defaults 0 0" >> /etc/fstab **NOTE:** For simplicity I use /dev/sdb instead of /dev/disk/by-id/. Now I reboot and see that the device /dev/mapper/hdd-int does not exist, and therefore the mount to /data failed. Now my question: How can I permanently persist the information about the dm-integrity device, so that the mapping is already there after a reboot? Should I create a line in /etc/fstab? Or is there another config file?
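One config-file option, assuming a recent enough systemd (the integritytab generator arrived well after the systemd shipped with Ubuntu 20.04, so this may require a newer release): systemd can re-open standalone dm-integrity devices at boot, before fstab mounts, from /etc/integritytab. A sketch using the device and name from the question:

```
# /etc/integritytab
# format: volume-name  block-device  key-file  options
# ('-' means no key file and no extra options)
hdd-int  /dev/sdb  -  -
```

On older systems the equivalent is typically a small init/systemd unit that runs `integritysetup open /dev/sdb hdd-int` before local-fs mounts.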
schlagi123 (153 rep)
Apr 30, 2020, 11:06 AM • Last activity: May 18, 2024, 03:24 AM
2 votes
0 answers
59 views
Can't mount extended logical partitions in RAID 0 Fake Array created by dmraid and device mapper
First of all, I would like to let you know I come here cause I'm looking for a solution or a way to be able to mount the HOME partition and to read the inside data. I've been running Funtoo GNU/Linux on a **RAID 0 Fake Array** since I bought this computer in 2010, aprox. Yesterday booted into **Syst...
First of all, I would like to let you know I came here because I'm looking for a solution, or a way to be able to mount the HOME partition and read the data inside. I've been running Funtoo GNU/Linux on a **RAID 0 Fake Array** since I bought this computer in approximately 2010. Yesterday I booted into **SystemRescue** and tried to format every partition **except the HOME** partition, and I think something went wrong while formatting, because suddenly I started suffering the following issue. dmraid -ay RAID set "isw_bggjiidefd_240GB_BX100v2_5" was activated device "isw_bggjiidefd_240GB_BX100v2_5" is now registered with dmeventd for monitoring RAID set "isw_cfccfdiidi_640GB_RAID0" was activated device "isw_cfccfdiidi_640GB_RAID0" is now registered with dmeventd for monitoring **ERROR: dos: partition address past end of RAID device** RAID set "isw_bggjiidefd_240GB_BX100v2_5p1" was activated RAID set "isw_bggjiidefd_240GB_BX100v2_5p2" was activated RAID set "isw_bggjiidefd_240GB_BX100v2_5p3" was activated ls /dev/mapper/ control isw_bggjiidefd_240GB_BX100v2_5p1 isw_bggjiidefd_240GB_BX100v2_5p3 isw_bggjiidefd_240GB_BX100v2_5 isw_bggjiidefd_240GB_BX100v2_5p2 isw_cfccfdiidi_640GB_RAID0 In the previous scheme and directory, **logical partitions inside the extended partition are missing**. 
Disk /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p1: 300 MiB, 314572800 bytes, 614400 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 16384 bytes / 32768 bytes Disk /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2: 99.56 GiB, 106902323200 bytes, 208793600 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 16384 bytes / 32768 bytes Disklabel type: dos Disk identifier: 0x73736572 Device Boot Start End Sectors Size Id Type /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part1 1920221984 3736432267 1816210284 866G 72 unknown /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part2 1936028192 3889681299 1953653108 931.6G 6c unknown /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part3 0 0 0 0B 0 Empty /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part4 27722122 27722568 447 223.5K 0 Empty Partition 4 does not start on physical sector boundary. Partition table entries are not in disk order. Disk /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3: 450 MiB, 471859200 bytes, 921600 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 16384 bytes / 32768 bytes Disklabel type: dos Disk identifier: 0x6c727443 Device Boot Start End Sectors Size Id Type /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part1 1634886000 3403142031 1768256032 843.2G 75 PC/IX /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part2 1936028160 3889681267 1953653108 931.6G 61 SpeedStor /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part3 0 0 0 0B 0 Empty /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part4 26935690 26936121 432 216K 0 Empty Partition 1 does not start on physical sector boundary. Partition 4 does not start on physical sector boundary. Partition table entries are not in disk order. 
[Screenshots: GParted startup; drive scheme in GParted; GParted partition information] Now, **using mdadm** instead of the dmraid fake-RAID tool. Assembling the array through *mdadm*, I can see every block device, but I can't mount the HOME partition */dev/md/240GB_BX100v2.5_0p9*. I can mount the other partitions under the extended partition because I formatted them while assembled by mdadm. mdadm --examine --scan ARRAY metadata=imsm UUID=4f6eb512:955e67f6:5a22279e:f181f40d ARRAY /dev/md/640GB_RAID0 container=4f6eb512:955e67f6:5a22279e:f181f40d member=0 UUID=1f9b13e6:b6dc2975:9c367bbb:88fa3d2b ARRAY metadata=imsm UUID=c842ced3:6e254355:fed743f8:a4e8b8b8 ARRAY /dev/md/240GB_BX100v2.5 container=c842ced3:6e254355:fed743f8:a4e8b8b8 member=0 UUID=a2e2268c:17e0d658:17b6f16d:b090f250 ls /dev/md/ 240GB_BX100v2.5_0 240GB_BX100v2.5_0p3 240GB_BX100v2.5_0p6 240GB_BX100v2.5_0p9 640GB_RAID0_0p2 240GB_BX100v2.5_0p1 240GB_BX100v2.5_0p4 240GB_BX100v2.5_0p7 640GB_RAID0_0 imsm0 240GB_BX100v2.5_0p2 240GB_BX100v2.5_0p5 240GB_BX100v2.5_0p8 640GB_RAID0_0p1 imsm1 mount /dev/md/240GB_BX100v2.5_0p9 /mnt/ mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md126p9, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. dmesg [ 179.400010] EXT4-fs (md126p9): bad geometry: block count 92565760 exceeds size of device (92565504 blocks) But I can list every file using debugfs: debugfs -c /dev/md126p9 debugfs 1.47.0 (5-Feb-2023) debugfs: ls 2 (12) . 2 (12) .. 11 (56) lost+found 1712129 (16) joan 3670017 (12) tmp 4653057 (916) sys [Screenshot: home partition with mdadm & GParted]
peris (121 rep)
Mar 23, 2024, 06:45 PM • Last activity: Mar 23, 2024, 08:08 PM
3 votes
0 answers
152 views
What's the expected overhead of a passthrough device mapper?
I'm trying to establish a baseline throughput overhead for a passthrough device mapper; i.e. a device mapper that does nothing. Roughly following benchmarking procedures [from Cloudflare](https://blog.cloudflare.com/speeding-up-linux-disk-encryption), I'm measuring roughly **30% throughput decrease*...
I'm trying to establish a baseline throughput overhead for a passthrough device mapper; i.e. a device mapper that does nothing. Roughly following benchmarking procedures [from Cloudflare](https://blog.cloudflare.com/speeding-up-linux-disk-encryption) , I'm measuring roughly **30% throughput decrease** from using a passthrough device mapper over ramdisk with fio, as opposed to straight I/Os to ramdisk, running on Azure VMs, GCP VMs, and raw metal machines, with both Ubuntu 20.04 LTS and 22.04 LTS. **Is this expected?** I'm getting roughly 1000+MB/s over ramdisk across devices and 600+MB/s for passthrough. Here's my setup for those of you who would like to replicate my results: 1. Create a Ubuntu 20.04 or 22.04 VM or get access to such a machine. Turn off secure boot so you can load kernel modules. 2. Create 4GB of ramdisk: sudo modprobe brd rd_nr=1 rd_size=4194304 3. Install fio: sudo apt install -y fio 4. Run fio over ramdisk: sudo fio --filename=/dev/ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=20 --name=plain 5. Record the read/write throughput at the bottom of the output: aggrb=?MB/s. This is the baseline. Now for the passthrough device mapper, using the "delay" device mapper. This is what was done by [Cloudflare](https://www.usenix.org/sites/default/files/conference/protected-files/vault20_slides_korchagin.pdf) : 1. Set up the passthrough device mapper: echo '0 8388608 delay /dev/ram0 0 0' | sudo dmsetup create plain 2. Run fio over it: sudo fio --filename=/dev/mapper/plain --readwrite=readwrite --bs=4k --direct=1 --loops=20 --name=plain 3. Record the throughput similarly. Alternatively, if you suspect the "delay" device mapper with 0 delay is not performant, you can use my implementation of a passthrough device [here](https://github.com/davidchuyaya/rollbaccine/blob/main/src/passthrough/passthrough.c) . Download both files [here](https://github.com/davidchuyaya/rollbaccine/tree/main/src/passthrough) , then compile and load the kernel module: 1. 
Run make 2. Load the module: sudo insmod passthrough.ko 3. Load the device mapper: `echo "0 $(sudo blockdev --getsz /dev/ram0) passthrough /dev/ram0" | sudo dmsetup create passthrough` 4. Run fio over it: sudo fio --filename=/dev/mapper/passthrough --readwrite=readwrite --bs=4k --direct=1 --loops=20 --name=passthrough 5. Record the throughput similarly.
davidchuyaya (81 rep)
Mar 13, 2024, 08:30 PM • Last activity: Mar 18, 2024, 09:18 AM
8 votes
2 answers
1871 views
How does EXT4 handle sudden lack of space in the underlying storage?
Usually, block device drivers report correct size of the device, and it is possible to actually use all the "available" blocks. So, the filesystem knows how much it can write to such device in prior. But in some special cases, like it is with `dm-thin` or `dm-vdo` devices, this statement is false. T...
Usually, block device drivers report the correct size of the device, and it is possible to actually use all the "available" blocks. So the filesystem knows in advance how much it can write to such a device.
But in some special cases, as with dm-thin or dm-vdo devices, this statement is false. These kinds of block devices can return an ENOSPC error at any moment if their underlying storage (which the upper-level FS knows nothing about) gets full.

Therefore, my question is: what happens in the following scenario? An EXT4 filesystem is mounted r/w in async mode (which is the default), and it is doing a massive amount of writes. The disk cache (dirty memory) gets involved too, and at any given moment there is a lot of data that would be written out if the user ran the sync command.

But suddenly, the underlying block device of that EXT4 filesystem starts to refuse any writes due to "no space left". What will be the behavior of the filesystem?
Will it print errors and go to r/o mode, aborting all the writes and possibly causing data loss? If not, will it just wait for space, periodically retrying writes and refusing new ones? In that case, what will happen to the huge disk cache if other processes try to allocate lots of RAM? (On Linux, dirty memory is counted as available, isn't it?)
Considering the worst scenario, if the disk cache was taking up most of the RAM at the moment of the ENOSPC error (because the admin set vm.dirty_ratio too high), can the kernel crash or lock up? Or will it just make all processes that want to allocate memory wait/hang? Finally, does the behavior differ across filesystems?
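For anyone who wants to observe the behavior directly, a rough reproduction sketch: build an over-provisioned thin volume over loop devices and fill it past the real backing size. All of this requires root; file names, sizes, and the low-water-mark value are arbitrary choices of mine, not a recommended configuration.

```shell
# Backing files: small metadata device, 1 GiB of real data space.
truncate -s 100M meta.img
truncate -s 1G data.img
META=$(losetup --find --show meta.img)
DATA=$(losetup --find --show data.img)

# Thin pool: table is "start length thin-pool meta_dev data_dev
#   data_block_size low_water_mark".
dmsetup create pool --table \
  "0 $(blockdev --getsz "$DATA") thin-pool $META $DATA 128 32768"
dmsetup message pool 0 "create_thin 0"

# Present a 4 GiB thin volume backed by only 1 GiB of real space.
dmsetup create thin --table "0 8388608 thin /dev/mapper/pool 0"

mkfs.ext4 /dev/mapper/thin
mount /dev/mapper/thin /mnt
# Writing well past 1 GiB should exhaust the pool and surface the
# ENOSPC behavior asked about above:
dd if=/dev/zero of=/mnt/fill bs=1M
```

Watching dmesg and `dmsetup status pool` while the dd runs shows how the pool, the filesystem, and the page cache react when the data device fills.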
Thanks in advance.
melonfsck - she her (150 rep)
Feb 20, 2024, 12:16 PM • Last activity: Feb 25, 2024, 12:39 PM
1 votes
1 answers
69 views
Is it possible to restore data from a non-persistent DM snapshot after crash?
I often use non-persistent device mapper snapshots, an example table looks like this: ``` 0 10485760 snapshot /dev/sdc3 /dev/sdc6 N 16 ``` In case of a crash, I still have both `/dev/sdc3` and `/dev/sdc6` because disks are non-volatile. But is it possible to get that `snapshot` device back, or at le...
I often use non-persistent device mapper snapshots, an example table looks like this:
0 10485760 snapshot /dev/sdc3 /dev/sdc6 N 16
In case of a crash, I still have both /dev/sdc3 and /dev/sdc6 because disks are non-volatile. But is it possible to get that snapshot device back, or at least recover the changes from /dev/sdc6 somehow? I know that persistent snapshots exist for my purpose but I'm still curious. Thanks.
melonfsck - she her (150 rep)
Jan 28, 2024, 09:51 PM • Last activity: Jan 29, 2024, 10:43 AM
2 votes
1 answers
1248 views
Fakeraid partition missing (not mapped as a device on boot) after upgrade to Ubuntu 22.04
I have a RAID-0 volume with an NTFS partition that has worked fine for years on my dual-boot system (readable and writable by both Windows and Linux). Today after doing a `do-release-upgrade -d` to upgrade to Ubuntu 22.04 (from Ubuntu 20.04), this filesystem isn't showing up in Ubuntu. The problem s...
I have a RAID-0 volume with an NTFS partition that has worked fine for years on my dual-boot system (readable and writable by both Windows and Linux). Today after doing a do-release-upgrade -d to upgrade to Ubuntu 22.04 (from Ubuntu 20.04), this filesystem isn't showing up in Ubuntu. The problem seems to be around device mapping; here's what I've tried/found so far: - It still works fine in Windows. I don't think anything changed on the disks. - An NTFS partition on a different disk (non-RAID) still mounts and works fine. - Booting into the old kernel (via grub) doesn't fix it (and seems to cause other problems). - I thought my setup was "hardware RAID" because I configured it via a BIOS boot screen titled "Intel Matrix Storage Manager", but I guess this is actually "fakeraid". - The RAID volume shows up in the Disks utility (as /dev/dm-0, and this file exists) with no partitions, just "unallocated space". - The RAID volume shows up in GParted (as /dev/mapper/isw_dfjaifidah_KarlsRaid, and this file exists) with an ntfs partition named /dev/mapper/isw_dfjaifidah_KarlsRaid1 (i.e. the volume name with a 1 appended), but that device file does not exist. The only file in /dev/mapper/ is isw_dfjaifidah_KarlsRaid. Here's the relevant part of sudo fdisk -l. (sda,sdb,sdc are the disks in the RAID array.)
Disk /dev/sda: 596.17 GiB, 640135028736 bytes, 1250263728 sectors
Disk model: WDC WD6401AALS-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x15967f5e

Device     Boot Start        End    Sectors  Size Id Type
/dev/sda1        2048 3750772735 3750770688  1.7T  7 HPFS/NTFS/exFAT


Disk /dev/sdb: 596.17 GiB, 640135028736 bytes, 1250263728 sectors
Disk model: WDC WD6401AALS-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2a0921b8


Disk /dev/sdc: 596.17 GiB, 640135028736 bytes, 1250263728 sectors
Disk model: WDC WD6401AALS-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2a0921bf


Disk /dev/mapper/isw_dfjaifidah_KarlsRaid: 1.75 TiB, 1920398131200 bytes, 3750777600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 393216 bytes
Disklabel type: dos
Disk identifier: 0x15967f5e

Device                                     Boot Start        End    Sectors  Size Id Type
/dev/mapper/isw_dfjaifidah_KarlsRaid-part1       2048 3750772735 3750770688  1.7T  7 HPFS/NTFS/exFAT
The file /dev/mapper/isw_dfjaifidah_KarlsRaid-part1 (note the -part1) doesn't exist either. **I'm slightly worried by the fact that /dev/sda1 shows up**, since (if my assumptions are correct) we should only be looking for partition tables on the combined volume, not directly on a single disk in the array. **The file /dev/sda1 exists**, and sudo ntfs-3g.probe --readwrite /dev/sda1 reports "NTFS signature is missing". Maybe the system is finding my partition table on sda, even though that data is just part of a RAID stripe, and creating /dev/sda1 based on it. I can imagine this causing some kind of name collision when the identical "real" partition table on the RAID volume is encountered. FWIW, hdparm -z /dev/mapper/isw_dfjaifidah_KarlsRaid outputs:
/dev/mapper/isw_dfjaifidah_KarlsRaid:
 re-reading partition table
 BLKRRPART failed: Invalid argument
That's pretty much where I'm stuck! How can I fix this? Thanks in advance for any suggestions - even obvious ones, since I don't really know what I'm doing. Some other notes (likely irrelevant): - Yesterday I upgraded from nvidia-driver-390 to nvidia-driver-470 via the gui "Additional Drivers" tool and had [this problem](https://ubuntuforums.org/showthread.php?t=2473057) where it switched me from a -generic to an -oracle kernel that didn't recognize my networking hardware. Wanting a newer (generic) kernel is what motivated me to do the distribution upgrade. - I wanted to do a clean install from the Ubuntu 22.04 Live CD (which I verified against the published checksum after burning), but it fails to boot ("Failed to start CUPS scheduler" after several minutes). - The do-release-upgrade went smoothly AFAICT except for an error about a handful of "mpi" packages at the end. Afterwards, apt commands were failing, with dpkg complaining that these packages were "not configured yet". I fixed this by reinstalling openmpi-bin as in [this answer](https://stackoverflow.com/a/62464086) . ---------- More output as requested in comments:
# lsblk -M -f
    NAME
     FSTYPE FSVER LABEL         UUID                                 FSAVAIL FSUSE% MOUNTPOINTS

[after a bunch of loop devices related to /snap/...]

┌┈▶ sda
     isw_ra 1.2.0                                                                   
├┈▶ sdb
     isw_ra 1.2.0                                                                   
└┬▶ sdc
     isw_ra 1.2.0                                                                   
 └┈┈isw_dfjaifidah_KarlsRaid
                                                                                    
    sdd
│                                                                                   
    ├─sdd1
│    ntfs         OCZ Vertex 4  1A7643E57643C06D                       58.6G    69% /mnt/WinC
    ├─sdd2
│    ntfs                       129E918C9E9168CD                                    
    ├─sdd3
│                                                                                   
    ├─sdd5
│    ext4   1.0                 5b327639-85e6-4f6a-ac79-743cfedf3e29   10.8G    64% /
    └─sdd6
     swap   1                   b601da00-767d-4e50-b62a-0b832992599c                [SWAP]

# partx /dev/mapper/ is isw_dfjaifidah_KarlsRaid
partx: bad usage
Try 'partx --help' for more information.

# partx /dev/mapper/isw_dfjaifidah_KarlsRaid   
NR START        END    SECTORS SIZE NAME UUID
 1  2048 3750772735 3750770688 1.7T      15967f5e-01

# partx /dev/sda                            
NR START        END    SECTORS SIZE NAME UUID
 1  2048 3750772735 3750770688 1.7T      15967f5e-01
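Since partx can read the table but the partition device node is missing, one thing worth trying (a sketch, not a confirmed fix for this fakeraid setup) is asking the kernel to add the partition mappings explicitly rather than re-reading the table:

```shell
# Add partition devices for the dm device via BLKPG (requires root):
sudo partx -av /dev/mapper/isw_dfjaifidah_KarlsRaid

# Or equivalently with kpartx (from multipath-tools), which creates
# /dev/mapper/<name>1-style mappings:
sudo kpartx -av /dev/mapper/isw_dfjaifidah_KarlsRaid
```

Unlike BLKRRPART (which hdparm -z uses and which dm devices reject with "Invalid argument"), BLKPG adds partitions one at a time and is the mechanism partx and kpartx rely on for device-mapper devices.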
Karl (121 rep)
Jul 9, 2022, 12:40 AM • Last activity: Nov 8, 2023, 11:00 PM
0 votes
1 answers
28 views
For the Device-Mapper framework, who is doing the development, and how do you contribute to it?
I've been using the device-mapper framework (dm-crypt, dm-verity), but there are some things I wish could be improved. Where is the discussion about its development and is there a way to contribute to it?
I've been using the device-mapper framework (dm-crypt, dm-verity), but there are some things I wish could be improved. Where is the discussion about its development and is there a way to contribute to it?
itsmarziparzi (101 rep)
Aug 30, 2023, 01:27 AM • Last activity: Aug 30, 2023, 01:49 AM
Showing page 1 of 20 total questions