Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
45
views
Boot QEMU from SPDK vhost-user-blk-pci
I'm trying to boot a QEMU VM from a vhost-user-blk-pci device, which appears to be generally possible (https://github.com/spdk/spdk/issues/1728). In my case, vhost gets the image via SPDK's NVMe-oF driver. However, QEMU does not find a bootable device. What I am doing:
1. Start vhost
bin/vhost -S /var/tmp -s 1024 -m 0x3 -A 0000:82:00.1
2. Connect to NVMe-oF server and create blk controller
./rpc.py bdev_nvme_attach_controller -t tcp -a 10.0.0.4 -s 4420 -f ipv4 -n nqn.2024-10.placeholder:bd --name placeholder
./rpc.py vhost_create_blk_controller --cpumask 0x1 vhost.0 placeholdern1
3. Attempt to launch QEMU with blk controller as boot device (does not find anything bootable)
taskset -c 2,3 qemu-system-x86_64 \
-enable-kvm \
-m 1G \
-smp 8 \
-nographic \
-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0 \
-chardev socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.0,reconnect=1 \
-device vhost-user-blk-pci,chardev=spdk_vhost_blk0,bootindex=1,num-queues=2
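For reference, the backing bdev and the vhost controller can be sanity-checked with standard SPDK RPCs before launching QEMU (a minimal sketch; names as used in the commands above):
./rpc.py bdev_get_bdevs          # the NVMe-oF namespace should show up as a bdev, e.g. "placeholdern1"
./rpc.py vhost_get_controllers   # vhost.0 should list that bdev as its backend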
Things I've checked:
* I can mount an NVMe-oF disk inside the VM just fine using the same sequence of commands (when giving QEMU an additional bootable drive); only booting from it won't work
* the image on the NVMe-oF server boots just fine if I provide it locally (via the host-kernel NVMe-oF driver that I can't use in production) and declare it in the QEMU options as a drive
* QEMU does not appear to have an NVMe-oF driver itself that I could use instead (it does have an NVMe driver)
QEMU version 7.2.15 (Debian 1:7.2+dfsg-7+deb12u12)
SPDK version SPDK v25.01-pre git sha1 8d960f1d8
Slow
(1 rep)
Aug 4, 2025, 10:39 AM
• Last activity: Aug 5, 2025, 12:05 PM
0
votes
2
answers
3320
views
Isolating I/O issue with NVME or hardware?
Hardware:
- Samsung 980 PRO M.2 NVMe SSD (MZ-V8P2T0BW) (2TB)
- Beelink GTR6, with the SSD in the NVMe slot
Since the hardware arrived, I've installed Ubuntu Server on it as well as a bunch of services (mostly in docker, DBs and services like Kafka).
After 2-3 days of uptime (record is almost a week, but usually it's 2-3 days), I typically start getting buffer i/o errors on the nvme slot (which is also the boot drive):
If I'm quick enough, I can still log in via SSH, but the system becomes increasingly unstable before commands start failing with an I/O error. When I did manage to log in, it seemed to think there were no connected NVMe SSDs:
Another instance of the buffer I/O error on the nvme slot:
Because of this, and to check everything I could find, I ran fsck on boot to see if there was anything obvious; this is quite common after the hard reset:
# cat /run/initramfs/fsck.log
Log of fsck -C -f -y -V -t ext4 /dev/mapper/ubuntu--vg-ubuntu--lv
Fri Dec 30 17:26:21 2022
fsck from util-linux 2.37.2
[/usr/sbin/fsck.ext4 (1) -- /dev/mapper/ubuntu--vg-ubuntu--lv] fsck.ext4 -f -y -C0 /dev/mapper/ubuntu--vg-ubuntu--lv
e2fsck 1.46.5 (30-Dec-2021)
/dev/mapper/ubuntu--vg-ubuntu--lv: recovering journal
Clearing orphaned inode 524449 (uid=1000, gid=1000, mode=0100664, size=6216)
Pass 1: Checking inodes, blocks, and sizes
Inode 6947190 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947197 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947204 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947212 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947408 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947414 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947829 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947835 extent tree (at level 1) could be shorter. Optimize? yes
Inode 6947841 extent tree (at level 1) could be shorter. Optimize? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (401572584, counted=405399533).
Fix? yes
Free inodes count wrong (121360470, counted=121358242).
Fix? yes
/dev/mapper/ubuntu--vg-ubuntu--lv: ***** FILE SYSTEM WAS MODIFIED *****
/dev/mapper/ubuntu--vg-ubuntu--lv: 538718/121896960 files (0.2% non-contiguous), 82178067/487577600 blocks
fsck exited with status code 1
Fri Dec 30 17:26:25 2022
----------------
Running smart-log doesn't seem to show anything concerning, other than the number of unsafe shutdowns (the number of times this has happened so far)...
# nvme smart-log /dev/nvme0
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning : 0
temperature : 32 C (305 Kelvin)
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 0%
endurance group critical warning summary: 0
data_units_read : 8,544,896
data_units_written : 5,175,904
host_read_commands : 39,050,379
host_write_commands : 191,366,905
controller_busy_time : 1,069
power_cycles : 21
power_on_hours : 142
unsafe_shutdowns : 12
media_errors : 0
num_err_log_entries : 0
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 32 C (305 Kelvin)
Temperature Sensor 2 : 36 C (309 Kelvin)
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
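For reference, new controller-level errors can be watched for as they happen (a minimal sketch; it assumes the drive is still enumerated as /dev/nvme0):
# follow kernel messages for NVMe and block I/O errors as they occur
sudo dmesg -wT | grep -iE 'nvme|i/o error'
# dump the drive's on-device error log (currently empty, per num_err_log_entries above)
sudo nvme error-log /dev/nvme0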
I have reached out to support and their initial suggestion along with a bunch of questions was whether I had tried to reinstall the OS. I've given this a go too, formatting the drive and reinstalling the OS (Ubuntu Server 22 LTS).
After that, the issue hadn't happened for 4 days before it finally showed itself as a kernel panic:
Any ideas what I can do to identify if the problem is with the SSD itself or the hardware that the SSD is slotted into (the GTR6)? I have until the 31st to return the SSD, so would love to pin down the most likely cause of the issue sooner rather than later...
I'm even more concerned after seeing reports that others are having serious health issues with the Samsung 990 Pro:
https://www.reddit.com/r/hardware/comments/10jkwwh/samsung_990_pro_ssd_with_rapid_health_drops/
Edit: although I realised those reported issues are with the 990 pro, not the 980 pro that I have!
Edit 2: someone on Overclockers was kind enough to suggest HD Sentinel, which does show a health metric, and it seems OK:
# ./hdsentinel-019c-x64
Hard Disk Sentinel for LINUX console 0.19c.9986 (c) 2021 info@hdsentinel.com
Start with -r [reportfile] to save data to report, -h for help
Examining hard disk configuration ...
HDD Device 0: /dev/nvme0
HDD Model ID : Samsung SSD 980 PRO 2TB
HDD Serial No: S69ENL0T905031A
HDD Revision : 5B2QGXA7
HDD Size : 1907729 MB
Interface : NVMe
Temperature : 41 °C
Highest Temp.: 41 °C
Health : 99 %
Performance : 100 %
Power on time: 21 days, 12 hours
Est. lifetime: more than 1000 days
Total written: 8.30 TB
The status of the solid state disk is PERFECT. Problematic or weak sectors were not found.
The health is determined by SSD specific S.M.A.R.T. attribute(s): Available Spare (Percent), Percentage Used
No actions needed.
Lastly, none of the things I tried, such as the smart-log, seem to show something like a health metric. How can I check this in Ubuntu?
Thanks!
Tiago
(101 rep)
Jan 26, 2023, 10:57 AM
• Last activity: Jul 18, 2025, 09:03 AM
8
votes
1
answers
491
views
Is it worth the hassle and risk of reformatting an NVME to use 4K blocks on a ZFS pool created with ashift=12?
I recently upgraded the NVMe drives on my workstation machine, from a pair of Samsung EVO 970 512GB drives to a pair of Kingston Fury 2TB drives. All went well, and I even converted the machine from old BIOS boot to UEFI boot. No problem.
However, I just noticed that the NVME drives are formatted with 512 byte blocks rather than 4KiB blocks. I mistakenly assumed that they'd be 4K and didn't check.
# nvme list
Node Generic SN Model Namespace Usage Format FW Rev
------------ ---------- ---- ------------------- ---------- ------------- --------- --------
/dev/nvme0n1 /dev/ng0n1 XXXX KINGSTON SFYRD2000G 0x1 2.00TB/2.00TB 512B + 0B EIFK31.7
/dev/nvme1n1 /dev/ng1n1 XXXX KINGSTON SFYRD2000G 0x1 2.00TB/2.00TB 512B + 0B EIFK31.7
# nvme id-ns -H /dev/nvme0n1 | grep Data.Size
LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format 1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better
I'm using partitions on these drives for GRUB BIOS boot (p1), ESP (p2), an mdadm RAID-1 ext4 /boot filesystem (p3) with lots of space for kernels & ISO images, swap space (p4), L2ARC (p5) and ZIL (p6) for a HDD zfs pool, and the ZFS rootfs (p7).
The BIOS boot partition is obsolete now that I've switched to UEFI, but it resides in otherwise unused space before sector 2048, so it isn't important.
They're both partitioned identically.
# gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.10
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 3907029168 sectors, 1.8 TiB
Model: KINGSTON SFYRD2000G
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 9E7187C9-3ED2-46EF-A695-E72489F2BEC3
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 8-sector boundaries
Total free space is 143 sectors (71.5 KiB)
Number Start (sector) End (sector) Size Code Name
1 34 2047 1007.0 KiB EF02 BIOS boot partition
2 2048 1050623 512.0 MiB EF00 EFI system partition
3 1050624 8390655 3.5 GiB FD00
4 8390656 142608383 64.0 GiB 8200 Linux swap
5 142608384 276826111 64.0 GiB BF08 Solaris Reserved 2
6 276826112 285214719 4.0 GiB BF09 Solaris Reserved 3
7 285214720 3907028991 1.7 TiB BF00 Solaris root
Anyway, I created the ZFS pool with ashift=12 for 4KiB block sizes, so it's always going to be reading and writing in multiples of 4K at a time.
What I want to know is: will there be a noticeable performance difference if I reformat the NVMe drives to use 4K sectors?
I know (roughly) how to do that using the nvme command while booted from a rescue image, but given the hassle involved, the amount of downtime, and the risk of losing data if I make a mistake or if disaster strikes during one of the periods when the ZFS pool is in a degraded state, I only want to do it if there is a significant benefit... significant, to me, meaning at least a 5 or 10% improvement, not just 1 or 2%.
(I have backups of the root pool - multiple nightly backups in multiple locations - but I'd prefer to avoid restoring from backup)
I don't care about performance for the ESP or /boot partitions. Swap & L2ARC might benefit. The ZIL rarely gets used and probably won't be noticeable. The main concern is performance of the zpool partition itself.
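For reference, the reformat itself would presumably be along these lines, run from a rescue environment (a destructive sketch, assuming LBA format 1 is the 4096-byte format, as the id-ns output above shows):
# switch the namespace to the 4 KiB LBA format (erases all data on it)
nvme format /dev/nvme0n1 --lbaf=1
# confirm the new format is the one marked "(in use)"
nvme id-ns -H /dev/nvme0n1 | grep 'Data Size'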
cas
(81932 rep)
Jul 15, 2025, 02:25 PM
• Last activity: Jul 15, 2025, 05:23 PM
5
votes
1
answers
2373
views
grub command after fresh install of 20.04 focal alongside Win10 on NVMe drive
I have a Windows 10 install on an NVMe drive. I've installed Ubuntu 20.04 and everything installed smoothly, until the first boot. I was greeted with a grub prompt.
grub>
After searching the forums and finding a wealth of information, I've been able to issue the following command and reach GRUB bootloader (and both Windows and Ubuntu load correctly from there):
grub> configfile (hd1,gpt5)/boot/grub/grub.cfg
However, when I reboot, I'm back to the grub command line. I've also found the following commands from the forums:
grub> set root=(hd1,gptN)
grub> set prefix=(hd1,gptN)/boot/grub/
grub> insmod normal
grub> normal
These commands also bring me to my grub menu and I can safely boot into either OS (Windows or Ubuntu). The problem is that I have to do this each time. Thus, I'm trying to make a permanent change to my grub settings.
Once in Ubuntu, I can update grub from the command line, and I can also reinstall grub, both with the following:
$: sudo update-grub
$: sudo grub-install /dev/nvme0n1pX
However, I'm at a loss as to how to determine the correct partition number for X in the grub-install command. Is it as simple as the N from the root/prefix commands in the grub terminal above? Or is there a more definitive way to check which partition number to choose?
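For what it's worth, the partition GRUB currently reads its config from can be cross-checked from within Ubuntu (a sketch; it assumes /boot/grub lives on the Ubuntu root partition):
# which block device holds /boot/grub (should correspond to (hd1,gpt5))
sudo grub-probe -t device /boot/grub
# map that back to the NVMe partition layout
lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/nvme0n1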
Any help is much appreciated.
user641699
(51 rep)
Jan 28, 2021, 02:42 PM
• Last activity: Jul 1, 2025, 07:08 AM
0
votes
1
answers
1890
views
Unable to boot on gentoo: GRUB2 - xfs - NVME SSD
I am currently configuring Gentoo on a new machine using a 500 GB NVMe SSD.
I reboot my computer, select the disk I want to boot off of, grub2 initializes, **and then**, the error I get is the following:
!!Block device UUID="9a89bdb4-8f36-4aa6-a4c7-831943b0985c" is not a valid root device...
!!Could not find the root block device in UUID="9a89bdb4-8f36-4aa6-a4c7-831943b0985c"
Please specify another value or: Press Enter for the same, type "shell" for a shell, or q to skip..."
root block device() ::
Here is my current partition scheme:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:1 0 2M 0 part /boot/efi
├─nvme0n1p2 259:2 0 128M 0 part /boot
├─nvme0n1p3 259:3 0 5G 0 part [SWAP]
├─nvme0n1p4 259:4 0 200G 0 part /
└─nvme0n1p5 259:5 0 260.6G 0 part /home
Here is my blkid:
/dev/nvme0n1p1: SEC_TYPE="msdos" UUID="DC09-2FD7" TYPE="vfat" PARTLABEL="grub" PARTUUID="2d5991fd-18ac-1148-a4d8-deb02f744ecb"
/dev/nvme0n1p2: UUID="6070-07C6" TYPE="vfat" PARTLABEL="boot" PARTUUID="5dba49e5-03cc-744e-bd47-a7570e83b08c"
/dev/nvme0n1p3: UUID="db229aaf-ddb4-4a86-8075-e7f035bfbf19" TYPE="swap" PARTLABEL="swap" PARTUUID="fdc303cc-e54e-c049-899a-e26286b5ec47"
/dev/nvme0n1p4: UUID="9a89bdb4-8f36-4aa6-a4c7-831943b0985c" TYPE="xfs" PARTLABEL="root" PARTUUID="da6232eb-58ab-9948-a3f6-8a7f14eebde4"
/dev/nvme0n1p5: UUID="e3237966-1b71-44b3-9d96-1ed7cc6f4d84" TYPE="xfs" PARTLABEL="home" PARTUUID="5b294354-fc3b-3148-bba2-418acfbb32bc"
This is part of my config in /etc/default/grub:
GRUB_CMDLINE_LINUX="rootfstype=xfs init=/usr/lib/systemd/systemd"
And this is my /boot/grub/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="0"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
if [ x$feature_default_font_path = xy ] ; then
font=unicode
else
insmod part_gpt
insmod xfs
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 9a89bdb4-8f36-4aa6-a4c7-831943b0985c
else
search --no-floppy --fs-uuid --set=root 9a89bdb4-8f36-4aa6-a4c7-831943b0985c
fi
font="/usr/share/grub/unicode.pf2"
fi
if loadfont $font ; then
set gfxmode=auto
load_video
insmod gfxterm
set locale_dir=$prefix/locale
set lang=en_CA
insmod gettext
fi
terminal_output gfxterm
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
load_video
if [ "x$grub_platform" = xefi ]; then
set gfxpayload=keep
fi
insmod gzio
insmod part_gpt
insmod fat
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 6070-07C6
else
search --no-floppy --fs-uuid --set=root 6070-07C6
fi
echo 'Loading Linux x86_64-4.19.44-gentoo ...'
linux /kernel-genkernel-x86_64-4.19.44-gentoo root=/dev/nvme0n1p4 ro rootfstype=xfs init=/usr/lib/systemd/systemd
echo 'Loading initial ramdisk ...'
initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
}
submenu 'Advanced options for Gentoo GNU/Linux' $menuentry_id_option 'gnulinux-advanced-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
menuentry 'Gentoo GNU/Linux, with Linux x86_64-4.19.44-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-x86_64-4.19.44-gentoo-advanced-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
load_video
if [ "x$grub_platform" = xefi ]; then
set gfxpayload=keep
fi
insmod gzio
insmod part_gpt
insmod fat
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 6070-07C6
else
search --no-floppy --fs-uuid --set=root 6070-07C6
fi
echo 'Loading Linux x86_64-4.19.44-gentoo ...'
linux /kernel-genkernel-x86_64-4.19.44-gentoo root=/dev/nvme0n1p4 ro rootfstype=xfs init=/usr/lib/systemd/systemd
echo 'Loading initial ramdisk ...'
initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
}
menuentry 'Gentoo GNU/Linux, with Linux x86_64-4.19.44-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-x86_64-4.19.44-gentoo-recovery-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
load_video
if [ "x$grub_platform" = xefi ]; then
set gfxpayload=keep
fi
insmod gzio
insmod part_gpt
insmod fat
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 6070-07C6
else
search --no-floppy --fs-uuid --set=root 6070-07C6
fi
echo 'Loading Linux x86_64-4.19.44-gentoo ...'
linux /kernel-genkernel-x86_64-4.19.44-gentoo root=/dev/nvme0n1p4 ro single rootfstype=xfs init=/usr/lib/systemd/systemd
echo 'Loading initial ramdisk ...'
initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
}
menuentry 'Gentoo GNU/Linux, with Linux 4.19.44-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.44-gentoo-advanced-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
load_video
if [ "x$grub_platform" = xefi ]; then
set gfxpayload=keep
fi
insmod gzio
insmod part_gpt
insmod fat
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 6070-07C6
else
search --no-floppy --fs-uuid --set=root 6070-07C6
fi
echo 'Loading Linux 4.19.44-gentoo ...'
linux /vmlinuz-4.19.44-gentoo root=/dev/nvme0n1p4 ro rootfstype=xfs init=/usr/lib/systemd/systemd
echo 'Loading initial ramdisk ...'
initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
}
menuentry 'Gentoo GNU/Linux, with Linux 4.19.44-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.44-gentoo-recovery-9a89bdb4-8f36-4aa6-a4c7-831943b0985c' {
load_video
if [ "x$grub_platform" = xefi ]; then
set gfxpayload=keep
fi
insmod gzio
insmod part_gpt
insmod fat
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root 6070-07C6
else
search --no-floppy --fs-uuid --set=root 6070-07C6
fi
echo 'Loading Linux 4.19.44-gentoo ...'
linux /vmlinuz-4.19.44-gentoo root=/dev/nvme0n1p4 ro single rootfstype=xfs init=/usr/lib/systemd/systemd
echo 'Loading initial ramdisk ...'
initrd /initramfs-genkernel-x86_64-4.19.44-gentoo
}
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
Finally, here is the content of my /etc/fstab:
# /etc/fstab: static file system information.
#
# noatime turns off atimes for increased performance (atimes normally aren't
# needed); notail increases performance of ReiserFS (at the expense of storage
# efficiency). It's safe to drop the noatime options if you want and to
# switch between notail / tail freely.
#
# The root filesystem should have a pass number of either 0 or 1.
# All other filesystems should have a pass number of 0 or greater than 1.
#
# See the manpage fstab(5) for more information.
#
#
# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
#
# NOTE: Even though we list ext4 as the type here, it will work with ext2/ext3
# filesystems. This just tells the kernel to use the ext4 driver.
#
# NOTE: You can use full paths to devices like /dev/sda3, but it is often
# more reliable to use filesystem labels or UUIDs. See your filesystem
# documentation for details on setting a label. To obtain the UUID, use
# the blkid(8) command.
#LABEL=boot /boot ext4 noauto,noatime 1 2
#UUID=58e72203-57d1-4497-81ad-97655bd56494 / ext4 noatime 0 1
#LABEL=swap none swap sw 0 0
#/dev/cdrom /mnt/cdrom auto noauto,ro 0 0
# /dev/nvme0n1p4
UUID=9a89bdb4-8f36-4aa6-a4c7-831943b0985c / xfs rw,relatime,attr2,inode64,noquota 0 1
# /dev/nvme0n1p2
UUID=6070-07C6 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/nvme0n1p1
UUID=DC09-2FD7 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/nvme0n1p5
UUID=e3237966-1b71-44b3-9d96-1ed7cc6f4d84 /home xfs rw,relatime,attr2,inode64,noquota 0 2
# /dev/nvme0n1p3
UUID=3128bf96-71f7-4a95-a81c-f82788c37f4f none swap defaults 0 0
I also did the following for troubleshooting:
- enable nvme support in the kernel
- enable xfs filesystem support in the kernel
- load grub without the rootfstype=xfs
- substitute the UUID with /dev/nvme0n1p4 in my fstab file
- drown my sorrows in liquor
[This issue](https://unix.stackexchange.com/questions/343056/could-not-find-the-root-block-device-in-gentoo) does not apply as it was a USB driver problem. And [this one](https://forums.gentoo.org/viewtopic-t-919588-start-0.html) was not of any help either.
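For reference, one more thing worth double-checking in this situation is that the kernel genkernel built actually has the NVMe and XFS drivers available (a sketch; it assumes the usual /usr/src/linux symlink points at the kernel that was built):
grep -E 'CONFIG_NVME_CORE|CONFIG_BLK_DEV_NVME|CONFIG_XFS_FS' /usr/src/linux/.config
# "=y" means built into the kernel; "=m" means the module must be present in the initramfs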
Benjamin Chausse
(107 rep)
Jun 1, 2019, 09:15 AM
• Last activity: Jun 28, 2025, 02:05 AM
1
votes
1
answers
51
views
U.2 SSD drive not detected. Drive's or adapter's fault?
I'm connecting a U.2 SSD drive to my computer with a U.2 USB enclosure. I have never tested the drive, and it's a classic ex-enterprise drive, the kind removed from a server and sold second-hand, possibly because it was faulty. However, I'm not an expert on the interface and I've bought an adapter specifically to be able to connect this drive.
Obviously, something is not working, but I can't tell if the drive is corrupted or if the adapter is not working properly. Here is the output of dmesg:
[ 97.657513] usb 2-1: new SuperSpeed USB device number 3 using xhci_hcd
[ 97.669267] usb 2-1: New USB device found, idVendor=152d, idProduct=0583, bcdDevice= 2.12
[ 97.669274] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 97.669276] usb 2-1: Product: NVME USB3.2
[ 97.669279] usb 2-1: Manufacturer: JMicron
[ 97.669281] usb 2-1: SerialNumber: 0123456789ABC
[ 97.673653] scsi host1: uas
[ 97.674217] scsi 1:0:0:0: Direct-Access NVME USB 3.2 0212 PQ: 0 ANSI: 6
[ 97.676078] sd 1:0:0:0: Attached scsi generic sg1 type 0
[ 105.835638] sd 1:0:0:0: [sdb] Unit Not Ready
[ 105.835647] sd 1:0:0:0: [sdb] Sense Key : Hardware Error [current]
[ 105.835654] sd 1:0:0:0: [sdb] ASC=0x44 >ASCQ=0x81
[ 105.960737] sd 1:0:0:0: [sdb] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[ 105.960749] sd 1:0:0:0: [sdb] Sense Key : Hardware Error [current]
[ 105.960757] sd 1:0:0:0: [sdb] ASC=0x44 >ASCQ=0x81
[ 106.126503] sd 1:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[ 106.126513] sd 1:0:0:0: [sdb] Sense Key : Hardware Error [current]
[ 106.126520] sd 1:0:0:0: [sdb] ASC=0x44 >ASCQ=0x81
[ 106.166475] sd 1:0:0:0: [sdb] 0 512-byte logical blocks: (0 B/0 B)
[ 106.166484] sd 1:0:0:0: [sdb] 0-byte physical blocks
[ 106.287357] sd 1:0:0:0: [sdb] Test WP failed, assume Write Enabled
[ 106.327325] sd 1:0:0:0: [sdb] Asking for cache data failed
[ 106.327333] sd 1:0:0:0: [sdb] Assuming drive cache: write through
[ 106.327337] sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes not a multiple of physical block size (0 bytes)
[ 106.327341] sd 1:0:0:0: [sdb] Optimal transfer size 33553920 bytes not a multiple of physical block size (0 bytes)
[ 106.327760] sd 1:0:0:0: [sdb] Attached SCSI disk
Can anybody clarify this for me?
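One way to probe a bit further through a JMicron USB-to-NVMe bridge like the one shown in the dmesg output is smartmontools' USB-NVMe passthrough (a sketch; -d sntjmicron needs a reasonably recent smartctl and a bridge that forwards NVMe admin commands):
# try to read the drive's NVMe identify/SMART data through the JMicron bridge
sudo smartctl -d sntjmicron -a /dev/sdb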
Jeffrey Lebowski
(427 rep)
Jun 11, 2025, 03:15 PM
• Last activity: Jun 12, 2025, 04:22 AM
5
votes
1
answers
165
views
Replace Disk Raid 1 And Reconfigure to use full size - mdadm almalinux
I am facing an issue with my RAID 1 (mdadm softraid) on an AlmaLinux/CloudLinux OS server, which is a production server with live data. Here's the chronology of events:
1. Initially, I created a RAID 1 array with two 1TB NVMe disks (2 x 1TB).
2. At some point, the second NVMe disk failed. I replaced it with a new
2TB NVMe disk. I then added this new 2TB NVMe disk to the RAID
array, but it was partitioned/configured to match the 1TB capacity
of the remaining active disk.
3. Currently, the first 1TB disk has failed and was automatically
kicked out by the RAID system when I rebooted the server. So, only
the 2TB NVMe disk (which is currently acting as a 1TB member of the
degraded RAID) remains.
**Replacement and Setup Plan**
I have already replaced the failed 1TB disk with a new 2TB NVMe disk. I want to utilize the full 2TB capacity since both disks are now 2 x 2TB.
[root@id1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md124 : active raid5 sdd2 sdc2 sda2
62945280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md125 : active raid5 sdd1 sdc1 sda1
1888176128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 7/8 pages [28KB], 65536KB chunk
md126 : active raid5 sda3 sdc3 sdd3
2097152 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 nvme1n1p1
976628736 blocks super 1.2 [2/1] [_U]
bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices:
---
mdadm --detail /dev/md127
[root@id1 ~]# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Tue Aug 29 05:57:10 2023
Raid Level : raid1
Array Size : 976628736 (931.39 GiB 1000.07 GB)
Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu May 29 01:33:09 2025
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : idweb.webserver.com:root (local to host idweb.webserver.com)
UUID : 3fb9f52f:45f39d12:e7bb3392:8eb1481f
Events : 33132451
Number Major Minor RaidDevice State
- 0 0 0 removed
2 259 2 1 active sync /dev/nvme1n1p1
---
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
[root@id1 ~]# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 931.5G
├─sda1 linux_raid_member 900.5G web1srv.serverhostweb.com:home2
│ └─md125 ext4 1.8T /home2
├─sda2 linux_raid_member 30G web1srv.serverhostweb.com:tmp
│ └─md124 ext4 60G /var/tmp
└─sda3 linux_raid_member 1G web1srv.serverhostweb.com:boot
└─md126 xfs 2G /boot
sdb ext4 5.5T
sdc 931.5G
├─sdc1 linux_raid_member 900.5G web1srv.serverhostweb.com:home2
│ └─md125 ext4 1.8T /home2
├─sdc2 linux_raid_member 30G web1srv.serverhostweb.com:tmp
│ └─md124 ext4 60G /var/tmp
└─sdc3 linux_raid_member 1G web1srv.serverhostweb.com:boot
└─md126 xfs 2G /boot
sdd 931.5G
├─sdd1 linux_raid_member 900.5G web1srv.serverhostweb.com:home2
│ └─md125 ext4 1.8T /home2
├─sdd2 linux_raid_member 30G web1srv.serverhostweb.com:tmp
│ └─md124 ext4 60G /var/tmp
└─sdd3 linux_raid_member 1G web1srv.serverhostweb.com:boot
└─md126 xfs 2G /boot
nvme0n1 1.8T
nvme1n1 1.8T
└─nvme1n1p1 linux_raid_member 931.5G web1srv.serverhostweb.com:root
└─md127 ext4 931.4G /
What are the steps to repair my soft RAID 1, maximize the storage to 2TB, and ensure the data remains safe?
I have some example steps below, but I'm not really sure. Are the following steps right?
# Create a partition on the new disk with a full size of 2TB
fdisk /dev/nvme0n1
mdadm --manage /dev/md127 --add /dev/nvme0n1p1
# Wait for First Sync
# Fail and remove the old disk
mdadm --manage /dev/md127 --fail /dev/nvme1n1p1
mdadm --manage /dev/md127 --remove /dev/nvme1n1p1
# Repartition the old disk for full 2TB
gdisk /dev/nvme1n1
# Add back to RAID
mdadm --manage /dev/md127 --add /dev/nvme1n1p1
# Wait for Second Sync
# Expand RAID array to maximum
mdadm --grow /dev/md127 --size=max
# Verify new size
mdadm --detail /dev/md127
# Resize ext4 filesystem
resize2fs /dev/md127
# Update mdadm.conf
mdadm --detail --scan > /etc/mdadm.conf
# Update initramfs
dracut -f
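For the two "wait for sync" steps in the plan above, the resync can be monitored or waited on explicitly (a sketch using the standard mdadm/procfs interfaces):
# watch rebuild progress
watch cat /proc/mdstat
# or block until the current resync/recovery has finished
mdadm --wait /dev/md127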
Server Spec:
- OS: AlmaLinux/CloudLinux 8
Hendra Setyawan
(51 rep)
May 28, 2025, 07:01 PM
• Last activity: May 30, 2025, 01:02 AM
6
votes
1
answers
10569
views
How to check/fix nvme health?
I'm running Debian stable with a 2 x NVMe RAID 1.
Here is the hardware/hoster it's running on
https://www.hetzner.com/dedicated-rootserver/ex62-nvme?country=us
Almost every second day mdadm monitoring reports a fail event and leaves the array degraded.
It only disables 1 partition as you can see here:
This is an automatically generated mail message from mdadm
running on xxx
A Fail event had been detected on md device /dev/md/2.
It could be related to component device /dev/nvme1n1p3.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 nvme1n1p3(F) nvme0n1p3
465895744 blocks super 1.2 [2/1] [U_]
bitmap: 4/4 pages [16KB], 65536KB chunk
md0 : active (auto-read-only) raid1 nvme1n1p1 nvme0n1p1
33521664 blocks super 1.2 [2/2] [UU]
md1 : active raid1 nvme0n1p2 nvme1n1p2
523712 blocks super 1.2 [2/2] [UU]
unused devices:
This happens on both disks. One time it's nvme0n1p3 and next time it's nvme1n1p3.
I then just re-add the failed partition with
mdadm --re-add /dev/md2 /dev/nvme0n1p3
or
mdadm --re-add /dev/md2 /dev/nvme1n1p3
and after the resync it works for a day or two.
In dmesg I found this:
[94879.144892] nvme nvme1: I/O 311 QID 1 timeout, reset controller
[94879.252851] nvme nvme1: completing aborted command with status: 0007
[94879.252970] blk_update_request: I/O error, dev nvme1n1, sector 452352001
[94879.253091] nvme nvme1: completing aborted command with status: fffffffc
[94879.253223] blk_update_request: I/O error, dev nvme1n1, sector 68159504
[94879.253418] md: super_written gets error=-5
I tried to check the health of the devices with these commands, but they don't give me stats like "Reallocated_Sector_Ct" or "Reported_Uncorrect".
smartctl -x /dev/nvme1
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-8-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: KXG50ZNV512G TOSHIBA
Serial Number: 28SS10F6TYST
Firmware Version: AAGA4102
PCI Vendor/Subsystem ID: 0x1179
IEEE OUI Identifier: 0x00080d
Total NVM Capacity: 512,110,190,592 [512 GB]
Unallocated NVM Capacity: 0
Controller ID: 0
Number of Namespaces: 1
Namespace 1 Size/Capacity: 512,110,190,592 [512 GB]
Namespace 1 Formatted LBA Size: 512
Local Time is: Mon May 13 10:34:11 2019 CEST
Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL *Other*
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat *Other*
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 78 Celsius
Critical Comp. Temp. Threshold: 82 Celsius
Namespace 1 Features (0x02): NA_Fields
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 6.00W - - 0 0 0 0 0 0
1 + 2.40W - - 1 1 1 1 0 0
2 + 1.90W - - 2 2 2 2 0 0
3 - 0.0500W - - 3 3 3 3 1500 1500
4 - 0.0050W - - 4 4 4 4 6000 14000
5 - 0.0030W - - 5 5 5 5 50000 80000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 2
1 - 4096 0 1
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning: 0x00
Temperature: 47 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 57%
Data Units Read: 31,858,921 [16.3 TB]
Data Units Written: 293,589,002 [150 TB]
Host Read Commands: 4,130,502,428
Host Write Commands: 889,121,505
Controller Busy Time: 13,552
Power Cycles: 7
Power On Hours: 6,720
Unsafe Shutdowns: 0
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 47 Celsius
Error Information (NVMe Log 0x01, max 128 entries)
No Errors Logged
nvme smart-log /dev/nvme1
Smart Log for NVME device:nvme1 namespace-id:ffffffff
critical_warning : 0
temperature : 47 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 57%
data_units_read : 31,858,921
data_units_written : 293,589,023
host_read_commands : 4,130,502,429
host_write_commands : 889,122,059
controller_busy_time : 13,552
power_cycles : 7
power_on_hours : 6,720
unsafe_shutdowns : 0
media_errors : 0
num_err_log_entries : 0
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 47 C
Temperature Sensor 2 : 0 C
Temperature Sensor 3 : 0 C
Temperature Sensor 4 : 0 C
Temperature Sensor 5 : 0 C
Temperature Sensor 6 : 0 C
Temperature Sensor 7 : 0 C
Temperature Sensor 8 : 0 C
nvme smart-log-add /dev/nvme1
NVMe Status:INVALID_LOG_PAGE(4109)
smartctl -A /dev/nvme1
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-8-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF SMART DATA SECTION ===
SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning: 0x00
Temperature: 46 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 57%
Data Units Read: 31,858,924 [16.3 TB]
Data Units Written: 293,591,327 [150 TB]
Host Read Commands: 4,130,502,490
Host Write Commands: 889,172,096
Controller Busy Time: 13,552
Power Cycles: 7
Power On Hours: 6,721
Unsafe Shutdowns: 0
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 46 Celsius
I only noticed the issue after Apache failed to start and I repaired the filesystem with fsck.ext4 -f. Before that, I didn't have root mail set up correctly.
So it looks to me like a hardware error and I should get rid of both NVMes.
Is there anything I can try to fix these issues and save the NVMes? Or at least to get all the SMART values like "Reported_Uncorrect" or "Offline_Uncorrectable"?
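For what it's worth, NVMe drives don't expose ATA attributes like the ones named above; the closest equivalents are the on-device error log and, if the controller implements it, the NVMe self-test (a sketch; this particular drive may not support self-test):
# full controller error log
nvme error-log /dev/nvme1
# optionally start a short self-test and read its result log (needs NVMe 1.3+ support)
nvme device-self-test /dev/nvme1 -s 1
nvme self-test-log /dev/nvme1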
treffner
(61 rep)
May 13, 2019, 08:55 AM
• Last activity: May 17, 2025, 06:00 AM
3
votes
1
answers
107
views
How do I debug NVME SSD heating while idle when booting from it?
nvme smart-log /dev/nvme0n1 shows a consistently climbing temperature of a KINGSTON SKC3000D4096G NVMe SSD drive, even when idle. Without load, it converges to 85-90 degrees Celsius. The laptop is indeed very hot at that point and the fan is running all the time.
The problem is reproducible when booting in single-user mode.
The system is Debian running kernel 6.12.12+bpo-amd64 (or 6.1.0-31-amd64 - the same result).
However, when booting a Linux Mint or Debian **LiveCD** (with a similar kernel, e.g. 6.8; actually a USB flash drive, not a CD), the drive does **not heat up** when idle, even if something is mounted from it. Only actively reading from it or writing to it makes the temperature climb.
It stays hot even when doing
echo s2idle > /sys/power/mem_sleep
echo mem > /sys/power/state
but cools down when doing
echo deep > /sys/power/mem_sleep
echo mem > /sys/power/state
How do I debug this?
I tried experimenting with applying powertop suggestions, and with using nvme_core.default_ps_max_latency_us=0 (or =20). Maybe there is some arcane nvme set-feature command that would help? Maybe there is a way to temporarily suspend the device besides a full s2ram (like hdparm -y, but for NVMe)?
---
nvme get-feature /dev/nvme0 -f 0x0c -H shows a couple of non-zero entries (100 ms, 4). Entries seem to stay the same.
---
# Update: partial workaround
Based on [this bug report](https://bugs.launchpad.net/ubuntu/+source/nvme-cli/+bug/2064042), I tried using nobarrier and it indeed stopped the heating (after s2ram cycle). Doing mount -o remount,barrier / immediately starts the power drain.
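For watching the effect of these experiments, a small loop over the SMART temperature works well (a sketch; it just samples nvme smart-log every few seconds):
while true; do
    nvme smart-log /dev/nvme0n1 | grep -i '^temperature'
    sleep 5
done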
Vi.
(5985 rep)
Apr 15, 2025, 09:31 PM
• Last activity: Apr 16, 2025, 05:44 PM
0
votes
1
answers
291
views
PCI correctable errors on my SSD
I am on Ubuntu 22.04 LTS, and I have a Western Digital WD Black 500Gb NVMe SSD. The laptop is a Dell E5495.
I installed a fresh Ubuntu (it previously ran Windows 11), but I continuously get the following errors in the system log:
[ 426.038056] pcieport 0000:00:01.5: AER: Correctable error message received from 0000:04:00.0
[ 426.038083] nvme 0000:04:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
[ 426.038092] nvme 0000:04:00.0: device [15b7:5017] error status/mask=00000001/0000e000
[ 426.038101] nvme 0000:04:00.0: [ 0] RxErr (First)
[ 426.575193] pcieport 0000:00:01.5: AER: Multiple Correctable error message received from 0000:04:00.0
[ 426.575220] pcieport 0000:00:01.5: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
[ 426.575227] pcieport 0000:00:01.5: device [1022:15d3] error status/mask=00001000/00006000
[ 426.575236] pcieport 0000:00:01.5: Timeout
[ 426.575248] nvme 0000:04:00.0: PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
[ 426.575255] nvme 0000:04:00.0: device [15b7:5017] error status/mask=00000081/0000e000
[ 426.575263] nvme 0000:04:00.0: [ 0] RxErr (First)
[ 426.575270] nvme 0000:04:00.0: [ 7] BadDLLP
[ 426.575276] nvme 0000:04:00.0: AER: Error of this Agent is reported first
Despite this, the laptop works well.
Could anybody tell me why?
Thank you!
**SOLVED:** after upgrading the PC BIOS with fwupd, the issue is gone!
$ journalctl -b 0 | grep -i aer
apr 13 00:08:53 Laptop kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
apr 13 00:08:53 Laptop kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 25
apr 13 00:08:53 Laptop kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 26
apr 13 00:08:53 Laptop kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 27
apr 13 00:08:53 Laptop kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 28
apr 13 00:08:53 Laptop kernel: pcieport 0000:00:08.2: AER: enabled with IRQ 30
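For reference, the BIOS update mentioned above can be done entirely from Linux with fwupd (a sketch; it assumes the vendor publishes the firmware on the LVFS, as Dell does for most Latitude models):
sudo fwupdmgr refresh        # pull the latest metadata from the LVFS
sudo fwupdmgr get-updates    # list firmware updates available for this machine
sudo fwupdmgr update         # apply them; BIOS updates are staged for the next reboot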
Antonio Petricca
(654 rep)
Apr 4, 2025, 04:57 AM
• Last activity: Apr 12, 2025, 10:19 PM
-1
votes
1
answers
95
views
NVMe M.2 PCIe 4.0 recognised in BIOS but fail to find from OS
I have a Dell Latitude 7390 2-in-1 with a 256Gb NVMe drive installed, dual-booting Win10 and Deb12. All working fine. I'm just running out of space, so I planned to clone to a bigger drive. From the Dell site and the Crucial upgrade site I concluded a 2Tb NVMe M.2 PCIe 4.0 should work, and bought a Kingston of that spec.
I have attempted to clone the 256Gb to the 2Tb using Clonezilla, dd, AOMEI Backupper and Macrium Reflect X Home (free trial), both via direct clone with the 2Tb in a USB-C housing and via backup to an image file and restore. After replacing the 256Gb with the 2Tb in the Dell, ALL methods of cloning fail to boot. The Kingston is listed in BIOS, but I just get a spinning white circle and the unit eventually rebooting and repeating.
If I boot the Dell from GParted Live, Clonezilla, or attempt to install a fresh Debian 12, ALL fail to find/list the 2Tb NVMe drive, despite it showing in BIOS. If I swap back to the 256Gb, the Dell boots fine, and from Windows it will see the 2Tb, though it marks it as offline because it has the same disk signature ... all the partitions look as expected from the clone. If I connect it to a separate unit running Debian 11, it shows as expected and I can mount the partitions and read/write.
So this may be more of a Dell question, but any thoughts or knowledge? Have I bought the wrong spec of NVMe for that Dell, or is there a BIOS setting I need to tweak?
More info: the old drive is an "SK hynix SC311 SATA 256Gb". If that is an M.2 SATA SSD and not NVMe, would that prevent the cloning?
Also, in addition to the software already listed above, I have now tried the Boot-Repair that @oldfred suggested. It lists the 2Tb NVMe like the BIOS does, but as with the other software, there is no mention of it in the rest of the log.
Thank you to @oldfred; his comments were the main part of the answer. Per his comments, I had to put the old drive back in, apply the AHCI change, then re-clone it. I used Macrium and both created an image and did a direct clone to the new 2Tb NVMe in a USB caddy. I then swapped the 2Tb in for the old 256Gb drive and rebooted, which failed. I ran Macrium's boot repair option on the 2Tb, but all the blue-screen options failed to do anything apart from accessing the UEFI settings. So option 2 was to boot Macrium's USB recovery media and use it to restore the image to the 2Tb. On rebooting, the 2Tb said "Preparing Automatic Repair" and "Diagnosing PC", rebooted, did it again, but I could select Advanced and enter Safe Mode... which ran fine. Rebooted and finally all good.
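For reference, whether Linux sees the drive at all, independent of any partitions or cloned data, can be checked from any live environment (a minimal sketch):
lsblk -d -o NAME,MODEL,SIZE    # the Kingston should appear as nvme0n1 (or similar)
sudo dmesg | grep -i nvme      # controller or link errors would show up here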
kalpha
(3 rep)
Feb 26, 2025, 11:37 PM
• Last activity: Mar 2, 2025, 03:36 AM
4
votes
0
answers
650
views
Behavior of the nvme tool with respect to protection information?
Environment: Debian Bullseye, up to date at the time of writing, nvme-cli 1.12
I am totally new to NVMe and currently try to configure an NVMe SSD correctly. As far as I can tell, I don't need metadata, but **I would like to use the T10-PI protection information.** I have got two questions regarding this subject:
**First, I'd like to know how to find out whether that protection has been enabled for a certain device or namespace at all.** I know that I can enable or disable the T10-PI when formatting the device or namespace, but I can't figure out how to get its current status. I have read a good part of the manual pages of the various nvme commands, and also have tried to make sense of the NVMe specification to a certain level, but to no avail. I just seem to be unable to spot it.
I am having that problem only with that specific setting; with other settings that are of interest to me, it didn't take too long to find out how to read their current status or value.
**Second, I am not sure how to enable that protection.** Theoretically, and from reading man nvme-format, it is clear. I just have to add the -i parameter to the format command to have something like that:
nvme format /dev/nvme0 -l 3 -i 1
The disk in question provides 6 LBA modes, mode 3 being the one I want: 4096 bytes per sector, no metadata; hence the -l 3 parameter. -i 1 turns on the T10-PI.
When I issue the command above, it gets executed without error message. Afterwards, smartctl -x /dev/nvme0 shows that the current LBA size is now 4096; nvme id-ns /dev/nvme0n1 confirms that mode 3 is in use as expected. So far, so good.
But the following is very suspicious:
root@gaia ~/scripts # nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 PHFT640100G7800CGN INTEL SSDPEDMD800G4 1 800.17 GB / 800.17 GB 4 KiB + 0 B 8DV10171
Under Format, it shows 4 KiB + 0 B. Why? As far as I have understood, the T10-PI needs at least 8 bytes of metadata per LBA. Therefore, I am unsure what actually happens.
Does nvme format /dev/nvme0 -l 3 -i 1 just leave the PI disabled (because there is no metadata, and therefore no space for it)? Or is the PI enabled, but nvme list shows only the "real" metadata size (not including the "implicit" bytes which are needed for the PI)?
Do I need to use -l 4 instead of -l 3 with nvme format? -l 4 means 4096 bytes LBA size + 8 bytes metadata. If I need to use -l 4, why does nvme format -l 3 -i 1 not throw an error due to wrong command line parameters (we can't turn on T10-PI if we don't have metadata)?
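As an aside on the first question, the namespace's current PI setting is reported in the dpc/dps fields of the Identify Namespace data, which nvme-cli can decode (a sketch; field names as printed by recent nvme-cli versions):
# dpc = which PI types the namespace supports, dps = which one is currently enabled
nvme id-ns -H /dev/nvme0n1 | grep -iE '^dpc|^dps|protection'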
Binarus
(3891 rep)
Oct 30, 2022, 09:14 PM
• Last activity: Jan 16, 2025, 01:05 PM
1
votes
1
answers
538
views
Possible to map an entire NVMe SSD into PCIe BAR for MMIO?
Assuming a 1 TiB NVMe SSD, I am wondering if it's possible to map its entire capacity (1 TiB) into a PCIe BAR for memory-mapped I/O (MMIO).
My understanding is that typically only the device registers and doorbell registers of an NVMe SSD are mapped into PCIe BAR space, allowing MMIO access. Once a doorbell is rung, data transfers occur via DMA between system memory and the NVMe SSD. This makes me wonder whether it is possible to open up more than the limited device memory/registers for large-range MMIO. Also, for this post, the NVMe SSD's CMB (Controller Memory Buffer) is excluded.
Given the disparity between the small size of the NVMe SSD's PCIe BAR space and its overall storage capacity, I'm unsure whether the entire SSD can be exposed through the PCIe BAR or physical memory address space.
I'm seeking guidance or clarification on my understanding of PCIe, BAR, and NVMe.
---
Here is an example of a 1 TiB Samsung 980 Pro SSD with only 16K of PCIe BAR space:
# lspci -s 3b:00.0 -v
3b:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
Flags: bus master, fast devsel, latency 0, IRQ 116, NUMA node 0, IOMMU group 11
Memory at b8600000 (64-bit, non-prefetchable) [size=16K]
Capabilities: Power Management version 3
Capabilities: MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
Capabilities: Advanced Error Reporting
Capabilities: Alternative Routing-ID Interpretation (ARI)
Capabilities: Secondary PCI Express
Capabilities: Physical Layer 16.0 GT/s
Capabilities: [1bc] Lane Margining at the Receiver
Capabilities: Latency Tolerance Reporting
Capabilities: [21c] L1 PM Substates
Capabilities: [3a0] Data Link Feature
Kernel driver in use: nvme
Kernel modules: nvme
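For reference, the same BAR layout is visible in sysfs, where each line of the resource file is one BAR given as start/end/flags (a sketch using the PCI address from the lspci output above):
# BAR0 is the 16 KiB register window; there is no BAR anywhere near 1 TiB
cat /sys/bus/pci/devices/0000:3b:00.0/resource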
JGL
(161 rep)
May 2, 2024, 08:58 PM
• Last activity: Dec 31, 2024, 04:36 PM
1
votes
4
answers
1043
views
How to truly securely erase a nvme disk?
I was trying to securely erase my disk and realized this is a much more difficult task than I expected; even the first Stack Exchange result is not really up to date, and is quite dangerous indeed:
https://unix.stackexchange.com/questions/593181/is-shred-bad-for-erasing-ssds
I was going to use shred from a live Linux USB, but it appears it is not safe for SSDs (and NVMe?), so I should use blkdiscard -s instead, but it says my device is not supported:
blkdiscard: /dev/nvme0n1: BLKSECDISCARD ioctl failed: Operation not supported
The other question is quite dangerous because it says they are not worried about advanced data recovery methods, but that is not my case, and the first result on Google shouldn't make people do dangerous things.
I also discovered this ATA Secure Erase from the Linux kernel, but even they say it is outdated.
I am looking for what is truly the most secure method for erasing data on a device, as if your life depended on it, short of burning the disk to ashes or something.
So, based on my research, what we should do is blkdiscard -s if supported, also blkdiscard -v, and then, just to be sure, shred -v.
Wear and tear on the hardware is not a concern at all, but can we even trust that software solutions can be 100% safe these days?
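For NVMe specifically, the drive-level equivalents of ATA Secure Erase are the Format NVM command with a secure-erase setting and, where supported, the Sanitize command (a destructive sketch; support should be checked in the nvme id-ctrl output, e.g. the fna and sanicap fields, before relying on it):
# user-data or crypto erase via Format NVM (--ses=1 user data erase, --ses=2 crypto erase)
nvme format /dev/nvme0n1 --ses=1
# or a block-erase sanitize of the whole controller, then poll its progress
nvme sanitize /dev/nvme0 --sanact=2
nvme sanitize-log /dev/nvme0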
Freedo
(1384 rep)
Dec 4, 2024, 04:23 AM
• Last activity: Dec 4, 2024, 06:19 PM
3
votes
2
answers
762
views
nvme timeout issues on new Intel 7-155H laptop - kernel 6.5 OEM
I have a new laptop with an Intel 7-155H processor and an NVMe SSD. I'm running Linux Mint 21.3 and having storage issues. On initial installation, the entire system was very slow for all operations. I upgraded to the 6.5.0-1023-oem kernel and things improved significantly. However, whenever I do a moderate amount of reads/writes, the entire system freezes, except for the mouse cursor, for about a minute. The dmesg output shows multiple messages like:
nvme nvme0: I/O 97 QID 1 timeout, completion polled
How can I find the root cause of this and resolve it?
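One commonly tried experiment for this class of symptom is disabling NVMe autonomous power-state transitions via a kernel parameter (a sketch; this is a diagnostic step, not a confirmed fix for this particular laptop):
# add nvme_core.default_ps_max_latency_us=0 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo update-grub && sudo reboot
# afterwards, check whether the timeouts still appear
sudo dmesg | grep -i 'nvme.*timeout'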
Origin
(141 rep)
May 15, 2024, 09:24 PM
• Last activity: Nov 22, 2024, 08:13 PM
0
votes
2
answers
174
views
U-Boot doesn't load boot script from NVME
Is it possible to boot an RPi CM4 + IO board completely from NVMe?
I updated the RPi bootloader and it loads the kernel fine without U-Boot.
When I switch to U-Boot, it doesn't load the boot.scr located in the same FAT partition. It tries mmc and usb, but never the nvme drive.
What is my problem?
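For reference, whether U-Boot can see the drive at all, and whether NVMe is in its distro-boot device list, can be checked from the U-Boot prompt (a sketch; it assumes the U-Boot build has the nvme command enabled and that scriptaddr is set, as in the default RPi environment):
nvme scan                  # probe the PCIe/NVMe controller
nvme info                  # list detected NVMe devices
printenv boot_targets      # "nvme0" must appear here for the distro bootcmd to try it
load nvme 0:1 ${scriptaddr} boot.scr && source ${scriptaddr}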
hanni76
(3 rep)
Nov 17, 2024, 02:15 PM
• Last activity: Nov 22, 2024, 01:18 AM
4
votes
1
answers
3211
views
NVMe errors diagnostics
I would like to understand why I get the mails below about the S.M.A.R.T. status of my new NVMe drive.
**DMESG**
$ dmesg --ctime | grep -i nvm
[Mon Aug 8 10:48:31 2022] nvme nvme0: pci function 0000:3d:00.0
[Mon Aug 8 10:48:31 2022] nvme nvme0: missing or invalid SUBNQN field.
[Mon Aug 8 10:48:31 2022] nvme nvme0: Shutdown timeout set to 8 seconds
[Mon Aug 8 10:48:31 2022] nvme nvme0: 8/0/0 default/read/poll queues
[Mon Aug 8 10:48:31 2022] nvme0n1: p1 p2
[Mon Aug 8 10:48:37 2022] EXT4-fs (nvme0n1p2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
[Mon Aug 8 10:48:37 2022] EXT4-fs (nvme0n1p2): re-mounted. Opts: errors=remount-ro. Quota mode: none.
**NVME ERRORS**
$ sudo nvme error-log /dev/nvme0
...
Entry
.................
error_count : 0
sqid : 0
cmdid : 0
status_field : 0(SUCCESS: The command completed successfully)
phase_tag : 0
parm_err_loc : 0
lba : 0
nsid : 0
vs : 0
trtype : The transport type is not indicated or the error is not transport related.
cs : 0
trtype_spec_info: 0
.................
...
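For reference, the number of entries pulled from the on-device log can be set explicitly, which makes it easier to compare against the count smartd reports; zeroed entries like the one above are simply unused slots (a sketch; --log-entries is the nvme-cli flag):
# ask for a specific number of the most recent error-log entries
sudo nvme error-log /dev/nvme0 --log-entries=16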
Could anyone shed some light on why I am getting new mails like this:
**MAIL**
# mail
Message 44:
From root@dell-laptop-CENSORED Sun Aug 7 08:13:07 2022
X-Original-To: root
To: root@dell-laptop-CENSORED
Subject: SMART error (ErrorCount) detected on host: dell-inspiron-15
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Date: Sun, 7 Aug 2022 08:12:59 +0200 (CEST)
From: root
This message was generated by the smartd daemon running on:
host name: dell-inspiron-15
DNS domain: [Empty]
The following warning/error was logged by the smartd daemon:
Device: /dev/nvme0, number of Error Log entries increased from 485 to 486
Device info:
Samsung SSD 970 EVO Plus 2TB, S/N:, FW:2B2QEXM7, 2.00 TB
For details see host's SYSLOG.
You can also use the smartctl utility for further investigation.
The original message about this issue was sent at Fri Apr 22 09:53:56 2022 CEST
Another message will be sent in 24 hours if the problem persists.
**SMART**
# smartctl -a /dev/nvme0n1
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-43-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number: Samsung SSD 970 EVO Plus 2TB
Serial Number:
Firmware Version: 2B2QEXM7
PCI Vendor/Subsystem ID: 0x144d
IEEE OUI Identifier: 0x002538
Total NVM Capacity: 2,000,398,934,016 [2.00 TB]
Unallocated NVM Capacity: 0
Controller ID: 4
NVMe Version: 1.3
Number of Namespaces: 1
Namespace 1 Size/Capacity: 2,000,398,934,016 [2.00 TB]
Namespace 1 Utilization: 544,784,187,392 [544 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: 002538 5221904ad7
Local Time is: Mon Aug 8 11:13:10 2022 CEST
Firmware Updates (0x16): 3 Slots, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x03): S/H_per_NS Cmd_Eff_Lg
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 85 Celsius
Critical Comp. Temp. Threshold: 85 Celsius
Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 7.50W - - 0 0 0 0 0 0
1 + 5.90W - - 1 1 1 1 0 0
2 + 3.60W - - 2 2 2 2 0 0
3 - 0.0700W - - 3 3 3 3 210 1200
4 - 0.0050W - - 4 4 4 4 2000 8000
Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 + 512 0 0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 44 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 5,565,230 [2.84 TB]
Data Units Written: 2,658,490 [1.36 TB]
Host Read Commands: 29,877,415
Host Write Commands: 18,211,598
Controller Busy Time: 112
Power Cycles: 240
Power On Hours: 215
Unsafe Shutdowns: 5
Media and Data Integrity Errors: 0
Error Information Log Entries: 502
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 44 Celsius
Temperature Sensor 2: 39 Celsius
Error Information (NVMe Log 0x01, 16 of 64 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS
0 502 0 0x1005 0x4004 - 0 0 -
**SYSLOG**
# cat /var/log/syslog | grep -i smart | grep -i nvm
Aug 7 16:08:27 dell-inspiron-15 smartd: Device: /dev/nvme0, opened
Aug 7 16:08:27 dell-inspiron-15 smartd: Device: /dev/nvme0, Samsung SSD 970 EVO Plus 2TB, S/N:S4J4NM0T201785H, FW:2B2QEXM7, 2.00 TB
Aug 7 16:08:27 dell-inspiron-15 smartd: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list.
Aug 7 16:08:27 dell-inspiron-15 smartd: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 7 16:08:27 dell-inspiron-15 smartd: Monitoring 1 ATA/SATA, 0 SCSI/SAS and 1 NVMe devices
Aug 7 16:08:28 dell-inspiron-15 smartd: Device: /dev/nvme0, number of Error Log entries increased from 486 to 487
Aug 7 16:08:28 dell-inspiron-15 smartd: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 07:17:38 dell-inspiron-15 smartd: Device: /dev/nvme0, opened
Aug 8 07:17:38 dell-inspiron-15 smartd: Device: /dev/nvme0, Samsung SSD 970 EVO Plus 2TB, S/N:S4J4NM0T201785H, FW:2B2QEXM7, 2.00 TB
Aug 8 07:17:38 dell-inspiron-15 smartd: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list.
Aug 8 07:17:38 dell-inspiron-15 smartd: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 07:17:38 dell-inspiron-15 smartd: Monitoring 1 ATA/SATA, 0 SCSI/SAS and 1 NVMe devices
Aug 8 07:17:38 dell-inspiron-15 smartd: Device: /dev/nvme0, number of Error Log entries increased from 487 to 488
Aug 8 07:17:38 dell-inspiron-15 smartd: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 08:21:16 dell-inspiron-15 smartd: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 11:14:00 dell-inspiron-15 smartd: Device: /dev/nvme0, opened
Aug 8 11:14:00 dell-inspiron-15 smartd: Device: /dev/nvme0, Samsung SSD 970 EVO Plus 2TB, S/N:S4J4NM0T201785H, FW:2B2QEXM7, 2.00 TB
Aug 8 11:14:00 dell-inspiron-15 smartd: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list.
Aug 8 11:14:00 dell-inspiron-15 smartd: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 11:14:00 dell-inspiron-15 smartd: Monitoring 1 ATA/SATA, 0 SCSI/SAS and 1 NVMe devices
Aug 8 11:14:00 dell-inspiron-15 smartd: Device: /dev/nvme0, number of Error Log entries increased from 488 to 494
Aug 8 11:14:01 dell-inspiron-15 smartd: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 12:48:40 dell-inspiron-15 smartd: Device: /dev/nvme0, opened
Aug 8 12:48:40 dell-inspiron-15 smartd: Device: /dev/nvme0, Samsung SSD 970 EVO Plus 2TB, S/N:S4J4NM0T201785H, FW:2B2QEXM7, 2.00 TB
Aug 8 12:48:40 dell-inspiron-15 smartd: Device: /dev/nvme0, is SMART capable. Adding to "monitor" list.
Aug 8 12:48:40 dell-inspiron-15 smartd: Device: /dev/nvme0, state read from /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
Aug 8 12:48:40 dell-inspiron-15 smartd: Monitoring 1 ATA/SATA, 0 SCSI/SAS and 1 NVMe devices
Aug 8 12:48:40 dell-inspiron-15 smartd: Device: /dev/nvme0, number of Error Log entries increased from 494 to 502
Aug 8 12:48:40 dell-inspiron-15 smartd: Device: /dev/nvme0, state written to /var/lib/smartmontools/smartd.Samsung_SSD_970_EVO_Plus_2TB-S4J4NM0T201785H.nvme.state
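A hedged set of follow-up checks with nvme-cli and smartmontools (same device name as above). When "Media and Data Integrity Errors" stays at 0 while "Error Information Log Entries" keeps climbing, the new entries are usually benign command-level rejections (for example an admin command or log page the firmware doesn't support) rather than a sign of a failing drive.
sudo nvme error-log /dev/nvme0 -e 64   # dump all 64 error-log slots, not only the most recent
sudo nvme smart-log /dev/nvme0         # media errors and critical warnings are reported here
sudo smartctl -x /dev/nvme0            # extended SMART report, including the error log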
Vlastimil Burián
(30505 rep)
Aug 8, 2022, 09:22 AM
• Last activity: Oct 29, 2024, 10:53 AM
0
votes
0
answers
359
views
How to change hard disk bus type NVMe to SCSI in VMWare Fusion?
I use Rocky Linux and have to change my hard disk bus type but then my booting failed. However, it runs normally when i select the bus type NVMe. [![enter image description here][1]][1] Boot failed [![enter image description here][2]][2] How do i solve this? Should i make a new VM and choose SCSI fr...
I use Rocky Linux and need to change my hard disk bus type, but then booting fails; it runs normally when I select the NVMe bus type.
(screenshots: boot failed)
How do I solve this? Should I create a new VM and choose SCSI from the start?
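One possible cause (an assumption, not something visible in the screenshots) is that the guest's initramfs lacks the VMware SCSI drivers, so the root disk disappears when the bus type changes. A sketch of how that would be addressed on Rocky Linux while the VM still boots with the NVMe bus type:
# Inside the guest, add the VMware SCSI drivers to the current initramfs
sudo dracut --force --add-drivers "vmw_pvscsi mptspi" --kver "$(uname -r)"
# Shut down, switch the virtual disk to SCSI in VMware Fusion, and boot again.
# vmw_pvscsi matches the "Paravirtual SCSI" controller, mptspi the "LSI Logic" one.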
Fathya
(1 rep)
Oct 25, 2024, 07:20 AM
• Last activity: Oct 27, 2024, 09:06 PM
3
votes
2
answers
3905
views
I deleted the namespaces (NS) in my NVMe SSD and Ubuntu is not able to recognize the device
I had two namespaces (NS) in my NVMe SSD (Samsung) and deleted both to create just one, but Ubuntu is not able to recognize the device upon deleting.  How do I recover the drive now? Command used to delete: `sudo nvme delete-ns /dev/nvme0n1 -n 1` - Ubuntu 18.04.1 LTS - Kernel 4.1...
I had two namespaces (NS) on my NVMe SSD (Samsung) and deleted both in order to create a single one, but Ubuntu no longer recognizes the device after the deletion. How do I recover the drive now?
Command used to delete:
sudo nvme delete-ns /dev/nvme0n1 -n 1
- Ubuntu 18.04.1 LTS
- Kernel 4.15
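A hedged recovery sketch using nvme-cli; the block counts and controller ID below are illustrative placeholders that must be replaced with the values reported by id-ctrl, and some consumer drives only accept a single full-capacity namespace:
sudo nvme id-ctrl /dev/nvme0 | grep -Ei 'tnvmcap|cntlid'   # total capacity and controller ID
# Recreate one namespace spanning the drive (example numbers, 512-byte LBAs)
sudo nvme create-ns /dev/nvme0 --nsze=3907029168 --ncap=3907029168 --flbas=0
sudo nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=4   # use the cntlid from id-ctrl
sudo nvme reset /dev/nvme0                                        # or reboot so the kernel rescans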
Kstar174
(49 rep)
Jan 29, 2019, 07:45 PM
• Last activity: Sep 28, 2024, 11:05 PM
0
votes
0
answers
1940
views
What is the difference between nvme0n1 and nvme1n1?
Some systems have a drive called nvme0n1 and others have a drive called nvme1n1, or a system will have both drives, but with different storage amounts. Why? What is the difference between these, and what does it mean that they are named differently? Example from computer 1: lsblk -dno NAME,SIZE,TYPE...
Some systems have a drive called nvme0n1 and others have a drive called nvme1n1, or a system will have both drives, but with different storage amounts. Why? What is the difference between these, and what does it mean that they are named differently?
Example from computer 1:
lsblk -dno NAME,SIZE,TYPE | grep nvme
nvme1n1 8G disk
nvme0n1 139.7G disk
Example from computer 2:
lsblk -dno NAME,SIZE,TYPE | grep nvme
nvme1n1 139.7G disk
nvme0n1 8G disk
A similar (but distinct) question was asked here, with no answers:
https://unix.stackexchange.com/questions/711664/why-nvme-device-name-is-nvme1c1n1-not-nvme0n1
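The digit right after "nvme" is just the order in which the kernel enumerated the controllers, which is not guaranteed to be the same across machines or even across boots; "n1" is the first namespace on that controller. A quick way to see which physical device is behind each name (assuming nvme-cli is installed):
sudo nvme list                        # model, serial number and capacity per controller
ls -l /sys/block/ | grep nvme         # the PCI path each block device hangs off
ls -l /dev/disk/by-id/ | grep nvme    # stable, serial-based names for fstab and scripts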
BigMistake
(119 rep)
Sep 27, 2024, 08:31 PM
• Last activity: Sep 27, 2024, 08:39 PM