Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4 votes · 2 answers · 6641 views
linux stress, impose work on specific disks
Have a server running centos 7.6, and it has 4 ssd's as Raid-0 mounted as
/scratch/
I have the linux program stress-1.0.4-16
and I just learned stress-ng
existed.
Is there a way with stress
to tell it to do I/O operations to stress a specific set of disks such as my 4 disk Raid-0? Or does it only work on whatever the root file system is such as /tmp
? And if that's the case I've done systemctl enable tmp.mount
which means the /tmp
folder is no longer on disk, negating the disk I/O function of stress?
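A minimal sketch of how this might be pointed at the RAID-0 mount, assuming plain stress writes its temporary files under the current working directory and that the installed stress-ng supports --temp-path:
# plain stress: run it from the target filesystem so the hdd workers write there
cd /scratch && stress --hdd 4 --hdd-bytes 8G --timeout 300
# stress-ng: point the hdd stressor's temp files at /scratch explicitly
stress-ng --hdd 4 --hdd-bytes 8G --temp-path /scratch --timeout 300s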
ron
(8647 rep)
Jan 22, 2020, 04:36 PM
• Last activity: Aug 6, 2025, 11:09 AM
7 votes · 1 answer · 2083 views
Btrfs/ZFS Network Replication
Is it possible to replicate a ZFS or Btrfs raid volume in real-time (or as close to as possible, network specs aside) over a network?
ZFS and Btrfs are ideal because of their CoW properties.
I'm thinking something similar to DRBD, but DRBD won't work because it requires a single-block device, and we're ruling out the option of exporting each disk as a DRBD device because that would get messy.
I don't want to use send/receive because they would be too slow, even if scripted.
Ideally, I'd like something relatively simple to avoid unnecessary complexity.
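For reference, the send/receive baseline being ruled out above looks roughly like this on ZFS (a hedged sketch; the dataset, snapshot names and remote host are placeholders):
# take a new snapshot and ship only the delta since the previous replicated one
zfs snapshot tank/data@rep-new
zfs send -i tank/data@rep-old tank/data@rep-new | ssh backuphost zfs receive -F tank/data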
DevinM
(171 rep)
Nov 10, 2015, 12:23 AM
• Last activity: Jul 30, 2025, 04:05 PM
1 vote · 0 answers · 25 views
mdadm --monitor --program option not working
I am trying to make mdadm call into a simple bash script which writes a message in the kernel log in case of a state change.
The script is as follows,
# cat /tmp/test.sh
#!/bin/bash
echo "raid array status change" > /dev/kmsg
I have added the following to the mdadm config file
PROGRAM /tmp/test.sh
ARRAY /dev/md0 UUID=
But when I do the test, it says the md0 is being picked up, and the correct program option is being used, but nothing gets printed in dmesg
mdadm --monitor --test /dev/md0 -1
Note that when I manually run the script, it prints the message in kernel log.
My questions
1. Does the above process of calling the program depend on the mail setting also? Because my mail config is not set.
2. Any idea what could be wrong or missing?
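A hedged sketch of how the alert script can be exercised directly in the foreground, assuming the script is executable and using mdadm's documented --program/--test/--oneshot options:
chmod +x /tmp/test.sh
# run the monitor once, generating a TestMessage event for md0 and calling the program
mdadm --monitor --test --oneshot --program /tmp/test.sh /dev/md0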
Haris
(113 rep)
Jul 30, 2025, 09:40 AM
0 votes · 1 answer · 3086 views
Convert non-RAID disk with data into RAID 1 disk (hardware controller)
I moved away from software RAID due to all the hassle it brings. After an OS reinstall, I am left with only one drive. I ordered a hardware RAID controller today, and when the controller arrives, I'd like to plug the identical drives into the RAID controller and set up RAID 1 WITHOUT losing any data or needing to reinstall the OS (Debian Jessie x86_64).
Output of
lsblk
:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 953M 0 part /boot
├─sda2 8:2 0 29.8G 0 part [SWAP]
└─sda3 8:3 0 900.8G 0 part
├─vgmain-lvroot 254:0 0 621.4G 0 lvm /
├─vgmain-lvmail 254:1 0 93.1G 0 lvm /var/vmail
├─vgmain-lvhome 254:2 0 93.1G 0 lvm /home
├─vgmain-lvtmp 254:3 0 18.6G 0 lvm /tmp
└─vgmain-lvvar 254:4 0 74.5G 0 lvm /var
sdb 8:16 0 931.5G 0 disk
Can I do this somehow by dd
ing the existing data to the clean drive while having it plugged into the RAID controller and set up as RAID 1? To clarify, let's say sda is the drive with my data, sdb is the drive which is not in use.
* Plug sda into the mobo sata controller
* Plug sdb into the RAID controller
* Define sdb as RAID 1 drive
* Boot from liveCD and dd
contents of sda → sdb
* Plug sda into RAID controller, define as RAID1
* RAID controller syncs the drives, (copies over sdb to sda) (?)
* Boot without problems?
Will dd
copy the drive in a way that mbr/partitions/etc. are preserved? Am I thinking in a completely stupid way of doing this?
I contacted the RAID controller manufacturer and asked if it has some kind of utility to convert a drive into 2 drives in RAID1, but they said no. If it's relevant in any way, the specific controller is a HighPoint RocketRAID 620 PCI-Express 2.0 x1 SATA III RAID card.
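A hedged sketch of the copy step from a live CD, assuming GNU dd; it clones the MBR, partition table and data as-is (the target must be at least as large, and any metadata layout imposed by the controller is not accounted for):
# whole-disk clone: sda (source with data) -> sdb (blank disk on the RAID controller)
dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync status=progress
sync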
Axel Latvala
(109 rep)
Jun 13, 2016, 04:12 PM
• Last activity: Jul 25, 2025, 11:04 PM
4 votes · 2 answers · 2255 views
Ubuntu 18.04 VM in emergency/maintenance mode due to failed corrupted raided disk
I have a VM which has an attached raided device with fstab entry:
/dev/md127 /mnt/blah ext4 nofail 0 2
The raided disks are corrupted, and during startup the unit entered emergency/maintenance mode, which means only the local host user could exit this mode and start it up normally. During normal startup the following occurred in syslog:
systemd-fsck: /dev/md127 contains a file system with errors, check forced.
systemd-fsck: /dev/md127: Inodes that were part of a corrupted orphan linked list found.
systemd-fsck: /dev/md127: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
systemd-fsck: #011(i.e., without -a or -p options)
systemd-fsck: fsck failed with exit status 4.
systemd-fsck: Running request emergency.target/start/replace
systemd
: systemd-fsck@dev-md127.service: Main process exited, code=exited, status=1/FAILURE
systemd
: systemd-fsck@dev-md127.service: Failed with result 'exit-code'.
systemd
: Failed to start File System Check on /dev/md127.
systemd
: Dependency failed for /mnt/blah.
systemd
: Dependency failed for Provisioner client daemon.
My guess is that the OS goes to emergency/maintenance mode because of the corrupt raided disks:
systemctl --state=failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● systemd-fsck@dev-md127.service loaded failed failed File System Check on /dev/md127
What I want is for the VM to start up regardless of whether the raided drives are corrupt/unmountable, so it shouldn't go to emergency/maintenance mode. I followed these posts to attempt to disable emergency/maintenance mode:
- https://unix.stackexchange.com/questions/416640/how-to-disable-systemd-agressive-emergency-shell-behaviour
- https://unix.stackexchange.com/questions/326493/how-to-determine-exactly-why-systemd-enters-emergency-mode/393711#393711
- https://unix.stackexchange.com/questions/422319/emergency-mode-and-local-disk
I had to first create the directory local-fs.target.d in /etc/systemd/system/, which felt wrong. I then created a nofail.conf at /etc/systemd/system/local-fs.target.d/nofail.conf containing:
[Unit]
OnFailure=
After loading that drop file, I was able to confirm that the drop file was found by local-fs.target:
sudo systemctl status local-fs.target
● local-fs.target - Local File Systems
Loaded: loaded (/lib/systemd/system/local-fs.target; static; vendor preset: enabled)
Drop-In: /etc/systemd/system/local-fs.target.d
└─nofail.conf
Active: active since Tue 2019-01-08 12:36:41 UTC; 3h 55min ago
Docs: man:systemd.special(7)
BUT, after rebooting, the VM still ended up in emergency/maintenance mode. Have I missed something? Does the nofail.conf solution not work with raided disks?
----------
EDIT: I was able to get a printout of the logs when the system booted to emergency mode (sorry it's a screenshot since I don't have access to the host and had to ask the owner for it):
[screenshot of the boot log showing the failed dependency]
Here's the output from systemctl status for systemd-fsck@dev-md127:
sudo systemctl status --no-pager --full systemd-fsck@dev-md127
● systemd-fsck@dev-md127.service - File System Check on /dev/md127
Loaded: loaded (/lib/systemd/system/systemd-fsck@.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-01-10 12:05:44 UTC; 2h 57min ago
Docs: man:systemd-fsck@.service(8)
Process: 1025 ExecStart=/lib/systemd/systemd-fsck /dev/md127 (code=exited, status=1/FAILURE)
Main PID: 1025 (code=exited, status=1/FAILURE)
systemd
: Starting File System Check on /dev/md127...
systemd-fsck: /dev/md127 contains a file system with errors, check forced.
systemd-fsck: /dev/md127: Inodes that were part of a corrupted orphan linked list found.
systemd-fsck: /dev/md127: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
systemd-fsck: (i.e., without -a or -p options)
systemd-fsck: fsck failed with exit status 4.
systemd-fsck: Running request emergency.target/start/replace
systemd
: systemd-fsck@dev-md127.service: Main process exited, code=exited, status=1/FAILURE
systemd
: systemd-fsck@dev-md127.service: Failed with result 'exit-code'.
systemd
: Failed to start File System Check on /dev/md127.
As I pointed out earlier, I have nofail set in /etc/fstab. Now the questions are:
1. What is the dependency referred to by the failed-dependency message in the screenshot?
2. If fsck fails on /dev/md127, why does it enter emergency mode, and how do I disable that?
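A hedged sketch of an alternative to the drop-in: keep nofail but also skip the boot-time fsck (6th fstab field set to 0) and cap how long systemd waits for the device; the option names are standard fstab/systemd ones, the timeout value is arbitrary:
# /etc/fstab
/dev/md127  /mnt/blah  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  0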
EDIT 2:
A couple of other things I can add are:
1. The VM is a KVM VM.
2. It's a software RAID.
Kind regards,
Ankur
Ankur22
(41 rep)
Jan 8, 2019, 04:39 PM
• Last activity: Jul 4, 2025, 11:09 AM
0 votes · 1 answer · 1907 views
Grub unlock luks encrypted btrfs raid0
The goal is to have grub unlock
/dev/nvme0n1p3
which contains a keyfile to unlock the 2 luks encrypted btrfs raid0 drives. If I can get it working, I'll create a tool that can accompany Linux installers to get it done easier.
I keep getting dropped into the grub rescue prompt with:
No such device: 2d6983f7-c10e-4b1a-b182-24d6f2b2a6c0
error: unknown filesystem.
So, it's not unlocking my luks. That's the UUID of /dev/mapper/cryptroot
and /dev/mapper/cryptroot2
(They share it since it's raid0). I don't know why it's showing up as the first thing GRUB tries to unlock, though. The first thing I want GRUB to unlock is 0df41a34-e267-491a-ac02-25758c26ec65
aka /dev/nvme0n1p3
(cryptkeys) in order to unlock the raid0 drives. Here's what I did...
## Setup
- 2 NVMe drives.
- Garuda Linux (Arch-based).
- GRUB 2.06 (supports LUKS2).
- blkid
output:
/dev/loop1: TYPE="squashfs"
/dev/mapper/cryptroot2: UUID="2d6983f7-c10e-4b1a-b182-24d6f2b2a6c0" UUID_SUB="b2ee9dad-c9cb-4ec4-ae38-d28af19eb183" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme0n1p3: UUID="0df41a34-e267-491a-ac02-25758c26ec65" TYPE="crypto_LUKS" PARTUUID="a49f7cdb-cbb6-44cd-b1e4-00b61dd1f00d"
/dev/nvme0n1p1: LABEL_FATBOOT="NO_LABEL" LABEL="NO_LABEL" UUID="A5AC-81DA" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="b0def085-1288-b746-9d7d-961354131dbc"
/dev/nvme0n1p2: UUID="802edb34-f481-4adf-9f98-3a80028d7cec" TYPE="crypto_LUKS" PARTLABEL="root" PARTUUID="9b945709-b51b-1c46-8ee3-6f3ba74c5a5b"
/dev/sdb2: SEC_TYPE="msdos" LABEL_FATBOOT="MISO_EFI" LABEL="MISO_EFI" UUID="EFD7-7387" BLOCK_SIZE="512" TYPE="vfat"
/dev/sdb1: BLOCK_SIZE="2048" UUID="2021-08-09-16-03-00-00" LABEL="GARUDA_GNOME_SOARING_" TYPE="iso9660"
/dev/loop2: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"
/dev/mapper/cryptroot: UUID="2d6983f7-c10e-4b1a-b182-24d6f2b2a6c0" UUID_SUB="ef6be59d-a4be-4d00-93c2-0084530bf929" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/nvme1n1: UUID="53517d3d-a638-48b9-af4f-125114e4f0c6" TYPE="crypto_LUKS"
/dev/zram0: LABEL="zram0" UUID="aa36a4d8-690e-4f2a-bfc9-e2fad1db8efb" TYPE="swap"
/dev/loop3: TYPE="squashfs"
## Procedures
1. Installed Garuda Linux to /dev/nvme0n1
which gave me the following partition layout on the first drive. I then created an ext4 partition (cryptkeys) in a luks container for storing keys and a luks container spanning the entire nvme1n1 for the btrfs raid:
NAME FSTYPE FLAGS
nvme0n1
├─nvme0n1p1 fat32 boot,esp
├─nvme0n1p2 crypto_LUKS
│ └─cryptroot btrfs
└─nvme0n1p3 crypto_LUKS
└─cryptkeys ext4
nvme1n1 crypto_LUKS
└─
└─cryptroot2 btrfs
2. Unlocked nvme0n1p2
and nvme1n1
mounting to /mnt/cryptroot
.
3. To convert to raid0 spanning 2 drives, ran:
btrfs device add /dev/mapper/cryptroot2 /mnt/cryptroot
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cryptroot
4. Created a new keyfile for luks and added it to all luks containers except the one I named "cryptkeys" which is /dev/nvme0n1p3
. All luks containers can also be unlocked via the same password. nvme0n1p3
was mounted to /mnt/cryptkeys
and the keyfile copied to it:
dd bs=512 count=4 if=/dev/random of=/mnt/cryptroot/crypto_keyfile.bin
chmod 600 /mnt/cryptkeys/crypto_keyfile.bin
cryptsetup luksAddKey /dev/nvme0n1p2 cryptkeys/crypto_keyfile.bin
cryptsetup luksAddKey /dev/nvme1n1 cryptkeys/crypto_keyfile.bin
5. With the btrfs raid0 now mounted, chrooted into the new Garuda install via:
mkdir /mnt/newroot
mount -o subvol=@,compress=zstd /dev/mapper/cryptroot newroot
for i in /dev /dev/pts /proc /sys /run; do sudo mount --bind $i /mnt/newroot$i; done
mount /dev/nvme0n1p1 newroot/boot/efi
mount --bind /sys/firmware/efi/efivars newroot/sys/firmware/efi/efivars
chroot /mnt/newroot
6. Edited /etc/default/grub
to be:
# GRUB boot loader configuration
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Garuda"
GRUB_CMDLINE_LINUX_DEFAULT="quiet cryptdevice2=/dev/disk/by-uuid/0df41a34-e267-491a-ac02-25758c26ec65:cryptkeys:allow-discards cryptdevice3=/dev/disk/by-uuid/802edb34-f481-4adf-9f98-3a80028d7cec:cryptroot:allow-discards cryptdevice=/dev/disk/by-uuid/53517d3d-a638-48b9-af4f-125114e4f0c6:cryptroot2:allow-discards root=/dev/mapper/cryptroot splash rd.udev.log_priority=3 vt.global_cursor_default=0 systemd.unified_cgroup_hierarchy=1 loglevel=3"
GRUB_CMDLINE_LINUX=""
# Preload both GPT and MBR modules so that they are not missed
GRUB_PRELOAD_MODULES="part_gpt part_msdos"
# Uncomment to enable booting from LUKS encrypted devices
#GRUB_ENABLE_CRYPTODISK=y
# Set to 'countdown' or 'hidden' to change timeout behavior,
# press ESC key to display menu.
GRUB_TIMEOUT_STYLE=menu
# Uncomment to use basic console
GRUB_TERMINAL_INPUT=console
# Uncomment to disable graphical terminal
#GRUB_TERMINAL_OUTPUT=console
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
GRUB_GFXMODE=auto
# Uncomment to allow the kernel use the same resolution used by grub
GRUB_GFXPAYLOAD_LINUX=keep
# Uncomment if you want GRUB to pass to the Linux kernel the old parameter
# format "root=/dev/xxx" instead of "root=/dev/disk/by-uuid/xxx"
#GRUB_DISABLE_LINUX_UUID=true
# Uncomment to disable generation of recovery mode menu entries
GRUB_DISABLE_RECOVERY=true
# Uncomment and set to the desired menu colors. Used by normal and wallpaper
# modes only. Entries specified as foreground/background.
#GRUB_COLOR_NORMAL="light-blue/black"
#GRUB_COLOR_HIGHLIGHT="light-cyan/blue"
# Uncomment one of them for the gfx desired, a image background or a gfxtheme
#GRUB_BACKGROUND="/path/to/wallpaper"
GRUB_THEME="/usr/share/grub/themes/garuda/theme.txt"
# Uncomment to get a beep at GRUB start
#GRUB_INIT_TUNE="480 440 1"
# Uncomment to make GRUB remember the last selection. This requires
# setting 'GRUB_DEFAULT=saved' above.
#GRUB_SAVEDEFAULT=true
# Uncomment to disable submenus in boot menu
#GRUB_DISABLE_SUBMENU=y
GRUB_DISABLE_OS_PROBER=false
GRUB_DISABLE_OS_PROBER=false
GRUB_ENABLE_CRYPTODISK=y
7. Copied hooks as:
# copy the original hook
cp /usr/lib/initcpio/install/encrypt /etc/initcpio/install/encrypt2
cp /usr/lib/initcpio/install/encrypt /etc/initcpio/install/encrypt3
cp /usr/lib/initcpio/hooks/encrypt /etc/initcpio/hooks/encrypt2
cp /usr/lib/initcpio/hooks/encrypt /etc/initcpio/hooks/encrypt3
# adapt the new hook to use different names and to NOT delete the keyfile
sed -i "s/cryptdevice/cryptdevice2/" /etc/initcpio/hooks/encrypt2
sed -i "s/cryptdevice/cryptdevice3/" /etc/initcpio/hooks/encrypt3
sed -i "s/cryptkey/cryptkey2/" /etc/initcpio/hooks/encrypt2
sed -i "s/cryptkey/cryptkey3/" /etc/initcpio/hooks/encrypt3
sed -i "s/rm -f \${ckeyfile}//" /etc/initcpio/hooks/encrypt2
sed -i "s/rm -f \${ckeyfile}//" /etc/initcpio/hooks/encrypt3
8. Added encrypt2
and encrypt3
to /etc/mkinitcpio.conf
before encrypt
hook. Also specified keyfile. mkinitcpio.conf
is now:
# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES=(intel_agp i915 amdgpu radeon nouveau)
MODULES=(intel_agp i915 amdgpu radeon nouveau)
# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image. This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=()
# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way. This is useful for config files.
FILES="/crypto_keyfile.bin"
# HOOKS
# This is the most important setting in this file. The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added. Run 'mkinitcpio -H ' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
## This setup specifies all modules in the MODULES setting above.
## No raid, lvm2, or encrypted root is needed.
# HOOKS=(base)
#
## This setup will autodetect all modules for your system and should
## work as a sane default
# HOOKS=(base udev autodetect block filesystems)
#
## This setup will generate a 'full' image which supports most systems.
## No autodetection is done.
# HOOKS=(base udev block filesystems)
#
## This setup assembles a pata mdadm array with an encrypted root FS.
## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
# HOOKS=(base udev block mdadm encrypt filesystems)
#
## This setup loads an lvm2 volume group on a usb device.
# HOOKS=(base udev block lvm2 filesystems)
#
## NOTE: If you have /usr on a separate partition, you MUST include the
# usr, fsck and shutdown hooks.
HOOKS="base udev encrypt autodetect modconf block keyboard keymap consolefont plymouth encrypt2 encrypt3 encrypt filesystems"
# COMPRESSION
# Use this to compress the initramfs image. By default, zstd compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="zstd"
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
#COMPRESSION="lz4"
# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=()
9. Ran:
mkinitcpio -p linux-zen
# initramfs includes the key, so only root should be able to read it
chmod 600 /boot/initramfs-linux-fallback.img
chmod 600 /boot/initramfs-linux.img
10. Changed /etc/crypttab
to:
# /etc/crypttab: mappings for encrypted partitions.
#
# Each mapped device will be created in /dev/mapper, so your /etc/fstab
# should use the /dev/mapper/ paths for encrypted devices.
#
# See crypttab(5) for the supported syntax.
#
# NOTE: Do not list your root (/) partition here, it must be set up
# beforehand by the initramfs (/etc/mkinitcpio.conf). The same applies
# to encrypted swap, which should be set up with mkinitcpio-openswap
# for resume support.
#
#
cryptkeys UUID=0df41a34-e267-491a-ac02-25758c26ec65 /crypto_keyfile.bin luks,discard,nofail
11. Changed /etc/fstab
to:
#
UUID=A5AC-81DA /boot/efi vfat umask=0077 0 2
/dev/mapper/cryptroot / btrfs subvol=/@,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /home btrfs subvol=/@home,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /root btrfs subvol=/@root,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /srv btrfs subvol=/@srv,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /var/cache btrfs subvol=/@cache,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /var/log btrfs subvol=/@log,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
/dev/mapper/cryptroot /var/tmp btrfs subvol=/@tmp,defaults,noatime,space_cache,autodefrag,compress=zstd 0 0
12. Finally, ran:
grub-mkconfig -o /boot/grub/grub.cfg
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=Garuda --recheck
exit
reboot
**An aside:** A few times that I ran grub-install
, the value of --bootloader-id
was arch-grub
before I changed it to Garuda
. I don't think it matters much, except that now I have extra boot menu entries, as I don't know how to get rid of them. It probably doesn't matter though. I get the error even when selecting the Garuda entry from the EFI boot menu.
**Note:** These procedures were adapted from this blog post. What's different is that there is no LUKS-encrypted boot partition, and the cryptkeys partition was added instead.
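One way to narrow this down, sketched under the assumption that the cryptodisk/luks2 modules can be loaded from the GRUB prompt: try unlocking the cryptkeys partition by hand and see whether the btrfs volume becomes visible (the UUID is written without dashes):
insmod cryptodisk
insmod luks2
cryptomount -u 0df41a34e267491aac0225758c26ec65
insmod btrfs
ls    # the unlocked volume should show up as (crypto0) if the key/passphrase works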
xendi
(613 rep)
Aug 31, 2021, 12:56 AM
• Last activity: Jun 25, 2025, 05:07 AM
-5 votes · 1 answer · 169 views
How to protect against both bit rot and device failure with Btrfs
How can you protect simultaneously against bit rot and device failure with Btrfs, given that Btrfs only checks the integrity of file data when it reads it?
The only solution I can think of is using two drives, each of which has two instances of the same data, a total of four instances of the same data. If one drive fails, the remaining one still has duplicated data.
But this profile seems not to exist.
Another impractical solution is to use RAID1 and execute a scrub after every write operation.
Yet another option is to have two drives, each partitioned in half, and use a RAID profile to have the four copies. But this is probably not good, for the same reasons why
dup
is preferable to RAID1 on two partitions of the same drive. I don't know those reasons, but there must be some, otherwise, people wouldn't have come up with dup
as an alternative to using RAID1 on two partitions of the same drive.
Please make the effort to understand the question before posting an answer which doesn't help. Because data integrity is only verified when files are read, it seems that RAID 1 alone provides no mitigation against data corruption that occurred since the last time the file was read or a scrub was run.
= As of 16 June I still have no satisfactory answer =
user324831
(113 rep)
Jun 12, 2025, 02:58 PM
• Last activity: Jun 16, 2025, 03:37 PM
14 votes · 0 answers · 9625 views
State of LVM raid compared to mdadm
LVM and
mdadm
/dmraid
both offer software RAID functionality on Linux. This is essentially a follow-up to a question from 2014 . Back then, @derobert recommended using mdadm
over LVM RAID due to its maturity — but that was more than four years ago. I imagine things may have changed since then.
However, I’ve never used LVM RAID before, and I couldn't find many recent experiences or discussions about it.
So, what’s the current state of LVM RAID? Has it become more mature? Have the flaws mentioned in @derobert’s post been resolved, or do they still exist? Specifically, how does it compare to mdadm
in terms of:
- Stability
- Features (grow, shrink, convert)
- Repair and recovery
- Community support
- Performance
I’d like to know if people actually use LVM RAID now, or if most still stick with mdadm
. Is it more advisable to use LVM on top of mdadm
for logical volume management, or is it now acceptable to let LVM manage the RAID as well? Would it even make sense to use LVM RAID instead of mdadm
, even if you don’t plan to take advantage of logical volume management?
I considered commenting on the original answer and asking @derobert for an update, but I decided to post a new question to reach more community members and gather fresh perspectives — not just update the original post to the present tense.
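For concreteness, a minimal sketch of what LVM RAID usage looks like (volume group and LV names are placeholders):
# create a two-leg RAID1 LV inside an existing VG and watch it sync
lvcreate --type raid1 -m 1 -L 10G -n lv_r1 vg0
lvs -a -o name,segtype,sync_percent,devices vg0
# after replacing a failed PV, rebuild the missing image
lvconvert --repair vg0/lv_r1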
LukeLR
(342 rep)
Apr 29, 2019, 10:46 AM
• Last activity: Jun 10, 2025, 07:34 PM
2 votes · 1 answer · 2208 views
md: kicking non-fresh sdg from array! md/raid:md0: and then not enough operational devices (3/7 failed)
Today I ran into a disaster...
I have a RAID 6 with 7 HDDs, and yesterday one disk failed.
After replacing the disk and doing a rebuild overnight, I found out that a 2nd HDD was out of the RAID...
So today I started to back up my files to external drives, but then the copying stopped; when I checked why, I saw in Webmin's RAID page that sdg was "down".
I shut down the server, checked the hardware, and found that the backplane the HDDs are connected to had come loose...
After repairing it, all drives are now back, but my RAID 6 doesn't start anymore :-/
dmesg shows me:
md: kicking non-fresh sdg from array!
md: kicking non-fresh sdf from array!
md: kicking non-fresh sde from array!
md/raid:md0: not enough operational devices (3/7 failed)
...
and after many
md0: ADD_NEW_DISK not supported
I can read this:
EXT4-fs (md0): unable to read superblock
With
sudo mdadm --examine
I checked sdg, sdf and sde; sde and sdf show "State : clean", whereas sdg, which was "down" before the repair, shows "active".
So 6 of the 7 devices show "clean"; only sdg shows "active".
Here is the list of the output of all devices:
Disk sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=7344 sectors
State : clean
Device UUID : 9180f101:1dacdd9e:4adae9c4:fbeb2552
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 26 18:13:45 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 38019182 - correct
Events : 256508
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=7344 sectors
State : clean
Device UUID : 889c6877:5ee5c647:eebd209c:d9c6abcb
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 26 18:13:45 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : a71ea53d - correct
Events : 256508
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdd
/dev/sdd:
MBR Magic : aa55
Partition : 3907026944 sectors at 2048 (type fd)
Disk sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=7344 sectors
State : clean
Device UUID : 34198042:3d4c802b:36727b02:fdf65808
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 26 18:05:00 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : f8fb6b18 - correct
Events : 256494
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAAA.. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=7344 sectors
State : clean
Device UUID : b2e8d640:1f21336f:88d823fe:66ef7be7
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Mar 23 14:46:56 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 15cd05bb - correct
Events : 238681
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAAA. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdg
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=7344 sectors
State : active
Device UUID : 2bc06e22:49aa73e2:3cf7eb79:55df1180
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 26 17:57:06 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 7f0ddb2a - correct
Events : 256372
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 5
Array State : AAAAAA. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdh
/dev/sdh:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=7344 sectors
State : clean
Device UUID : 7af89a18:52ef08ae:dec5ad7b:75626355
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 26 18:13:45 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 17d7b107 - correct
Events : 256508
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
I've tried to start the RAID with
mdadm --run /dev/md0
and get:
mdadm: failed to start array /dev/md0: Input/output error
But after I tried to start it with this, Webmin then shows me:
/dev/md0 active, FAILED, Not Started RAID6 (Dual Distributed Parity) 7.27 TiB
That's 7.27 TiB out of roughly 9 TiB.
Any ideas how to get my RAID back to work again without data loss?
I've read that I could add devices back to the RAID, but I'm unsure and wanted to ask first.
Any help would be appreciated!
**UPDATE**:
I forgot that one of the devices is /dev/sdd1 and not /dev/sdd!
Here is the --examine output for it:
~~~
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Name : N5550:0 (local to host N5550)
Creation Time : Fri Oct 29 14:43:58 2021
Raid Level : raid6
Raid Devices : 7
Avail Dev Size : 3906767872 (1862.89 GiB 2000.27 GB)
Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Data Offset : 259072 sectors
Super Offset : 8 sectors
Unused Space : before=258992 sectors, after=5120 sectors
State : clean
Device UUID : d8df004e:44ee4060:ba4d2c22:e7e6bdcb
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 26 18:13:45 2022
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 1c4e98a4 - correct
Events : 256508
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
~~~
And here an mdadm -D /dev/md0
:
~~~
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 7
Persistence : Superblock is persistent
State : inactive
Working Devices : 7
Name : N5550:0 (local to host N5550)
UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
Events : 256494
Number Major Minor RaidDevice
- 8 64 - /dev/sde
- 8 32 - /dev/sdc
- 8 112 - /dev/sdh
- 8 80 - /dev/sdf
- 8 16 - /dev/sdb
- 8 49 - /dev/sdd1
- 8 96 - /dev/sdg
~~~
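For what it's worth, the usual approach in this situation is a forced assemble of the members with the highest Event counts, tried read-only first; a hedged sketch using the device names above (sdf, with the oldest events, is left out):
mdadm --stop /dev/md0
mdadm --assemble --force --readonly /dev/md0 /dev/sdb /dev/sdc /dev/sdd1 /dev/sde /dev/sdg /dev/sdh
# if it starts and the data looks sane, re-add the stale member so it resyncs:
# mdadm --manage /dev/md0 --re-add /dev/sdf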
LeChatNoir
(21 rep)
Mar 26, 2022, 07:29 PM
• Last activity: Jun 2, 2025, 04:08 PM
5 votes · 4 answers · 1217 views
How can I limit the number of data stripes in a btrfs RAID-0 profile in order to better utilize the total available space?
### Problem background
One great thing about btrfs is its ability to effectively use drives with different sizes, but I just learnt that this is not true for its default RAID-0 (striped) profile.
I wanted to use RAID-0 with three drives: 8, 4, and 4 TB, and was hoping to get a total of 16 TB: The first half of the 8 TB can be striped with the first 4 TB drive, and the second half can be striped with the second 4 TB drive.
However, according to the (very useful) [btrfs disk usage calculator](https://carfax.org.uk/btrfs-usage/) I would only get 12 TB: Each chunk would be striped on all three drives, and that leaves 4 TB unused on my 8 TB drive. This is also mentioned in an answer to the question https://unix.stackexchange.com/q/185686/145426 .
### Illustration
This is what I expected to happen:
…and this is what is actually happening:
### What does the manual say?
After perusing [mkfs.btrfs](https://btrfs.wiki.kernel.org/index.php/Manpage/mkfs.btrfs#PROFILES) I figure that this is because the default RAID-0 profile sets a _minimum_ number of devices to 2, but does not have an upper limit. This means that a data block will be striped on as many devices it can find in the pool. This can of course be a reasonable option and it makes complete sense.
While playing with the btrfs disk usage calculator I found that I can get what I want by setting the _maximum_ number of devices to 2. This would still stripe my data over two drives to get some extra speed, but limit the striping to two devices so that it can use a lot more of the available space. To me this is a very beneficial trade-off, and I assume I am not alone in thinking so.
### The question
I did not find a way to _change_ the maximum number of devices when creating a filesystem.
- Is this even possible?
- If so, how can I change it?
- If I _do_ change it, will the other tools understand the layout?
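For reference, a sketch of the default multi-device creation being discussed (device names are placeholders); the usage command then shows how much of the raw space is actually allocatable with the chosen profiles:
mkfs.btrfs -d raid0 -m raid1 /dev/sdx /dev/sdy /dev/sdz
mount /dev/sdx /mnt
btrfs filesystem usage /mnt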


pipe
(893 rep)
Oct 14, 2020, 01:22 PM
• Last activity: May 31, 2025, 09:48 AM
0 votes · 0 answers · 17 views
mdadm: Cannot get array info for /dev/md127
i have one problem. I created RAID1 with two discs, then i simulated failure of one of the discs. It was sdb1. I restared the VM with one disc working in RAID array. Then i added new fresh disc to add to this array but i can't do that because i got error (mdadm: Cannot get array info for /dev/md127)...
I have one problem.
I created a RAID 1 with two disks, then I simulated a failure of one of the disks. It was sdb1.
I restarted the VM with one disk working in the RAID array.
Then I tried to add a new, fresh disk to the array, but I can't, because I get the error (mdadm: Cannot get array info for /dev/md127).
The new disk is already formatted.
How do I add that new disk? Everything that I tried doesn't work.
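A hedged sketch of the usual sequence when a degraded RAID 1 refuses --add because the array is only assembled, not running (device names follow the question; adjust as needed):
cat /proc/mdstat                       # is md127 listed as active or inactive?
mdadm --detail /dev/md127
mdadm --run /dev/md127                 # start it degraded if it was merely assembled
mdadm --manage /dev/md127 --add /dev/sdb1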
linuxenjoyer32123
(1 rep)
May 30, 2025, 04:23 PM
• Last activity: May 30, 2025, 04:32 PM
5 votes · 1 answer · 165 views
Replace Disk Raid 1 And Reconfigure to use full size - mdadm almalinux
I am facing an issue with my RAID 1 (mdadm softraid) on an AlmaLinux/CloudLinux OS server, which is a production server with live data. Here's the chronology of events:
1. Initially, I created a RAID 1 array with two 1TB NVMe disks (2 x 1TB).
2. At some point, the second NVMe disk failed. I replaced it with a new
2TB NVMe disk. I then added this new 2TB NVMe disk to the RAID
array, but it was partitioned/configured to match the 1TB capacity
of the remaining active disk.
3. Currently, the first 1TB disk has failed and was automatically
kicked out by the RAID system when I rebooted the server. So, only
the 2TB NVMe disk (which is currently acting as a 1TB member of the
degraded RAID) remains.
**Replacement and Setup Plan**
I have already replaced the failed 1TB disk with a new 2TB NVMe disk. I want to utilize the full 2TB capacity since both disks are now 2 x 2TB.
[root@id1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md124 : active raid5 sdd2 sdc2 sda2
62945280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md125 : active raid5 sdd1 sdc1 sda1
1888176128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 7/8 pages [28KB], 65536KB chunk
md126 : active raid5 sda3 sdc3 sdd3
2097152 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 nvme1n1p1
976628736 blocks super 1.2 [2/1] [_U]
bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices:
---
mdadm --detail /dev/md127
[root@id1 ~]# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Tue Aug 29 05:57:10 2023
Raid Level : raid1
Array Size : 976628736 (931.39 GiB 1000.07 GB)
Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu May 29 01:33:09 2025
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : idweb.webserver.com:root (local to host idweb.webserver.com)
UUID : 3fb9f52f:45f39d12:e7bb3392:8eb1481f
Events : 33132451
Number Major Minor RaidDevice State
- 0 0 0 removed
2 259 2 1 active sync /dev/nvme1n1p1
---
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
[root@id1 ~]# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 931.5G
├─sda1 linux_raid_member 900.5G web1srv.serverhostweb.com:home2
│ └─md125 ext4 1.8T /home2
├─sda2 linux_raid_member 30G web1srv.serverhostweb.com:tmp
│ └─md124 ext4 60G /var/tmp
└─sda3 linux_raid_member 1G web1srv.serverhostweb.com:boot
└─md126 xfs 2G /boot
sdb ext4 5.5T
sdc 931.5G
├─sdc1 linux_raid_member 900.5G web1srv.serverhostweb.com:home2
│ └─md125 ext4 1.8T /home2
├─sdc2 linux_raid_member 30G web1srv.serverhostweb.com:tmp
│ └─md124 ext4 60G /var/tmp
└─sdc3 linux_raid_member 1G web1srv.serverhostweb.com:boot
└─md126 xfs 2G /boot
sdd 931.5G
├─sdd1 linux_raid_member 900.5G web1srv.serverhostweb.com:home2
│ └─md125 ext4 1.8T /home2
├─sdd2 linux_raid_member 30G web1srv.serverhostweb.com:tmp
│ └─md124 ext4 60G /var/tmp
└─sdd3 linux_raid_member 1G web1srv.serverhostweb.com:boot
└─md126 xfs 2G /boot
nvme0n1 1.8T
nvme1n1 1.8T
└─nvme1n1p1 linux_raid_member 931.5G web1srv.serverhostweb.com:root
└─md127 ext4 931.4G /
What are the steps to repair my soft RAID 1, maximize the storage to 2TB, and ensure the data remains safe?
I have some example steps but am not really sure; are the steps below right?
# Create a partition on the new disk with a full size of 2TB
fdisk /dev/nvme0n1
mdadm --manage /dev/md127 --add /dev/nvme0n1p1
# Wait for First Sync
# Fail and remove the old disk
mdadm --manage /dev/md127 --fail /dev/nvme1n1p1
mdadm --manage /dev/md127 --remove /dev/nvme1n1p1
# Repartition the old disk for full 2TB
gdisk /dev/nvme1n1
# Add back to RAID
mdadm --manage /dev/md127 --add /dev/nvme1n1p1
# Wait for Second Sync
# Expand RAID array to maximum
mdadm --grow /dev/md127 --size=max
# Verify new size
mdadm --detail /dev/md127
# Resize ext4 filesystem
resize2fs /dev/md127
# Update mdadm.conf
mdadm --detail --scan > /etc/mdadm.conf
# Update initramfs
dracut -f
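As a small addition to the plan above, a hedged sketch of how each "wait for sync" step can be done explicitly:
watch -n 5 cat /proc/mdstat      # live view of the rebuild
mdadm --wait /dev/md127          # block until the current resync/recovery finishes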
Server Spec:
- Os Almalinux/Cloudlinux 8
Hendra Setyawan
(51 rep)
May 28, 2025, 07:01 PM
• Last activity: May 30, 2025, 01:02 AM
1 vote · 1 answer · 59 views
How to merge two directories with failover?
Let's say I have two devices:
-
/dev/sda1
mounted to /
(system partition)
- /dev/sdb1
mounted to /media/data
(data partition, usb device may be unplugged)
I want to merge/overlay/raid two directories like so:
- /media/data
is the primary directory
- /usr/data
is the backup/failover directory that exists on the system partition
The resulting directory (e.g /mnt/merged
) will consist of the above two directories so that:
- when writing a file to /mnt/merged
the file will be written to /media/data
- if the /dev/sdb1
is not available while writing (the usb storage is removed) then write to the backup /usr/data
and when the primary partition is plugged again move the data to the primary partition
- (optional) setup the second partition as a cache partition in case it is faster than the primary partition, so that reads and writes happen to the backup (faster) directory before moving to the primary directory
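A hedged sketch covering part of this with mergerfs (my suggestion, not something from the question): a union mount whose first-found create policy prefers /media/data and falls back to /usr/data when the USB branch is missing. The automatic move-back when the USB disk returns would still need a separate sync job.
mergerfs -o defaults,category.create=ff /media/data:/usr/data /mnt/merged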
MOHAMMAD RASIM
(530 rep)
May 25, 2025, 12:08 PM
• Last activity: May 26, 2025, 03:53 PM
8 votes · 3 answers · 34864 views
Run smartctl on all disks of a server
My question is quite simple: I want to run the command smartctl -i -A on all disks that the server has.
Consider that I have many servers with different numbers of disks and RAID controllers, so I need to scan all drives for a diagnosis.
I'm thinking of running smartctl --scan | awk '{print $1}' >> test.log, so if I open test.log I'll have all the drive information in it.
After this I need some if or loop constructs to run smartctl against all drives.
I don't know if this is the best way to do this, since I need to identify the RAID controller too.
Am I heading in the right direction?
##Edit:
I'm used to use these commands to troubleshoot:
###Without RAID Controller
for i in {c..d}; do
echo "Disk sd$i" $SN $MD
smartctl -i -A /dev/sd$i |grep -E "^ "5"|^"197"|^"198"|"FAILING_NOW"|"SERIAL""
done
###PERC Controller
for i in {0..12}; do
echo "$i" $SN $MD
smartctl -i -A -T permissive /dev/sda -d megaraid,$i |grep -E "^ "5"|^"197"|^"198"|"FAILING_NOW"|"SERIAL""
done
/usr/sbin/megastatus --physical
/usr/sbin/megastatus --logical
###3ware Controller
for i in {0..10}; do
echo "Disk $i" $SN $MD
smartctl -i -A /dev/twa0 -d 3ware,$i |grep -E "^ "5"|^"197"|^"198"|"FAILING_NOW"|"SERIAL""
done
###SmartArray & Megaraid Controler:
smartctl -a -d cciss,0 /dev/cciss/c0d0
/opt/3ware/9500/tw_cli show
cd /tmp
###DD (Rewrite disk block (DESTROY DATA)):
dd if=/dev/zero of=/dev/HD* bs=4M
HD*: sda, sdb…
###Burning (Stress test (DESTROY DATA)):
/opt/systems/bin/vs-burnin --destructive --time= /tmp/burninlog.txt
###Dmesg&kernerrors:
tail /var/log/kernerrors
dmesg | grep -i -E "ata|fault|error"
So what I'm trying to do is automate these commands.
I want that the script verify all disks that the host have and run the appropriate smartctl
command for the case.
Something like a menu with some options that let me choose if I want to run a smartctl
or some destructive command, if I choose to run smartctl
the script will scan all disks and runs the command according to the host configuration ( with / without RAID controller),
and if I choose to run a destructive command, the script will ask me to put the disk number that I want to do this.
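A rough sketch of the menu part described above (everything here is hypothetical scaffolding, with echo in place of the real actions):
#!/bin/bash
select choice in "smartctl scan" "destructive wipe" "quit"; do
    case $choice in
        "smartctl scan")    echo "would detect controllers and run smartctl on every disk" ;;
        "destructive wipe") read -p "disk to wipe (e.g. sdb): " d
                            echo "would run: dd if=/dev/zero of=/dev/$d bs=4M" ;;
        "quit")             break ;;
    esac
done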
----------
##Edit 2:
I resolved my problem with the following script:
#!/bin/bash
# Troubleshoot.sh
# A more elaborate version of Troubleshoot.sh.
SUCCESS=0
E_DB=99 # Error code for missing entry.
declare -A address
# -A option declares associative array.
if [ -f Troubleshoot.log ]
then
rm Troubleshoot.log
fi
if [ -f HDs.log ]
then
rm HDs.log
fi
smartctl --scan | awk '{print $1}' >> HDs.log
lspci | grep -i raid >> HDs.log
getArray ()
{
    i=0
    while read line                 # Read a line
    do
        array[i]=$line              # Put it into the array
        i=$(($i + 1))
    done < "$1"
}

getArray "HDs.log"

for e in "${array[@]}"
do
    if [ -b "$e" ]                  # hypothetical filter: only act on block-device entries from smartctl --scan
    then
        echo "Disk $e" >> Troubleshoot.log
        smartctl -i -A "$e" >> Troubleshoot.log   # Run smartctl on all disks that the host has
    fi
done

exit $?   # In this case, exit code = 99, since that is function return.
I don't know if this solution is the right or the best one, but it works for me! Appreciate all help!!
ZeroNegative
(103 rep)
Mar 27, 2014, 12:11 PM
• Last activity: May 22, 2025, 05:45 PM
1 vote · 0 answers · 139 views
ZFS pool corrupted, shows /dev/sdb twice as disk used
Woke up to a nasty surprise a couple of days ago to see that my zpool couldn't be imported. After discovering that sdb had faulted, I assumed it was time to replace the drive, so I hot-swapped it. After trying to figure out what to do next for a while, zfs commands became non-responsive; I assumed that maybe it was trying to repair itself and left it running for over a day. Since zfs commands were still unresponsive, I then force-restarted the server. Now
zpool import
shows sdb twice...
pool FAULTED corrupted data
raidz1-0 DEGRADED
sda ONLINE
sdb FAULTED corrupted data
sdb ONLINE
sde ONLINE
Can anyone tell me what is going on, and what the best way is to repair the pool? I have not yet checked the drive that faulted.
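One hedged debugging step, assuming nothing about the pool beyond what is shown: importing via stable by-id paths makes the two entries that both render as "sdb" distinguishable:
zpool import -d /dev/disk/by-id          # list importable pools using persistent device names
zpool import -d /dev/disk/by-id <pool>   # <pool> is a placeholder for the pool name shown above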
user3471547
(11 rep)
Jan 24, 2023, 12:47 AM
• Last activity: May 15, 2025, 03:04 PM
2 votes · 1 answer · 3718 views
lsblk shows non-existent md partitions after reboot
I'm getting weird behaviour while setting up an
mdadm
RAID1 array on debian 8.2.
After I set-up the array, lsblk
shows:
simon@debian-server:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
`-sda1 8:1 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
sdb 8:16 0 931.5G 0 disk
`-sdb1 8:17 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 512M 0 part /boot/efi
|-sdc2 8:34 0 244M 0 part /boot
`-sdc3 8:35 0 232.2G 0 part
|-debian--server--vg-root 254:0 0 228.3G 0 lvm /
`-debian--server--vg-swap_1 254:1 0 3.9G 0 lvm [SWAP]
After a reboot, lsblk
shows:
simon@debian-server:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
`-sda1 8:1 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
|-md0p1 259:0 0 811.6G 0 md
`-md0p2 259:1 0 346.1G 0 md
sdb 8:16 0 931.5G 0 disk
`-sdb1 8:17 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
|-md0p1 259:0 0 811.6G 0 md
`-md0p2 259:1 0 346.1G 0 md
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 512M 0 part /boot/efi
|-sdc2 8:34 0 244M 0 part /boot
`-sdc3 8:35 0 232.2G 0 part
|-debian--server--vg-root 254:0 0 228.3G 0 lvm /
`-debian--server--vg-swap_1 254:1 0 3.9G 0 lvm [SWAP]
I don't know where the md0p1 and md0p2 partitions are coming from. My /etc/fstab
and /etc/mdadm/mdadm.conf
both have nothing about this in them.
parted
shows one partition on md0
:
simon@debian-server:~$ sudo parted /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 1000GB 1000GB ntfs
Any ideas where the md0p1 and md0p2 partitions are coming from?
I'm setting up the array by doing as follows:
- Delete existing device (I've done this a few times):
sudo mdadm --stop /dev/md0
sudo mdadm --remove /dev/md0
- Zero drives:
sudo dd if=/dev/zero of=/dev/sda bs=1M count=1024
sudo dd if=/dev/zero of=/dev/sdb bs=1M count=1024
- Create partition tables:
sudo parted /dev/sda mklabel gpt
sudo parted /dev/sdb mklabel gpt
- Create full-disk partitions:
sudo parted -a optimal /dev/sda mkpart primary '0%' '100%'
sudo parted -a optimal /dev/sdb mkpart primary '0%' '100%'
- Set raid flag on partitions:
sudo parted /dev/sda set 1 raid on
sudo parted /dev/sdb set 1 raid on
- Create RAID array:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1
- Add filesystem (I'm using NTFS, but the problem also happens with ext4)
sudo mkfs.ntfs -f /dev/md0
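A hedged sketch of an extra step that is often added here: wiping stale filesystem/partition-table signatures from the member partitions and from the new md device before creating the filesystem, so leftover metadata can't be re-detected after a reboot:
sudo mdadm --stop /dev/md0                  # only if an old array is still assembled
sudo wipefs -a /dev/sda1 /dev/sdb1
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1
sudo wipefs -a /dev/md0                     # clear anything left on the new array device
sudo mkfs.ntfs -f /dev/md0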
tangoecho
(21 rep)
Jan 13, 2016, 08:23 AM
• Last activity: May 12, 2025, 09:07 PM
0 votes · 2 answers · 147 views
solaris svm and raid5 : a way for expand on fly?
I know two methods for expanding a RAID 5 on Solaris SVM using UFS.
One is this; another is to replace disk by disk. Suppose I want to remove an old, small disk and replace it with a bigger disk using this procedure:
devfsadm
cfgadm -c configure sata2/0
format -d c0t5d0
metadb -a -f c0t5d0s2
metareplace -e myraid c0t4d0s2 c0t5d0s2
metadb -d c0t4d0s2
cfgadm -c unconfigure c0t4d0s2
I have replaced all disks with the method above, and my RAID 5 is online and OK according to metastat.
But after running
metadevadm -vr
and
growfs -M /raid /dev/md/rdsk/d44
the size is still the same as the RAID with the old disks, which is wrong because I replaced the disks with bigger ones.
On Linux it is really easy to replace a RAID 5 disk on the fly and grow the RAID 5 (mdadm fail, add, grow, then pvresize...). Am I missing something on Solaris SVM?
The first method (concatenate + growfs) is also good, but I want to replace the old (small) disks with new (big) ones.
Please don't answer ZFS; for "study" reasons I'm on UFS + SVM.
elbarna
(13690 rep)
Jul 20, 2016, 01:57 AM
• Last activity: May 10, 2025, 06:31 AM
0 votes · 0 answers · 48 views
Stuck while Rebuilding Intel VROC Raid 0+1
About a month ago, one of the 4 drives in my array went clicky. I was able to identify the 2nd drive (sdb of sd[a-d]) was the bad drive and thought I failed/removed it.
After getting a replacement drive and getting as much data as possible, I don't think I'll get anywhere without editing imsm metadata.
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sda(S) sdd(S) sdc(S)
15603 blocks super external:imsm
unused devices:
I took a closer look at "mdadm -E /dev/md0"
/dev/md0:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.2.01
Orig Family : 6453e91b
Family : 6453e91b
Generation : 0003c070
Creation Time : Unknown
Attributes : All supported
UUID : bb0620fd:8274f3f7:498f64a4:dacbd25f
Checksum : 8e303583 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 1
Disk00 Serial : WD-WCC3F3ARCLKS
State : active
Id : 00000000
Usable Size : 1953514766 (931.51 GiB 1000.20 GB)
UUID : 5a9f14bb:a252fd06:f08cf7cf:b920b29e
RAID Level : 10
Members : 4
Slots : [__UU]
Failed disk : 0
This Slot : 0 (out-of-sync)
Sector Size : 512
Array Size : 3906994176 (1863.00 GiB 2000.38 GB)
Per Dev Size : 1953499136 (931.50 GiB 1000.19 GB)
Sector Offset : 0
Num Stripes : 7630848
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
RWH Policy : off
Volume ID : 1
Disk01 Serial : 57E07H1MS:0
State : active
Id : ffffffff
Usable Size : 1953514766 (931.51 GiB 1000.20 GB)
Disk02 Serial : WD-WCC6Y0RL73N1
State : active
Id : 00000002
Usable Size : 1953514766 (931.51 GiB 1000.20 GB)
Disk03 Serial : WD-WCC6Y0LPDK7T
State : active
Id : 00000003
Usable Size : 1953514766 (931.51 GiB 1000.20 GB)
Interesting! So it looks like the metadata is coming from /dev/sda, Disk00. This is the first time I've noticed 4 drives, and "Disk01 Serial : 57E07H1MS:0" doesn't look like a WD serial, unlike the other WD drives. I wouldn't be surprised if I missed something in the --remove process.
I've had access to the "Intel virtual raid on cpu intel vroc for linux user guide" pdf for lots of insight. The "update-subarray" stuff looks like what I'll need to research next to figure out [Volume0]'s Failed disk and Disk01's metadata. I'd expect something like
# mdadm --update-subarray=0 --update-id=ffffffff --state=failed /dev/md0
but that was from piecing together some examples I found in the "Linux-Intel-VROC-TPS-335893.pdf" I found.
Any help getting the array --run-ing again would be appreciated.
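Before editing IMSM metadata by hand, a hedged first step is to dump what each remaining member thinks and let mdadm attempt a verbose scan assemble of the container and its volume:
for d in /dev/sd[acd]; do echo "== $d =="; mdadm -E "$d"; done
mdadm --stop /dev/md0
mdadm --assemble --scan --verbose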
Jeff Archambeault
(1 rep)
May 7, 2025, 02:26 AM
10 votes · 2 answers · 1341 views
md raid5: translate md internal sector numbers to offsets
**TL;DR summary**: Translate an md sector number into offsets(s) within the
/dev/mdX
device, and how to investigate it with xfs_db
. The sector number is from sh->sector
in linux/drivers/md/raid5.c:handle_parity_checks5()
.
I don't know MD internals, so I don't know exactly what to do with the output from the printk
logging I added.
Offsets into the component devices (for dd
or a hex editor/viewer) would also be interesting.
I suppose I should ask this on the Linux-raid mailing list. Is it subscribers-only, or can I post without subscribing?
----
I have xfs directly on top of MD RAID5 of 4 disks in my desktop (no LVM). A recent scrub detected a non-zero mismatch_cnt
(8 in fact, because md operates on 4kiB pages at a time).
This is a RAID5, not RAID1/RAID10 where mismatch_cnt
!= 0 can happen during normal operation. (The other links at the bottom of this wiki page might be useful to some people.)
I could just blindly repair
, but then I'd have no idea which file to check for possible corruption, besides losing any chance to choose which way to reconstruct. Frostschutz's answer on a similar question is the only suggestion I found for tracking back to a difference in the filesystem. It's cumbersome and slow, and I'd rather use something better to narrow it down to a few files first.
---
### Kernel patch to add logging
Bizarrely, md's check feature doesn't report where an error was found. **I added a printk in md/raid5.c to log sh->sector in the if branch that increments mddev->resync_mismatches in handle_parity_checks5()** (tiny patch published on github, originally based on 4.5-rc4 from kernel.org). For this to be OK for general use, it would probably need to avoid flooding the logs in repairs with a lot of mismatches (maybe only log if the new value of resync_mismatches is below some small threshold).
### My question
Is sector * 512 a linear address inside /dev/md/t-r5 (aka /dev/md125)? Is it a sector number within each component device (so it refers to three data and one parity sector)? I'm guessing the latter, since a parity-mismatch in RAID5 means N-1 sectors of the md device are in peril, offset from each other by the stripe unit. Is sector 0 the very start of the component device, or is it the sector after the superblock or something? Was there more information in handle_parity_checks5() that I should have calculated / logged?
If I wanted to get just the mismatching blocks, is this correct?
dd if=/dev/sda6 of=mmblock.0 bs=512 count=8 skip=4294708224
dd if=/dev/sdb6 of=mmblock.1 bs=512 count=8 skip=4294708224
dd if=/dev/sdc6 of=mmblock.2 bs=512 count=8 skip=4294708224
dd if=/dev/sdd of=mmblock.3 bs=512 count=8 skip=4294708224 ## not a typo: my 4th component is a smaller full-disk
# i.e.
sec_block() { for dev in {a,b,c}6 d; do dd if=/dev/sd"$dev" of="sec$1.$dev" skip="$1" bs=512 count=8;done; }; sec_block 123456
I'm guessing not, because I get 4k of zeros from all four raid components, and 0^0 == 0
, so that should be the correct parity, right?
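Whatever offsets turn out to be right, one quick test of a candidate stripe is to XOR the dumped blocks together, since RAID5 parity means the data blocks plus the parity block should XOR to zero. A rough sketch (file names follow the dd commands above; needs bash plus GNU awk for xor()/strtonum()):
paste <(xxd -p -c1 mmblock.0) <(xxd -p -c1 mmblock.1) \
      <(xxd -p -c1 mmblock.2) <(xxd -p -c1 mmblock.3) |
gawk '{ x = xor(xor(strtonum("0x"$1), strtonum("0x"$2)),
                xor(strtonum("0x"$3), strtonum("0x"$4)))
        if (x) { printf "mismatch at byte %d: %02x\n", NR-1, x; bad=1 } }
      END { if (!bad) print "stripe XORs to zero -- parity consistent" }'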
One other place I've seen mention of using sector addresses in md is for sync_min
and sync_max
(in sysfs). There's also Neil Brown's reply on the linux-raid list to a question about a failed drive with sector numbers from hdrecover, where Neil used the full-disk sector number as an MD sector number. That's not right, is it? Wouldn't md sector numbers be relative to the component devices (partitions in that case), not the full device that the partition is a part of?
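For reference, those two sysfs knobs do make it possible to re-run a scrub over just a narrow window once the right range is known; a minimal sketch against the md125 array below (sector values are placeholders):
echo idle       > /sys/block/md125/md/sync_action   ## make sure nothing is running
echo 4294700000 > /sys/block/md125/md/sync_min      ## placeholder start sector
echo 4294720000 > /sys/block/md125/md/sync_max      ## placeholder end sector
echo check      > /sys/block/md125/md/sync_action
## once sync_completed shows the window finished, see whether it still mismatches
cat /sys/block/md125/md/mismatch_cnt
echo max        > /sys/block/md125/md/sync_max      ## restore the default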
----
### linear sector to XFS filename:
Before realizing that the md sector number was probably for the components, not the RAID device, I tried using it in read-only xfs_db
:
Dave Chinner's very brief suggestion on how to find how XFS is using a given block didn't seem to work at all for me. (I would have expected some kind of result, for some sector, since the number shouldn't be beyond the end of the device even if it's not the mismatched sector)
# xfs_db -r /dev/md/t-r5
xfs_db> convert daddr 4294708224 fsblock
0x29ad5e00 (699227648)
xfs_db> blockget -nv -b 699227648
xfs_db> blockuse -n # with or without -c 8
must run blockget first
Huh? What am I doing wrong here? I guess this should be a separate question; I'll replace this with a link if/when I ask it or find an answer to this part somewhere else.
My RAID5 is essentially idle, with no write activity and minimal read (and noatime
, so reads aren't producing writes).
----
### Extra stuff about my setup, nothing important here
Many of my files are video or other compressed data that give an effective way to tell whether the data is correct or not (either internal checksums in the file format, or just whether it decodes without errors). That would make this read-only loopback method viable, once I know which file to check. I didn't want to run a 4-way diff of every file in the filesystem to find the mismatch first, though, when the kernel has the necessary information while checking, and could easily log it.
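For context, the read-only loopback idea referenced above amounts to putting a throwaway copy-on-write layer over each component and assembling that copy instead of the real array; a minimal sketch, assuming dm-snapshot and loop devices are available (overlay sizes and the sandbox name are arbitrary):
## with the real array stopped first (mdadm --stop /dev/md125) so the components aren't busy
for dev in /dev/sda6 /dev/sdb6 /dev/sdc6 /dev/sdd; do
    name=overlay-$(basename "$dev")
    truncate -s 1G "/tmp/$name.cow"                       ## sparse file holding any writes
    cow=$(losetup -f --show "/tmp/$name.cow")
    echo "0 $(blockdev --getsz "$dev") snapshot $dev $cow N 8" | dmsetup create "$name"
done
mdadm --assemble --run /dev/md/sandbox /dev/mapper/overlay-*   ## experiment on the copy, not the disks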
----
my /proc/mdstat
for my bulk-data array:
md125 : active raid5 sdd[3] sda6 sdb6[1] sdc6[4]
7325273088 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/19 pages [0KB], 65536KB chunk
It's on partitions on three Toshiba 3TB drives, and a non-partitioned WD25EZRS green-power (slow) drive which I'm replacing with another Toshiba. (Using [mdadm --replace
](https://unix.stackexchange.com/a/104052/79808) to do it online with no gaps in redundancy.) I realized after one copy that I should check the RAID health before as well as after, to detect problems. That's when I detected the mismatch. It's possible it's been around for a long time, since I had some crashes almost a year ago, but I don't have old logs and mdadm doesn't seem to send mail about this by default (Ubuntu 15.10).
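As an aside, on Debian/Ubuntu the mdadm monitor only sends mail if MAILADDR is set and a working MTA exists; a quick sketch (the address is a placeholder, and whether a bare mismatch count by itself generates an event depends on the mdadm version):
echo 'MAILADDR you@example.com' >> /etc/mdadm/mdadm.conf
mdadm --monitor --scan --test --oneshot    ## sends one test event per array, then exits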
My other filesystems are on RAID10f2 devices made from earlier partitions on the three larger HDs (and RAID0 for /var/tmp). The RAID5 is just for bulk-storage, not /home
or /
.
My drives are all fine: SMART error counts and bad-block counters are 0 on all drives, and short + long SMART self-tests passed.
----
near-duplicates of this question which don't have answers:
* https://unix.stackexchange.com/questions/256514/what-chunks-are-mismatched-in-a-linux-md-array
* http://www.spinics.net/lists/raid/msg49459.html
* MDADM mismatch_cnt > 0. Any way to identify which blocks are in disagreement?
* Other things already linked inline, but most notably frostschutz's read-only loopback idea.
* scrubbing on the Arch wiki RAID page
Peter Cordes
(6645 rep)
Feb 29, 2016, 04:01 AM
• Last activity: May 6, 2025, 03:40 PM
7
votes
1
answers
3500
views
Grub won't boot from GPT RAID (gave up waiting for root device)
I'm having problems with booting a Debian 8 system on which I migrated the root partition from a single hard drive to a RAID1 (mdraid). On every boot, I get the following grub error: Gave up waiting for root device. Common problems: - Boot args (cat /proc/cmdline) - Check rootdelay= (did the system...
I'm having problems with booting a Debian 8 system on which I migrated the root partition from a single hard drive to a RAID1 (mdraid).
On every boot, I get the following grub error:
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/disk/by-uuid/2ab18cb4-a23d-4e5c-b37d-cbd3077b878c does not exist.
Dropping to a shell!
modprobe: module ehci-orion not found in modules.dep
(initramfs)
_/dev/md0_ is not started, so it can't find the root partition:
(initramfs) ls /dev/md*
ls: /dev/md*: No such file or directory
(initramfs)
I can, however, start the raid manually just fine:
(initramfs) mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives.
(initramfs) ls /dev/md*
/dev/md0
The system will only boot if I manually create the directory _/dev/disk/by-uuid_ and link _md0_:
(initramfs) mkdir /dev/disk/by-uuid
(initramfs) ln -s /dev/md0 /dev/disk/by-uuid/2ab18cb4-a23d-4e5c-b37d-cbd3077b878c
I hope someone can help me figure out why grub doesn't start the md device by itself. I searched the internet and tried just about everything I could find, but no luck. I'm really lost right now.
I want to boot via _BIOS-legacy_, not _UEFI_.
The only two connected hard drives (SSD!) are formatted with a _GPT_ partition table and the following partitions (exactly the same):
1 1049kB 2097kB 1049kB bios_grub
2 2150MB 12,9GB 10,7GB ext4 raid
(_grub-pc_ needs that first _bios_grub_ partition to boot from _GPT_ drives)
The RAID1 (v0.90 metadata) is formatted directly as _ext4_.
Through a live system chroot, I installed _grub-pc_ to _/dev/sda_ and _/dev/sdb_, changed my _fstab_, and ran update-grub and update-initramfs -u -k all.
blkid:
/dev/sda2: UUID="b59d3baf-346b-568d-03a2-8b26060640c5" TYPE="linux_raid_member" PARTUUID="0609ba5b-9065-41f8-80ed-6832e3236ec9"
/dev/sdb2: UUID="b59d3baf-346b-568d-03a2-8b26060640c5" TYPE="linux_raid_member" PARTUUID="24ee1040-02dd-4867-b4da-5be11d59bdcd"
/dev/md0: UUID="2ab18cb4-a23d-4e5c-b37d-cbd3077b878c" TYPE="ext4"
/dev/sda1: PARTUUID="df5161cf-b5b3-422c-9ed2-90a7750ac265"
/dev/sdb1: PARTUUID="7d20b55b-ba50-4187-b05e-ae1f18b21de3"
_mdadm.conf_ contains (only!) the content from mdadm --detail --scan:
ARRAY /dev/md0 metadata=0.90 UUID=b59d3baf:346b568d:03a28b26:060640c5
Here is an excerpt from my _/boot/grub/grub.cfg_:
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod part_gpt
insmod diskfilter
insmod mdraid09
insmod ext2
set root='mduuid/b59d3baf346b568d03a28b26060640c5'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='mduuid/b59d3baf346b568d03a28b26060640c5' 2ab18cb4-a23d-4e5c-b37d-cbd3077b878c
else
search --no-floppy --fs-uuid --set=root 2ab18cb4-a23d-4e5c-b37d-cbd3077b878c
fi
echo 'Linux 3.16.0-4-amd64 wird geladen …'
linux /boot/vmlinuz-3.16.0-4-amd64 root=UUID=2ab18cb4-a23d-4e5c-b37d-cbd3077b878c ro rootdelay=20
echo 'Initiale Ramdisk wird geladen …'
initrd /boot/initrd.img-3.16.0-4-amd64
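One extra check that might help narrow this down (a sketch, assuming Debian's initramfs-tools layout): confirm the regenerated initramfs actually contains mdadm, its config and the raid modules, since without them md0 can never be assembled at boot:
lsinitramfs /boot/initrd.img-3.16.0-4-amd64 | grep -Ei 'mdadm|raid1|md.?mod'
## if anything is missing, fix /etc/mdadm/mdadm.conf in the chroot and regenerate:
update-initramfs -u -k all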
Ansgar
(200 rep)
Jan 10, 2017, 08:16 PM
• Last activity: May 4, 2025, 08:04 AM