Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
4
answers
3324
views
Ubuntu RAID Configuration Devices list is empty
I want to install Ubuntu Server 22.04.
During the installation process, I want to do the storage configuration for software RAID as shown in this YouTube video (https://www.youtube.com/watch?v=rJzHpc1kQW4).
Done steps:
- Both local disks are selected as "Use As Boot Device" and "Add As Another Boot Device"
- Free space on both disks -> Add GPT Partition -> Format: Leave unformatted
- When I now click on "Create software RAID (md)", the "Devices" list is empty (compare 3:20 in the video).

Am I missing something? Does anyone have a tip?
We further noticed in the md menu:
- we are able to navigate an invisible menu
- if we randomly select the drives that aren't visible with the arrow keys and space, sometimes the USB stick gets selected and sometimes the wanted drives
- we were able to identify the correct drives via the printed size
- if the correct drive was selected, it seems the correct RAID selection could be created
This seems like a bug in the menu program.
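As a workaround while the menu is misbehaving, a minimal way to cross-check which devices are actually being selected, assuming the installer exposes a shell on another virtual console (the key combination and array name below are assumptions, not from the post):
# Switch to another console (e.g. Ctrl+Alt+F2) or open the installer's shell,
# then map device names to the sizes shown in the md menu:
lsblk -o NAME,SIZE,MODEL,TRAN,TYPE
# After the array has been created, verify which members actually ended up in it:
cat /proc/mdstat
mdadm --detail /dev/md0   # assuming the installer named the array md0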
K1LLUM1N471
Nov 17, 2023, 01:46 PM
• Last activity: Jul 29, 2025, 02:27 PM
7
votes
3
answers
12012
views
md raid not mounted by dracut
Background
=====
I'm running CentOS 7. Originally, it was running on a single disk that looked something like this:
1 200M EFI System (/boot/efi)
2 500M Microsoft basic (/boot)
3 465.1G Linux LVM
LVM VG centos
- LVM LV ext4 centos-root (/)
- LVM LV swap centos-swap (swap)
This was just a temporary solution as it was originally supposed to be installed on a Linux software RAID1 array. I got around to migrating it today. This is what it currently looks like:
Both new disks have this partition layout:
1 200M EFI System (/boot/efi)
2 457.6G Linux RAID /dev/md0 RAID1 (for boot and LVM)
3 8G Linux RAID /dev/md1 RAID0 (so 16GB total, for swap)
/dev/md0 looks like this:
1 500M Linux filesystem (/boot)
2 457G Linux LVM (centos-root is migrated to this)
LVM now has only one LV, centos-root
/etc/mdadm.conf looks like this:
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=main.centos.local:0 UUID=5b5057b4:4235ba4b:5342dfda:acf63302
devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=main.centos.local:1 UUID=f82a8c99:9b391d83:4efc9456:9e9bad98
devices=/dev/sda3,/dev/sdb3
/etc/fstab looks like this:
/dev/mapper/centos-root / xfs defaults 0 0
UUID=fcb5f82f-ce6b-460b-800f-329e010bc403 /boot xfs defaults 0 0
UUID=C532-14AE /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/md1 swap swap defaults 0 0
blkid outputs this (relevant entries only):
/dev/sdb1: SEC_TYPE="msdos" UUID="C532-14AE" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="ed301bbd-c15c-40af-ae75-bf238d0e6270"
/dev/sda1: SEC_TYPE="msdos" UUID="C532-14AE" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="f3a76412-41a0-4e04-9b04-ad1c159133cf"
/dev/md0p1: LABEL="boot" UUID="fcb5f82f-ce6b-460b-800f-329e010bc403" TYPE="xfs" PARTLABEL="primary" PARTUUID="df8d6481-c6ce-423a-b5d5-205d355e5653"
/dev/md0p2: UUID="7LfywM-oPHy-MTEt-swlI-EVbZ-opTo-m82E6R" TYPE="LVM2_member" PARTLABEL="primary" PARTUUID="19e7f9d5-a955-4036-8338-03a748faa1f6"
/dev/mapper/centos-root: UUID="deaa9788-b487-4991-adf7-2945788fb6cd" TYPE="xfs"
I have a script which automatically mounts the other EFI partition to /boot/efi_[device], and when the kernel is updated, the grub.cfg gets copied to this partition to keep everything in sync. /dev/sda1 and /dev/sdb1 are kept in sync by the script (I've verified this), so it shouldn't be an issue that fstab mounts either one to /boot/efi (this also means that if one drive is removed due to failure, the system is still guaranteed to boot). I could have put swap in an LV to simplify things, but the RAID0 gets better performance (for what it's worth) and I get an extra 16GB of space.
I migrated the LV from the old drive to the new PV using the following commands:
pvcreate /dev/md0p2
vgextend centos /dev/md0p2
pvmove /dev/sdg3
vgreduce centos /dev/sdg3
Then I regenerated the initramfs with dracut (after backing up the original), and finally regenerated grub.cfg. Afterwards, I mounted the new /boot and /boot/efi partitions and copied everything over.
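For reference, a minimal sketch of those regeneration steps on CentOS 7, assuming the stock dracut/grub2 tooling and the usual EFI path (both are assumptions, not taken from the post):
# Back up the current initramfs, then rebuild it so mdadm.conf/lvm.conf are included
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
dracut --force --mdadmconf --lvmconf /boot/initramfs-$(uname -r).img $(uname -r)
# Regenerate grub.cfg (path assumes the default CentOS EFI layout)
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg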
Problem
=====
After disconnecting the old drive and booting, dracut fails to find my RAID arrays, and of course the /boot partition and my LVG as well. It appears that it's simply not calling mdadm --assemble on /dev/md0 and /dev/md1. I'm able to do just that from the dracut prompt, after which lvm_scan finds my LVG, I can link /dev/centos/root to /dev/root, and the system continues booting without any problems once I exit the prompt. Everything seems to be exactly where it should be.
There was a kernel update available, so I tried installing it (assuming I messed something up the first time around when regenerating the initramfs and grub.cfg files), but no dice. System still fails in the exact same way. This is true when I boot from either EFI partition manually (as it should be since the two are identical).
Link to rdsosreport.txt on pastebin
What am I missing here? How do I get dracut to assemble my arrays?
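One direction worth checking (an assumption, not a confirmed fix): dracut only assembles arrays it knows about, so forcing the mdraid module and mdadm.conf into the initramfs, or telling it on the kernel command line to auto-assemble, is the usual first step. A rough sketch:
# Make dracut always include the md config and modules
cat > /etc/dracut.conf.d/90-mdraid.conf <<'EOF'
mdadmconf="yes"
add_dracutmodules+=" mdraid lvm "
EOF
dracut --force
# Alternatively/additionally, add rd.auto=1 (or rd.md.uuid=<array uuid>) to
# GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate grub.cfg:
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg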
dghodgson
(301 rep)
Feb 27, 2016, 04:03 AM
• Last activity: Jul 5, 2025, 07:38 AM
2
votes
1
answers
83
views
New disk array on Linux
My son and I are about to head off on an adventure, converting 4 disks into one array.
Let me give you some background on what the layout looks like today. We are running Gentoo Linux and have 4 10TB disks.
Two of them currently have data on them, but not in an array. The other two disks are unused at this time.
What I would like to do is:
* create an array, either software raid (mdadm) or ZFS pool
* mount that new array on an alternate mount point (i.e. /mnt/blah)
* copy the data from the other disks on to this new array,
* then finally add the two older disks into that new array.
I'm not worried about fault tolerance, but space and performance would be great. What is the best pathway to reach this goal?
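One possible path, sketched with ZFS since it grows most naturally in this copy-then-add pattern (the pool name, mount point and disk IDs are placeholders; note that a plain striped pool has no redundancy, so one failed disk loses everything):
# Create a striped pool from the two empty disks, mounted at /mnt/blah
zpool create -m /mnt/blah tank /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
# Copy the existing data across and verify it
rsync -aHAX /path/to/existing/data/ /mnt/blah/
# Once the old disks are emptied and wiped, stripe them into the pool
zpool add tank /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2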
Bejiita78
Jun 30, 2025, 05:27 PM
• Last activity: Jul 2, 2025, 11:07 AM
2
votes
2
answers
2123
views
Slow transfer speed through Samba using software RAID
I have a mini PC (Intel Celeron J4005, 4GB RAM, Intel Gigabit NIC), configured with:
- Ubuntu (5.4.0-81-generic, installed on sda)
- Samba (version 4.11.6-Ubuntu)
- FTP (vsftpd, no encryption)
- RAID5 (mdadm, md0: sdb-sdc-sdd, USB-SATA)
The RAID array is shared via Samba and FTP, but I want to eliminate FTP; all major clients are Windows machines.
The problem is that I get way slower speeds through Samba share than FTP:
| Device | Method | Read Speed (Mbyte/s, one large file) |
|---|---|---|
| md0 | local | ~220 |
| md0 | LAN, FTP | ~115 (network limit) |
| md0 | LAN, Samba | ~48 |
| md0 | LAN, Samba, second run (cached in memory) | ~115 (network limit) |
| sda | LAN, Samba | ~115 (network limit) |
I tried with default Samba settings and with the current one (attached below), but I got the same result. I flushed the cache between tests.
iostat output sample (LAN, Samba, first run):
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
md0 793.00 433408.00 0.00 0.00 0.00 546.54 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 254.00 16768.00 8.00 3.05 14.74 66.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.27 84.80
sdc 171.00 16896.00 93.00 35.23 2.99 98.81 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.32 60.80
sdd 161.00 16640.00 101.00 38.55 11.74 103.35 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.57 96.00
iostat output sample (LAN, FTP, first run):
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
md0 1828.00 292480.00 0.00 0.00 0.00 160.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 458.00 39040.00 153.00 25.04 1.66 85.24 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.18 75.60
sdc 457.00 38976.00 152.00 24.96 1.45 85.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 70.40
sdd 457.00 38976.00 152.00 24.96 1.59 85.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.15 75.20
I have no clue what the problem could be. Can someone help me, or at least tell me where I should start investigating?
----------
Samba config:
[global]
workgroup = WORKGROUP
min protocol = SMB3
log level = 1
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536 IPTOS_LOWDELAY SO_KEEPALIVE
use sendfile = true
aio read size = 65536
aio write size = 65536
read raw = yes
write raw = yes
getwd cache = yes
acl allow execute always = true
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
[Share]
path = /media/hdd
writable = yes
valid users = myuser
directory mode = 0770
create mode = 0660
RAID array configuration:
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 7 13:19:26 2021
Raid Level : raid5
Array Size : 976441344 (931.21 GiB 999.88 GB)
Used Dev Size : 488220672 (465.60 GiB 499.94 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Sep 9 14:37:52 2021
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : bitmap
Filesystem info:
root@MiniPC:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 371M 12M 360M 3% /run
/dev/sda2 58G 3.4G 55G 6% /
tmpfs 1.9G 12K 1.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 500M 79M 422M 16% /var/cache/apt
tmpfs 500M 0 500M 0% /tmp
tmpfs 500M 0 500M 0% /var/backups
tmpfs 500M 2.2M 498M 1% /var/log
tmpfs 500M 0 500M 0% /var/tmp
/dev/sda1 511M 5.3M 506M 2% /boot/efi
/dev/md0 917G 356G 562G 39% /media/hdd
root@MiniPC:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 59.6G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 59.1G 0 part /
sdb 8:16 0 465.7G 0 disk
└─sdb1 8:17 0 465.7G 0 part
└─md0 9:0 0 931.2G 0 raid5 /media/hdd
sdc 8:32 0 465.8G 0 disk
└─sdc1 8:33 0 465.8G 0 part
└─md0 9:0 0 931.2G 0 raid5 /media/hdd
sdd 8:48 0 465.8G 0 disk
└─sdd1 8:49 0 465.8G 0 part
└─md0 9:0 0 931.2G 0 raid5 /media/hdd
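Not a definite answer, but given that the iostat samples show Samba issuing far larger average reads against md0 (rareq-sz ~546K vs ~160K for FTP), a few low-risk things to compare before touching Samba itself, assuming the standard util-linux tools:
# Check/raise read-ahead on the array (value is in 512-byte sectors)
blockdev --getra /dev/md0
blockdev --setra 8192 /dev/md0
# Flush the page cache and time a plain local sequential read as a baseline
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/media/hdd/some-large-file of=/dev/null bs=1M status=progress
# Then re-test the share after removing the hand-tuned "socket options" and
# "aio read/write size" lines from smb.conf, one change at a time.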
S-Zoli
(21 rep)
Sep 14, 2021, 01:36 PM
• Last activity: Jul 2, 2025, 01:02 AM
14
votes
0
answers
9625
views
State of LVM raid compared to mdadm
LVM and mdadm/dmraid both offer software RAID functionality on Linux. This is essentially a follow-up to a question from 2014. Back then, @derobert recommended using mdadm over LVM RAID due to its maturity — but that was more than four years ago. I imagine things may have changed since then.
However, I’ve never used LVM RAID before, and I couldn't find many recent experiences or discussions about it.
So, what’s the current state of LVM RAID? Has it become more mature? Have the flaws mentioned in @derobert’s post been resolved, or do they still exist? Specifically, how does it compare to mdadm in terms of:
- Stability
- Features (grow, shrink, convert)
- Repair and recovery
- Community support
- Performance
I’d like to know if people actually use LVM RAID now, or if most still stick with mdadm. Is it more advisable to use LVM on top of mdadm for logical volume management, or is it now acceptable to let LVM manage the RAID as well? Would it even make sense to use LVM RAID instead of mdadm, even if you don’t plan to take advantage of logical volume management?
I considered commenting on the original answer and asking @derobert for an update, but I decided to post a new question to reach more community members and gather fresh perspectives — not just update the original post to the present tense.
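For readers comparing the two stacks, a minimal sketch of what each looks like in practice (device names, the VG name and sizes are made up; create one or the other, not both):
# Option A: LVM RAID - LVM manages the mirroring itself
pvcreate /dev/sda1 /dev/sdb1
vgcreate vg0 /dev/sda1 /dev/sdb1
lvcreate --type raid1 --mirrors 1 -L 100G -n data vg0
lvs -a -o +segtype,copy_percent,devices vg0   # inspect sync state and members

# Option B: mdadm underneath, plain LVM on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0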
LukeLR
(342 rep)
Apr 29, 2019, 10:46 AM
• Last activity: Jun 10, 2025, 07:34 PM
0
votes
0
answers
37
views
RAID6 Array Inactive, Multiple Missing Drives
Alright, hopefully someone is able to help me out here:
My RAID6 array in mdadm just failed today. Sequence of events as follows:
1) PLEX server is not functioning, though it appeared that everything else was working fine... seemed to be a network issue, so I restarted the server/computer.
2) While restarting, the computer seemed to hang up on boot up... I let it run, went out of the room and when I came back, my kid had turned off the power, said it looked "frozen"... 6 year olds...
3) Restarted again, booted up fine, PLEX server connected, everything seemed fine. Fast forward several hours, in use, no issues.
4) Problem with Plex again, server not finding files, I look, >80% of files are missing now (Plex itself can still connect to devices, so seems the original issue may be unrelated to RAID problem).
5) Stupidly, I shut down and attempted a reboot; during shutdown a bunch of error messages popped up, but before I could take a picture or read them clearly, the computer completed the shutdown and the screen went black.
6) Restart computer and the RAID6 array is gone.
My guess is that this is not directly related to the earlier issues, other than maybe that the "freeze" and hard shutdown might have exacerbated a drive on the edge.
What I have been able to ascertain at this point:
1) All 7 drives in the array show up under lsblk, ran smartctl and they all seem okay (though definitely old).
2) On running cat /proc/mdstat I find two arrays: one is my old array, which is functioning fine, and the other is called md127, which is an incorrect number. The correct number should be md111 (I believe).
3) I can find under md127 that it is a 7 drive array and only 4 devices are connected, which are 4 of the drives from the lost array.
I did check cable connections (do not have an extra known good set unfortunately), but on rebooting, the 4 listed drives connected to MD127 have changed to other drives in the array (E C B G instead of C A D B)
Lastly, I can see that there was something that happened this evening around 17:10. Using mdadm --examine, the Update Time for 1 drive (sdc) is back in February, for two other drives (sde sdg) at 17:10:41, and then at 17:10:56 for the last 4 drives (sdb, sdd, sdf, sdh). The sdc has 9184 events, sde and sdg have 64641, and the other 4 all have 64644 events.
Sorry for the wall of text, but I will freely admit that I am utterly lost at this point. Any help or direction would be greatly appreciated. The only lead that I have been able to find is to attempt to either run/create the array again, but not sure if that would work and I am concerned about data loss (which I realize may already be a foregone conclusion, but grasping at straws). I suspect that I need to add the missing drives back to the array, but again am not sure how to do so (especially since I am not clear on what exact order they should be added).
Thank you all again for any help.
Update:
On another reboot while troubleshooting, the md127 array is now showing all 7 disks as part of it:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sdi sdo sdl sdm sdj sdk sdn
      19534430720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
md127 : inactive sdc(S) sdh(S) sdg(S) sdd(S) sdf(S) sde(S) sdb(S)
      54697261416 blocks super 1.2
unused devices:
The other one, md0, is an unrelated array; it is working fine. Not sure where to go from here. I believe the [S] after each drive means it is being treated as a spare? I also tried the following:
sudo mdadm --run /dev/md127
mdadm: failed to start array /dev/md/snothplex:111: Input/output error
Edit #2... Fixed-ish? Here is the output of --detail and --examine:
sudo mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 level=raid6 num-devices=7 metadata=1.2 name=snothplex:111 UUID=58f4414e:13ba3568:b83070b2:b3f561d2
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
ARRAY /dev/md/snothplex:0 level=raid6 num-devices=7 metadata=1.2 name=snothplex:0 UUID=1b0a0747:27bc69f4:f298f7ae:591d022e
   devices=/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdm,/dev/sdn,/dev/sdo
sudo mdadm --examine --scan --verbose
ARRAY /dev/md/111 level=raid6 metadata=1.2 num-devices=7 UUID=58f4414e:13ba3568:b83070b2:b3f561d2 name=snothplex:111
   devices=/dev/sdh,/dev/sdf,/dev/sdb,/dev/sdd,/dev/sdc,/dev/sde,/dev/sdg
ARRAY /dev/md/0 level=raid6 metadata=1.2 num-devices=7 UUID=1b0a0747:27bc69f4:f298f7ae:591d022e name=snothplex:0
   devices=/dev/sdo,/dev/sdn,/dev/sdk,/dev/sdj,/dev/sdi,/dev/sdl,/dev/sdm
I attempted to do --assemble --force:
sudo mdadm --assemble --force /dev/md111
mdadm: /dev/md111 not identified in config file.
sudo mdadm --assemble --force /dev/md127
mdadm: Found some drive for an array that is already active: /dev/md/snothplex:111
mdadm: giving up.
I then stopped the array (again referencing the incorrect md127):
samuel3940@snothplex:~$ sudo mdadm --stop /dev/md127
mdadm: stopped /dev/md127
And then tried assemble again:
samuel3940@snothplex:~$ sudo mdadm --assemble --force /dev/md127
mdadm: Fail create md127 when using /sys/module/md_mod/parameters/new_array
mdadm: forcing event count in /dev/sdf(1) from 64641 upto 64644
mdadm: forcing event count in /dev/sdg(3) from 64641 upto 64644
mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdf
mdadm: clearing FAULTY flag for device 6 in /dev/md127 for /dev/sdg
mdadm: Marking array /dev/md127 as 'clean'
mdadm: /dev/md127 has been started with 6 drives (out of 7).
And it works. Sorta. Obviously the oldest failed drive is not initialized, but the files are all back. So currently I am pulling off any crucial data. It has also gone into a resync, unsurprisingly, but I figure doing reads is fine (just no writes). Otherwise, I suppose it is time to get a new drive or two, wait for the resync to finish, and cross my fingers it doesn't fail again before I can get an alternative setup. Thank you again and I will update if anything changes/how the resync goes.
Samuel Nothnagel
(1 rep)
Jun 2, 2025, 03:17 AM
• Last activity: Jun 2, 2025, 03:25 PM
1
votes
1
answers
59
views
How to merge two directories with failover?
Let's say I have two devices:
- /dev/sda1 mounted to / (system partition)
- /dev/sdb1 mounted to /media/data (data partition; the USB device may be unplugged)
I want to merge/overlay/raid two directories like so:
- /media/data is the primary directory
- /usr/data is the backup/failover directory that exists on the system partition
The resulting directory (e.g. /mnt/merged) will consist of the above two directories so that:
- when writing a file to /mnt/merged, the file will be written to /media/data
- if /dev/sdb1 is not available while writing (the USB storage is removed), then the write goes to the backup /usr/data, and when the primary partition is plugged in again the data is moved to the primary partition
- (optional) set up the second partition as a cache partition in case it is faster than the primary partition, so that reads and writes happen to the backup (faster) directory before moving to the primary directory
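Not something the post mentions, but a union filesystem such as mergerfs covers most of this; a minimal sketch, assuming mergerfs is installed, with two caveats: if the unplugged disk leaves an empty mountpoint directory behind, mergerfs will still write there unless that branch is marked read-only or handled by automount, and moving files back to the primary once it reappears has to be done separately:
# Union the two directories; category.create=ff ("first found") makes new
# files land on the first usable branch, i.e. /media/data when it is there.
mergerfs -o category.create=ff,moveonenospc=true /media/data:/usr/data /mnt/merged
# Later, after the USB disk is back, push anything that landed on the
# fallback branch over to the primary:
rsync -a --remove-source-files /usr/data/ /media/data/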
MOHAMMAD RASIM
(530 rep)
May 25, 2025, 12:08 PM
• Last activity: May 26, 2025, 03:53 PM
2
votes
1
answers
3718
views
lsblk shows non-existent md partitions after reboot
I'm getting weird behaviour while setting up an mdadm RAID1 array on Debian 8.2.
After I set up the array, lsblk shows:
simon@debian-server:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
`-sda1 8:1 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
sdb 8:16 0 931.5G 0 disk
`-sdb1 8:17 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 512M 0 part /boot/efi
|-sdc2 8:34 0 244M 0 part /boot
`-sdc3 8:35 0 232.2G 0 part
|-debian--server--vg-root 254:0 0 228.3G 0 lvm /
`-debian--server--vg-swap_1 254:1 0 3.9G 0 lvm [SWAP]
After a reboot, lsblk shows:
simon@debian-server:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
`-sda1 8:1 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
|-md0p1 259:0 0 811.6G 0 md
`-md0p2 259:1 0 346.1G 0 md
sdb 8:16 0 931.5G 0 disk
`-sdb1 8:17 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
|-md0p1 259:0 0 811.6G 0 md
`-md0p2 259:1 0 346.1G 0 md
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 512M 0 part /boot/efi
|-sdc2 8:34 0 244M 0 part /boot
`-sdc3 8:35 0 232.2G 0 part
|-debian--server--vg-root 254:0 0 228.3G 0 lvm /
`-debian--server--vg-swap_1 254:1 0 3.9G 0 lvm [SWAP]
I don't know where the md0p1 and md0p2 partitions are coming from. My /etc/fstab and /etc/mdadm/mdadm.conf both have nothing about this in them.
parted shows one partition on md0:
simon@debian-server:~$ sudo parted /dev/md0 print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 1000GB 1000GB ntfs
Any ideas where the md0p1 and md0p2 partitions are coming from?
I'm setting up the array as follows:
- Delete existing device (I've done this a few times):
sudo mdadm --stop /dev/md0
sudo mdadm --remove /dev/md0
- Zero drives:
sudo dd if=/dev/zero of=/dev/sda bs=1M count=1024
sudo dd if=/dev/zero of=/dev/sdb bs=1M count=1024
- Create partition tables:
sudo parted /dev/sda mklabel gpt
sudo parted /dev/sdb mklabel gpt
- Create full-disk partitions:
sudo parted -a optimal /dev/sda mkpart primary '0%' '100%'
sudo parted -a optimal /dev/sdb mkpart primary '0%' '100%'
- Set raid flag on partitions:
sudo parted /dev/sda set 1 raid on
sudo parted /dev/sdb set 1 raid on
- Create RAID array:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1
- Add filesystem (I'm using NTFS, but the problem also happens with ext4)
sudo mkfs.ntfs -f /dev/md0
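A hedged guess at what to check: since the NTFS filesystem sits directly on /dev/md0 with no partition table, the md0p1/md0p2 entries are probably the kernel's partition scanner picking up a stale signature somewhere on the array. Something along these lines can confirm and clean it up:
# List every signature the kernel could be interpreting on the array (read-only)
wipefs -n /dev/md0
# Drop the phantom partition nodes for the running array without touching data
partx -d --nr 1-2 /dev/md0      # or: delpart /dev/md0 1; delpart /dev/md0 2
# If stale metadata from the disks' previous contents is the cause, wiping the
# member partitions before re-creating the array avoids it (destructive!):
# wipefs -a /dev/sda1 /dev/sdb1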
tangoecho
(21 rep)
Jan 13, 2016, 08:23 AM
• Last activity: May 12, 2025, 09:07 PM
25
votes
5
answers
121598
views
How to correctly install GRUB on a soft RAID 1?
In my setup, I have two disks that are each formatted in the following way:
(GPT)
1) 1MB BIOS_BOOT
2) 300MB LINUX_RAID
3) * LINUX_RAID
The boot partitions are mapped in /dev/md0, the rootfs in /dev/md1. md0 is formatted with ext2, md1 with XFS. (I understand that formatting has to be done on the md devices and not on sd - please tell me if this is wrong).
How do I set up GRUB correctly so that if one drive fails, the other will still boot? And by extension, so that a replacement drive will automatically include GRUB too? If this is even possible, of course.
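A minimal sketch of the usual approach for BIOS booting from GPT disks with a BIOS_BOOT partition, assuming the arrays are assembled and the system is otherwise installed: put GRUB's boot code on both disks so either one can boot alone, and repeat that on any replacement disk.
# Install GRUB's core image into the BIOS_BOOT partition of each disk
grub-install /dev/sda
grub-install /dev/sdb
update-grub                       # or: grub-mkconfig -o /boot/grub/grub.cfg
# After swapping a failed disk: partition it, re-add it to md0/md1 with
# "mdadm --add", wait for the resync, then run grub-install on it again.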
vic
(2302 rep)
Sep 17, 2015, 05:40 PM
• Last activity: May 12, 2025, 02:31 PM
5
votes
2
answers
13715
views
mdadm RAID implementation with GPT partitioning
My current idea is to create one software array, class RAID-6, with 4 member drives, using mdadm.
Specifically, the drives would be 1 TB HDDs on SATA in a small Dell T20 server.
The operating system is GNU/Linux Debian 8.6 (later upgraded: Jessie ⟶ Stretch ⟶ Buster).
That would make 2 TB of disk space with 2 TB of parity in my case.
***
I would also like to have it with a GPT partition table. I am unsure how to proceed specifically, given that I would prefer to do this purely from the terminal.
As I have never created a RAID array, could you guide me on how I should proceed?
***
Notes:
- This array will serve for data only. No boot or OS on it.
- I opted for RAID-6 due to the purpose of this array: it must be able to survive two drive failures. Since I am limited by hardware to 4 drives, there is no alternative to RAID-6 that I know of. (However ugly the RAID-6 slowdown may seem, it does not matter for this array.)
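A rough terminal-only outline, under the assumption that the four data disks show up as /dev/sda through /dev/sdd (adjust to the real names) and that each contributes one whole-disk GPT partition to the array:
# GPT label plus one full-size "Linux RAID" partition per disk
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    parted -s "$d" mklabel gpt mkpart primary '0%' '100%' set 1 raid on
done
# Build the RAID-6 from the four partitions
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]1
# Make assembly persistent on Debian, then create the filesystem
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
mkfs.ext4 /dev/md0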
Vlastimil Burián
(30505 rep)
Oct 22, 2016, 04:52 AM
• Last activity: Apr 28, 2025, 08:29 AM
33
votes
6
answers
170146
views
lvm devices under /dev/mapper missing
I'm using Debian squeeze, and running LVM on top of software RAID 1.
I just accidentally discovered that most of the links under /dev/mapper are missing, though my system seems to still be functioning correctly.
I'm not sure what happened.
The only thing I can imagine that caused it was my failed attempt to get a LXC fedora container to work.
I ended up deleting a directory /cgroup/laughlin, corresponding to the container, but I can't imagine why that should have caused the problem.
/dev/mapper looked (I made some changes, see below) approximately like:
orwell:/dev/mapper# ls -la
total 0
drwxr-xr-x 2 root root 540 Apr 12 05:08 .
drwxr-xr-x 22 root root 4500 Apr 12 05:08 ..
crw------- 1 root root 10, 59 Apr 8 10:32 control
lrwxrwxrwx 1 root root 7 Mar 29 08:28 debian-root -> ../dm-0
lrwxrwxrwx 1 root root 8 Apr 12 03:32 debian-video -> ../dm-23
debian-video corresponds to an LV I had just created.
However, I have quite a number of LVs on my system, spread across 4 VGs on 4 disks. vgs gives:
orwell:/dev/mapper# vgs
VG #PV #LV #SN Attr VSize VFree
backup 1 2 0 wz--n- 186.26g 96.26g
debian 1 7 0 wz--n- 465.76g 151.41g
olddebian 1 12 0 wz--n- 186.26g 21.26g
testdebian 1 3 0 wz--n- 111.75g 34.22g
I tried running
/dev/mapper# vgscan --mknodes
and some devices were created (see output below), but they aren't symbolic links to the dm devices as they should be, so I'm not sure if this is useless or worse. Would they get in the way of recreation of the correct links? Should I delete these devices again?
I believe that udev creates these links, so would a reboot fix this problem, or would I get an unbootable system? What should I do to fix this? Are there any diagnostics/sanity checks I should run to make sure there aren't other problems I haven't noticed? Thanks in advance for any assistance.
orwell:/dev/mapper# ls -la
total 0
drwxr-xr-x 2 root root 540 Apr 12 05:08 .
drwxr-xr-x 22 root root 4500 Apr 12 05:08 ..
brw-rw---- 1 root disk 253, 1 Apr 12 05:08 backup-local_src
brw-rw---- 1 root disk 253, 2 Apr 12 05:08 backup-video
crw------- 1 root root 10, 59 Apr 8 10:32 control
brw-rw---- 1 root disk 253, 15 Apr 12 05:08 debian-boot
brw-rw---- 1 root disk 253, 16 Apr 12 05:08 debian-home
brw-rw---- 1 root disk 253, 22 Apr 12 05:08 debian-lxc_laughlin
brw-rw---- 1 root disk 253, 21 Apr 12 05:08 debian-lxc_squeeze
lrwxrwxrwx 1 root root 7 Mar 29 08:28 debian-root -> ../dm-0
brw-rw---- 1 root disk 253, 17 Apr 12 05:08 debian-swap
lrwxrwxrwx 1 root root 8 Apr 12 03:32 debian-video -> ../dm-23
brw-rw---- 1 root disk 253, 10 Apr 12 05:08 olddebian-etch_template
brw-rw---- 1 root disk 253, 13 Apr 12 05:08 olddebian-fedora
brw-rw---- 1 root disk 253, 8 Apr 12 05:08 olddebian-feisty
brw-rw---- 1 root disk 253, 9 Apr 12 05:08 olddebian-gutsy
brw-rw---- 1 root disk 253, 4 Apr 12 05:08 olddebian-home
brw-rw---- 1 root disk 253, 11 Apr 12 05:08 olddebian-lenny
brw-rw---- 1 root disk 253, 7 Apr 12 05:08 olddebian-msi
brw-rw---- 1 root disk 253, 5 Apr 12 05:08 olddebian-oldchresto
brw-rw---- 1 root disk 253, 3 Apr 12 05:08 olddebian-root
brw-rw---- 1 root disk 253, 14 Apr 12 05:08 olddebian-suse
brw-rw---- 1 root disk 253, 6 Apr 12 05:08 olddebian-vgentoo
brw-rw---- 1 root disk 253, 12 Apr 12 05:08 olddebian-wsgi
brw-rw---- 1 root disk 253, 20 Apr 12 05:08 testdebian-boot
brw-rw---- 1 root disk 253, 18 Apr 12 05:08 testdebian-home
brw-rw---- 1 root disk 253, 19 Apr 12 05:08 testdebian-root
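Not from the post, but the /dev/mapper entries are normally (re)created by device-mapper and udev, so a reboot is usually not required; a hedged sketch of the common recovery commands:
# Recreate the /dev/mapper nodes for all active device-mapper devices
dmsetup mknodes
# Or have LVM recreate nodes for its own volumes (similar to vgscan --mknodes)
vgmknodes
# Replay udev events for block devices so the usual symlinks come back
udevadm trigger --subsystem-match=block
udevadm settle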
Faheem Mitha
(36008 rep)
Apr 12, 2011, 12:35 AM
• Last activity: Apr 2, 2025, 07:35 AM
1
votes
0
answers
181
views
How can I remove raid from my system?
I want to remove RAID from my system, as I am low on storage and I want to recover the second disk.
How can I recover the second disk? I have tried, but to no avail. Here is my current state:
root@miirabox ~ # cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 nvme1n1p3 nvme0n1p3
965467456 blocks super 1.2 [2/2] [UU]
bitmap: 8/8 pages [32KB], 65536KB chunk
md0 : active raid1 nvme1n1p1 nvme0n1p1
33520640 blocks super 1.2 [2/2] [UU]
md1 : active raid1 nvme0n1p2(F) nvme1n1p2
1046528 blocks super 1.2 [2/1] [_U]
unused devices:
root@miirabox ~ # sudo mdadm --detail --scan
ARRAY /dev/md/1 metadata=1.2 name=rescue:1 UUID=36e3a554:de955adc:98504c1a:836763fb
ARRAY /dev/md/0 metadata=1.2 name=rescue:0 UUID=b7eddc10:a40cc141:c349f876:39fa07d2
ARRAY /dev/md/2 metadata=1.2 name=rescue:2 UUID=2eafee34:c51da1e0:860a4552:580258eb
root@miirabox ~ # mdadm -E /dev/nvme0n1p1
/dev/nvme0n1p1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : b7eddc10:a40cc141:c349f876:39fa07d2
Name : rescue:0
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 67041280 sectors (31.97 GiB 34.33 GB)
Array Size : 33520640 KiB (31.97 GiB 34.33 GB)
Data Offset : 67584 sectors
Super Offset : 8 sectors
Unused Space : before=67432 sectors, after=0 sectors
State : clean
Device UUID : 5f8a86c6:80e71724:98ee2d01:8a295f5a
Update Time : Thu Sep 19 19:31:55 2024
Bad Block Log : 512 entries available at offset 136 sectors
Checksum : f2954bfe - correct
Events : 60
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@miirabox ~ # mdadm -E /dev/nvme0n1p2
/dev/nvme0n1p2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 36e3a554:de955adc:98504c1a:836763fb
Name : rescue:1
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4016 sectors, after=0 sectors
State : clean
Device UUID : 8d8e044d:543e1869:9cd0c1ee:2b644e57
Update Time : Thu Sep 19 19:07:25 2024
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 4ce9a898 - correct
Events : 139
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@miirabox ~ # mdadm -E /dev/nvme0n1p3
/dev/nvme0n1p3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 2eafee34:c51da1e0:860a4552:580258eb
Name : rescue:2
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1930934960 sectors (920.74 GiB 988.64 GB)
Array Size : 965467456 KiB (920.74 GiB 988.64 GB)
Used Dev Size : 1930934912 sectors (920.74 GiB 988.64 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=48 sectors
State : clean
Device UUID : 68758969:5218958f:9c991c6b:12bfdca1
Internal Bitmap : 8 sectors from superblock
Update Time : Thu Sep 19 19:32:42 2024
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 4a44ff36 - correct
Events : 13984
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@miirabox ~ # mdadm -E /dev/nvme1n1p1
/dev/nvme1n1p1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : b7eddc10:a40cc141:c349f876:39fa07d2
Name : rescue:0
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 67041280 sectors (31.97 GiB 34.33 GB)
Array Size : 33520640 KiB (31.97 GiB 34.33 GB)
Data Offset : 67584 sectors
Super Offset : 8 sectors
Unused Space : before=67432 sectors, after=0 sectors
State : clean
Device UUID : 0dfdf4af:d88b2bf1:0764dcbd:1179639e
Update Time : Thu Sep 19 19:33:07 2024
Bad Block Log : 512 entries available at offset 136 sectors
Checksum : a9ca2845 - correct
Events : 60
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@miirabox ~ # mdadm -E /dev/nvme1n1p2
/dev/nvme1n1p2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 36e3a554:de955adc:98504c1a:836763fb
Name : rescue:1
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4016 sectors, after=0 sectors
State : clean
Device UUID : 228202fa:0491e478:b0a0213b:0484d5e3
Update Time : Thu Sep 19 19:24:14 2024
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : e29be2bc - correct
Events : 141
Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
root@miirabox ~ # mdadm -E /dev/nvme1n1p3
/dev/nvme1n1p3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 2eafee34:c51da1e0:860a4552:580258eb
Name : rescue:2
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1930934960 sectors (920.74 GiB 988.64 GB)
Array Size : 965467456 KiB (920.74 GiB 988.64 GB)
Used Dev Size : 1930934912 sectors (920.74 GiB 988.64 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=48 sectors
State : clean
Device UUID : 431be888:cb298461:ba2a0000:4b5294fb
Internal Bitmap : 8 sectors from superblock
Update Time : Thu Sep 19 19:33:21 2024
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 2a2ddb09 - correct
Events : 13984
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@miirabox ~ # mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Array Size : 33520640 (31.97 GiB 34.33 GB)
Used Dev Size : 33520640 (31.97 GiB 34.33 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Sep 19 19:34:08 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : rescue:0
UUID : b7eddc10:a40cc141:c349f876:39fa07d2
Events : 60
Number Major Minor RaidDevice State
0 259 1 0 active sync /dev/nvme0n1p1
1 259 5 1 active sync /dev/nvme1n1p1
root@miirabox ~ # mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Sep 19 19:24:14 2024
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Consistency Policy : resync
Name : rescue:1
UUID : 36e3a554:de955adc:98504c1a:836763fb
Events : 141
Number Major Minor RaidDevice State
- 0 0 0 removed
1 259 6 1 active sync /dev/nvme1n1p2
0 259 2 - faulty /dev/nvme0n1p2
root@miirabox ~ # mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Sun Sep 10 16:52:20 2023
Raid Level : raid1
Array Size : 965467456 (920.74 GiB 988.64 GB)
Used Dev Size : 965467456 (920.74 GiB 988.64 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Sep 19 19:34:46 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : rescue:2
UUID : 2eafee34:c51da1e0:860a4552:580258eb
Events : 13984
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n1p3
1 259 7 1 active sync /dev/nvme1n1p3
root@miirabox ~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 4K 1 loop /snap/bare/5
loop2 7:2 0 74.3M 1 loop /snap/core22/1586
loop3 7:3 0 40.4M 1 loop
loop4 7:4 0 269.8M 1 loop /snap/firefox/4793
loop5 7:5 0 74.3M 1 loop /snap/core22/1612
loop6 7:6 0 91.7M 1 loop /snap/gtk-common-themes/1535
loop8 7:8 0 38.8M 1 loop /snap/snapd/21759
loop9 7:9 0 271.2M 1 loop /snap/firefox/4848
loop10 7:10 0 504.2M 1 loop /snap/gnome-42-2204/172
loop12 7:12 0 505.1M 1 loop /snap/gnome-42-2204/176
loop13 7:13 0 38.7M 1 loop /snap/snapd/21465
nvme0n1 259:0 0 953.9G 0 disk
├─nvme0n1p1 259:1 0 32G 0 part
│ └─md0 9:0 0 32G 0 raid1 [SWAP]
├─nvme0n1p2 259:2 0 1G 0 part
│ └─md1 9:1 0 1022M 0 raid1
└─nvme0n1p3 259:3 0 920.9G 0 part
└─md2 9:2 0 920.7G 0 raid1 /
nvme1n1 259:4 0 953.9G 0 disk
├─nvme1n1p1 259:5 0 32G 0 part
│ └─md0 9:0 0 32G 0 raid1 [SWAP]
├─nvme1n1p2 259:6 0 1G 0 part
│ └─md1 9:1 0 1022M 0 raid1
└─nvme1n1p3 259:7 0 920.9G 0 part
└─md2 9:2 0 920.7G 0 raid1 /
root@miirabox ~ # cat /etc/fstab
proc /proc proc defaults 0 0
# /dev/md/0
UUID=e9dddf2b-f061-403e-a12f-d98915569492 none swap sw 0 0
# /dev/md/1
UUID=d32210de-6eb0-4459-85a7-6665294131ee /boot ext3 defaults 0 0
# /dev/md/2
UUID=7abe3389-fe7d-4024-a57e-e490f5e04880 / ext4 defaults 0 0
This is what I managed to do:
root@miirabox ~ # df -h
df: /run/user/1000/gvfs: Transport endpoint is not connected
Filesystem Size Used Avail Use% Mounted on
tmpfs 6.3G 5.7M 6.3G 1% /run
/dev/md2 906G 860G 0 100% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md1 989M 271M 667M 29% /boot
tmpfs 6.3G 132K 6.3G 1% /run/user/134
tmpfs 32G 648K 32G 1% /run/qemu
tmpfs 6.3G 244K 6.3G 1% /run/user/1000
tmpfs 6.3G 116K 6.3G 1% /run/user/140
root@miirabox ~ # cat cat /proc/mdstat
cat: cat: No such file or directory
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 nvme1n1p3 nvme0n1p3
965467456 blocks super 1.2 [2/2] [UU]
bitmap: 8/8 pages [32KB], 65536KB chunk
md0 : active raid1 nvme1n1p1 nvme0n1p1
33520640 blocks super 1.2 [2/2] [UU]
md1 : active raid1 nvme0n1p2 nvme1n1p2
1046528 blocks super 1.2 [2/2] [UU]
root@miirabox ~ # umount /dev/md1
root@miirabox ~ # umount /dev/md2
root@miirabox ~ # umount /dev/md0
umount: /dev/md0: not mounted.
root@miirabox ~ # mdadm --fail /dev/md1 /dev/nvme0n1p2
mdadm: set /dev/nvme0n1p2 faulty in /dev/md1
root@miirabox ~ # mdadm --remove /dev/md1
root@miirabox ~ # mdadm --fail /dev/md1 /dev/nvme1n1p2
mdadm: set device faulty failed for /dev/nvme1n1p2: Device or resource busy
root@miirabox ~ # sudo mdadm --stop /dev/md1
mdadm: Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group?
root@miirabox ~ # sudo vgdisplay
root@miirabox ~ # lvdisplay
I was following a guide and could not proceed.
Please do not hesitate to ask if you want more details. Thanks in advance.
EDIT: I apologise, I was not very clear: I have 2 TB in total and my system is only using 1 TB (OS + data); the rest is used by RAID. I just want to remove the RAID and recover the second 1 TB so I can use the full 2 TB.
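Not taken from the question, but a rough sketch of how a RAID1 is usually shrunk to a single member so the other partition can be reclaimed, shown here for md2 and the nvme1n1 side purely as an example. This gives up all redundancy, so double-check device names and keep a backup:
# Drop one member from the mirror and shrink the array to one device
mdadm /dev/md2 --fail /dev/nvme1n1p3 --remove /dev/nvme1n1p3
mdadm --grow /dev/md2 --raid-devices=1 --force
# Clear the md metadata so the freed partition is no longer claimed at boot
mdadm --zero-superblock /dev/nvme1n1p3
# Reuse the partition as plain storage
mkfs.ext4 /dev/nvme1n1p3
# Repeat for md0/md1 if those mirrors should be broken too, then update the
# ARRAY lines in /etc/mdadm/mdadm.conf (mdadm --detail --scan) and rebuild
# the initramfs (update-initramfs -u) so boot still finds the arrays.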
Miira ben sghaier
(111 rep)
Sep 19, 2024, 07:45 PM
• Last activity: Mar 24, 2025, 08:33 AM
2
votes
1
answers
80
views
Reassemble Raid5 Array after disabling TPM
Edit: Both /dev/sdd and /dev/sde are missing superblocks. I assume this cannot be fixed. I am wiping the drives and starting over.
I just finished copying 8 TB worth of data to a new RAID5 array. I then turned off TPM in my BIOS, and this array was no longer readable. I would like to fix this rather than starting over. I tried to reassemble it, and got this error:
$ sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdd /dev/sde -f
mdadm: No super block found on /dev/sdd (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has no superblock - assembly aborted
Here's what examining /dev/sdd resulted in:
$sudo mdadm -E /dev/sdd
/dev/sdd:
MBR Magic : aa55
Partition : 4294967295 sectors at 1 (type ee)
Here's some more diagnostics:
sudo mdadm --examine /dev/sd*
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 7844a579:00996056:06c4e1dd:0e70ebcb
Name : scott-LinuxMint:0 (local to host scott-LinuxMint)
Creation Time : Thu Jan 2 12:50:26 2025
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
Array Size : 11720659392 KiB (10.92 TiB 12.00 TB)
Used Dev Size : 7813772928 sectors (3.64 TiB 4.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=48 sectors
State : clean
Device UUID : 0febcd7e:7581f3c8:7b5962c5:cbddee7c
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jan 3 22:05:37 2025
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 852d7efe - correct
Events : 6116
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 7844a579:00996056:06c4e1dd:0e70ebcb
Name : scott-LinuxMint:0 (local to host scott-LinuxMint)
Creation Time : Thu Jan 2 12:50:26 2025
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
Array Size : 11720659392 KiB (10.92 TiB 12.00 TB)
Used Dev Size : 7813772928 sectors (3.64 TiB 4.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=48 sectors
State : clean
Device UUID : d2280c55:cf16ae93:aaa5e4a0:71e30dbb
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Jan 3 22:05:37 2025
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 3fc7a3f1 - correct
Events : 6116
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
MBR Magic : aa55
Partition : 4294967295 sectors at 1 (type ee)
/dev/sdc1:
MBR Magic : aa55
Partition : 1836016416 sectors at 1936269394 (type 4f)
Partition : 544437093 sectors at 1917848077 (type 73)
Partition : 544175136 sectors at 1818575915 (type 2b)
Partition : 54974 sectors at 2844524554 (type 61)
/dev/sdd:
MBR Magic : aa55
Partition : 4294967295 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sdd1.
/dev/sde:
MBR Magic : aa55
Partition : 4294967295 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sde1.
And the drive seems healthy.
$sudo smartctl -d ata -a /dev/sdd
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.0-51-generic] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Skyhawk
Device Model: ST4000VX007-2DT166
Serial Number: ZDH61N4Z
LU WWN Device Id: 5 000c50 0b4cf0507
Firmware Version: CV11
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5980 rpm
Form Factor: 3.5 inches
Device is: In smartctl database 7.3/5528
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri Jan 3 23:01:40 2025 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 591) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 633) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x50bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 075 064 044 Pre-fail Always - 30305794
3 Spin_Up_Time 0x0003 094 093 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 276
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 095 060 045 Pre-fail Always - 3166340513
9 Power_On_Hours 0x0032 069 069 000 Old_age Always - 27536h+49m+43.964s
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 104
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 099 000 Old_age Always - 7864440
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 081 047 040 Old_age Always - 19 (Min/Max 19/19)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 117
193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2608
194 Temperature_Celsius 0x0022 019 053 000 Old_age Always - 19 (0 6 0 0 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 27376h+00m+21.311s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 247975821685
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 124682775664
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
The above only provides legacy SMART information - try 'smartctl -x' for more
Let me know if you can help. I am very new to this.
Edit: added this fdisk test. I do have another unrelated drive, /dev/sdc.
$ sudo fdisk -l /dev/sd?
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sda: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VX007-2DT1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0384604C-4E8B-4E0A-8423-2139A918120C
Device Start End Sectors Size Type
/dev/sda1 2048 7814035455 7814033408 3.6T Linux filesystem
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VX007-2DT1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C079AF04-F6C8-4FB3-9E12-FEFCC65D008F
Device Start End Sectors Size Type
/dev/sdb1 2048 7814035455 7814033408 3.6T Linux filesystem
Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HGST HDN728080AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5237C016-4DE9-408A-A37B-F1F59F33776E
Device Start End Sectors Size Type
/dev/sdc1 2048 15627233279 15627231232 7.3T Microsoft basic data
Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VX007-2DT1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 56B6E76B-3B41-486B-8857-AD2BEA8D589A
Device Start End Sectors Size Type
/dev/sdd1 2048 7814035455 7814033408 3.6T Linux filesystem
Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VX007-2DT1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B306782C-5C23-4C41-A6B9-79AF1FCC6F0E
Device Start End Sectors Size Type
/dev/sde1 2048 7814035455 7814033408 3.6T Linux filesystem
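One hedged observation from the output above: the surviving superblocks were found on the whole-disk devices /dev/sda and /dev/sdb, while sdd and sde now show only a GPT with a single partition and no md metadata on either the disk or the partition. Comparing metadata at both levels is worth doing before wiping anything; a read-only sketch:
# Summarise md metadata for whole disks and first partitions (read-only)
for d in /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdd1 /dev/sde1; do
    echo "== $d"
    mdadm --examine "$d" 2>&1 | egrep 'Array UUID|Device Role|Events|Update Time'
done
# If the superblocks on sdd/sde really are gone, recovery attempts such as
# re-creating with --assume-clean are sometimes tried, but only on copies or
# overlay devices of the disks - never directly on the originals.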
Scott Mayo
(21 rep)
Jan 4, 2025, 04:32 AM
• Last activity: Jan 10, 2025, 11:07 PM
1
votes
1
answers
51
views
Is software RAID5 created by mdadm in Debian compatible with OpenBSD softraid?
I have created a software RAID5 using mdadm in Debian Linux. Now I want to switch to OpenBSD and am wondering whether I will be able to mount my RAID5 under the new system.
gio
(19 rep)
Jan 3, 2025, 02:08 AM
• Last activity: Jan 3, 2025, 01:23 PM
0
votes
1
answers
55
views
Where is the documentation for /sys/block/md*/md/fail_last_dev
What can we do with this file: /sys/block/md0/md/fail_last_dev?
I cannot find any information about it in the md man page. In the section SYSFS INTERFACE it says: This interface is documented more fully in the file Documentation/md.txt. But I cannot find fail_last_dev there either. It's only in the code.
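Not an authoritative answer, but for orientation: it is a regular md sysfs attribute, so it can at least be read like the others, and in newer kernels the md sysfs documentation lives under Documentation/admin-guide/md.rst; fail_last_dev itself may only be described next to its definition in the source. A quick way to look, assuming a kernel source tree is at hand:
# Read the current value (appears to be a 0/1 boolean)
cat /sys/block/md0/md/fail_last_dev
# Locate the definition and any documentation in the kernel sources
grep -rn fail_last_dev Documentation/ drivers/md/ 2>/dev/null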
ctx
(2782 rep)
Dec 3, 2024, 04:21 PM
• Last activity: Dec 4, 2024, 01:17 AM
1
votes
0
answers
18
views
Not sure if lsblk showing correct partitions after restoring RAID1
One of my disks (nvme0n1) failed, so it was replaced. Now lsblk shows:
nvme0n1 259:0 0 476.9G 0 disk
├─nvme0n1p1 259:5 0 511M 0 part
├─nvme0n1p2 259:6 0 475.9G 0 part
│ └─md2 9:2 0 475.8G 0 raid1 /
└─nvme0n1p3 259:7 0 512M 0 part
nvme1n1 259:1 0 476.9G 0 disk
├─nvme1n1p1 259:2 0 511M 0 part /boot/efi
├─nvme1n1p2 259:3 0 475.9G 0 part
│ └─md2 9:2 0 475.8G 0 raid1 /
└─nvme1n1p3 259:4 0 512M 0 part [SWAP]
But I'm afraid that nvme0n1p3 is not mounted as swap the way nvme1n1p3 is, and the same situation applies to nvme0n1p1.
What I did after replacing the disk is:
sgdisk --backup=nvme1n1.sgdisk /dev/nvme1n1
sgdisk --load-backup=nvme1n1.sgdisk /dev/nvme0n1
sgdisk -G /dev/nvme0n1
mdadm --manage /dev/md2 --add /dev/nvme0n1p2
Is that the correct configuration? If nvme1n1 fails, will the system boot correctly?
webard
(11 rep)
Nov 11, 2024, 02:31 AM
• Last activity: Nov 11, 2024, 02:32 AM
2
votes
0
answers
118
views
Rebuilding RAID1 with Luks after system reinstall
My current Linux Mint 20.3 is soon running out of support, so I figured it's time to install a fresh Linux Mint 22. Currently I have mdadm running two 4TB drives in RAID1 with LUKS. It's been very many years since I set this thing up (and last time, while upgrading to Mint 20, I corrupted my superblock rebuilding after the install...), so I figured that this time I'd make sure I got my to-do list right before I proceed with the operation.
0. First unmount RAID directory
> sudo umount /storage (my mountpoint for /dev/md0)
1. Stop existing RAID Array on source system
> sudo mdadm --stop /dev/md0
(Install new Linux Mint)
2. Assemble RAID1 back
> sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb
3. Unlock existing devices
> sudo cryptsetup luksOpen /dev/md1 raidcrypt
4. add /etc/fstab mount
> /dev/mapper/raidcrypt /storage ext4 defaults 0 2
Am I missing something critical? Should this retrieve my RAID1 setup on a new install? Obviously I will be installing the operating system on a different dedicated SSD drive.
If I understood, I definitely need to avoid 'mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb' which would create a new superblock, correct?
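Not part of the original checklist, but a hedged sketch of the steps usually added on the fresh install so the array and the LUKS mapping come up by themselves (device names and the raidcrypt mapping follow the list above; the crypttab line is an assumption):
# On the new install, after "apt install mdadm" the array is normally found with
mdadm --assemble --scan          # safer than naming /dev/sda /dev/sdb by hand
# Persist the array definition and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# Optional: unlock at boot via /etc/crypttab (prompts for the passphrase), e.g.
#   raidcrypt  /dev/md0  none  luks
# and then the fstab line from step 4 mounts /storage automatically.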
Rattletrap K
(31 rep)
Oct 10, 2024, 01:42 PM
• Last activity: Nov 2, 2024, 07:50 AM
1
votes
1
answers
106
views
Failed to umount directory from my RAID1 device (device busy)
I have a system that has 1x SSD and 2x 4TB HDDs working under software RAID1. (I'm currently upgrading from Linux Mint 20 to 22; there is another topic regarding that issue.)
On my RAID1 I have a LUKS-encrypted filesystem, which is mounted under /storage.
I'm trying to unmount /storage with sudo umount /storage, but I receive an error: /storage: target is busy.
I also tried:
- stopping mdadm first with mdadm --stop /dev/md0, but it returns: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?
- closing the encrypted volume with cryptsetup luksClose raidcrypt, but it returns: raidcrypt is still in use.
- stopping my docker container and removing it from starting automatically during boot (| grep /storage showed that docker was running before I stopped it, but stopping it doesn't help with my initial problem)
- stopping my pCloud client from mounting during boot
- removing my swap from /etc/fstab, which was mounted as an 8GB swapfile under /storage/swap/.swapfile
I'm a bit worried about using -l here. I'm wondering if I should just remove the line /dev/mapper/raidcrypt /storage ext4 defaults 0 2 from my /etc/fstab.
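Not from the post, but the usual first step for a busy mount point is to list exactly what still holds it open, and to make sure the swapfile on it is deactivated (removing it from /etc/fstab alone does not turn it off):
# Show processes with open files, working directories or mmaps under /storage
fuser -vm /storage
lsof +f -- /storage
# Deactivate the swapfile that lives on the volume, then check it is gone
swapoff /storage/swap/.swapfile
cat /proc/swaps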
Rattletrap K
(31 rep)
Oct 31, 2024, 11:03 PM
• Last activity: Nov 1, 2024, 12:24 AM
0
votes
0
answers
66
views
How to forcibly remove an encrypted (LUKS) soft RAID in Linux?
I built a soft RAID0 with two SSDs (Samsung 980 Pro) and encrypted it with LUKS, since I had only one PC a few years ago.
Now I have bought a server, and I directly pulled one SSD out of the old PC (without removing it from the soft RAID array) and attached it to the server.
But the SSD on the new server can **not** be re-partitioned, and the other SSD in the old PC also cannot be removed from the array, since the array status is inactive.
I even can **not** remove the whole RAID device from Linux, since it cannot be opened with cryptsetup open.
It is a tedious job to keep moving the SSD back and forth between the computers, as the SSD is installed underneath many other devices.
Is there a way to forcibly re-initialize these two SSDs, each on its own computer, regardless of what they store?
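Not from the post, but if the goal is simply to reuse each SSD as a blank disk regardless of the old array and its LUKS contents, a destructive sketch along these lines is the usual route (the NVMe device name is a placeholder; double-check with lsblk first):
# Stop whatever half-assembled, inactive array the kernel created for the member
cat /proc/mdstat                  # note the real mdX name
mdadm --stop /dev/md127
# Wipe the md superblock and every other signature (LUKS header, GPT, ...)
mdadm --zero-superblock /dev/nvme0n1
wipefs -a /dev/nvme0n1
# Optionally overwrite the first few MiB for good measure, then repartition
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=16
# Repeat the same on the SSD that stayed in the old PC.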
Thanks!
Leon
(203 rep)
Oct 13, 2024, 02:52 PM
• Last activity: Oct 13, 2024, 05:52 PM
0
votes
0
answers
37
views
RAID10: Help, how did I configure this thing? Data is there, but my knowledge is not
First off, I'm certainly a moron. I created a RAID 10 array a number of months ago and I hadn't rebooted my computer to ensure that it would start up properly. At this point, I don't even remember what tool I used to create it, but I'm pretty sure it was command line, possibly lvm. It used the sda, sdb, sdc, and sdd drives. I have confirmed that I can read from each drive with the head command. I'm getting the error "not enough operational mirrors" with mdadm.
When I run lsblk it shows an md1 partition for sdc and sdd. Both sda and sdb have the exact same PTUUID when I run blkid and have PTTYPE="gpt". sdc and sdd have the same label, the same UUID (but different from sda and sdb), different UUID_SUB labels, and both have TYPE="linux_raid_member". Also from lsblk, sda, sdb, sdc and sdd all report 10.9T, but the md1 partition on sdc and sdd along with a number of other block devices sde-sdk report 0B. Nothing is marked read-only. Each drive is actually 3TB, not the 12 reported.
What I'm trying to do is figure out how to get everything to run as one logical device again like it was. I suspect that I just need to tell my computer how I previously configured this and put it in the right configuration file. My computer is booting in emergency mode and can't get past this because that RAID device, while not essential for booting, is essential for all the user accounts.
Help appreciated! This is my first question here since I'm more of a software person, so I apologize if I am not giving the right info. I can't easily get the printouts of running commands though because the computer with the issue is pretty broken until this gets fixed.
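Not part of the original question, but the usual way to reconstruct how such an array was built is to read the metadata from the members themselves; everything below is read-only apart from the final assemble attempt:
# What the kernel sees right now
cat /proc/mdstat
# Read the md superblocks from the suspected members
mdadm --examine /dev/sd[abcd]
mdadm --examine --scan --verbose
# If LVM was involved, these show any volume groups/logical volumes it recorded
pvs; vgs; lvs
# Once the members are clear, try a normal (non-forced) assemble first
mdadm --assemble --scan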
Mouna Apperson
(101 rep)
Oct 10, 2024, 05:23 AM