
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
3086 views
Convert non-RAID disk with data into RAID 1 disk (hardware controller)
I moved away from software RAID due to all the hassle it brings. After an OS reinstall, I am left with only one drive. I ordered a hardware RAID controller today, and when the controller arrives, I'd like to plug in the identical drives into the RAID controller and set up RAID 1 WITHOUT losing any data or needing to reinstall the OS (Debian Jessie x86_64). Output of lsblk:
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                 8:0    0 931.5G  0 disk
├─sda1              8:1    0   953M  0 part /boot
├─sda2              8:2    0  29.8G  0 part [SWAP]
└─sda3              8:3    0 900.8G  0 part
  ├─vgmain-lvroot 254:0    0 621.4G  0 lvm  /
  ├─vgmain-lvmail 254:1    0  93.1G  0 lvm  /var/vmail
  ├─vgmain-lvhome 254:2    0  93.1G  0 lvm  /home
  ├─vgmain-lvtmp  254:3    0  18.6G  0 lvm  /tmp
  └─vgmain-lvvar  254:4    0  74.5G  0 lvm  /var
sdb                 8:16   0 931.5G  0 disk
Can I do this somehow by dd'ing the existing data to the clean drive while having it plugged into the RAID controller and set up as RAID 1? To clarify, let's say sda is the drive with my data and sdb is the drive which is not in use:
* Plug sda into the motherboard SATA controller
* Plug sdb into the RAID controller
* Define sdb as a RAID 1 drive
* Boot from a live CD and dd the contents of sda → sdb
* Plug sda into the RAID controller and define it as RAID 1
* The RAID controller syncs the drives (copies sdb over to sda) (?)
* Boot without problems?
Will dd copy the drive in a way that the MBR/partitions/etc. are preserved? Am I going about this in a completely stupid way? I contacted the RAID controller manufacturer and asked if it has some kind of utility to convert a drive into 2 drives in RAID 1, but they said no. If it's relevant in any way, the specific controller is a HighPoint RocketRAID 620 PCI-Express 2.0 x1 SATA III RAID card.
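For reference, a minimal sketch of the clone step described in the list above, assuming the source disk still appears as /dev/sda and the RAID-attached target is exposed to the live system as /dev/sdb (device names are assumptions; verify with lsblk first). A whole-disk dd copies the MBR, partition table and filesystems byte for byte:
dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress   # whole-disk clone, destructive to sdb
sync                                                                  # flush buffers before touching the disks again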
Axel Latvala (109 rep)
Jun 13, 2016, 04:12 PM • Last activity: Jul 25, 2025, 11:04 PM
2 votes
1 answer
2611 views
grub2 lvm2 raid1 /boot
Is it possible to boot from a system where /boot is located within an lvm2 raid1 partition? I've tried a variety of configurations, but I have yet to discover how to do it. I am using two 2TB disks. Each disk contains a GPT partition table with a 1MB bios_grub partition and a 2TB partition. The large 2TB partition on each disk is allocated as a physical volume to lvm2. I am using Ubuntu 14.04 LTS as my OS. Initially I configured Ubuntu with two 5GB logical volumes, the first one for / and the second for /home. The Ubuntu setup did not have options to configure these logical volumes with a segment type of raid1, so I just installed it with what it defaulted to, which was linear. This worked fine and the system booted without any issues. I then rebooted into a live CD environment and converted the two logical volumes to raid1 with the following commands:
lvconvert --type raid1 -m1 /dev/vg_storage/os_root
lvconvert --type raid1 -m1 /dev/vg_storage/os_home
These operations completed without any errors. I then monitored the progress of lvm2 mirroring both of these logical volumes until Copy% was 100%:
root@ubuntu:~# lvs
  LV      VG         Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  os_home vg_storage rwi-a-r-- 4.66g                             100.00
  os_root vg_storage rwi-a-r-- 4.66g                             100.00
Now the system fails to boot. I get the following error immediately after the BIOS attempts to boot from the first of the two disks, and I am left with a grub rescue prompt:
error: disk 'lvmid/L1VIor-PKIM-mtCO-TUQ2-iWe2-ndnY-df2wOu/yCDXMZ-2q4X-jbJJ-qZhI-sHNL-hrjw-Q5bg6v' not found.
Entering rescue mode...
grub rescue>
I'm thinking there is a grub2 module that isn't being loaded, one that supports the raid1 functionality of lvm2. Either that or such support does not yet exist within grub2.
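One thing that is often suggested from a live CD/chroot in this situation is reinstalling GRUB with the LVM module preloaded so it can parse the converted volumes at boot. A hedged sketch only (device names are assumptions, and whether GRUB as shipped with 14.04 actually understands lvm2 raid1 segments is exactly what the question is asking):
grub-install --modules="lvm part_gpt" /dev/sda   # run from the chrooted installed system
grub-install --modules="lvm part_gpt" /dev/sdb   # install to both disks of the mirror
update-grub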
Davidian1024 (21 rep)
Feb 27, 2015, 03:58 PM • Last activity: Jun 19, 2025, 01:11 PM
0 votes
0 answers
17 views
mdadm: Cannot get array info for /dev/md127
I have one problem. I created a RAID1 array with two disks, then I simulated a failure of one of the disks (sdb1). I restarted the VM with one disk working in the RAID array. Then I tried to add a fresh new disk to the array, but I can't, because I get the error (mdadm: Cannot get array info for /dev/md127). The new disk is already formatted. How do I add that new disk? Everything I've tried doesn't work.
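A minimal sketch of the sequence that is usually tried when an array shows up inactive after losing a member (device names are assumptions; check /proc/mdstat and mdadm --examine output first):
cat /proc/mdstat                              # is md127 listed as inactive?
mdadm --stop /dev/md127                       # tear down the half-assembled array
mdadm --assemble --run /dev/md127 /dev/sda1   # start it degraded from the surviving member
mdadm --manage /dev/md127 --add /dev/sdc1     # then add the replacement partition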
linuxenjoyer32123 (1 rep)
May 30, 2025, 04:23 PM • Last activity: May 30, 2025, 04:32 PM
2 votes
1 answer
6817 views
Mounting LVM partition on a RAID1 disk from a bricked QNAP NAS
I have a QNAP HS-251+ NAS with two 3TB WD Red NAS hard drives in a RAID1 configuration. I'm quite acquainted with Linux and CLI tools, but I have never used mdadm or LVM, so despite knowing (now) that QNAP uses them, I have no expertise, or even knowledge, about how QNAP builds the RAID1 assembly. Some weeks ago, during a firmware update, the NAS stopped booting. I'm on my way to solving that problem, but I would like to recover my data before attempting to plug my disks back into the NAS (QNAP HelpDesk has been basically useless). I expected to be able to mount the partition easily by simply plugging one of the disks into my laptop (Kubuntu), but that was when I discovered that things were more complex than that. This is the relevant information I managed to extract:
xabi@XV-XPS15:~$ sudo dmesg
[...]
[17272.730964] usb 3-1: new high-speed USB device number 3 using xhci_hcd
[17272.884120] usb 3-1: New USB device found, idVendor=059b, idProduct=0475, bcdDevice= 0.00
[17272.884138] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[17272.884144] usb 3-1: Product: USB to ATA/ATAPI Bridge
[17272.884149] usb 3-1: Manufacturer: JMicron
[17272.884153] usb 3-1: SerialNumber: DCC4108FFFFF
[17272.891498] usb-storage 3-1:1.0: USB Mass Storage device detected
[17272.892117] scsi host6: usb-storage 3-1:1.0
[17273.907765] scsi 6:0:0:0: Direct-Access     WDC WD30 EFRX-68EUZN0          PQ: 0 ANSI: 2 CCS
[17273.908085] sd 6:0:0:0: Attached scsi generic sg2 type 0
[17273.908261] sd 6:0:0:0: [sdc] 1565565872 512-byte logical blocks: (802 GB/747 GiB)
[17273.909041] sd 6:0:0:0: [sdc] Write Protect is off
[17273.909046] sd 6:0:0:0: [sdc] Mode Sense: 34 00 00 00
[17273.909789] sd 6:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[17273.976961] sd 6:0:0:0: [sdc] Attached SCSI disk
xabi@XV-XPS15:~$
xabi@XV-XPS15:~$ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 746,52 GiB, 801569726464 bytes, 1565565872 sectors
Disk model: EFRX-68EUZN0    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Dispositivo Boot Start        End    Sectors Size Id Tipo
/dev/sdc1            1 4294967295 4294967295   2T ee GPT
xabi@XV-XPS15:~$ sudo mount /dev/sdc1 /mnt/
mount: /mnt: special device /dev/sdc1 does not exist.
xabi@XV-XPS15:~$
xabi@XV-XPS15:~$ sudo parted -l
[...]
Error: Invalid argument during seek for read on /dev/sdc
Retry/Ignore/Cancel? i                                                    
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used.
OK/Cancel? ok                                                             
Model: WDC WD30 EFRX-68EUZN0 (scsi)
Disk /dev/sdc: 802GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags: 

xabi@XV-XPS15:~$
xabi@XV-XPS15:~$ sudo lsblk 
[...]
sdc      8:32   0 746,5G  0 disk 
xabi@XV-XPS15:~$
Using the free version of a commercial tool (UFS RAID Recovery) I've obtained more detailed information about the partitions on the disk, and even recovered the LVM backup config:
# Generated by LVM2 version 2.02.138(2)-git (2015-12-14): Fri Feb 11 09:43:53 2022

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing '/sbin/vgchange vg288 --addtag cacheVersion:3'"

creation_host = "XV-NAS"	# Linux XV-NAS 5.10.60-qnap #1 SMP Tue Dec 21 10:57:31 CST 2021 x86_64
creation_time = 1644569033	# Fri Feb 11 09:43:53 2022

vg288 {
	id = "g0F3zh-N3aN-1vEQ-qYU4-6Jhv-xgLC-d4hU43"
	seqno = 199
	format = "lvm2"			# informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	tags = ["PV:DRBD", "PoolType:Static", "StaticPoolRev:2", "cacheVersion:3"]
	extent_size = 8192		# 4 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "ZcFzMw-ZgzA-fOr0-FGgI-E3hc-uwKu-GzWSKn"
			device = "/dev/drbd1"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 5840621112	# 2.71975 Terabytes
			pe_start = 2048
			pe_count = 712966	# 2.71975 Terabytes
		}
	}

	logical_volumes {

		lv544 {
			id = "NRzjb8-Dgw4-oHSj-rjtF-2e8H-KQZ2-YPRre3"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_host = "NAS0D80FB"
			creation_time = 1494716166	# 2017-05-14 00:56:06 +0200
			read_ahead = 8192
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 7129	# 27.8477 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 0
				]
			}
		}

		lv1 {
			id = "1ND0gN-Lcgx-ALcO-Ui17-hzzg-2soX-LGr1rl"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_host = "NAS0D80FB"
			creation_time = 1494716174	# 2017-05-14 00:56:14 +0200
			read_ahead = 8192
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 705837	# 2.69255 Terabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 7129
				]
			}
		}
	}
}
Despite that, I haven't been able to mount the relevant volume (vg288-lv1). As I said, I'm not familiar with LVM; there are some inconsistencies with what I've seen in other posts (no sdc1 device, parted can't find the partitions, the reported size doesn't match the real one...) and I'm not sure how to start solving this methodically. I've tried using the backup config as a template to restore the LVM volumes and failed, but, as I said, I'm not sure that I'm doing things correctly, so I feel I need some guidance on where to start, as the results of my tests so far will only add confusion. Thanks.
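For orientation, a hedged sketch of the commands usually used to expose QNAP's md + LVM stack once a member disk is attached to another machine (VG/LV names are taken from the recovered backup config above; whether this works here likely depends on the suspiciously truncated size the USB bridge is reporting):
mdadm --assemble --scan          # try to assemble the QNAP data array from the member partitions
cat /proc/mdstat                 # confirm what came up
pvscan && vgscan                 # let LVM look for the PV (on the NAS itself it sits on /dev/drbd1)
vgchange -ay vg288               # activate the volume group
mount -o ro /dev/vg288/lv1 /mnt  # mount the data volume read-only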
Xabier Villar (21 rep)
Mar 28, 2022, 05:48 PM • Last activity: May 25, 2025, 12:03 AM
0 votes
0 answers
47 views
Total newbie stuff-up with LVM and RAID1
Many years ago, I created a RAID1 LVM data disk. During a re-installation, I deleted the volume group under cockatoo--vg-data associated with sdb1 and sdc1 and just have the partitions left.
sudo vgck 
  WARNING: wrong checksum 0 in mda header on /dev/sdb1 at 4096
  WARNING: wrong magic number in mda header on /dev/sdb1 at 4096
  WARNING: wrong version 0 in mda header on /dev/sdb1 at 4096
  WARNING: wrong start sector 0 in mda header on /dev/sdb1 at 4096
  WARNING: bad metadata header on /dev/sdb1 at 4096.
  WARNING: scanning /dev/sdb1 mda1 failed to read metadata summary.
  WARNING: repair VG metadata on /dev/sdb1 with vgck --updatemetadata.
  WARNING: scan failed to get metadata summary from /dev/sdb1 PVID M79lVqgFLXRVIH2I37SFvAZuVn2WK1an
  WARNING: wrong checksum 0 in mda header on /dev/sdc1 at 4096
  WARNING: wrong magic number in mda header on /dev/sdc1 at 4096
  WARNING: wrong version 0 in mda header on /dev/sdc1 at 4096
  WARNING: wrong start sector 0 in mda header on /dev/sdc1 at 4096
  WARNING: bad metadata header on /dev/sdc1 at 4096.
  WARNING: scanning /dev/sdc1 mda1 failed to read metadata summary.
  WARNING: repair VG metadata on /dev/sdc1 with vgck --updatemetadata.
  WARNING: scan failed to get metadata summary from /dev/sdc1 PVID 8c8tc8YQp3usUGAKZjJIIBvq8MpJa8Pn
sudo blkid
/dev/mapper/cockatoo--vg-swap_1: UUID="aca2a570-a1c3-43e7-90d0-62c962b039b1" TYPE="swap"
/dev/sdb1: UUID="M79lVq-gFLX-RVIH-2I37-SFvA-ZuVn-2WK1an" TYPE="LVM2_member" PARTUUID="0001da39-01"
/dev/mapper/cockatoo--vg-root: UUID="c3caa8d3-144d-4931-8a46-ddc9e0e1bc17" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdc1: UUID="8c8tc8-YQp3-usUG-AKZj-JIIB-vq8M-pJa8Pn" TYPE="LVM2_member" PARTUUID="00028915-01"
/dev/sda5: UUID="oH6Wrt-Weui-2dgF-kmc0-cdjC-mN9k-RJc74H" TYPE="LVM2_member" PARTUUID="a555c2a7-05"
/dev/sda1: UUID="de5546d8-f3cd-43d8-bd16-2cd3e112d4eb" BLOCK_SIZE="1024" TYPE="ext2" PARTUUID="a555c2a7-01"
sudo head -c 1M /dev/sdb1 | strings -w

LABELONE
LVM2 001M79lVqgFLXRVIH2I37SFvAZuVn2WK1an
data_vol {
id = "Az8yjq-yS76-8D07-9dGe-Cf6x-gc9l-ci2roA"
seqno = 1
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "M79lVq-gFLX-RVIH-2I37-SFvA-ZuVn-2WK1an"
device = "/dev/sdb1"

status = ["ALLOCATABLE"]
flags = []
dev_size = 3907026944
pe_start = 2048
pe_count = 476931
}

pv1 {
id = "8c8tc8-YQp3-usUG-AKZj-JIIB-vq8M-pJa8Pn"
device = "/dev/sdc1"

status = ["ALLOCATABLE"]
flags = []
dev_size = 3907026944
pe_start = 2048
pe_count = 476931
}
}

}
# Generated by LVM2 version 2.02.104(2) (2013-11-13): Mon Mar  3 21:53:40 2014

contents = "Text Format Volume Group"
version = 1

description = ""

creation_host = "cockatoo"	# Linux cockatoo 3.13-1-amd64 #1 SMP Debian 3.13.4-1 (2014-02-22) x86_64
creation_time = 1393883620	# Mon Mar  3 21:53:40 2014


bitm
sudo head -c 4M /dev/sdc1 | strings -w
LABELONE
LVM2 0018c8tc8YQp3usUGAKZjJIIBvq8MpJa8Pn
data_vol {
id = "Az8yjq-yS76-8D07-9dGe-Cf6x-gc9l-ci2roA"
seqno = 1
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "M79lVq-gFLX-RVIH-2I37-SFvA-ZuVn-2WK1an"
device = "/dev/sdb1"

status = ["ALLOCATABLE"]
flags = []
dev_size = 3907026944
pe_start = 2048
pe_count = 476931
}

pv1 {
id = "8c8tc8-YQp3-usUG-AKZj-JIIB-vq8M-pJa8Pn"
device = "/dev/sdc1"

status = ["ALLOCATABLE"]
flags = []
dev_size = 3907026944
pe_start = 2048
pe_count = 476931
}
}

}
# Generated by LVM2 version 2.02.104(2) (2013-11-13): Mon Mar  3 21:53:40 2014

contents = "Text Format Volume Group"
version = 1

description = ""

creation_host = "cockatoo"	# Linux cockatoo 3.13-1-amd64 #1 SMP Debian 3.13.4-1 (2014-02-22) x86_64
creation_time = 1393883620	# Mon Mar  3 21:53:40 2014


bitm
How can I recover any data from these disks? Any help would be greatly appreciated. Many thanks.
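A hedged sketch of the recovery path usually suggested when the old LVM metadata is still visible on the PVs (as the strings output above shows): re-create the PVs with their original UUIDs from a saved metadata file, then restore and activate the VG. The metadata file path is an assumption (the recovered text saved to /root/data_vol.meta), the UUIDs come from the blkid output above, and this should only be attempted on images or overlays of the disks until you are confident it is correct:
pvcreate --uuid "M79lVq-gFLX-RVIH-2I37-SFvA-ZuVn-2WK1an" --restorefile /root/data_vol.meta /dev/sdb1
pvcreate --uuid "8c8tc8-YQp3-usUG-AKZj-JIIB-vq8M-pJa8Pn" --restorefile /root/data_vol.meta /dev/sdc1
vgcfgrestore -f /root/data_vol.meta data_vol
vgchange -ay data_vol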
Testy Testy (1 rep)
Apr 15, 2025, 11:43 AM • Last activity: Apr 15, 2025, 12:43 PM
1 vote
0 answers
181 views
How can I remove raid from my system?
I want to remove raid from my system as I am low on storage and I want to recover the second disk. How can I recover the second disk, I tried but to no avail, here is my current state : root@miirabox ~ # cat /proc/mdstat Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md2 : active raid1 nvme1n1p3 nvme0n1p3 965467456 blocks super 1.2 [2/2] [UU] bitmap: 8/8 pages [32KB], 65536KB chunk md0 : active raid1 nvme1n1p1 nvme0n1p1 33520640 blocks super 1.2 [2/2] [UU] md1 : active raid1 nvme0n1p2(F) nvme1n1p2 1046528 blocks super 1.2 [2/1] [_U] unused devices: root@miirabox ~ # sudo mdadm --detail --scan ARRAY /dev/md/1 metadata=1.2 name=rescue:1 UUID=36e3a554:de955adc:98504c1a:836763fb ARRAY /dev/md/0 metadata=1.2 name=rescue:0 UUID=b7eddc10:a40cc141:c349f876:39fa07d2 ARRAY /dev/md/2 metadata=1.2 name=rescue:2 UUID=2eafee34:c51da1e0:860a4552:580258eb root@miirabox ~ # mdadm -E /dev/nvme0n1p1 /dev/nvme0n1p1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b7eddc10:a40cc141:c349f876:39fa07d2 Name : rescue:0 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 67041280 sectors (31.97 GiB 34.33 GB) Array Size : 33520640 KiB (31.97 GiB 34.33 GB) Data Offset : 67584 sectors Super Offset : 8 sectors Unused Space : before=67432 sectors, after=0 sectors State : clean Device UUID : 5f8a86c6:80e71724:98ee2d01:8a295f5a Update Time : Thu Sep 19 19:31:55 2024 Bad Block Log : 512 entries available at offset 136 sectors Checksum : f2954bfe - correct Events : 60 Device Role : Active device 0 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) root@miirabox ~ # mdadm -E /dev/nvme0n1p2 /dev/nvme0n1p2: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 36e3a554:de955adc:98504c1a:836763fb Name : rescue:1 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB) Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB) Data Offset : 4096 sectors Super Offset : 8 sectors Unused Space : before=4016 sectors, after=0 sectors State : clean Device UUID : 8d8e044d:543e1869:9cd0c1ee:2b644e57 Update Time : Thu Sep 19 19:07:25 2024 Bad Block Log : 512 entries available at offset 16 sectors Checksum : 4ce9a898 - correct Events : 139 Device Role : Active device 0 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) root@miirabox ~ # mdadm -E /dev/nvme0n1p3 /dev/nvme0n1p3: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 2eafee34:c51da1e0:860a4552:580258eb Name : rescue:2 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 1930934960 sectors (920.74 GiB 988.64 GB) Array Size : 965467456 KiB (920.74 GiB 988.64 GB) Used Dev Size : 1930934912 sectors (920.74 GiB 988.64 GB) Data Offset : 264192 sectors Super Offset : 8 sectors Unused Space : before=264112 sectors, after=48 sectors State : clean Device UUID : 68758969:5218958f:9c991c6b:12bfdca1 Internal Bitmap : 8 sectors from superblock Update Time : Thu Sep 19 19:32:42 2024 Bad Block Log : 512 entries available at offset 16 sectors Checksum : 4a44ff36 - correct Events : 13984 Device Role : Active device 0 Array State : AA ('A' == active, '.' 
== missing, 'R' == replacing) root@miirabox ~ # mdadm -E /dev/nvme1n1p1 /dev/nvme1n1p1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b7eddc10:a40cc141:c349f876:39fa07d2 Name : rescue:0 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 67041280 sectors (31.97 GiB 34.33 GB) Array Size : 33520640 KiB (31.97 GiB 34.33 GB) Data Offset : 67584 sectors Super Offset : 8 sectors Unused Space : before=67432 sectors, after=0 sectors State : clean Device UUID : 0dfdf4af:d88b2bf1:0764dcbd:1179639e Update Time : Thu Sep 19 19:33:07 2024 Bad Block Log : 512 entries available at offset 136 sectors Checksum : a9ca2845 - correct Events : 60 Device Role : Active device 1 Array State : AA ('A' == active, '.' == missing, 'R' == replacing) root@miirabox ~ # mdadm -E /dev/nvme1n1p2 /dev/nvme1n1p2: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 36e3a554:de955adc:98504c1a:836763fb Name : rescue:1 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB) Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB) Data Offset : 4096 sectors Super Offset : 8 sectors Unused Space : before=4016 sectors, after=0 sectors State : clean Device UUID : 228202fa:0491e478:b0a0213b:0484d5e3 Update Time : Thu Sep 19 19:24:14 2024 Bad Block Log : 512 entries available at offset 16 sectors Checksum : e29be2bc - correct Events : 141 Device Role : Active device 1 Array State : .A ('A' == active, '.' == missing, 'R' == replacing) root@miirabox ~ # mdadm -E /dev/nvme1n1p3 /dev/nvme1n1p3: Magic : a92b4efc Version : 1.2 Feature Map : 0x1 Array UUID : 2eafee34:c51da1e0:860a4552:580258eb Name : rescue:2 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 1930934960 sectors (920.74 GiB 988.64 GB) Array Size : 965467456 KiB (920.74 GiB 988.64 GB) Used Dev Size : 1930934912 sectors (920.74 GiB 988.64 GB) Data Offset : 264192 sectors Super Offset : 8 sectors Unused Space : before=264112 sectors, after=48 sectors State : clean Device UUID : 431be888:cb298461:ba2a0000:4b5294fb Internal Bitmap : 8 sectors from superblock Update Time : Thu Sep 19 19:33:21 2024 Bad Block Log : 512 entries available at offset 16 sectors Checksum : 2a2ddb09 - correct Events : 13984 Device Role : Active device 1 Array State : AA ('A' == active, '.' 
== missing, 'R' == replacing) root@miirabox ~ # mdadm -D /dev/md0 /dev/md0: Version : 1.2 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Array Size : 33520640 (31.97 GiB 34.33 GB) Used Dev Size : 33520640 (31.97 GiB 34.33 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Thu Sep 19 19:34:08 2024 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Consistency Policy : resync Name : rescue:0 UUID : b7eddc10:a40cc141:c349f876:39fa07d2 Events : 60 Number Major Minor RaidDevice State 0 259 1 0 active sync /dev/nvme0n1p1 1 259 5 1 active sync /dev/nvme1n1p1 root@miirabox ~ # mdadm -D /dev/md1 /dev/md1: Version : 1.2 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Array Size : 1046528 (1022.00 MiB 1071.64 MB) Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Thu Sep 19 19:24:14 2024 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 Consistency Policy : resync Name : rescue:1 UUID : 36e3a554:de955adc:98504c1a:836763fb Events : 141 Number Major Minor RaidDevice State - 0 0 0 removed 1 259 6 1 active sync /dev/nvme1n1p2 0 259 2 - faulty /dev/nvme0n1p2 root@miirabox ~ # mdadm -D /dev/md2 /dev/md2: Version : 1.2 Creation Time : Sun Sep 10 16:52:20 2023 Raid Level : raid1 Array Size : 965467456 (920.74 GiB 988.64 GB) Used Dev Size : 965467456 (920.74 GiB 988.64 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Thu Sep 19 19:34:46 2024 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Consistency Policy : bitmap Name : rescue:2 UUID : 2eafee34:c51da1e0:860a4552:580258eb Events : 13984 Number Major Minor RaidDevice State 0 259 3 0 active sync /dev/nvme0n1p3 1 259 7 1 active sync /dev/nvme1n1p3 root@miirabox ~ # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS loop0 7:0 0 4K 1 loop /snap/bare/5 loop2 7:2 0 74.3M 1 loop /snap/core22/1586 loop3 7:3 0 40.4M 1 loop loop4 7:4 0 269.8M 1 loop /snap/firefox/4793 loop5 7:5 0 74.3M 1 loop /snap/core22/1612 loop6 7:6 0 91.7M 1 loop /snap/gtk-common-themes/1535 loop8 7:8 0 38.8M 1 loop /snap/snapd/21759 loop9 7:9 0 271.2M 1 loop /snap/firefox/4848 loop10 7:10 0 504.2M 1 loop /snap/gnome-42-2204/172 loop12 7:12 0 505.1M 1 loop /snap/gnome-42-2204/176 loop13 7:13 0 38.7M 1 loop /snap/snapd/21465 nvme0n1 259:0 0 953.9G 0 disk ├─nvme0n1p1 259:1 0 32G 0 part │ └─md0 9:0 0 32G 0 raid1 [SWAP] ├─nvme0n1p2 259:2 0 1G 0 part │ └─md1 9:1 0 1022M 0 raid1 └─nvme0n1p3 259:3 0 920.9G 0 part └─md2 9:2 0 920.7G 0 raid1 / nvme1n1 259:4 0 953.9G 0 disk ├─nvme1n1p1 259:5 0 32G 0 part │ └─md0 9:0 0 32G 0 raid1 [SWAP] ├─nvme1n1p2 259:6 0 1G 0 part │ └─md1 9:1 0 1022M 0 raid1 └─nvme1n1p3 259:7 0 920.9G 0 part └─md2 9:2 0 920.7G 0 raid1 / root@miirabox ~ # cat /etc/fstab proc /proc proc defaults 0 0 # /dev/md/0 UUID=e9dddf2b-f061-403e-a12f-d98915569492 none swap sw 0 0 # /dev/md/1 UUID=d32210de-6eb0-4459-85a7-6665294131ee /boot ext3 defaults 0 0 # /dev/md/2 UUID=7abe3389-fe7d-4024-a57e-e490f5e04880 / ext4 defaults 0 0 This is what I managed to do : root@miirabox ~ # df -h df: /run/user/1000/gvfs: Transport endpoint is not connected Filesystem Size Used Avail Use% Mounted on tmpfs 6.3G 5.7M 6.3G 1% /run /dev/md2 906G 860G 0 100% / tmpfs 32G 0 32G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock /dev/md1 989M 271M 667M 29% /boot tmpfs 6.3G 132K 6.3G 1% 
/run/user/134 tmpfs 32G 648K 32G 1% /run/qemu tmpfs 6.3G 244K 6.3G 1% /run/user/1000 tmpfs 6.3G 116K 6.3G 1% /run/user/140 root@miirabox ~ # cat cat /proc/mdstat cat: cat: No such file or directory Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md2 : active raid1 nvme1n1p3 nvme0n1p3 965467456 blocks super 1.2 [2/2] [UU] bitmap: 8/8 pages [32KB], 65536KB chunk md0 : active raid1 nvme1n1p1 nvme0n1p1 33520640 blocks super 1.2 [2/2] [UU] md1 : active raid1 nvme0n1p2 nvme1n1p2 1046528 blocks super 1.2 [2/2] [UU] root@miirabox ~ # umount /dev/md1 root@miirabox ~ # umount /dev/md2 root@miirabox ~ # umount /dev/md0 umount: /dev/md0: not mounted. root@miirabox ~ # mdadm --fail /dev/md1 /dev/nvme0n1p2 mdadm: set /dev/nvme0n1p2 faulty in /dev/md1 root@miirabox ~ # mdadm --remove /dev/md1 root@miirabox ~ # mdadm --fail /dev/md1 /dev/nvme1n1p2 mdadm: set device faulty failed for /dev/nvme1n1p2: Device or resource busy root@miirabox ~ # sudo mdadm --stop /dev/md1 mdadm: Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group? root@miirabox ~ # sudo vgdisplay root@miirabox ~ # lvdisplay I was following a guide and could not proceed. Please do not hesitate if you want more details. Thanks in advance. EDIT : I apologise I was not very clear, it's that I have 2TB and my system is only using 1TB (os+data) and the rest is used by raid. I just want to remove raid and recover the second 1TB so I can get the 2TB.
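For context, a hedged sketch of how a two-disk RAID1 is usually shrunk to a single member so the other partition can be reused (array and device names taken from the output above; doing this to the array holding / is risky and a backup is assumed):
mdadm /dev/md2 --fail /dev/nvme1n1p3             # drop one leg of the mirror
mdadm /dev/md2 --remove /dev/nvme1n1p3
mdadm --grow /dev/md2 --raid-devices=1 --force   # tell md the array now has a single member
mdadm --zero-superblock /dev/nvme1n1p3           # free the partition for other use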
Miira ben sghaier (111 rep)
Sep 19, 2024, 07:45 PM • Last activity: Mar 24, 2025, 08:33 AM
1 vote
1 answer
672 views
Recover data from RAID1 with one disk in rebuilding state
I am attempting to recover data on an mdadm RAID1 array with one HDD that suffered a mechanical failure and another that is stuck in a rebuilding state. I have two 2TB HDDs that were installed in a machine running under an mdadm RAID1 array named /dev/md1. sda1 and sdb1 were both part of the array. sda had a mechanical failure and has been replaced with a new HDD of the same capacity, on which a new partition of the same size was created. Upon attempting to add sda1 to the array I received the following error:
sudo mdadm /dev/md1 --manage --add /dev/sda1
mdadm: cannot load array metadata from /dev/md1
What I've tried
---------------
I've been following this guide (https://ahelpme.com/linux/recovering-md-array-and-mdadm-cannot-get-array-info-for-dev-md0/) to attempt to 'activate' the array. These are the steps of the guide:
1. Remove ALL current configuration by issuing multiple stop commands with mdadm; no inactive raids or any raids should be reported in "/proc/mdstat".
2. Rename the mdadm configuration files in /etc/mdadm/mdadm.conf.
3. Rescan for MD devices with mdadm. mdadm will load the configuration from your disks.
4. Add the missing partitions to your software raid devices.
I've executed the first two steps. On running step 3, the raid was discovered but, unlike in the guide, it seems sdb1 is in a rebuilding state and the array cannot be started because of it. I'm reluctant to try anything involving --force since I'm unsure as to the exact state of the data and, unfortunately, the entirety of the data is very precious.
My questions
------------
* How can I recover the data?
* How can I know which files on the disk that is 'Rebuilding' are incomplete or corrupted?
* Why, when running
mdadm --misc --detail /dev/md1
it says raid0 but when running
mdadm -E /dev/sdb1
it says raid1? Any and all help is highly appreciated.
System info
-----------
:/$ sudo mdadm --assemble --scan --verbose

mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/loop10
mdadm: no recogniseable superblock on /dev/loop9
mdadm: no recogniseable superblock on /dev/loop8
mdadm: no recogniseable superblock on /dev/sdc2
mdadm: Cannot assemble mbr metadata on /dev/sdc1
mdadm: Cannot assemble mbr metadata on /dev/sdc
mdadm: no recogniseable superblock on /dev/sda1
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: No super block found on /dev/loop7 (Expected magic a92b4efc, got 118a6b61)
mdadm: no RAID superblock on /dev/loop7
mdadm: No super block found on /dev/loop6 (Expected magic a92b4efc, got e7e108a6)
mdadm: no RAID superblock on /dev/loop6
mdadm: No super block found on /dev/loop5 (Expected magic a92b4efc, got 3a23b8f9)
mdadm: no RAID superblock on /dev/loop5
mdadm: No super block found on /dev/loop4 (Expected magic a92b4efc, got 3a23b8f9)
mdadm: no RAID superblock on /dev/loop4
mdadm: No super block found on /dev/loop3 (Expected magic a92b4efc, got e7e108a6)
mdadm: no RAID superblock on /dev/loop3
mdadm: No super block found on /dev/loop2 (Expected magic a92b4efc, got a6eff301)
mdadm: no RAID superblock on /dev/loop2
mdadm: No super block found on /dev/loop1 (Expected magic a92b4efc, got e06997af)
mdadm: no RAID superblock on /dev/loop1
mdadm: /dev/sdb1 is identified as a member of /dev/md/1, slot 1.
mdadm: no uptodate device for slot 0 of /dev/md/1
mdadm: added /dev/sdb1 to /dev/md/1 as 1
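A hedged sketch of what is usually attempted next in this situation, working on images or overlays of the disks rather than the originals since the data is irreplaceable (note that if the surviving member is recorded as mid-rebuild, mdadm may still refuse to start it without --force):
mdadm --examine /dev/sdb1                   # event count and the array state recorded on the member
mdadm --assemble --run /dev/md1 /dev/sdb1   # try to start the array degraded from that member
mount -o ro /dev/md1 /mnt                   # mount read-only and copy the data off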
user514025 (11 rep)
Feb 11, 2022, 12:42 PM • Last activity: Mar 20, 2025, 11:51 PM
12 votes
4 answers
22394 views
My RAID 1 always renames itself to /dev/md127 after rebooting | DEBIAN 10
## PROBLEM ## I create a RAID 1 configuration, I name it /dev/md1, but when I reboot, the name always changes to /dev/md127
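A minimal sketch of the fix usually suggested on Debian: record the array under its preferred name in mdadm.conf and rebuild the initramfs so the name survives early boot (this assumes the array is currently assembled as /dev/md1):
mdadm --detail --scan                            # e.g. ARRAY /dev/md1 metadata=1.2 UUID=...
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record it under that name
update-initramfs -u                              # embed the updated mdadm.conf in the initramfs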
Adrián Jaramillo (459 rep)
Oct 2, 2019, 11:22 PM • Last activity: Mar 17, 2025, 07:12 PM
0 votes
2 answers
99 views
Is this possible to create raid1 from 8TB and 4TB+4TB disks, eg. stripe 4+4 + mirror 8?
I have one 8TB and two 4TB disks. I am curious whether I am able to create an 8TB raid from the 8+4+4 disks. I tried this: sudo mkfs.btrfs -f -m raid1 -d raid1 /dev/sdc1 /dev/sdd1 /dev/cdb1 but this creates only a 4TB raid. From my point of view it should be technically possible to create an 8TB stripe from the two 4TB disks and then create an 8TB mirror. Is this possible to achieve with btrfs tools?
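A small sketch for checking what btrfs actually allocates: with -d raid1 each chunk simply needs copies on two different devices, so mixed sizes are balanced per chunk rather than by building an explicit 4+4 stripe, and the size printed by mkfs can be misleading (device names below are assumptions):
mkfs.btrfs -f -m raid1 -d raid1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mount /dev/sdb1 /mnt
btrfs filesystem usage /mnt    # check "Free (estimated)" for the usable raid1 capacity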
Eugen Konkov (437 rep)
Feb 17, 2025, 05:24 PM • Last activity: Feb 18, 2025, 12:56 PM
0 votes
1 answer
44 views
Remove a disk from a Raid1 array and use it as cold backup?
My RAID1 array is currently running with 3 disks (2x6TB and 1x3TB). I want to remove the smaller disk and wonder if it makes sense to put it in the safe as cold backup. The idea is to be able to mount it as degraded RAID1 array, independent of the running RAID. Question 1: Should I set the disk to faulty before removing it? Question 2: Will I be able to mount the backup disk on the same device where the RAID is running or will mdadm detect the disk to belong to the array already there? Linux kernel is 6.8.0, mdadm is v4.3.
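For reference, a hedged sketch of how a member is normally taken out of a running array cleanly (array and partition names are assumptions); whether the removed disk is later usable elsewhere as a degraded mirror is what the question is really about:
mdadm /dev/md0 --fail /dev/sdc1            # mark the 3TB member failed (question 1)
mdadm /dev/md0 --remove /dev/sdc1          # then remove it from the array
mdadm --grow /dev/md0 --raid-devices=2     # shrink the mirror back to two active members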
treuss (303 rep)
Feb 2, 2025, 10:47 PM • Last activity: Feb 2, 2025, 11:12 PM
1 vote
0 answers
67 views
No space left on device but only 50% space and 1% inodes used
This is on an Iomega IX200 NAS which had been expanded to 4TB disks from the original 2TB. It all looks good, but when I try to save data to a new file I get the "No space left on device" error. I get this whether I try to save a file via the NAS drive share on another device or via the SSH session on the NAS itself. df -h reports:
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 50M  3.7M   47M   8% /
/dev/root.old         6.5M  2.1M  4.4M  33% /initrd
none                   50M  3.7M   47M   8% /
/dev/md0_vg/BFDlv     4.0G  624M  3.2G  17% /boot
/dev/loop0            592M  538M   54M  91% /mnt/apps
/dev/loop1            4.9M  2.2M  2.5M  48% /etc
/dev/loop2            260K  260K     0 100% /oem
tmpfs                 122M     0  122M   0% /mnt/apps/lib/init/rw
tmpfs                 122M     0  122M   0% /dev/shm
/dev/mapper/md0_vg-vol1
                       16G  1.5G   15G  10% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
                      3.7T  2.0T  1.7T  55% /mnt/pools/A/A0
/mnt/pools/A/A0 is the one that provisions the storage. df -h -i:
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
rootfs                   31K     567     30K    2% /
/dev/root.old           1.7K     130    1.6K    8% /initrd
none                     31K     567     30K    2% /
/dev/md0_vg/BFDlv       256K      20    256K    1% /boot
/dev/loop0               25K     25K      11  100% /mnt/apps
/dev/loop1              1.3K    1.1K     139   89% /etc
/dev/loop2                21      21       0  100% /oem
tmpfs                    31K       4     31K    1% /mnt/apps/lib/init/rw
tmpfs                    31K       1     31K    1% /dev/shm
/dev/mapper/md0_vg-vol1
                         17M    9.7K     16M    1% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
                        742M    2.6M    739M    1% /mnt/pools/A/A0
When the partition was grown I ran lvresize and xfs_growfs, after which it started showing as having 3.7TB capacity. Disks/partitions:
$ parted -l

Model: Seagate ST4000VN008-2DR1 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      36.9kB  21.5GB  21.5GB               primary       
 2      21.5GB  4001GB  3979GB               primary       


Model: Seagate ST4000VN008-2DR1 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      36.9kB  21.5GB  21.5GB               primary       
 2      21.5GB  4001GB  3979GB               primary       


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/5244dd0f_vg-lv58141b0d: 3979GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  3979GB  3979GB  xfs               


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/md0_vg-vol1: 17.2GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  17.2GB  17.2GB  xfs               


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/md0_vg-BFDlv: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  4295MB  4295MB  ext2              


Error: /dev/mtdblock0: unrecognised disk label                            

Error: /dev/mtdblock1: unrecognised disk label                            

Error: /dev/mtdblock2: unrecognised disk label                            

Error: /dev/mtdblock3: unrecognised disk label                            

Error: /dev/md0: unrecognised disk label                                  

Error: /dev/md1: unrecognised disk label
When I ran mdadm --detail I noticed the second partition of the RAID1 pair was set to 'removed':
mdadm --detail /dev/md1
/dev/md1:
        Version : 01.00
  Creation Time : Mon Mar  7 08:45:49 2011
     Raid Level : raid1
     Array Size : 3886037488 (3706.01 GiB 3979.30 GB)
  Used Dev Size : 7772074976 (7412.03 GiB 7958.60 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Jan 23 03:29:04 2025
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : ix2-200-DC386F:1
           UUID : 8a192f2c:9829df88:a6961d81:20478f62
         Events : 365631

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed   #### <<<< THIS ONE is /dev/sdb2
       2       8        2        1      active sync   /dev/sda2
When I do an examine on /dev/sdb2 I get:
$ mdadm --examine /dev/sdb2
mdadm: No md superblock detected on /dev/sdb2.
I was wondering if there is still, within the firmware, a limit (the original disks had a capacity of 1.8TB). But I am now thinking the 'removed' disk is the problem; would that explain why I can only use 1.7TB of a 3.4TB filesystem? /dev/sda2 reports as 'clean'.
Edit: I tried
mdadm --zero-superblock /dev/sdb2
followed by
mdadm --manage /dev/md1 --add /dev/sdb2
but I got the error: mdadm: add new device failed for /dev/sdb2 as 3: Invalid argument.
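A couple of read-only checks that are often suggested when XFS reports "No space left on device" while df shows plenty of blocks and inodes free (mount point taken from the df output above; purely diagnostic):
xfs_info /mnt/pools/A/A0    # filesystem geometry (allocation groups, inode size, etc.)
df -i /mnt/pools/A/A0       # inode usage for just this filesystem
dmesg | tail -50            # look for XFS messages logged around the failed writes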
Pro West (111 rep)
Jan 23, 2025, 11:44 AM • Last activity: Jan 23, 2025, 01:23 PM
0 votes
0 answers
43 views
Cannot umount systemd mounted luks container in raid
I have a mounted raid1 LUKS non-root partition which is automated through a mount unit I created. Recently I discovered that the FS is corrupted and decided to fsck it remotely without rebooting the server. So I stopped the unit and tried to fsck it, and got the response that the device is still mounted. The same happens when I try to umount manually: it says the resource is busy, and only a lazy umount works, but the result is the same as with stopping the unit. So I decided to change Options=defaults,noauto in the mount unit. Stopping/starting leads to the same errors. In the log I see: kernel: [258607.858951] dm-1: Can't mount, would change RO state. So, how do I properly umount the partition and fsck it without rebooting into live mode, which cannot be controlled remotely?
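A hedged sketch of how the holder of the mount is usually tracked down before running fsck (the unit names and the /srv/data path are hypothetical placeholders for this setup):
systemctl list-units --type=mount --type=automount | grep -i data   # is a matching .automount re-mounting it?
fuser -vm /srv/data                                                 # or: lsof +f -- /srv/data
systemctl stop srv-data.automount srv-data.mount                    # stop both units if an automount exists
cryptsetup status data_crypt                                        # confirm the dm mapping is idle before fsck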
Oleksa (151 rep)
Dec 21, 2024, 01:41 PM
3 votes
2 answers
174 views
RAID array reverts to the old disk after reboot
I have migrated my RAID array from RAID 1 with 2 x 4TB disks to 2 x 10TB disks using mdadm. The process can be summarized as follows: add the 2 new disks to the RAID array, wait for the sync, remove the 2 old disks from the array, then grow the array and extend the file system. Everything worked fine. However, I did not unplug or wipe the old disks. A few days later, one disk failed; I removed it from the array (mdadm --remove /dev/md125 /dev/sdf1) and rebooted the server, but the RAID reverted to the old configuration (the RAID array consisted of the 2 old disks, with data and mount points reverting to their state before the array was changed). Can I re-create md125 to fix this?
Summary:
Old: /dev/md125 (sdc1 + sdd1)
New: /dev/md125 (sdf1 + sde1)
Remove sdf1 and reboot the server.
After reboot: /dev/md125 (sdc1 + sdd1)
OS: CentOS 7
Disk layout (lsblk) while replacing the raid array:
sdc         8:32   0   3.7T  0 disk
└─sdc1      8:33   0   3.7T  0 part
sdd         8:48   0   3.7T  0 disk
└─sdd1      8:49   0   3.7T  0 part
sde         8:64   0  10.9T  0 disk
└─sde1      8:65   0  10.9T  0 part
  └─md125   9:125  0   3.7T  0 raid1 /data
sdf         8:80   0  10.9T  0 disk
└─sdf1      8:81   0  10.9T  0 part
  └─md125   9:125  0   3.7T  0 raid1 /data
I expected the configuration to remain the same after the reboot, but it turned out like this:
sdc         8:32   0   3.7T  0 disk
└─sdc1      8:33   0   3.7T  0 part
    └─md125   9:125  0   3.7T  0 raid1 /data
sdd         8:48   0   3.7T  0 disk
└─sdd1      8:49   0   3.7T  0 part
    └─md125   9:125  0   3.7T  0 raid1 /data
sde         8:64   0  10.9T  0 disk
└─sde1      8:65   0  10.9T  0 part
sdf         8:80   0  10.9T  0 disk
└─sdf1      8:81   0  10.9T  0 part
mdstat before reboot:
Personalities : [raid1]  
md125 : active raid1 sde1  
      11718752256 blocks super 1.2 [2/1] [_U]  
      bitmap: 22/22 pages [88KB], 262144KB chunk
blkid and mdadm.conf now:
# grep a5c2d1ec blkid.txt 
/dev/sdc1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="a411ac50-ac7c-3210-c7f9-1d6ab27926eb" LABEL="localhost:data" TYPE="linux_raid_member" PARTUUID="270b5cba-f8f4-4863-9f4a-f1c35c8088bf"
/dev/sdf1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="4ac242ae-d6a2-0021-cd71-a9a7a357a3bb" LABEL="localhost:data" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="4ca94453-20e7-4bde-ad3c-9afedbfa8cdb"
/dev/sde1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="1095abff-9bbf-c705-839d-c0e9e8f68624" LABEL="localhost:data" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="7bd23a9e-3c6f-464c-aeda-690a93716465"
/dev/sdd1: UUID="a5c2d1ec-fa7f-bba4-4c83-bfb2027ab635" UUID_SUB="0d6ef39a-03af-ffd4-beb8-7c396ddf4489" LABEL="localhost:data" TYPE="linux_raid_member" PARTUUID="be2ddc1a-14bc-410f-ae16-ada38d861eb3"
# cat /etc/mdadm.conf
ARRAY /dev/md/boot metadata=1.2 name=localhost:boot UUID=ad0e75a7:f80bd9a7:6fea9e4d:7cf9db57
ARRAY /dev/md/root metadata=1.2 name=localhost:root UUID=266e79a9:224eb9ed:f4a11322:025564be
ARRAY /dev/md/data metadata=1.2 spares=1 name=localhost:storage UUID=a5c2d1ec:fa7fbba4:4c83bfb2:027ab635
Thanks
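A hedged sketch of what is usually done so the reshaped array survives a reboot on CentOS 7: record the arrays as they exist right now, wipe the stale superblocks on the retired members, and rebuild the initramfs (the old 4TB partitions are named below; be certain nothing on them is still needed before zeroing them):
mdadm --detail --scan > /etc/mdadm.conf        # rewrite the config with the arrays as currently assembled
mdadm --zero-superblock /dev/sdc1 /dev/sdd1    # stop the old members from being auto-assembled again
dracut -f                                      # regenerate the initramfs so it carries the new mdadm.conf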
Khang Nguyen Phuc (39 rep)
Nov 27, 2024, 10:19 AM • Last activity: Nov 27, 2024, 09:56 PM
1 vote
1 answer
276 views
mdadm RAID1: Unable to replace(--add) failed drive
I have a RAID1 array of two hard disks which recently lost one drive, but I can't seem to simply replace the broken one. mdadm --detail reports that the first drive slot, *sda*, has been removed (which it physically has), and that the second drive slot, sdb, is the one that is still working (active, sync). *sdb* was recently replaced, so the data is safe. But when I try to add the disk with mdadm /dev/md127 --add /dev/sdx it returns an error:
$ sudo mdadm /dev/md127 --add /dev/sdx1
mdadm: add new device failed for /dev/sdx1 as 3: Invalid argument
dmesg shows these:
[  xx.xx] md: sdx1 does not have a valid v1.2 superblock, not importing!
[  xx.xx] md: md_import_device returned -22
I confirmed with parted that the partition size of sdx1 is exactly the same as sdb1 (in sectors and bytes). Also, after --add-ing the drive, although an error occurs, lsblk shows that there is already metadata for the array on sda1.
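A hedged sketch of what is commonly tried when --add fails like this, i.e. comparing sizes and clearing stale signatures on the replacement partition (replace the device names with the real ones; wipefs destroys whatever signatures are on that partition):
blockdev --getsz /dev/sdb1          # sector count of the surviving member...
blockdev --getsz /dev/sdx1          # ...and of the replacement partition, for comparison
wipefs -a /dev/sdx1                 # clear leftover filesystem/RAID signatures on the new partition
mdadm /dev/md127 --add /dev/sdx1    # retry the add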
酸柠檬猹 (33 rep)
Nov 23, 2024, 03:00 PM • Last activity: Nov 26, 2024, 11:01 PM
1 vote
0 answers
18 views
Not sure if lsblk showing correct partitions after restoring RAID1
One of my disks (nvme0n1) failed, so it was replaced. Now lsblk shows
nvme0n1     259:0    0 476.9G  0 disk
├─nvme0n1p1 259:5    0   511M  0 part
├─nvme0n1p2 259:6    0 475.9G  0 part
│ └─md2       9:2    0 475.8G  0 raid1 /
└─nvme0n1p3 259:7    0   512M  0 part
nvme1n1     259:1    0 476.9G  0 disk
├─nvme1n1p1 259:2    0   511M  0 part  /boot/efi
├─nvme1n1p2 259:3    0 475.9G  0 part
│ └─md2       9:2    0 475.8G  0 raid1 /
└─nvme1n1p3 259:4    0   512M  0 part  [SWAP]
But I'm afraid that nvme0n1p3 is not mounted as swap the way nvme1n1p3 is, and the same goes for nvme0n1p1. What I did after replacing the disk was:
sgdisk --backup=nvme1n1.sgdisk /dev/nvme1n1
sgdisk --load-backup=nvme1n1.sgdisk /dev/nvme0n1
sgdisk -G /dev/nvme0n1
mdadm --manage /dev/md2 --add /dev/nvme0n1p2
Is that the correct configuration? If nvme1n1 fails, will the system boot correctly?
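Judging from the lsblk output, only the md2 member was re-added; a hedged sketch of what is typically still missing on the replacement disk (device names come from the question, while the ESP copy, the loader path and the boot-entry label are assumptions about this particular setup):
mkfs.vfat /dev/nvme0n1p1                                              # fresh ESP on the new disk
mount /dev/nvme0n1p1 /mnt && cp -a /boot/efi/. /mnt/ && umount /mnt   # copy the current ESP contents over
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "linux (disk 2)" -l '\EFI\BOOT\BOOTX64.EFI'   # hypothetical boot entry
mkswap /dev/nvme0n1p3                                                 # recreate swap, then add it to /etc/fstab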
webard (11 rep)
Nov 11, 2024, 02:31 AM • Last activity: Nov 11, 2024, 02:32 AM
1 vote
1 answer
2909 views
slow access/reads of linux md raid1 array
I have a Linux md-raid raid1 array (ext4 fs) with two 3TB disks. The array has been showing significant slowness in access and read times over the last few months. Doing an ls on a directory with fewer than 20 entries can sometimes take 2-3 minutes to return. It seems to spend a lot of time with a state of "checking," but even when the state is "clean," access and read times are very slow. I don't find any errors being reported in the system logs. The only thing of note is that the FS has been close to full for a while now. The output of mdadm -D /dev/md127 shows:
/dev/md127:
     Version : 1.2
     Creation Time : Thu Jun 20 11:34:21 2019
        Raid Level : raid1
        Array Size : 2930132992 (2794.39 GiB 3000.46 GB)
     Used Dev Size : 2930132992 (2794.39 GiB 3000.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Sep 26 13:58:50 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : giles:meta  (local to host giles)
              UUID : 638efea5:1e7b07d2:78fec1dc:d919dccf
            Events : 8359

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
Any thoughts on what might be causing this or suggestions on debugging it? I'm in the process of copying the data to a new set of larger drives and it has only copied 301GB in over 48 hours.
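A few read-only checks that are usually the first step when an md mirror turns slow and spends long stretches "checking" (device names taken from the mdadm output above; smartmontools and sysstat assumed installed):
cat /sys/block/md127/md/sync_action   # is a check/resync running right now?
cat /proc/mdstat                      # progress and speed of any running check
smartctl -a /dev/sda | grep -iE 'realloc|pending|uncorrect'   # signs of a failing member
smartctl -a /dev/sdb | grep -iE 'realloc|pending|uncorrect'
iostat -x 5                           # per-disk latency: is one member far slower than the other?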
baronmog (21 rep)
Sep 26, 2021, 10:55 PM • Last activity: Nov 8, 2024, 09:02 AM
1 vote
2 answers
97 views
Converting sw md device from raid0 to raid1 with only one disk online
Is it possible to convert an existing raid0 md device with only one disk to a raid1 md device with one disk in order to add a second mirror disk later? It should be possible online.
WilliWuff (11 rep)
Sep 13, 2024, 08:51 AM • Last activity: Sep 22, 2024, 12:45 PM
0 votes
0 answers
122 views
Ubuntu Server 22.04 Replace Failed SSD in mdadm RAID1 Boot Disk
I have an Ubuntu 22.04 server that has its boot disk on an mdadm RAID1 array consisting of two 240 GB SSDs (/dev/sda & /dev/sdb). This mdadm array was set up using curtin during the initial install. In addition, only the boot, root & swap file systems are on this array - all other files are on a ZFS RAID10 array. One of the disks (/dev/sda) has now failed completely and needs to be replaced. While the system continues to run on the other disk (/dev/sdb), it will only boot from the failed disk (/dev/sda). This presents somewhat of a problem, since I will need to boot the system from /dev/sdb after I have shut down the system and replaced /dev/sda. Both /dev/sda & /dev/sdb have up-to-date EFI & /boot partitions. I am currently planning the replacement and would appreciate any advice. So far, I think I will need to do the following:
1. mark the partitions as failed using mdadm
2. remove the failed partitions from the array using mdadm
3. set /dev/sdb to be the boot disk
4. shutdown the system
5. physically remove the failed disk and replace it with a new disk
6. restart the system
7. partition the new disk using sfdisk
8. add the new partitions to the existing arrays using mdadm
9. copy the files from the EFI partition to the new disk
10. update grub
Most of the process looks pretty straightforward. It is steps 3 and 10, the ones that deal with booting, that I am not sure about. Below are the details of my setup:
fdisk /dev/sdb (both disks are partitioned the same)
Device        Start       End   Sectors   Size Type
/dev/sdb1      2048   2203647   2201600     1G EFI System
/dev/sdb2   2203648   4300799   2097152     1G Linux filesystem
/dev/sdb3   4300800  71409663  67108864    32G Linux filesystem
/dev/sdb4  71409664 468858879 397449216 189.5G Linux filesystem
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md125 : active raid1 sdb4 sda4(F)
      198592512 blocks super 1.2 [2/1] [U_]
      bitmap: 2/2 pages [8KB], 65536KB chunk

md126 : active raid1 sdb3 sda3(F)
      33520640 blocks super 1.2 [2/1] [_U]
      
md127 : active raid1 sdb2 sda2(F)
      1046528 blocks super 1.2 [2/1] [U_]
cat /etc/fstab
#                
# / was on /dev/md125p1 during curtin installation
/dev/disk/by-id/md-uuid-7f83998a:b81f586c:e3e6497a:9a9e36ce-part1 / ext4 defaults 0 1
/dev/disk/by-id/md-uuid-56619c5a:2fc620ba:3642eeae:73fd6319-part1 none swap sw 0 0
# /boot was on /dev/md127p1 during curtin installation
/dev/disk/by-id/md-uuid-78148d71:a0c26fd8:9ee89f4c:bfa69120-part1 /boot ext4 defaults 0 1
# /boot/efi was on /dev/sda1 during curtin installation
/dev/disk/by-uuid/D72E-12F9 /boot/efi vfat defaults 0 1
lsblk
8:0    0 223.6G  0 disk  
├─sda1          8:1    0     1G  0 part  
├─sda2          8:2    0     1G  0 part  
│ └─md127       9:127  0  1022M  0 raid1 
│   └─md127p1 259:1    0  1020M  0 part  /boot
├─sda3          8:3    0    32G  0 part  
│ └─md126       9:126  0    32G  0 raid1 
│   └─md126p1 259:0    0    32G  0 part  [SWAP]
└─sda4          8:4    0 189.5G  0 part  
  └─md125       9:125  0 189.4G  0 raid1 
    └─md125p1 259:2    0 189.4G  0 part  /
sdb             8:16   0 223.6G  0 disk  
├─sdb1          8:17   0     1G  0 part  /boot/efi
├─sdb2          8:18   0     1G  0 part  
│ └─md127       9:127  0  1022M  0 raid1 
│   └─md127p1 259:1    0  1020M  0 part  /boot
├─sdb3          8:19   0    32G  0 part  
│ └─md126       9:126  0    32G  0 raid1 
│   └─md126p1 259:0    0    32G  0 part  [SWAP]
└─sdb4          8:20   0 189.5G  0 part  
  └─md125       9:125  0 189.4G  0 raid1 
    └─md125p1 259:2    0 189.4G  0 part  /
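For the two steps the question flags (3 and 10), a hedged sketch of the usual EFI approach on Ubuntu: sdb1 already holds an up-to-date ESP, so it mainly needs its own NVRAM boot entry now, and the replacement disk later needs the partition layout, the array members, a copy of the ESP and a grub refresh (labels and the loader path are assumptions):
efibootmgr -v                                                                   # which disk does the current entry boot from?
efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu (sdb)" -l '\EFI\ubuntu\shimx64.efi'   # step 3: add an entry for sdb's ESP
# after the new disk is installed (steps 7-9):
sfdisk -d /dev/sdb | sfdisk /dev/sda                      # copy the partition layout to the replacement
dd if=/dev/sdb1 of=/dev/sda1 bs=1M                        # clone the ESP (or mkfs.vfat + rsync instead)
grub-install --efi-directory=/boot/efi && update-grub     # step 10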
deltatech (1 rep)
Sep 16, 2024, 10:39 PM
2 votes
1 answer
3585 views
btrfs raid1: How to check it is function normally?
I have RAID1 configured as described [here](http://www.beginninglinux.com/btrfs):
$ sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
Then I mount the device and write some data:
sudo mount /dev/sdb /mnt
echo "some data" > /mnt/123
I see the following:
# btrfs fi show
Label: none  uuid: 01e1a8a1-be78-47d0-8dc2-2e293265b31b
	Total devices 2 FS bytes used 896.00KiB
	devid    1 size 3.64TiB used 4.04GiB path /dev/sde
	devid    2 size 3.64TiB used 4.04GiB path /dev/sdb
Then I unplug one device and see the following:
# btrfs fi show
warning, device 2 is missing
warning, device 2 is missing
parent transid verify failed on 22020096 wanted 11 found 8
parent transid verify failed on 22020096 wanted 11 found 8
Ignoring transid failure
Label: none  uuid: 01e1a8a1-be78-47d0-8dc2-2e293265b31b
	Total devices 2 FS bytes used 896.00KiB
	devid    1 size 3.64TiB used 2.01GiB path /dev/sde
	*** Some devices missing
[When I mount](https://unix.stackexchange.com/a/496853/129967) the /dev/sde device I see no data:
# mount -o degraded /dev/sdb /mnt
# cd /mnt
# ls
**Thus I am not sure that btrfs is functioning normally.**
1. Why do I not see the stored data?
2. Why does btrfs fi show display different info? Copied from the output above:
devid    1 size 3.64TiB used 4.04GiB path /dev/sde
VS  (after removing one device)
devid    1 size 3.64TiB used 2.01GiB path /dev/sde
As you can see, the used value differs =(
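A hedged sketch of the checks usually used to confirm a btrfs raid1 is actually redundant and healthy (mount point assumed to be /mnt as in the question):
btrfs filesystem usage /mnt    # how much data/metadata is really allocated with the RAID1 profile
btrfs device stats /mnt        # per-device read/write/corruption error counters
btrfs scrub start -B /mnt      # read and verify every copy; -B waits and prints a summary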
Eugen Konkov (437 rep)
Oct 7, 2019, 03:15 PM • Last activity: Aug 29, 2024, 03:00 PM
0 votes
2 answers
1433 views
What is the difference between BTRFS RAID1 and BTRFS RAID5 on 3+ devices?
According to [_"Examining btrfs, Linux’s perpetually half-finished filesystem"_](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/), BTRFS RAID1 is said to be _"guaranteed redundancy—copies of all blocks will be **saved on two separate devices**"_. It goes on to say that with BTRFS on both RAID1 and RAID5 you can have devices of different sizes. You can also have more than 3 devices with both. Assuming you have three disks, what is the difference in btrfs between RAID1 and RAID5? They both protect against failure of one drive in the array.
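The practical difference on three disks is mostly usable capacity (raid1 keeps two copies of every chunk, raid5 keeps one parity chunk per stripe) plus the relative maturity of the raid5/6 code. A small sketch for comparing the two allocations empirically (device names are assumptions; metadata is kept at raid1 in the raid5 case, as is commonly recommended):
mkfs.btrfs -f -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt && btrfs filesystem usage /mnt && umount /mnt   # note "Free (estimated)"
mkfs.btrfs -f -m raid1 -d raid5 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt && btrfs filesystem usage /mnt && umount /mnt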
Evan Carroll (34663 rep)
Jul 31, 2023, 04:37 AM • Last activity: Jul 24, 2024, 10:12 PM