mdadm RAID5 array became "active, FAILED" and now no longer mounts
0 votes, 1 answer, 132 views
Today the status of my RAID5 array, as reported by
mdadm --detail /dev/md0
changed to "active, FAILED" (I have mobile notifications set up).
Most of the files were still accessible, but some were missing. This happened once before and I solved it by rebooting the machine (though I think the reported status was different back then).
No luck this time: the RAID no longer works and won't mount.
Here are some details about my setup:
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid5
Total Devices : 4
Persistence : Superblock is persistent
State : inactive
Working Devices : 4
Name : RHomeServer:0 (local to host RHomeServer)
UUID : d88446ae:f9f5c759:b4ac531c:933f8c62
Events : 104300
Number Major Minor RaidDevice
- 8 64 - /dev/sde
- 8 32 - /dev/sdc
- 8 48 - /dev/sdd
- 8 16 - /dev/sdb
cat /etc/mdadm/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=RHomeServer:0 UUID=d88446ae:f9f5c759:b4ac531c:933f8c62
sudo blkid | grep sd
/dev/sdd: UUID="d88446ae-f9f5-c759-b4ac-531c933f8c62" UUID_SUB="ce9975e7-ec99-8630-acad-b3d090287950" LABEL="RHomeServer:0" TYPE="linux_raid_member"
/dev/sdb: UUID="d88446ae-f9f5-c759-b4ac-531c933f8c62" UUID_SUB="f20836f9-8960-0b38-e1c3-ea22cba58014" LABEL="RHomeServer:0" TYPE="linux_raid_member"
/dev/sde: UUID="d88446ae-f9f5-c759-b4ac-531c933f8c62" UUID_SUB="d49ff5be-6b1b-8e8d-26b1-52e90bb05ce2" LABEL="RHomeServer:0" TYPE="linux_raid_member"
/dev/sdc: UUID="d88446ae-f9f5-c759-b4ac-531c933f8c62" UUID_SUB="863075ad-9e12-6cfd-3dbf-35017b1f408d" LABEL="RHomeServer:0" TYPE="linux_raid_member"
mdadm --examine /dev/sd[b-e]
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : d88446ae:f9f5c759:b4ac531c:933f8c62
Name : RHomeServer:0 (local to host RHomeServer)
Creation Time : Thu Aug 10 16:54:57 2023
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 35156391936 sectors (16.37 TiB 18.00 TB)
Array Size : 52734587904 KiB (49.11 TiB 54.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264096 sectors, after=0 sectors
State : clean
Device UUID : f20836f9:89600b38:e1c3ea22:cba58014
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jul 16 18:13:26 2024
Bad Block Log : 512 entries available at offset 80 sectors
Checksum : dd7e994d - correct
Events : 104300
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : .... ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : d88446ae:f9f5c759:b4ac531c:933f8c62
Name : RHomeServer:0 (local to host RHomeServer)
Creation Time : Thu Aug 10 16:54:57 2023
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 35156391936 sectors (16.37 TiB 18.00 TB)
Array Size : 52734587904 KiB (49.11 TiB 54.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264096 sectors, after=0 sectors
State : active
Device UUID : 863075ad:9e126cfd:3dbf3501:7b1f408d
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jul 16 18:13:26 2024
Bad Block Log : 512 entries available at offset 80 sectors
Checksum : ae28e804 - correct
Events : 104300
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : .... ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : d88446ae:f9f5c759:b4ac531c:933f8c62
Name : RHomeServer:0 (local to host RHomeServer)
Creation Time : Thu Aug 10 16:54:57 2023
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 35156391936 sectors (16.37 TiB 18.00 TB)
Array Size : 52734587904 KiB (49.11 TiB 54.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264096 sectors, after=0 sectors
State : active
Device UUID : ce9975e7:ec998630:acadb3d0:90287950
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jul 16 18:13:26 2024
Bad Block Log : 512 entries available at offset 80 sectors
Checksum : adfad01f - correct
Events : 104300
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : .... ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : d88446ae:f9f5c759:b4ac531c:933f8c62
Name : RHomeServer:0 (local to host RHomeServer)
Creation Time : Thu Aug 10 16:54:57 2023
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 35156391936 sectors (16.37 TiB 18.00 TB)
Array Size : 52734587904 KiB (49.11 TiB 54.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264096 sectors, after=0 sectors
State : clean
Device UUID : d49ff5be:6b1b8e8d:26b152e9:0bb05ce2
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Jul 16 18:13:26 2024
Bad Block Log : 512 entries available at offset 80 sectors
Checksum : 8d04e29c - correct
Events : 104300
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : .... ('A' == active, '.' == missing, 'R' == replacing)
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices:
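If it helps, I can also pull the kernel messages from around the time of the failure; something like this (the grep pattern is just my guess at the relevant keywords):
sudo journalctl -k --since today | grep -Ei 'md0|raid|sd[b-e]'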
The drives seem healthy; SMART reports no issues.
I tried running
mdadm --assemble --scan
but I get:
mdadm: /dev/md0 assembled from 0 drives and 4 spares - not enough to start the array.
mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument
mdadm: Not enough devices to start the array.
I get exactly the same result from
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
with or without the --run and --force flags. Note that the --examine output above shows all four members reporting "Device Role : spare", which matches the "0 drives and 4 spares" message.
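If more detail would help, I can retry the assembly with --verbose so mdadm reports its per-device reasoning:
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --verbose /dev/md0 /dev/sd[b-e]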
I'm running Ubuntu 24.10, fully up to date, with kernel 6.9.9.
What can I do to recover my array?
I've read about recovering by recreating the array:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm --create --assume-clean /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
but I'm afraid that, besides losing my data, this would also break any further recovery attempt.
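Before attempting anything destructive like the above, I'd rather experiment on copy-on-write overlays so the original disks stay untouched (the approach the Linux RAID wiki recommends). A rough sketch of what I have in mind; overlay sizes and paths are just examples, run as root:
# create a sparse overlay file and a dm snapshot for each member disk
for d in sdb sdc sdd sde; do
    truncate -s 20G /var/tmp/overlay-$d            # sparse file absorbs all writes
    loop=$(losetup -f --show /var/tmp/overlay-$d)  # back it with a loop device
    size=$(blockdev --getsz /dev/$d)               # device size in 512-byte sectors
    dmsetup create $d-ov --table "0 $size snapshot /dev/$d $loop P 8"
done
# risky experiments then target the overlays instead of the real disks
mdadm --assemble --force /dev/md0 /dev/mapper/sd[b-e]-ov
That way any writes from a forced assembly or fsck land in the overlay files, not on the real disks.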
Asked by Radu Ursache
(111 rep)
Jul 16, 2024, 04:30 PM
Last activity: Jul 16, 2024, 04:54 PM