
Missing mdadm raid5 array reassembles as raid0 after power outage

4 votes
2 answers
3681 views
I had a RAID5 array of three disks with no spares. There was a power outage, and on reboot the array failed to come back up. In fact, the /dev/md127 device disappeared entirely and was replaced by an incorrect /dev/md0. It was the only array on the machine. I've tried to reassemble it from the three component devices, but the assembly keeps creating a raid0 array instead of a raid5. The details of the three disks are:

```
root@bragi ~ # mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 002fa352:9968adbd:b0efdfea:c60ce290
           Name : bragi:0  (local to host bragi)
  Creation Time : Sun Oct 30 00:10:47 2011
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=770 sectors
          State : clean
    Device UUID : a8a1b48a:ec28a09c:7aec4559:b839365e

    Update Time : Sat Oct 11 09:20:36 2014
       Checksum : 7b1ad793 - correct
         Events : 15084

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

root@bragi ~ # mdadm -E /dev/sdd1
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 002fa352:9968adbd:b0efdfea:c60ce290
           Name : bragi:0  (local to host bragi)
  Creation Time : Sun Oct 30 00:10:47 2011
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=770 sectors
          State : clean
    Device UUID : 36c08006:d5442799:b028db7c:4d4d33c5

    Update Time : Wed Oct 15 08:09:37 2014
       Checksum : 7e05979e - correct
         Events : 15196

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : .A. ('A' == active, '.' == missing, 'R' == replacing)

root@bragi ~ # mdadm -E /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x8
     Array UUID : 002fa352:9968adbd:b0efdfea:c60ce290
           Name : bragi:0  (local to host bragi)
  Creation Time : Sun Oct 30 00:10:47 2011
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 2930275057 (1397.26 GiB 1500.30 GB)
     Array Size : 2930269184 (2794.52 GiB 3000.60 GB)
  Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=5873 sectors
          State : clean
    Device UUID : b048994d:ffbbd710:8eb365d2:b0868ef0

    Update Time : Wed Oct 15 08:09:37 2014
  Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
       Checksum : bdbc6fc4 - correct
         Events : 15196

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : spare
   Array State : .A. ('A' == active, '.' == missing, 'R' == replacing)
```

I stopped the old array, then reassembled as follows (blank lines inserted for clarity):

```
root@bragi ~ # mdadm -S /dev/md0
mdadm: stopped /dev/md0

root@bragi ~ # mdadm -A /dev/md0 /dev/sdd1 /dev/sdc1 /dev/sde1
mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.

root@bragi ~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdd1(S) sde1(S) sdc1(S)
      4395407482 blocks super 1.2

unused devices: <none>

root@bragi ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

           Name : bragi:0  (local to host bragi)
           UUID : 002fa352:9968adbd:b0efdfea:c60ce290
         Events : 15084

    Number   Major   Minor   RaidDevice

       -       8       33        -        /dev/sdc1
       -       8       49        -        /dev/sdd1
       -       8       65        -        /dev/sde1

root@bragi ~ # mdadm -Q /dev/md0
/dev/md0: is an md device which is not active
```

Why is this assembling as a raid0 device and not a raid5 device, as the superblocks of the components indicate it should? Is it because /dev/sde1 is marked as spare?
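One detail worth noticing in the dumps above: /dev/sdc1 has `Events : 15084` while /dev/sdd1 and /dev/sde1 both show `Events : 15196`, so mdadm treats sdc1 as stale and won't count it toward the quorum. A minimal sketch for spotting this, run against saved `mdadm -E` output rather than live devices (`events_for` is a hypothetical helper, and the sample lines are copied from the superblock dumps above):

```shell
#!/bin/sh
# Extract the Events counter from an `mdadm -E` dump on stdin.
events_for() {
    awk '/^ *Events :/ { print $3 }'
}

# Sample lines taken from the superblock dumps in the question.
sdc1_events=$(echo "         Events : 15084" | events_for)
sdd1_events=$(echo "         Events : 15196" | events_for)

# Members whose counters disagree with the majority are considered
# out of date and get dropped at assembly time.
if [ "$sdc1_events" != "$sdd1_events" ]; then
    echo "stale member: event counts differ ($sdc1_events vs $sdd1_events)"
fi
```

In practice you would pipe `mdadm -E /dev/sdX1` into the helper for each member and compare the three values.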
**EDIT:** I tried the following (according to @wurtel's suggestion), with the following results:

```
# mdadm --create -o --assume-clean --level=5 --layout=ls --chunk=512 --raid-devices=3 /dev/md0 missing /dev/sdd1 /dev/sde1
mdadm: /dev/sdd1 appears to contain an ext2fs file system
       size=1465135936K  mtime=Sun Oct 23 13:06:11 2011
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Oct 30 00:10:47 2011
mdadm: /dev/sde1 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Oct 30 00:10:47 2011
mdadm: partition table exists on /dev/sde1 but will be lost or meaningless after creating array
Continue creating array? no
mdadm: create aborted.
#
```

So it looks like /dev/sde1 is causing the problem again. I suspect this is because it has been marked as a spare. Is there any way I can force its role back to active? In that case I suspect assembling the array might even work.
Asked by sirlark (253 rep)
Oct 22, 2014, 07:37 PM
Last activity: Apr 3, 2024, 08:26 AM