Recovery of data on raid5+lvm reiserfs partition, after raid5 problems
3 votes, 3 answers, 1051 views
I've got a server with 3 sata hard drives. Each has 2 partitions: a small one that is part of /dev/md0, a raid1 array (/boot), and the rest, which is part of a raid5 array (/dev/md1) that serves as an lvm physical volume. Inside it are 3 (IIRC) logical volumes. One of these is a reiserfs 3.6 filesystem holding about 100 GB of data.
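For reference, on a healthy system I would expect something like the following to show that layout (a rough sketch from memory; the exact device and volume group names are assumptions):
cat /proc/mdstat   # should list md0 (raid1, /boot) and md1 (raid5)
pvs                # the physical volume should be /dev/md1
lvs                # the ~3 logical volumes, one of them the reiserfs one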
Yesterday this server crashed. At power up, SMART told me that one of the drives was dead, and it was indeed making very bad noises. So I removed the failed drive and tried to restart the system on the 2 remaining disks, which failed.
I booted a live CD and tried to restart the array. Unfortunately, mdadm refused to do so, because it thought one of the 2 remaining disks had failed as well.
So, following advice found at https://unix.stackexchange.com/questions/8861/how-to-recover-a-crashed-linux-md-raid5-array that looked like it could apply to my situation, I did something that was probably just plain stupid: I ran
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 /dev/sd[ab]2 missing
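(In hindsight, I suppose the less destructive thing to try first would have been a forced assemble of the two surviving members, something like the line below; whether that would have worked here is pure speculation on my part.)
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2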
Now, I can actually start this array, but the lvm tools (vgscan, vgdisplay, pvck) cannot find anything related to lvm on the array, and I'm completely unable to get to my data. Did I just wipe all the lvm metadata?
My feeling is that the actual data is still there, undamaged (apart from the lvm metadata). Is there a chance to get the data back? How?
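I assume a quick way to check whether any LVM signature survives on the re-created array would be something like this (a rough sketch; the LABELONE label magic in the first sectors and the "logical_volumes" section in the text metadata are my understanding of the default LVM2 on-disk format):
dd if=/dev/md1 bs=512 count=8 2>/dev/null | strings | grep LABELONE
dd if=/dev/md1 bs=1M count=1 2>/dev/null | strings | grep -A5 logical_volumes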
**UPDATE:**
Following advice from psusi (below), I tried each of the following ways of re-creating the array:
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 /dev/sda2 /dev/sdb2 missing
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 /dev/sdb2 /dev/sda2 missing
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 /dev/sda2 missing /dev/sdb2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 /dev/sdb2 missing /dev/sda2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 missing /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c64 missing /dev/sdb2 /dev/sda2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c512 /dev/sda2 /dev/sdb2 missing
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c512 /dev/sdb2 /dev/sda2 missing
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c512 /dev/sda2 missing /dev/sdb2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c512 /dev/sdb2 missing /dev/sda2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c512 missing /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --assume-clean -l5 -n3 -c512 missing /dev/sdb2 /dev/sda2
That covers basically all possible device orders, with both -c64 and -c512. After each attempt, I ran vgscan; none of them found anything. Maybe I should be using some other tool than vgscan?
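For completeness, these are the checks I assume could be run after each --create attempt instead of relying only on vgscan (a sketch; I'm not sure which of them is most telling):
pvck /dev/md1                                # low-level check for an LVM label and metadata area
file -s /dev/md1                             # may report an LVM2 PV (or a filesystem) signature
hexdump -C -n 4096 /dev/md1 | grep -i label  # crude search for LABELONE in the first sectors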
**UPDATE 2:**
I tried reconnecting the failed hard drive and, miraculously, it seems to more or less work. At least, enough to examine it:
root@debian:~# mdadm --examine /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 1f5462ab:6945560d:019b01a5:914dd464
  Creation Time : Fri Oct 17 12:40:40 2008
     Raid Level : raid5
  Used Dev Size : 160015360 (152.60 GiB 163.86 GB)
     Array Size : 320030720 (305.21 GiB 327.71 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Tue Apr 12 08:15:03 2011
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 64d514fb - correct
         Events : 137

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
   2     2       8       34        2      active sync   /dev/sdc2
So, is there a way to copy this superblock to the other 2 devices, so that I can start the array "properly"?
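Failing that, I assume the fallback would be to re-create the array one more time with exactly the parameters this superblock reports (0.90 metadata, raid5 left-symmetric, 64K chunks, device order sda2/sdb2/sdc2) together with --assume-clean, roughly as sketched below, but I'd rather have confirmation before running anything else destructive:
mdadm --create /dev/md1 --assume-clean --metadata=0.90 --level=5 --raid-devices=3 --chunk=64 --layout=left-symmetric /dev/sda2 /dev/sdb2 /dev/sdc2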
Asked by Bill
(31 rep)
Mar 22, 2012, 07:29 AM
Last activity: May 21, 2017, 09:57 PM