I have a QNAP server with 8 disks, plus 2 spare/backup disks that are not in use (I had the money and knew the running disks would eventually need to be replaced). The full array spans the 8 disks at 6 TB each, but I'm pretty sure less than a third of it was actually filled.
I had a rep from QNAP work on it, and he said that with a double disk failure it most likely won't recover. I have faith in the rep as far as what he is allowed to do and can do... but...
I have seen cases where people pretty much recovered from this type of disaster. My hope is resting on the fact that, according to the system, all the disks are still good.
I'm getting a "Bad magic number" / "Couldn't find valid filesystem superblock" error.
Here is the device diagnostic output from before the rep worked on it, and then from after:
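For context, that error is what the ext4 tools print when the filesystem superblock can't be read. A minimal sketch of the kind of read-only checks that lead to the output below (assuming the data volume sits directly on /dev/md1, which may not be exactly how QNAP lays it out):

    # dump the md superblock of each member partition
    mdadm --examine /dev/sd[e-l]3
    # see whether md1 is assembled and in what state
    cat /proc/mdstat
    # read-only check; this is where "Bad magic number in super-block" /
    # "Couldn't find valid filesystem superblock" normally comes from
    e2fsck -n /dev/md1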
**** BEFORE ****:
RAID metadata found!
UUID: cccd0319:89c30791:58322cfe:12ed5c64
Level: raid5
Devices: 8
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Mar 23 11:49:45 2017
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
5 /dev/sdl3 0 Active Apr 28 08:03:55 2019 3927 AAAAA.AA
6 /dev/sdk3 1 Active Apr 28 08:03:55 2019 3927 AAAAA.AA
7 /dev/sdj3 2 Active Apr 28 08:03:55 2019 3927 AAAAA.AA
-------------- 3 Missing -------------------------------------------
9 /dev/sdh3 4 Active Apr 28 08:03:55 2019 3927 AAAAA.AA
10 /dev/sdg3 5 Active Apr 23 15:03:27 2019 3515 AAAAAAAA
11 /dev/sdf3 6 Active Apr 28 08:03:55 2019 3927 AAAAA.AA
12 /dev/sde3 7 Active Apr 28 08:03:55 2019 3927 AAAAA.AA
===============================================================================
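From what I've read, the usual non-destructive first attempt for the picture above (slot 3 missing, presumably /dev/sdi3 since it's the only member not listed, and /dev/sdg3 stale at event count 3515 vs 3927) would be a forced, read-only assemble of the 7 remaining members; an 8-disk RAID5 can run degraded on 7. This is only a sketch of that approach, not what the rep actually ran:

    # stop whatever partial array exists, then force-assemble read-only with the
    # seven members still present; --force lets the slightly stale sdg3 rejoin,
    # --readonly avoids writing anything to the array yet
    mdadm --stop /dev/md1
    mdadm --assemble --force --readonly /dev/md1 \
        /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3 /dev/sdj3 /dev/sdk3 /dev/sdl3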
**** AFTER ****:
RAID metadata found!
UUID: a6860c7d:0b020f8d:1a61ec72:4684aeb7
Level: raid5
Devices: 8
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Apr 29 14:08:04 2019
Status: ONLINE (md1) [UU_UUUUU]
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
12 /dev/sde3 0 Active Apr 29 14:38:42 2019 331 AAAAAAAA
11 /dev/sdf3 1 Active Apr 29 14:38:42 2019 331 AAAAAAAA
10 /dev/sdg3 2 Rebuild Apr 29 14:38:42 2019 331 AAAAAAAA
9 /dev/sdh3 3 Active Apr 29 14:38:42 2019 331 AAAAAAAA
8 /dev/sdi3 4 Active Apr 29 14:38:42 2019 331 AAAAAAAA
7 /dev/sdj3 5 Active Apr 29 14:38:42 2019 331 AAAAAAAA
6 /dev/sdk3 6 Active Apr 29 14:38:42 2019 331 AAAAAAAA
5 /dev/sdl3 7 Active Apr 29 14:38:42 2019 331 AAAAAAAA
===============================================================================
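If I'm reading the two outputs right, the UUID and creation time changed between them, and the member-to-slot mapping changed too (e.g. /dev/sde3 moved from slot 7 to slot 0), so the AFTER array looks like it was recreated rather than just reassembled, and it is currently rebuilding onto /dev/sdg3. A rough sketch of what I would check next (device names are assumptions; depending on whether this is a storage pool or a static volume, QNAP may layer LVM on top of md1, so the actual data volume may be an LV rather than /dev/md1 itself):

    # rebuild progress and per-member state
    cat /proc/mdstat
    mdadm --detail /dev/md1
    # if this is a storage pool, check whether LVM still sees the pool/volumes
    pvs; vgs; lvs
    # if an ext filesystem should sit directly on md1, a read-only check will
    # still report "Bad magic number" when the data no longer lines up
    e2fsck -n /dev/md1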
Asked by Shadowed6
(11 rep)
Apr 29, 2019, 08:46 PM
Last activity: Apr 29, 2019, 09:45 PM