Cannot mount RAID 5 array on server, "Unable to read superblock"

0 votes
3 answers
940 views
I am a novice with servers and RAID arrays, but I am trying to restore a RAID 5 array that has recently started to have problems mounting on our server. Full disclosure: I was not the one who originally set up this array. I am working with a Dell PowerEdge 2950 server running CentOS 7, which we use to manage programs and files.

My issue began when I couldn't get the server to start up after a restart. The current setup tries to mount the array when the server boots, but the entry in /etc/fstab has to stay commented out because it causes the server to hang on startup. Once the server is up, we uncomment the line
/dev/sdc  /array   ext4   defaults    1 2
Then when I try to mount with:
mount /array
I get the following:
mount: wrong fs type, bad option, bad superblock on /dev/sdc,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
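(Side note: I have since read that adding the nofail mount option, and setting the fsck pass to 0, should keep a missing or failed device from blocking boot, so the line would not have to be commented out by hand. A sketch of what I believe that entry would look like, untested on our server:

/dev/sdc  /array   ext4   defaults,nofail    0 0

That only works around the boot hang, though, not the underlying problem.)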
Tailing dmesg shows:
[1625373.377926] sd 1:2:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK cmd_age=0s
[1625373.377929] sd 1:2:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
[1625373.377931] blk_update_request: I/O error, dev sdc, sector 0
[1625373.377933] Buffer I/O error on dev sdc, logical block 0, async page read
[1633765.092732] nf_conntrack: falling back to vmalloc.
[1633765.093244] nf_conntrack: falling back to vmalloc.
[1633782.909748] sd 1:2:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK cmd_age=0s
[1633782.909757] sd 1:2:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 02 00 00 00 02 00 00
[1633782.909761] blk_update_request: I/O error, dev sdc, sector 2
[1633782.909782] EXT4-fs (sdc): unable to read superblock
It appears that the superblock cannot be read. I am not sure where to even begin fixing this issue. I don't want to reformat the entire array and lose the data (over 10 TB) if I can avoid it.

Some other helpful information: the RAID is handled by a hardware controller rather than Linux MD software RAID (might be obvious, wasn't to me initially), so mdadm commands are not applicable; a quick way to confirm that is sketched below. It is set up with an ext4 file system. The array was also likely not properly partitioned when it was initialized 5+ years ago, i.e., the filesystem sits directly on /dev/sdc and there is no sdc1, sdc2, etc. I am not sure what the full ramifications of that might be, but it sounds like certain tools/scripts assume a device is properly partitioned, and things can get "hairy" if they are used improperly. Any help with this issue would be greatly appreciated. If there is information I am not providing that would help in diagnosing the issue, please let me know.
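For the record, my understanding is that two read-only commands would confirm both of those points (a sketch; /dev/sdc is our array device):

cat /proc/mdstat    # lists Linux software (md) RAID arrays; empty when the RAID is done in hardware
lsblk /dev/sdc      # shows any partitions under the device; here the filesystem sits on the bare disk

Update: I wanted to see if I could find which superblock was having trouble. With the array unmounted, I tried the following: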
$ dumpe2fs /dev/sdc | grep -i superblock
The output from that command was:
dumpe2fs 1.42.9 (28-Dec-2013)
dumpe2fs: Attempt to read block from filesystem resulted in short read while trying to open /dev/sdc
Couldn't find valid filesystem superblock.
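The "short read" made me suspect the device itself is unreadable, not just one bad superblock. As far as I understand, a raw read of the first block would confirm that (a sketch, read-only; I'd expect it to fail with the same I/O error seen in dmesg):

# dd if=/dev/sdc of=/dev/null bs=4096 count=1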
I also tried running
mke2fs -n /dev/sdc
to see if I could get more information on the superblocks. This was the result:
# mke2fs -n /dev/sdc
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
335577088 inodes, 2684616704 blocks
134230835 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
81928 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776, 644972544, 1934917632, 
        2560000000
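From what I have read, if the device were readable, one of these backups could be handed to e2fsck for a dry run, something like the following (a sketch; -n keeps it read-only, -b names the backup superblock, -B the 4096-byte block size reported above):

# e2fsck -n -b 32768 -B 4096 /dev/sdc

Given that even sector 0 errors out, I would expect this to fail the same way until the underlying device problem is fixed.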
Update (2022-11-15): I was able to get some assistance from our IT team, and we got into the BIOS configuration utility for our server. It might be helpful to know our RAID 5 sits behind a PERC 5/E, and there we found that the disk in slot 8 was flagged as "foreign". We imported (NOT cleared) the foreign configuration for slot 8 and the array came back up, but in a degraded state. Oddly, the disk in slot 0 was marked as "missing". I believe a disk had been inserted into the array as a global hot spare, and I think the array used it to maintain parity. To make sure we don't lose the data, I am backing up the array to an external 15 TB drive with rsync before trying to figure out how to re-establish the global hot spare.
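The copy itself is nothing fancy, roughly the following (a sketch; the destination path is illustrative):

# rsync -aH --info=progress2 /array/ /mnt/external/array-backup/

-a preserves permissions, ownership, and timestamps, and -H additionally preserves hard links, which matters for a faithful restore.

Fingers crossed the copy process goes smoothly. Thanks everyone for all the help!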
Asked by Schmidt (1 rep)
Nov 10, 2022, 04:25 PM
Last activity: Nov 15, 2022, 04:10 PM