
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
1 answer
2212 views
md: kicking non-fresh sdg from array! md/raid:md0: and then not enough operational devices (3/7 failed)
Today I ran into a disaster... I have a RAID 6 with 7 HDDs, and yesterday one disk failed. After replacing the disk and doing a rebuild overnight, I found out that a second HDD had dropped out of the RAID... So today I started backing up my files to external drives, but then the copying stopped; when I checked why, I saw in Webmin's RAID page that sdg was "down". I shut down the server, checked the hardware, and found that the backplane the HDDs connect to had come loose... After repairing it, all drives are back, but my RAID 6 won't start anymore :-/

dmesg shows me:

md: kicking non-fresh sdg from array!
md: kicking non-fresh sdf from array!
md: kicking non-fresh sde from array!
md/raid:md0: not enough operational devices (3/7 failed)

... and after many md0: ADD_NEW_DISK not supported messages I can read this:

EXT4-fs (md0): unable to read superblock

With sudo mdadm --examine I checked sdg, sdf and sde; e and f show "State : clean", while sdg, which was "down" before the repair, shows "active". So 6 of 7 devices show "clean", all except sdg. Here is the output for all devices:
Disk sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=7344 sectors
          State : clean
    Device UUID : 9180f101:1dacdd9e:4adae9c4:fbeb2552

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar 26 18:13:45 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 38019182 - correct
         Events : 256508

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=7344 sectors
          State : clean
    Device UUID : 889c6877:5ee5c647:eebd209c:d9c6abcb

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar 26 18:13:45 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : a71ea53d - correct
         Events : 256508

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdd
/dev/sdd:
   MBR Magic : aa55
Partition :   3907026944 sectors at         2048 (type fd)
Disk sde
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=7344 sectors
          State : clean
    Device UUID : 34198042:3d4c802b:36727b02:fdf65808

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar 26 18:05:00 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : f8fb6b18 - correct
         Events : 256494

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAA.. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdf
/dev/sdf:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=7344 sectors
          State : clean
    Device UUID : b2e8d640:1f21336f:88d823fe:66ef7be7

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Mar 23 14:46:56 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 15cd05bb - correct
         Events : 238681

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAAA. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdg
/dev/sdg:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=7344 sectors
          State : active
    Device UUID : 2bc06e22:49aa73e2:3cf7eb79:55df1180

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar 26 17:57:06 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 7f0ddb2a - correct
         Events : 256372

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 5
   Array State : AAAAAA. ('A' == active, '.' == missing, 'R' == replacing)
Disk sdh
/dev/sdh:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906770096 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=7344 sectors
          State : clean
    Device UUID : 7af89a18:52ef08ae:dec5ad7b:75626355

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar 26 18:13:45 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 17d7b107 - correct
         Events : 256508

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
I've tried to start the RAID with mdadm --run /dev/md0 and get:

mdadm: failed to start array /dev/md0: Input/output error

But after I started it with this, Webmin shows me:

/dev/md0 active, FAILED, Not Started RAID6 (Dual Distributed Parity) 7.27 TiB

That's 7.27 of 9 TB. Any ideas how to get my RAID back to work again without data loss? I've read that I could add devices back to the RAID, but I'm unsure and wanted to ask first. Any help would be appreciated!

**UPDATE**: I forgot that one of the devices is /dev/sdd1 and not /dev/sdd! Here is the examine output for it:

~~~
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
           Name : N5550:0  (local to host N5550)
  Creation Time : Fri Oct 29 14:43:58 2021
     Raid Level : raid6
   Raid Devices : 7

 Avail Dev Size : 3906767872 (1862.89 GiB 2000.27 GB)
     Array Size : 9766906880 (9314.45 GiB 10001.31 GB)
  Used Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=5120 sectors
          State : clean
    Device UUID : d8df004e:44ee4060:ba4d2c22:e7e6bdcb

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Mar 26 18:13:45 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 1c4e98a4 - correct
         Events : 256508

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA.A.. ('A' == active, '.' == missing, 'R' == replacing)
~~~

And here is mdadm -D /dev/md0:

~~~
/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 7
    Persistence : Superblock is persistent

          State : inactive
Working Devices : 7

           Name : N5550:0  (local to host N5550)
           UUID : e866cf54:90d5c74e:fe00b6e7:d25c82f4
         Events : 256494

    Number   Major   Minor   RaidDevice

       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
       -       8      112        -        /dev/sdh
       -       8       80        -        /dev/sdf
       -       8       16        -        /dev/sdb
       -       8       49        -        /dev/sdd1
       -       8       96        -        /dev/sdg
~~~
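For what it's worth, the event counters in the --examine output above already tell most of the story: sdb, sdc, sdd1 and sdh are at 256508, sde is only 14 events behind, sdg is ~136 behind, and sdf is hopelessly stale. A rough sketch of that "fresh enough" reasoning; note the 100-event threshold here is my own illustrative assumption, not mdadm's actual rule (a forced assemble, mdadm --assemble --force, applies its own logic):

```python
# Events counts transcribed from the mdadm --examine output above.
events = {
    "sdb": 256508, "sdc": 256508, "sdd1": 256508, "sdh": 256508,
    "sde": 256494, "sdg": 256372, "sdf": 238681,
}

def usable_members(events, raid_devices=7, parity=2, max_gap=100):
    """Treat any member within max_gap events of the newest superblock
    as a candidate for assembly; RAID 6 needs at least
    raid_devices - parity of them to start."""
    newest = max(events.values())
    fresh = sorted(d for d, e in events.items() if newest - e <= max_gap)
    return fresh, len(fresh) >= raid_devices - parity

fresh, enough = usable_members(events)
# fresh -> ['sdb', 'sdc', 'sdd1', 'sde', 'sdh'];  enough -> True
```

With 5 of 7 members close together, a degraded RAID 6 start is at least numerically possible; the small event gap on sde is the kind of case a forced assemble is meant for.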
LeChatNoir (21 rep)
Mar 26, 2022, 07:29 PM • Last activity: Jun 2, 2025, 04:08 PM
1 vote
1 answer
49 views
How to calculate the checksum of an ext4 structure (e.g. superblock or inode)
I have an image file of an ext4 file system (filesystem.img) and am extracting sections of the filesystem. Is there a way to manually calculate the checksum of a section given only a byte dump of it? For the superblock, according to the documentation, the checksum is calculated with the CRC32C algorithm over the superblock, ignoring the checksum field. I have tried to do it in Python with checksum = crc32c.crc32c(superblock) but with no success yet. Has anybody managed to do this, or know what I have to do? Any help would be appreciated. Thanks
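Two details are easy to miss here (both are my reading of the ext4 layout documentation and kernel source, so treat them as assumptions): the superblock checksum only covers the bytes before the 4-byte s_checksum field at offset 0x3FC, and the kernel's crc32c omits the final XOR that the standard CRC-32C (and the Python crc32c package) applies, so the package's result must be bit-inverted. A self-contained sketch:

```python
def crc32c_raw(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    """Bitwise CRC-32C (Castagnoli), kernel flavour: seeded with ~0
    but *without* the final XOR that the standard CRC-32C applies."""
    poly = 0x82F63B78  # reflected Castagnoli polynomial
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

def ext4_superblock_csum(superblock: bytes) -> int:
    """Checksum over the superblock up to (not including) the 4-byte
    s_checksum field at offset 0x3FC."""
    return crc32c_raw(superblock[:0x3FC])

# Sanity check against the standard CRC-32C test vector: the standard
# value of b"123456789" is 0xE3069283; the kernel flavour is its inverse.
assert crc32c_raw(b"123456789") ^ 0xFFFFFFFF == 0xE3069283
```

If you already use the crc32c package, the equivalent would be crc32c.crc32c(superblock[:0x3FC]) ^ 0xFFFFFFFF, since the package applies the standard final inversion and the kernel does not.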
user722585 (11 rep)
Mar 18, 2025, 01:32 PM • Last activity: Mar 19, 2025, 04:08 AM
0 votes
0 answers
30 views
How many Group Descriptors are there in the GDT of Ext4?
GDT = Group Descriptor Table; GD = Group Descriptor. I'm trying to figure out how many bytes I need to read after the superblock to extract the entire GDT, because it's needed for a program I'm making. As we know, each GD is 64 B, and each block group is supposed to have a GD; but since there are special flags such as flex_bg, sparse_super, or meta_bg, there may be fewer GDs than expected. In that case, how am I supposed to know the length of the GDT?
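A sketch of the arithmetic, under my understanding of the layout: flex_bg and sparse_super do not shrink the on-disk GDT (every block group still owns a descriptor; sparse_super only reduces the number of backup copies), and only meta_bg relocates descriptors into each meta-group:

```python
import math

def gdt_length(blocks_count: int, blocks_per_group: int,
               desc_size: int, block_size: int):
    """Return (bytes, blocks) occupied by the group descriptor table.
    desc_size is s_desc_size from the superblock: 32 for classic
    layouts, 64 when the 64bit feature is enabled."""
    groups = math.ceil(blocks_count / blocks_per_group)
    gdt_bytes = groups * desc_size
    return gdt_bytes, math.ceil(gdt_bytes / block_size)

# e.g. a fs with 26214400 blocks of 4 KiB, 32768 blocks/group, and
# 64-byte descriptors -> 800 descriptors, 51200 bytes, 13 blocks
```

So without meta_bg, the number of descriptors is simply ceil(s_blocks_count / s_blocks_per_group), rounded up to whole blocks when reading.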
james (121 rep)
Feb 9, 2025, 11:25 AM • Last activity: Feb 9, 2025, 01:23 PM
4 votes
1 answer
1963 views
Relation between "Block Size" and "Upper Limits" in ext2
A similar-looking question asks why the upper file size limit is 2 TB in ext2. I am trying to understand the documentation on ext2 but find this hard. Please correct me if I'm wrong:

- Blocks can be 1-4 KB in size
- The available number of blocks is based on a 32-bit value: 2^32 = 4,294,967,296 blocks

In the documentation I found: 2^31 - 1 = 2,147,483,647 addressable blocks. I am missing 2,147,483,649 blocks. My **guess** is that these are "reserved" for the *superblock*, the *block group descriptors*, their backups, etc. (correct?)

Question: **How exactly are the file size limits calculated in ext2?** And, just in advance: can this be translated to ext3 and ext4 (or other file systems), and if so, how far? I'm still far from them and not too lazy to look this up myself; I'm just confused about where the basics come from.
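For the file size limit specifically, my understanding is that two ceilings interact: the blocks reachable through an inode's 12 direct pointers plus its single-, double- and triple-indirect trees (each pointer is 4 bytes, so a block holds block_size/4 of them), and the 32-bit i_blocks field, which counts 512-byte sectors. A sketch that ignores the small i_blocks overhead for the indirect blocks themselves, so treat it as an upper-bound approximation:

```python
def ext2_max_file_size(block_size: int) -> int:
    """Smaller of: (a) bytes addressable via 12 direct pointers plus
    one single-, double- and triple-indirect tree, and (b) the 32-bit
    i_blocks limit of 2**32 512-byte sectors (2 TiB)."""
    n = block_size // 4                      # pointers per indirect block
    addressable = (12 + n + n**2 + n**3) * block_size
    i_blocks_limit = 2**32 * 512             # 2 TiB
    return min(addressable, i_blocks_limit)

# 1 KiB blocks -> about 16 GiB (pointer-limited);
# 4 KiB blocks -> pointer limit ~4 TiB, but capped by i_blocks at 2 TiB
```

This reproduces the commonly cited table: ~16 GB for 1 KiB blocks, ~256 GB for 2 KiB, and 2 TB for 4 KiB, where the 4 KiB case is the one capped by i_blocks rather than by the pointer tree.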
erch (5200 rep)
Dec 14, 2013, 12:34 AM • Last activity: Jan 4, 2025, 05:02 PM
1 vote
1 answer
145 views
How does the combination of SPARSE_SUPER and FLEX_BG features impact the location of backup super blocks in EXT4?
Edit: I've posted an answer below, but not accepted it yet, in the hope that someone will kindly provide a generic calculation for determining whether block group *n* will have a backup super block and descriptors at the start.

I'm trying to understand how EXT4 file systems are structured (i.e. the layout) on a block device. In (manually) parsing my EXT4 file system (which has the sparse_super and flex_bg feature flags enabled), I'm finding that the actual layout (both from my parsing and when summarised with tools such as dumpe2fs) doesn't align with what I expect from my understanding of how those features work and from the kernel EXT4 documentation.

It is my understanding that when the flex_bg feature is enabled, several block groups (16 in my case) are treated as a single larger logical block group (flex group). As such, the first block group in the flex group stores much of the metadata for the other block groups in the flex group (such as the block and inode bitmaps and the inode table); the remaining block groups can primarily be used for data (allowing more contiguous blocks for larger files, etc.). However, by default, all block groups still contain backup copies of the super block and group descriptor table (and reserved descriptor blocks).

It is my understanding that when the sparse_super feature flag is enabled, backup copies of the super block and group descriptors (and any reserved descriptor blocks) are **not** stored in every block group (as would be the default) but, as per the EXT4 documentation,

> are kept only in the groups whose group number is either 0 or a power of 3, 5, or 7

So the expectation is that these would be in block groups 0, 1, 8, 27, 32, 64, 128, 243, etc. Or, if the intention was *multiple of 3, 5, or 7*, the expectation would be backups in 0, 3, 5, 6, 7, 9, 10, 12, 14, 15, etc.

However, in my case, dumpe2fs shows that backup copies of the super block are contained in block groups:

0, 1, 3, 5, 7, 9, 25, 27, 49, 81, 125, 243

These values align with the values I get when manually parsing the block device, so I know there's no bug in my code in this regard. As can be seen, some block groups align with being *powers of 3, 5, and 7*. All the block groups are also *multiples of* (1), 3, 5 and 7, but not **all** such multiples are present. So what is the actual logic here, as it doesn't seem to completely align with the kernel EXT4 documentation? How does the file system know whether a given block group will have a backup super block (et al.) or not?

Any advice, thoughts or knowledge appreciated.
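The observed list is consistent with the documentation once "1" is read as an exponent-zero power: a group holds backups iff it is 0, 1, or an *exact power* (not multiple) of 3, 5 or 7. A sketch of that predicate, mirroring my reading of ext2fs_bg_has_super()/test_root() in e2fsprogs:

```python
def sparse_super_has_backup(group: int) -> bool:
    """True if this block group carries a backup superblock + GDT under
    sparse_super: group 0, group 1, or an exact power of 3, 5 or 7."""
    if group <= 1:
        return True          # 1 == 3**0 == 5**0 == 7**0
    for p in (3, 5, 7):
        n = p
        while n < group:     # climb the powers of p until >= group
            n *= p
        if n == group:
            return True
    return False

# Reproduces the list observed in the question:
# [g for g in range(256) if sparse_super_has_backup(g)]
# -> [0, 1, 3, 5, 7, 9, 25, 27, 49, 81, 125, 243]
```

So 9 = 3^2, 25 = 5^2, 49 = 7^2, 81 = 3^4, 125 = 5^3 and 243 = 3^5 all qualify, while plain multiples such as 6, 10 or 15 do not.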
genericuser99 (119 rep)
Oct 15, 2024, 12:48 PM • Last activity: Oct 16, 2024, 07:26 PM
0 votes
1 answer
1054 views
XFS superblock not found
I had an old drive beginning to fail and my backups were no good apparently.
I got a new drive larger than the failed one and was able to ddrescue it over, with bad/missing superblocks of course.
Through some fiddling around I was able to mount the new drive, but there were errors of course displayed on a directory listing. I began to copy what I could, and I'm assuming subdirectories had issues, because it errored mid-copy on the new drive and the partition stopped responding.
I cannot get xfs_repair to salvage the new drive; it says no secondary superblocks were found. What are my options here? I suppose I could ddrescue off the old drive again, which unfortunately took over a day last time, but even then, how do I proceed? Running testdisk/photorec would be a nightmare.
MaKR (181 rep)
Mar 2, 2023, 10:40 PM • Last activity: Oct 2, 2024, 08:25 AM
4 votes
0 answers
112 views
Accidentally ran mkswap on ext4 fs
I accidentally ran mkswap on an unmounted 2 TB data partition of a hard drive (not an SSD) while I was in a live USB session. I did not run swapon, as I immediately realized what I had just done. I wonder if there is any way to fix this, maybe through the use of e2fsck. I ran mke2fs -n on the affected partition, without any other arguments, and got a bunch of backup superblocks. However, when I run dumpe2fs -o superblock=N I get confusing information, like the last write time being August 2023 (but I had been using this filesystem until Sept. 2, 2024). Perhaps, because I haven't used the exact same arguments as when the FS was created, the backup superblocks I found are from an older filesystem, or I am misunderstanding something about the dump. If there are other superblocks, how would I find them? Or could I use these superblocks anyway, to try and run e2fsck on the file system?
Samuel (342 rep)
Sep 4, 2024, 01:19 AM • Last activity: Sep 4, 2024, 01:17 PM
0 votes
0 answers
47 views
BTRFS not mounting, only one good superblock remaining
I've tried every btrfs tool I can find documentation for, trying to mount and recover a KVM/QEMU virtual drive. Does anyone know whether it's possible to mount a drive with only one good superblock?

$ btrfs rescue super-recover -v /dev/vda2
ERROR: superblock checksum mismatch
ERROR: superblock checksum mismatch
ERROR: superblock checksum mismatch
All Devices:
	Device: id = 1, name = /dev/vda2

ERROR: superblock checksum mismatch
ERROR: superblock checksum mismatch
Before Recovering:

		superblock bytenr = 274877906944

		superblock bytenr = 65536

		device name = /dev/vda2
		superblock bytenr = 67108864


Make sure this is a btrfs disk otherwise the tool will destroy other fs, Are you sure? [y/N]: y
parent transid verify failed on 971849728 wanted 62052 found 61780
parent transid verify failed on 971849728 wanted 62052 found 61780
ERROR: could not setup extent tree
Failed to recover bad superblocks
Maybe it's possible to recover files from a directory without mounting the filesystem? This is non-critical data, so if it's dead, all good. I just wanted to see if I can save it, for when it counts in the future.
prodigerati (101 rep)
Aug 17, 2024, 08:05 PM
0 votes
1 answer
191 views
How are EXT4 metadata checksums within group descriptors calculated?
I'm parsing raw EXT4-formatted block devices, just using Python, primarily as a learning exercise, but am having trouble manually generating the expected group descriptor checksums; there appears to be some conflicting, missing or (seemingly) incorrect information in the resources I find online.

I am able to correctly calculate the expected block bitmap and inode bitmap checksums using

0xffffffff - crc32c(s_uuid + bitmap_block)

as opposed to

crc32c(s_uuid + bitmap_block)

as most of the documentation suggests (though I don't understand why inverting it yields the expected checksum value). However, I am unable to calculate the expected group descriptor checksum. The documentation suggests this should be:

crc32c(s_uuid + bg_num + group_desc) & 0xffff

I have tried calculating it as the documentation suggests, inverting as before, with and without the block group number, using the full block as the group descriptor, using the 64-byte descriptor, and using only 32 bytes as the descriptor. And I have tried all of these both zeroing out the 16-bit checksum field and skipping over that field in the calculation. Nothing I try yields the expected checksum value.

For reference, both METADATA_CSUM and FLEX_BG feature flags are set, and maybe the s_uuid part of the calculation that I am using is incorrect as a result.

Can anyone provide more information on how to correctly calculate the checksum within EXT4 group descriptors? Can anyone also advise why the bitmap checksums only yield the expected (correct) values when subtracted from 0xffffffff, despite no documentation I've found suggesting that this is necessary?
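From my reading of ext4_group_desc_csum() in the kernel's fs/ext4/super.c (so treat this as an interpretation, not gospel), two things resolve both puzzles: (1) the seed is not the raw UUID bytes but a kernel-flavour crc32c *of* s_uuid, where "kernel flavour" means seeded with ~0 and without the standard final XOR; that missing final XOR is also why subtracting the standard CRC32C from 0xffffffff (i.e. bit-inverting it) makes the bitmap checksums match. And (2) the descriptor is checksummed with its 16-bit bg_checksum field (offset 0x1E) replaced by zeroes, then the 32-bit result is truncated to 16 bits. A sketch under those assumptions:

```python
def crc32c_raw(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    """CRC-32C (Castagnoli), kernel flavour: seeded ~0, no final XOR."""
    poly = 0x82F63B78
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

def group_desc_csum(s_uuid: bytes, group: int, desc: bytes) -> int:
    """METADATA_CSUM group-descriptor checksum as I read
    ext4_group_desc_csum(): chain crc32c over (uuid-derived seed,
    little-endian group number, descriptor with bg_checksum zeroed),
    then truncate to 16 bits."""
    seed = crc32c_raw(s_uuid)                      # sbi->s_csum_seed
    crc = crc32c_raw(group.to_bytes(4, "little"), seed)
    crc = crc32c_raw(desc[:0x1E] + b"\x00\x00" + desc[0x20:], crc)
    return crc & 0xFFFF
```

Here desc is the full s_desc_size descriptor (64 bytes with the 64bit feature); the chaining matters, since crc32c(seed, a + b) equals crc32c(crc32c(seed, a), b).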
genericuser99 (119 rep)
Jul 18, 2024, 10:40 AM • Last activity: Jul 23, 2024, 10:54 PM
0 votes
0 answers
43 views
Disk error after partitioning a dual boot and running fsck -b
After I partitioned the drive containing Linux and Windows 11 with a mini partition tool, Windows broke, so I hopped onto Linux to try to fix it. But the Linux partition had sustained some damage too: it wouldn't mount, so I ran fsck -b to try to fix it with backup superblock data, because plain fsck would not run due to a superblock error. After I ran fsck -b, it removed all my files and created a completely empty lost+found folder. How do I fix this? I forgot the original superblock byte offset.
TheGuy (11 rep)
Jun 28, 2024, 11:30 AM
1 vote
1 answer
1029 views
Recover old HOME directory data from USB external HDD error: mount: unknown filesystem type 'LVM2_member', Couldn't find valid filesystem superblock
**Reason:** /boot filesystem corrupted: both bootable hard drives' cables were accidentally disconnected from the motherboard while the machine was powered on, which crashed the HDDs' /boot filesystem. CentOS 7. **The two bootable 320 GB hard drives were mirrored in their entirety with LVM2** (/dev/mapper/cl-root, /dev/mapper/cl-home and swap) during installation. I booted from CD-ROM and tried to repair the corrupted root partition superblock, without success. Now I have the drive attached to another CentOS 7 server's USB port via a SATA->USB adapter, showing as follows:
[root@localhost ~]# fdisk -l
.....  (**ignore the current server internal drivers info here**)
--- below is the external USB SATA 320GB corrupted /boot drive -----
......
Disk /dev/sdg: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33553920 bytes
Disk label type: dos
Disk identifier: 0xe9e67578
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1   *        2048     2099199     1048576   83  Linux
/dev/sdg2       606116385   625137344     9510480    7  HPFS/NTFS/exFAT
/dev/sdg3         2099200   606115839   302008320   8e  Linux LVM
My main purpose is to recover the HOME directory data from the crashed boot disk. I am looking for your help with either of the following approaches, whichever can achieve this goal:

1. Copy the data off the crashed hard drive.
2. Repair the superblock so it boots up as before (to keep this question as short as possible: if you consider this easier than 1, please let me know, and I will paste the longer troubleshooting output here).

Below are the details of what I did for 1):
[root@localhost ~]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   2.7T  0 disk 
......(ignore the internal disk, below is the USB HDD)....

sdg           8:96   0 298.1G  0 disk 
├─sdg1        8:97   0     1G  0 part 
├─sdg2        8:98   0   9.1G  0 part
└─sdg3        8:99   0   288G  0 part 
sr0          11:0    1   4.1G  0 rom
I mounted all three slices onto mount points, but /dev/sdg3 failed with the following error <-- this is my key problem: how can I mount this slice to see the HOME data? (The data is obviously not on sdg1 or sdg2.)
[root@localhost ~]# mount /dev/sdg1 /media/1 
[root@localhost ~]# mount /dev/sdg2 /media/2
[root@localhost ~]# mount /dev/sdg3 /media/3
mount: unknown filesystem type 'LVM2_member'

[root@localhost ~]# df -k
Filesystem           1K-blocks      Used  Available Use% Mounted on
......
/dev/sdg2              9510476   8211864    1298612  87% /media/2
/dev/sdg1              1038336    300072     738264  29% /media/1
I tried to use testdisk to dump data from sdg3 to an output file, image.dd. However, in the end I found that this image.dd is still wrapped as an LVM2_member:
[root@localhost home-gwu-old]# ls -l
total 302008328
-rw-r--r--. 1 root root 309256519680 Jan  5 19:34 image.dd

[root@localhost]# mount -o loop image.dd /media/3
mount: unknown filesystem type 'LVM2_member
Any help would be greatly appreciated! Gordon
Gordon (11 rep)
Jul 2, 2020, 01:30 AM • Last activity: Jun 12, 2024, 09:51 PM
0 votes
2 answers
2772 views
How to erase/wipe all ext4 metadata, not just the filesystem signature 53 ef?
I want to zero all bits of a partition where an ext4 filesystem structure existed (superblocks, metadata, journal, etc.), not just the filesystem signature deleted by wipefs. I can quickly zero out file content using the shred utility, but not the metadata. On block devices supporting TRIM/DISCARD this is easy, as I can quickly run blkdiscard on the entire partition. However, on large rotational HDDs where TRIM/DISCARD is not available, zeroing every bit either becomes a very time-consuming process or implies destroying/regenerating the disk encryption key (on self-encrypting drives), which means losing the entire drive, not just the ext4 partition. Other than reading the mke2fs code and creating an imaginary wipe2fs tool based on it, is there already another way to quickly wipe all ext4 superblocks/metadata?
Costin Gușă (629 rep)
Feb 24, 2021, 07:40 PM • Last activity: Sep 9, 2023, 06:13 PM
0 votes
0 answers
203 views
Corrupt EXT4 filesystem, backup superblock no dice
I have been a novice Linux user for some time. I need some help recovering a corrupted EXT4 filesystem; ideally, I would like to recover the filesystem intact. The drive in question is 'sda'; the partition structure is shown in the screenshots below. I have tried to mount partition 1 with a backup superblock, but no dice, and I have no idea how to proceed from here. I am only attempting to fix 'sda1' for now, as I will replicate whatever works on the other partitions and drives. I also have another 4 TB drive with the same issue. Thank you for your time.

**Context**: I was using a Live USB to read/write to 'sda'. I had been using it this way for a while when, one day, the drive was no longer accessible. I tried accessing it from a proper Linux installation on an HDD, and now here we are. The drive 'sdb' is the current Linux installation drive.

(Screenshots of Disks and GParted omitted.)

[Update] I remembered something: I had booted the disk from a test live CD and changed the permissions of all folders to rw across the three permission groups. Later, the drives were mounted (or tried to be mounted) on the original Linux installation, with its own set of user & group rw permissions. I think the lack of the permissions it expected to see on the files & folders corrupted the superblocks.
userk (1 rep)
Jun 27, 2023, 04:17 AM • Last activity: Jul 2, 2023, 05:45 AM
1 vote
1 answer
730 views
Probabilistic (~50%) error on boot regarding "wrong fs type, bad option, bad superblock...missing codepage or helper program, or other error"
I've found a ton of posts on the internet about this error: wrong fs type, bad option, bad superblock on /dev/xxx, missing codepage or helper program, or other error. Yet I have never found a case where the error only *sometimes* appears at boot. When I boot my Linux machine I sometimes get the mentioned error and sometimes it just works fine. It's around a 50/50 chance, and I have not been able to see any pattern. If I get the error, I just reboot and try again; I've been doing this for the past half a year. Three boots is usually the maximum number of times I have to try before I get to my desktop. If I mount the drive in the emergency shell, no error pops up whatsoever and I can read from and write to the drive without any problem. I would like to know whether this is a fixable problem or whether I should send back the NVMe drive (it is still under warranty).
Kernel: 6.2.8-alderlake-xanmod1-1 (Xanmod + GCC optimizations)  
OS: ArchLinux  
Drive: Kingston KC3000 PCIe 4.0, 1TB, bought separately from the laptop  
Laptop: Rog Zephyrus m16
EDIT: I have two drives, with a Windows/Linux dual boot. Linux tries to boot /dev/nvme1n1p1, but I just found out that from the emergency shell I can only mount /dev/nvme0n1p1, which is actually the Linux root that is supposed to be booted. Whenever I do get to the desktop, fdisk -l shows the Linux drive correctly labeled as nvme1n1p1, so I suppose the system can only boot when the Linux drive is enumerated as "nvme1" and the Windows drive as "nvme0". I've manually specified the kernel command line with EFISTUB as: root=/dev/nvme1n1p1 resume=/dev/nvme1n1p2 rw quiet modprobe.blacklist=nouveau ibt=off initrd=\initramfs-linux-xanmod.img
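If the underlying cause is NVMe enumeration order changing between boots (nvme0/nvme1 swapping, as the edit suggests), pointing the kernel at a stable identifier instead of a device node sidesteps the race entirely. A sketch of the same EFISTUB command line using PARTUUID; the placeholder UUIDs (obtainable with blkid) are stand-ins, not real values:

```
root=PARTUUID=<linux-root-partuuid> resume=PARTUUID=<swap-partuuid> rw quiet modprobe.blacklist=nouveau ibt=off initrd=\initramfs-linux-xanmod.img
```

PARTUUID comes from the GPT partition entry rather than the device name, so it resolves to the same partition regardless of which controller enumerates first.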
Dennis Orlando (81 rep)
Jun 28, 2023, 09:19 AM • Last activity: Jun 28, 2023, 09:44 AM
0 votes
2 answers
391 views
What is the difference between blocks and fragments in a group?
The output of a mke2fs command is as follows:

root@localhost:~# mke2fs /dev/xvdf
mke2fs 1.42.9 (4-Feb-2023)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

What I don't understand here is this line:

> 32768 blocks per group, 32768 fragments per group

I understand that the sectors (generally 512 B) on a hard disk are first grouped into blocks, and that every block group contains 32768 blocks. What I don't understand are the fragments. What are they? What do they signify? And is it possible to alter the fragment settings of my filesystem?
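A short editorial sketch of where these numbers come from (assuming the 4096-byte block size shown above): on Linux, ext2/3/4 never implemented BSD-style fragments, so the fragment size always equals the block size and the fragment counters simply mirror the block counters; they cannot be tuned independently. The 32768 figure itself falls out of the block bitmap:

```python
# Sketch, assuming the 4096-byte block size from the mke2fs output above.
block_size = 4096

# Each group's block bitmap must fit in exactly one block, and one byte
# of bitmap maps 8 blocks, so a group holds at most 8 * block_size blocks.
blocks_per_group = 8 * block_size
print(blocks_per_group)  # 32768, matching "32768 blocks per group"

# Linux ext2/3/4 never implemented fragments: the fragment size always
# equals the block size, so the per-group fragment count just mirrors
# the per-group block count.
fragments_per_group = blocks_per_group
print(fragments_per_group)  # 32768, matching "32768 fragments per group"
```

This is also why `mke2fs` always prints identical "Block size" and "Fragment size" lines.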
Himanshuman (321 rep)
Jun 18, 2023, 06:49 PM • Last activity: Jun 19, 2023, 06:04 AM
64 votes
5 answers
279185 views
Recovering ext4 superblocks
Recently, my external hard drive enclosure failed (the hard drive itself powers up in another enclosure). However, as a result, it appears its ext4 file system is corrupt. The drive has a single partition and uses a GPT partition table (with the label ears). fdisk -l /dev/sdb shows:

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  1953525167   976762583+  ee  GPT

testdisk shows the partition is intact:

 1 P MS Data                  2049 1953524952 1953522904 [ears]

... but the partition fails to mount:

$ sudo mount /dev/sdb1 a
mount: you must specify the filesystem type
$ sudo mount -t ext4 /dev/sdb1 a
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,

fsck reports an invalid superblock:

$ sudo fsck.ext4 /dev/sdb1
e2fsck 1.42 (29-Nov-2011)
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb1

and e2fsck reports a similar error:

$ sudo e2fsck /dev/sdb1
Password:
e2fsck 1.42 (29-Nov-2011)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1

dumpe2fs also:

$ sudo dumpe2fs /dev/sdb1
dumpe2fs 1.42 (29-Nov-2011)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb1

---

mke2fs -n (note the -n) returns the superblock locations:

$ sudo mke2fs -n /dev/sdb1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61054976 inodes, 244190363 blocks
12209518 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848
but trying e2fsck -b [block] for each block fails:

$ sudo e2fsck -b 71663616 /dev/sdb1
e2fsck 1.42 (29-Nov-2011)
e2fsck: Invalid argument while trying to open /dev/sdb1

However, as I understand it, these are where the superblocks were when the filesystem was created, which does not necessarily mean they are still intact.

---

I've also run a testdisk deep search, if anyone can decipher the log. It mentions many entries like:

recover_EXT2: s_block_group_nr=1/7452, s_mnt_count=6/20, s_blocks_per_group=32768, s_inodes_per_group=8192
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 244190363
recover_EXT2: part_size 1953522904
recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed

Running e2fsck with those values gives:

e2fsck: Bad magic number in super-block while trying to open /dev/sdb1

I tried that with all superblocks in the testdisk.log:

for i in $(grep e2fsck testdisk.log | uniq | cut -d " " -f 4); do
    sudo e2fsck -b $i -B 4096 /dev/sdb1
done

... all with the same e2fsck error message.

---

In my last attempt, I tried different filesystem offsets. For each offset i, where i is one of 31744, 32768, 1048064, 1049088:

$ sudo losetup -v -o $i /dev/loop0 /dev/sdb

... and running testdisk /dev/loop0, I didn't find anything interesting.

---

I've been fairly exhaustive, but is there *any way* to recover the file system without resorting to low-level file recovery tools (foremost/photorec)?
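An additional avenue worth sketching here (an editorial note, not from the original question): rather than trusting the creation-time backup locations, one can scan the raw device for the ext magic number 0xEF53, which sits at byte offset 0x38 inside every superblock copy. A minimal sketch, reading the image in memory for brevity (`find_superblocks` is a name made up here):

```python
import struct

EXT_MAGIC = 0xEF53  # s_magic, stored little-endian at offset 0x38 of a superblock

def find_superblocks(data, step=512):
    """Heuristic scan of a raw image for candidate ext superblocks.

    A matching magic number only *suggests* a superblock; the surrounding
    fields (block counts, sizes) still need sanity checking before use.
    """
    hits = []
    for off in range(0, len(data) - 0x3A, step):
        (magic,) = struct.unpack_from("<H", data, off + 0x38)
        if magic == EXT_MAGIC:
            hits.append(off)
    return hits
```

On a healthy filesystem a hit at byte 1024 would be the primary superblock, and backup copies should land at the listed backup block numbers times the block size; stray hits elsewhere are likely false positives or remnants of an older filesystem.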
tlvince (1166 rep)
Mar 2, 2012, 08:54 PM • Last activity: Apr 26, 2023, 02:33 PM
0 votes
0 answers
586 views
Bad Superblock while mounting an ext4 Disk
I wanted to migrate the data on the secondary drive of my Synology NAS to a larger disk. I tried to mount the disk on a Raspberry Pi to transfer the files to the new disk in the Synology. But while trying to read the data from the old disk I got superblock errors. Maybe I used a wrong command to mount the disk, which was previously used as an ext4 disk in a disk pool containing only this single disk. (Last time, I used mdadm to migrate the single-disk RAID and it worked like a charm.) So now I am unable to mount it and have no clue how to continue getting the data back. Here are some commands I ran to debug:
pi@pi4:~ $ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  7.3T  0 disk
├─sda1        8:1    0  2.4G  0 part
├─sda2        8:2    0    2G  0 part
└─sda3        8:3    0  7.3T  0 part
mmcblk0     179:0    0 29.5G  0 disk
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 29.3G  0 part /
pi@pi4:~ $ sudo mount -t ext4 /dev/sda3 /mnt/tmp/
mount: /mnt/tmp: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error.
pi@pi4:~ $ sudo fsck.ext4 /dev/sda3
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sda3

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 
 or
    e2fsck -b 32768
pi@pi4:~ $ sudo mke2fs -n /dev/sda3
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 1952301396 4k blocks and 244039680 inodes
Filesystem UUID: c98ad2b3-3e8a-426c-a0f7-b06ed0279fa3
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
pi@pi4:~ $ sudo mdadm --examine /dev/sda
/dev/sda:
   MBR Magic : aa55
Partition :   4294967295 sectors at            1 (type ee)
I tried using the backup superblocks at the block numbers shown in the mke2fs output above, but without success (and I tried more of them than shown in this example):
sudo e2fsck -b 32768 /dev/sda3
e2fsck 1.44.5 (15-Dec-2018)
e2fsck: Bad magic number in super-block while trying to open /dev/sda3

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 
 or
    e2fsck -b 32768
pi@pi4:~ $ sudo dumpe2fs /dev/sda3 | grep -i superblock
dumpe2fs 1.44.5 (15-Dec-2018)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda3
Couldn't find valid filesystem superblock.
Maybe you can give me some advice on how to get the data from the disk. Thanks in advance! Edit: added more information as requested:
pi@pi4:~ $ sudo file -s /dev/sda3
/dev/sda3: data
pi@pi4:~ $ sudo fdisk -l /dev/sda
Disk /dev/sda: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model:
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: F6E21726-C437-47C8-BA95-FF84072B6C52

Device Start End Sectors Size Type
/dev/sda1 2048 4982527 4980480 2.4G Linux RAID
/dev/sda2 4982528 9176831 4194304 2G Linux RAID
/dev/sda3 9437184 15627848351 15618411168 7.3T Linux RAID
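An editorial aside, hedged: fdisk types all three partitions as "Linux RAID", which suggests the ext4 filesystem lives inside an md member rather than directly on /dev/sda3. In that case the filesystem starts some data offset into the partition, which would explain why every superblock probe at the raw partition offsets misses. A sketch of reading that offset from an md v1.2 member superblock, which sits 4096 bytes into the member device (field offsets follow the kernel's mdp_superblock_1 layout; `md12_data_offset` is a name made up here):

```python
import struct

MD_MAGIC = 0xA92B4EFC  # md superblock magic, as seen in `mdadm --examine` output

def md12_data_offset(device_path):
    """Read the data offset (in 512-byte sectors) from an md v1.2 member.

    The v1.2 superblock sits 4096 bytes into the member device; the
    filesystem inside the array starts data_offset * 512 bytes further in.
    Per the mdp_superblock_1 layout, the magic is at byte 0 and
    data_offset is a little-endian u64 at byte 128 of the superblock.
    """
    with open(device_path, "rb") as f:
        f.seek(4096)
        sb = f.read(256)
    (magic,) = struct.unpack_from("<I", sb, 0)
    if magic != MD_MAGIC:
        raise ValueError("no md v1.2 superblock found")
    (data_offset,) = struct.unpack_from("<Q", sb, 128)
    return data_offset
```

With the offset in hand, one could attach a read-only loop device at data_offset * 512 bytes into /dev/sda3 and retry e2fsck there, though assembling the array read-only with mdadm is the safer route.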
Igor (1 rep)
Feb 9, 2023, 05:36 PM • Last activity: Feb 9, 2023, 08:29 PM
1 votes
1 answers
447 views
Unix ext2 superblock - file system creation date
I am trying to find the creation date on an _ext2_ file system. I seem to get a current date using `dumpe2fs`. The problem is that the original _ext2_ superblock specification does not contain such information, though it seems like there might be an extension to the original fields (something about...
I am trying to find the creation date on an _ext2_ file system. I seem to get a current date using dumpe2fs.
The problem is that the original _ext2_ superblock specification does not contain such information, though it seems like there might be an extension to the original fields (something about after byte 264).
In fact, using hexdump on the superblock (hexdump -s 1024 -n 1024 -C /dev/vdb) I can find 4 bytes starting at byte 265 containing, in little endian, the Unix time of the file system creation. Any information on how, why, and under what circumstances it is there?
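For reference (an editorial note): the field found here matches s_mkfs_time, which lives at byte offset 264 (0x108) of the superblock, in the extended area added after the original ext2 layout; the classic ext2 field tables stop before it, which is why older documentation omits it, while dumpe2fs reports it as "Filesystem created". A sketch of reading it directly (`fs_creation_time` is a name made up here):

```python
import struct
from datetime import datetime, timezone

S_MKFS_TIME_OFF = 0x108  # byte 264 within the superblock (extended area)

def fs_creation_time(device_path):
    """Read s_mkfs_time from the primary superblock.

    The primary superblock starts 1024 bytes into the device; the field
    is a little-endian 32-bit Unix timestamp.
    """
    with open(device_path, "rb") as f:
        f.seek(1024 + S_MKFS_TIME_OFF)
        (mkfs_time,) = struct.unpack("<I", f.read(4))
    return datetime.fromtimestamp(mkfs_time, tz=timezone.utc)
```

The discrepancy between "byte 265" in the hexdump and offset 264 here is just 1-based versus 0-based counting.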
Thanks in advance
Panagiotis Stefanis (13 rep)
Jan 12, 2023, 06:01 PM • Last activity: Jan 12, 2023, 11:03 PM
0 votes
1 answers
2096 views
Bad magic number while trying to mount a new hard disk
I'm using RHEL 8.7. I've added a new hard disk nvme0n2 to my Linux system and created partitions successfully. The output of lsblk -f:

NAME        FSTYPE LABEL UUID                                 MOUNTPOINT
nvme0n2
├─nvme0n2p1
│           xfs          5d966f3d-7aca-4f06-bf74-aa32d97aba76
└─nvme0n2p2 ext4         56f6e1d8-58f3-47c7-840b-c1eebc24c3f7

But when I try to mount that hard disk with `sudo mount /dev/nvme0n2 /mnt/newHardDrive/`, it says:

> mount: /mnt/newHardDrive: wrong fs type, bad option, bad superblock on /dev/nvme0n2, missing codepage or helper program, or other error.

When I checked /var/log/messages, it shows:

> kernel: XFS (nvme0n2): Invalid superblock magic number

I also tried replacing the superblock using the backup superblocks with the command sudo fsck -b 32768 /dev/nvme0n2. But then I get this error:

> fsck from util-linux 2.32.1
> e2fsck 1.45.6 (20-Mar-2020)
> fsck.ext2: Bad magic number in super-block while trying to open /dev/nvme0n2
>
> The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem. If the device is valid and it really contains an ext2/ext3/ext4 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 or e2fsck -b 32768
>
> Found a gpt partition table in /dev/nvme0n2

Please help.
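A hedged aside on the error itself: /dev/nvme0n2 is the whole disk, whose first bytes hold a GPT rather than a filesystem superblock, so mounting it (instead of /dev/nvme0n2p1 or /dev/nvme0n2p2) cannot work. A rough sketch of the magic-number checks involved (`probe_fs` is a name made up here; only the magics relevant to this question are checked):

```python
import struct

def probe_fs(data):
    """Rough filesystem sniff over the start of a block device image.

    A whole disk starts with a partition table, not a filesystem, which
    is why mounting /dev/nvme0n2 itself (rather than a partition on it)
    fails with a bad-superblock error.
    """
    if data[0:4] == b"XFSB":                      # XFS superblock magic, offset 0
        return "xfs"
    if len(data) >= 1024 + 0x3A:
        (magic,) = struct.unpack_from("<H", data, 1024 + 0x38)
        if magic == 0xEF53:                       # ext2/3/4 superblock magic
            return "ext2/3/4"
    if data[0x1FE:0x200] == b"\x55\xaa":          # (protective) MBR signature
        return "partition table; mount a partition, not the whole disk"
    return "unknown"
```

Run against the first partition's data this would report xfs, against the second ext2/3/4, and against the whole disk only the protective MBR of the GPT.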
Raj Parayane (13 rep)
Dec 29, 2022, 03:44 PM • Last activity: Dec 29, 2022, 04:25 PM
2 votes
2 answers
999 views
Rebuilding directory tree (and/or inodes)

Problem

My Armbian-based Orange Pi webserver crashed (probably because of a power loss). I thought it would be fine, since ext4 is fairly resilient and it usually has been in the past, but for some reason, this time, it didn't reboot; it hung. When I checked, it seems the "drive" (actually a memory card) **"does not have a valid filesystem"**. I pulled the card and made a backup image (amounting to ~4.5GB out of the 32GB space on the card) and examined it in a hex editor and various programs. I also burned an Armbian ISO to an identical memory card to compare metadata values. I do have a copy of the system from a couple of years ago on a different (non-identical) card, but the server has changed a fair bit since then. Hopefully it can at least provide some more information to compare.

Observations

I've noticed several issues:

* Most programs are unable to detect the card as containing a filesystem. No, fsck, testdisk, etc. don't work; they complain about a bad magic number, or wrong fs type, bad option, bad superblock, or missing codepage, or filesystem seems damaged, or bad relative sector, or other blocking errors.
* The superblock is damaged (and I couldn't find backups); it looks like most of the entries are okay, but the following definitely have unusual/invalid values. I compared the values of the broken system with the new one, leaving out values like partition size or mount count which will naturally differ. Expected values with a question mark are ones which I'm not sure are specific to the drive or not, that is, I don't know if they're incorrect.

|Field|Actual value|Expected value|
|-|-:|-:|
|Blocks per group|0|32768|
|Fragments per group|0|32768|
|inodes per group|5680|? 8160|
|Maximal mount count|0x7777|0xffff|
|Magic signature|0x6753|0xef53|
|File system state|0x03|0x00 or 0x01|
|First inode number|3|? 11|
|Compatible features|0x34|? 0x3c|
|Incompat features|0x246|? 0x242|
|R0 comp features|0x63|? 0x7b|

* Searching the drive with a hex editor, I'm able to find various directories and files, but examining it with some programs seems to indicate that at least some inodes seem to be wiped/zeroed out.
* If I modify some of the superblock values (specifically the per-group values, magic signature, and file system state), then programs can see it, but with the following problems:
  * They still complain about things like it being an invalid filesystem or partition type
  * Some programs can list the root directory, but only show a few subdirectories (bin, boot, dev, lib, lost+found)
  * Contents of displayed directories are incorrect (e.g. only a few Python files in /boot)
  * Other subdirectories (etc, mnt, …, usr, var) are shown as broken/unknown files instead of directories
  * At least one subdirectory (home) isn't listed at all (though that might be because home is a symlink to somewhere in one of the broken directories 🤔)
* One program's validation routine shows the following problems:
  * Integrity status: **invalid**
  * inodes count is valid: **no**
  * SuperSector copy exists and matched: **no** (is that supposed to be superblock? 🤔)
* Most programs are showing the root directory as ' instead of /. 🤨
* The damaged card has ~300MB of unallocated space after the ~29GB partition. That may or may not have been like that before the corruption. I distinctly recall a Pi setup tutorial saying to run a command to expand the partition to the full size of the drive/card after burning the ISO and booting the first time, but I just can't seem to find the tutorial or command 🤷. When I burned the new card and expanded the partition, it used the whole card. It does seem like the old version on the old card also has a ~290MB unallocated chunk at the end.
* Sometimes I get the sense that something was deleting everything on the card, but that doesn't make sense because it wasn't a virus that crashed it, it was a faulty power supply. 🤨

Resources

Here are some resources I have available at my disposal:

* Original corrupted card with minimal modifications (which I can reverse)
* Cloned image of corrupted card
* Previous version of the system from 2019, before it was transferred to a new card (though I can't for the life of me remember if I cloned it to the new card and then updated it, or just burned a new ISO from scratch and transferred some files 🤔; I know I started with Jessie, updated to Stretch somehow, and was using Buster before the crash)
* Another identical memory card that (should) have identical parameters (e.g. sector count)
* Free or trial versions of as many data-recovery and/or forensic programs as I could find (I'd consider purchasing a license to something if it's not too expensive and it'll definitely help restore the server)
* A WordPress SQL dump from two months before the crash (but it doesn't contain external files like graphics files 🤦)
* Backups of the etc, home, var/lib/mysql, and var/www directories from a couple of years ago (I could swear I made a more recent backup of at least etc but can't seem to find it anywhere 😕)
* ISOs of various versions of Armbian
* Various screenshots

Unfortunately, I'm not too knowledgeable about ext* (yet; I'm reading up on it now), but I am very familiar with FAT* and have done a lot of data recovery with that over the years, so hopefully my skills can transfer to this filesystem 🤞. I'm happy to provide any other information or answer any other questions needed.

Desire/Requirements/Question

Ideally, I'd like to be able to modify a couple of values to fix the filesystem so that it's all fixed like nothing happened, but it looks like that's not going to happen. As a last resort, I can restore the image from late 2019 and try to manually restore/re-inject the other individual backups, at the risk of possibly losing a couple of years of work. At a minimum, I need to recover:

* At least a few things from etc (mostly text files)
* Any new files in root/home since the last version (mostly text files)
* Any new files in var/www since the last version (mostly graphics files, WITH FILENAMES)
* Ideally var/lib/mysql (binary files), to avoid losing the two months since the last SQL dump
* The list of installed packages (where does Linux keep the list? please be a text file 🤞); remember, this is offline, so I can't run a command to dump it (though I swear I did recently, but again, I can't seem to find it 🤦)

(You know, looking at it like this, it doesn't seem so disastrous anymore; it feels like time and work but seems doable. 😀)

Question

**Is there a way/program that will let me rebuild the directory tree or inode chains? That is, some way to look for directories and files (eg a string/byte search), then fill in the sector/cluster/inode of the start of the item, then have the program rebuild the inode chain based on that and the length of the file/directory?** The card was at least 80% empty and there wasn't too much delete/write activity, so most of the files and directories should be unfragmented, so it should be sufficient for most of them to just know the start and length to be able to restore the directory or copy the whole file.

Note

A regular content-based file-recovery scan isn't going to be of much use since it would just dump a ton of files with no filenames or directory structure, which is useless here since many of the files will be similar and/or binary, and thus very difficult to rename or locate their appropriate directories.

Tangential question

How does the system know whether the partition is ext2/ext3/ext4 if they all use the same partition identifier of 0x83? I couldn't find another field to identify it. 🤔
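On the tangential question (an editorial sketch, not from the asker): the partition type byte only says "Linux filesystem"; the kernel and blkid tell ext2/ext3/ext4 apart from feature flags stored in the superblock itself. Roughly:

```python
# Sketch of how ext variants are distinguished by superblock feature flags.
COMPAT_HAS_JOURNAL = 0x0004  # bit in s_feature_compat
INCOMPAT_EXTENTS   = 0x0040  # bits in s_feature_incompat
INCOMPAT_64BIT     = 0x0080

def classify_ext(feature_compat, feature_incompat):
    """Crude classification: ext4-only incompat features win, then a
    journal implies ext3, otherwise plain ext2."""
    if feature_incompat & (INCOMPAT_EXTENTS | INCOMPAT_64BIT):
        return "ext4"
    if feature_compat & COMPAT_HAS_JOURNAL:
        return "ext3"
    return "ext2"
```

For example, the "Incompat features" value 0x246 from the table above includes the extents flag (0x40), which marks the filesystem as ext4.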
Synetech (173 rep)
Jun 6, 2021, 01:13 PM • Last activity: Nov 13, 2022, 01:16 AM