Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
8
views
How to recover data or mount a logical volume after resizing drive to less than size of physical volume
Due to a series of mistakes, I resized my drive to be smaller than the physical volume it contains. The drive holds a physical volume of around 110 GiB and a logical volume of 100 GiB, all on a 99.4 GiB drive. The actual data on the logical volume is only about 50 GB, but I can't resize anything because I can't mount anything due to this mishap, and I need the data off the logical volume.
Output of `lvs`:
WARNING: Device /dev/sda3 has size of 208445799 sectors which is smaller than corresponding PV size of 230684672 sectors. Was device resized?
WARNING: One or more devices used as PVs in VG ubuntu-vg have changed sizes.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi------- 100.00g
Output of `pvs`:
WARNING: Device /dev/sda3 has size of 208445799 sectors which is smaller than corresponding PV size of 230684672 sectors. Was device resized?
WARNING: One or more devices used as PVs in VG ubuntu-vg have changed sizes.
PV VG Fmt Attr PSize PFree
/dev/sda3 ubuntu-vg lvm2 a-- <110.00g <10.00g
Output of `vgs`:
WARNING: Device /dev/sda3 has size of 208445799 sectors which is smaller than corresponding PV size of 230684672 sectors. Was device resized?
WARNING: One or more devices used as PVs in VG ubuntu-vg have changed sizes.
VG #PV #LV #SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- <110.00g <10.00g
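No solution is recorded here, but a hedged sketch of the usual way out, using the names from the outputs above: make the device at least as large as the PV's recorded 110 GiB again (straightforward when the "drive" is a VM disk), copy the data off, and only then shrink from the top of the stack down. Work against an image of the disk if at all possible.
# 1. Grow the disk, then the partition, so the PV's recorded size fits again:
sudo parted /dev/sda resizepart 3 100%
# 2. Activate the VG and copy the data off read-only:
sudo vgchange -ay ubuntu-vg
sudo mount -o ro /dev/ubuntu-vg/ubuntu-lv /mnt
# 3. Only afterwards shrink filesystem+LV, then the PV, in that order:
sudo lvreduce --resizefs -L 80G ubuntu-vg/ubuntu-lv
sudo pvresize --setphysicalvolumesize 99G /dev/sda3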
Axell
(1 rep)
Aug 6, 2025, 03:59 PM
1
votes
1
answers
12221
views
extundelete - How to solve 'Block bitmap checksum does not match bitmap when trying to examine filesystem'?
The OS is Ubuntu 17.10 and I've been trying to recover (undelete) files with extundelete.
(The filesystem is ext4.)
My first attempt (shown in a screenshot not reproduced here) didn't work. So I tried:
extundelete /dev/mapper/ubuntu--vg-root --restore-file /home/chan/origol/routes/user.js
and it worked.
However, I got another problem.
Loading filesystem metadata ... extundelete: Block bitmap checksum does not match bitmap when trying to examine filesystem
I couldn't find any information about it. How can I solve this problem?
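One hedged lead (not from the thread): this message is commonly reported when the ext4 filesystem has the metadata_csum feature, which newer e2fsprogs enable by default and which extundelete predates. A minimal check, working on a read-only image rather than the live device:
# Is metadata_csum among the filesystem features? (assumed trigger)
sudo dumpe2fs -h /dev/mapper/ubuntu--vg-root | grep -i features
# Copy the filesystem to an image and point extundelete at that instead;
# the --restore-file path is relative to the filesystem root:
sudo dd if=/dev/mapper/ubuntu--vg-root of=/mnt/usb/root.img bs=4M
extundelete /mnt/usb/root.img --restore-file home/chan/origol/routes/user.js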

Chanjung Kim
(111 rep)
Jul 10, 2018, 05:25 PM
• Last activity: Aug 2, 2025, 11:00 PM
0
votes
0
answers
29
views
How to recover data from unbootable Acer PC drive on Ubuntu?
My Acer PC would not start, so I removed the disk and tried to access it on another PC running Ubuntu 24.04.2, but got this error message:
>Error mounting /dev/sdc4 at /media/ubuntu/ACER: wrong fs type, bad option, bad superblock on /dev/sdc4, missing codepage or helper program, or other error.
Is there a way to access this disk?
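A hedged set of first steps, assuming /dev/sdc4 is the Windows system partition and therefore NTFS (the question doesn't say):
sudo blkid /dev/sdc4            # confirm what filesystem is actually there
sudo dmesg | tail               # the kernel logs the precise mount failure
sudo ntfsfix -d /dev/sdc4       # clear a dirty/hibernation flag (ntfs-3g package)
sudo mount -o ro /dev/sdc4 /mnt # then retry, read-only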
Sonja Levorsen
(1 rep)
Jul 30, 2025, 01:16 PM
• Last activity: Jul 30, 2025, 01:28 PM
0
votes
2
answers
2143
views
How to process dd-created disk image, which is corrupted (disk died during dumping)
I have a laptop whose HDD had issues booting (Windows 10). I assumed that Windows had simply failed somehow.
I booted from a Linux live USB and tried to dump the disk using `dd`. It failed at 85 GB because of an I/O error. I read that this is a sign of a bad block, so I used the `conv=noerror` flag the next time.
During that run, `dd` threw nothing but I/O errors. I checked the disk with `fdisk -l`, but saw only one partition (there were 4 before the whole operation), with a message that there were no other partitions or something (sorry, I can't remember it exactly).
On the next reboot into the live USB, `fdisk` detected no `sda` whatsoever. So I think the disk is dead.
I still have a 270 GB image (I stopped `dd` because of the never-ending I/O errors) of the 1 TB disk. I want to recover data from this image, but neither OSFMount on Windows nor `losetup`/`kpartx` can mount the Windows partition from it (OSFMount just hangs and the Linux tools do nothing).
Is there any way to prepare the image so that the data can be read from it? Thanks.
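A hedged sketch of how one might probe such an image; note that with only 270 GB of a 1 TB disk captured, only partitions lying entirely within the image can be recovered at all:
sudo losetup -f --show -P disk.img   # -P asks the kernel to scan the partition table
lsblk /dev/loop0                     # did loop0p1, loop0p2, ... appear?
sudo testdisk disk.img               # if not, search the image for partition boundaries
S=2048                               # hypothetical start sector reported by testdisk/fdisk
sudo mount -o ro,loop,offset=$((S*512)) disk.img /mnt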
tuxfan
(1 rep)
Aug 6, 2020, 10:06 AM
• Last activity: Jul 25, 2025, 08:55 PM
1
votes
2
answers
2114
views
Help recovering a raid5 array
A little bit of background first. I store a bunch of data on a Thecus N4200Pro NAS array. I had gotten a report that one of the 4 drives in the array was showing SMART errors.
- So I swapped out the offending drive (#4) and it got to work rebuilding. About 60% into the rebuild one of the other drives in the array drops out, #1 in this case.
- Great.. I shut down and try swapping back in the original #4 to see if it will come back up. No dice.
- So I shut down and swap #1 & #2 to see if they can recover with the bad drive swapped around and replace the #4 with the half-rebuilt #4. In hindsight this was bad. I should have shut down after the first one and cloned all the original discs from there.
- The device boots back up and of course the raid fails to assemble, showing only discs 3 and 4, 4 being marked as a spare.
- At this point I shut everything down and pull all the discs and clone them, making sure to keep track of the number order.
- I put all 4 cloned discs into my Ubuntu 16.04 LTS box in the correct drive order and booted up.
- All 4 discs show up, and show the partitions in Disks. It shows a raid5 array and a raid1 array as well.
The raid1 array is the system info for the NAS, not really concerned with that. The raid5 array is the one I'm interested in with all my data on it, but I can't access anything on it. So time to start digging.
First I ran `cat /proc/mdstat` to see the arrays:
jake@ubuntu-box:~$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid1 sdd1
1959884 blocks super 1.0 [4/1] [___U]
md1 : inactive sdd2(S) sdc2(S) sdb2(S) sda2(S)
3899202560 blocks
unused devices: <none>
OK, it sees two arrays. So we get the details on md1 with `mdadm --detail /dev/md1`:
jake@ubuntu-box:~$ sudo mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Raid Level : raid0
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
State : inactive
UUID : e7ab07c3:b9ffa9ae:377e3cd3:a8ece374
Events : 0.14344
Number Major Minor RaidDevice
- 8 50 - /dev/sdd2
- 8 34 - /dev/sdc2
- 8 18 - /dev/sdb2
- 8 2 - /dev/sda2
Hrmm, that's odd: it's showing the raid as raid0, which is not the case. OK, let's check each individual partition with `mdadm --examine /dev/sdXX`.
Disc 1
jake@ubuntu-box:~$ sudo mdadm --examine /dev/sda2
/dev/sda2:
Magic : a92b4efc
Version : 0.90.00
UUID : e7ab07c3:b9ffa9ae:377e3cd3:a8ece374
Creation Time : Thu Aug 18 14:30:36 2011
Raid Level : raid5
Used Dev Size : 974800000 (929.64 GiB 998.20 GB)
Array Size : 2924400000 (2788.93 GiB 2994.59 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Tue Mar 13 14:00:33 2018
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 1
Spare Devices : 1
Checksum : e52c5f8 - correct
Events : 20364
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 2 0 active sync /dev/sda2
0 0 8 2 0 active sync /dev/sda2
1 1 8 18 1 active sync /dev/sdb2
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
Disc 2
jake@ubuntu-box:~$ sudo mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 0.90.00
UUID : e7ab07c3:b9ffa9ae:377e3cd3:a8ece374
Creation Time : Thu Aug 18 14:30:36 2011
Raid Level : raid5
Used Dev Size : 974800000 (929.64 GiB 998.20 GB)
Array Size : 2924400000 (2788.93 GiB 2994.59 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Tue Mar 13 14:56:30 2018
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Checksum : e597e42 - correct
Events : 238868
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 18 1 active sync /dev/sdb2
0 0 0 0 0 removed
1 1 8 18 1 active sync /dev/sdb2
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
Disc 3
jake@ubuntu-box:~$ sudo mdadm --examine /dev/sdc2
/dev/sdc2:
Magic : a92b4efc
Version : 0.90.00
UUID : e7ab07c3:b9ffa9ae:377e3cd3:a8ece374
Creation Time : Thu Aug 18 14:30:36 2011
Raid Level : raid5
Used Dev Size : 974800000 (929.64 GiB 998.20 GB)
Array Size : 2924400000 (2788.93 GiB 2994.59 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 1
Update Time : Tue Mar 13 15:10:07 2018
State : clean
Active Devices : 1
Working Devices : 2
Failed Devices : 2
Spare Devices : 1
Checksum : e598570 - correct
Events : 239374
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 34 2 active sync /dev/sdc2
0 0 0 0 0 removed
1 1 0 0 1 faulty removed
2 2 8 34 2 active sync /dev/sdc2
3 3 0 0 3 faulty removed
4 4 8 50 4 spare /dev/sdd2
and Disc 4
jake@ubuntu-box:~$ sudo mdadm --examine /dev/sdd2
/dev/sdd2:
Magic : a92b4efc
Version : 0.90.00
UUID : e7ab07c3:b9ffa9ae:377e3cd3:a8ece374
Creation Time : Thu Aug 18 14:30:36 2011
Raid Level : raid5
Used Dev Size : 974800000 (929.64 GiB 998.20 GB)
Array Size : 2924400000 (2788.93 GiB 2994.59 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Update Time : Tue Mar 13 11:03:10 2018
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : e526d87 - correct
Events : 14344
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 50 3 active sync /dev/sdd2
0 0 8 2 0 active sync /dev/sda2
1 1 8 18 1 active sync /dev/sdb2
2 2 8 34 2 active sync /dev/sdc2
3 3 8 50 3 active sync /dev/sdd2
So: the magic numbers and UUIDs all agree across the set. The event counts are all out of whack because the array tried to rebuild the replaced #4 as a spare instead of just rebuilding #4.
Disc 4 has the correct info for the raid and the correct sequencing, as it was the drive I pulled originally and nothing got re-written on it. Discs 1-3 are in various states of chaos from swapping things around.
So, two questions:
1. Why is it showing up as raid0 in the `mdadm --detail` output?
2. Is it possible to update the info for the first three discs using what I got from `mdadm --examine /dev/sdd2`, so that everything is seen as it should be, instead of the mess I inadvertently made of it? I *think* that if I can find a way to update the info for those partitions or discs, the raid should reassemble correctly and rebuild itself so I can access my data.
Any ideas would be helpful, as I've gotten about as far as I can get trying to figure this out on my own and doing a ton of searching.
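No fix is recorded here; the sketch below is the customary approach for exactly this stale-event-count situation, hedged, and strictly for the clones, never the originals:
sudo mdadm --stop /dev/md1
# --force lets mdadm bump the stale event counts and try anyway:
sudo mdadm --assemble --force --run /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# Last resort: re-create the array in place with --assume-clean, using the
# exact geometry from --examine (order sda2..sdd2, left-symmetric, 64K chunk):
sudo mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 \
    --chunk=64 --layout=left-symmetric --metadata=0.90 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2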
psykokid
(11 rep)
Mar 16, 2018, 01:01 AM
• Last activity: Jul 14, 2025, 04:07 PM
4
votes
1
answers
4449
views
after fs crash and running fsck, some files were recovered but not placed in lost+found?
I had an I/O error on an external hard drive partition, sdb4 (its usual mountpoint being /run/media/yan/data).
The partition was unresponsive, couldn't be accessed, and refused to unmount. I did not know what to do but unplug the disk and replug it. After that I had errors on its fs, so I ran fsck:
sudo e2fsck /dev/sdb4 -y -v
It was asking for a lot of fixes (thousands) but since data is non-critical on that disk, I ran it with -y.
data contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
# Fixed invalid inode numbers, incorrect filetypes, cleared links, deleted/unused inodes
Pass 3: Checking directory connectivity
# Connected unconnected directory inodes to /lost+found
Pass 4: Checking reference counts
#Fix inodes ref count, connected unattached inode to /lost+found
Pass 5: Checking group summary information
# Fix block bitmap differences, blocks count wrong for group
# Fix inode bitmap differences, directories count wrong for group, free inodes count wrong for group
data: ***** FILE SYSTEM WAS MODIFIED *****
72955 inodes used (0.14%, out of 51200000)
2390 non-contiguous files (3.3%)
17 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 72264/636/1
186984621 blocks used (91.30%, out of 204800000)
0 bad blocks
34 large files
70447 regular files
2453 directories
0 character device files
0 block device files
0 fifos
4294966642 links
46 symbolic links (46 fast symbolic links)
0 sockets
------------
71063 files
So if I understand correctly, fsck managed to salvage 70k files, so most of the files, since I had around 75-80k files on that disk. The problem is that only 20k files appear in '/run/media/yan/data/lost+found', and only 24k on the entire partition.
[yan@machine ~]$ find /run/media/yan/data/lost+found | wc -l
19786
[yan@machine ~]$ find /run/media/yan/data | wc -l
23691
I reran fsck, but it tells me that the partition is clean (and has 74k files?):
[yan@machine ~]$ sudo fsck /dev/sdb4
fsck from util-linux 2.28
e2fsck 1.42.13 (17-May-2015)
data: clean, 74200/51200000 files, 186685980/204800000 blocks
I also get very different disk usage from df and du (I know there should be a difference, but here it seems too big to be normal):
[yan@machine ~]$ df -h /run/media/yan/data
Filesystem Size Used Avail Use% Mounted on
/dev/sdb4 769G 700G 31G 96% /run/media/yan/data
[yan@machine ~]$ du -sh /run/media/yan/data
586G /run/media/yan/data
I'm guessing there is still recovered data that I can't access.
My questions are:
1) Is it possible for files recovered by fsck not to be placed in lost+found? In that case, where are they?
2) Is there any way to get back those missing files?
3) If not, how do I free this space?
EDIT:
I tried a more recent version of e2fsck on sourcejedi's recommendation:
[yan@machine build]$ sudo ./e2fsck/e2fsck -f /dev/sdb4
e2fsck 1.43.3 (04-Sep-2016)
Pass 1: Checking inodes, blocks, and sizes
Inode 40501578 extent tree (at level 2) could be narrower. Fix? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
data: ***** FILE SYSTEM WAS MODIFIED *****
data: 74200/51200000 files (3.2% non-contiguous), 186685964/204800000 blocks
It did not do much, lost+found still has the same file count and size.
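One hedged observation: the "4294966642 links" in the fsck summary is 2^32 − 654, i.e. a wrapped (corrupted) counter, which suggests the summary statistics are not to be trusted as a file count. debugfs can cross-check what is actually reachable without relying on them:
sudo debugfs -R 'ls -l /lost+found' /dev/sdb4 | wc -l   # entries actually linked there
sudo dumpe2fs -h /dev/sdb4 | grep -i 'free blocks'      # what the bitmaps claim
# A large df/du gap usually means blocks the bitmap marks in-use that no
# reachable inode owns; if repeated e2fsck -f passes don't reclaim them,
# the space is likely held by allocated-but-corrupt inodes.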
Yann
(211 rep)
Nov 6, 2016, 02:08 PM
• Last activity: Jun 30, 2025, 06:09 AM
2
votes
1
answers
2012
views
SafeCopy manually finish ISO
Good Morning,
I am currently helping a good friend recover her broken 1 TB external HDD. She dropped the drive, and now it cannot be mounted anymore. After some research I gave safecopy a try. I am working from a Kali Linux live CD, with an internal 3 TB HDD connected and mounted via a USB station. The external drive has less than 100 GB of space occupied; SafeCopy collects ~30 GB per day. My first try aborted after ~260 GB with a "location not found" error; the drive had reconnected at another mount path. The current try is at ~280 GB. Since the drive is brand new, all stored data should already be contained in the output ISO. However, when I try to mount the 260 GB ISO, I get a file error, something about a corrupted file and an I/O error.
I used this command for safecopy:
sudo safecopy --stage1 /dev/sda1 /path/to/3tb/drive/data.iso
/dev/sda1 is the place where the external HDD is detected.
Is there a way to manually finish the build of the ISO file? This would save me a lot of time, since safecopy would need ~34 days to complete the job.
EDIT:
As mentioned in the comments, I had to abort the process for some time. I've now set it all up again and after some difficulties, this is what fdisk produced:
sudo fdisk -l /dev/sdc1
Disk /dev/sdc1: 931.5 GiB, 1000169537536 bytes, 1953456128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x69205244
Device Boot Start End Sectors Size Id Type
/dev/sdc1p1 ? 218129509 1920119918 1701990410 811.6G 72 unknown
/dev/sdc1p2 ? 729050177 1273024900 543974724 259.4G 74 unknown
/dev/sdc1p3 ? 168653938 168653938 0 0B 65 Novell Netware 386
/dev/sdc1p4 2692939776 2692991410 51635 25.2M 0 Empty
Partition table entries are not in disk order.
I forgot to save the stage1.badblocks file, so I cannot really continue the first run.
I now started a new stage1 safecopy run, hope it will be a bit faster than before since I now run a Debian Linux directly from this notebook.
In the meantime, is there a way to use the ISO file from the first run and make it readable?
EDIT2:
Ok, after 3 hours, this is the output so far:
(+0){XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXX 8-X 0%
The drive makes terrible clicking noises. If I interpret the fdisk output from earlier, the data seems to be written much further back on the drive, not from sector 0 onwards. Is it possible to read and rescue the data by starting at the end of the disk?
I fear I am more or less dependent on extracting the data from the ISO file I have created so far. Again, is it somehow possible to extract portions from an unfinished ISO file and build a valid one from it?
EDIT 3:
I have now tried ddrescue. It has been running for ~23 h. The output file has a size of 134 MB, a size I already knew from safecopy to be OK.
dmesg | tail
produces the following output:
[80840.705000] usb 2-1.1: reset high-speed USB device number 8 using ehci-pci
[80880.711821] usb 2-1.1: reset high-speed USB device number 8 using ehci-pci
[80920.718561] usb 2-1.1: reset high-speed USB device number 8 using ehci-pci
[80922.888408] sd 8:0:0:0: [sdb] Unhandled error code
[80922.888413] sd 8:0:0:0: [sdb]
[80922.888415] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[80922.888417] sd 8:0:0:0: [sdb] CDB:
[80922.888419] Read(10): 28 00 49 a5 38 80 00 00 08 00
[80922.888426] end_request: I/O error, dev sdb, sector 1235564672
[80922.888430] Buffer I/O error on device sdb1, logical block 154445328
So what I can see there is that there are difficulties with the USB access, and something with hostbyte=DID_TIME_OUT.
ddrescue has this output so far:
rescued: 123928 kB, errsize: 0 B, current rate: 12976 kB/s
rescued: 134742 kB, errsize: 39649 kB, current rate: 0 B/s
ipos: 635829 MB, errors: 605, average rate: 1688 B/s ago
opos: 635829 MB, run time: 22.17 h, successful read: 22.01 h ago
Copying non-tried blocks... Pass 1 (forwards)
After ddrescue has finished I will try to extract at least a little bit from the image with tsk_recover.
As mentioned in the comments, I looked up the hardware specs of the drive. The problem is that the USB connector (USB 3.0 Type B Micro) is placed on the main PCB, so I cannot access an ATA/SATA connection. Or at least that's what I found out (I haven't opened the case so far). I couldn't find a data sheet with a circuit diagram. The product number is WDBHHG0010BBK-04.
I found a video of a similar-looking (!) drive that has pins next to the USB port. I don't know enough about hard drives and electronics to see if I could use these. As soon as ddrescue finishes I will open the case.
I am asking myself what could have damaged the drive that badly. My friend told me she just dropped it. It seems like the r/w head is damaged or has crashed onto the platter. As far as I know, HDDs move their heads aside while idle or powered off.
My rescue attempts seem not to have dealt much more damage, since the number of readable sectors is the same as at the start.
So, much text. My current question is what the syslog entries should tell me.
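A hedged reading of those entries: the repeated "reset high-speed USB device" lines mean the USB bridge stops answering and gets re-enumerated, and hostbyte=DID_TIME_OUT says the read command simply never completed — a hardware-level timeout, not a filesystem problem. GNU ddrescue with a mapfile copes with exactly this, resuming across resets instead of restarting:
sudo ddrescue -n /dev/sdb /path/to/3tb/rescue.img /path/to/3tb/rescue.map   # fast pass, skip bad areas
sudo ddrescue -r3 /dev/sdb /path/to/3tb/rescue.img /path/to/3tb/rescue.map  # then retry the gaps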
Some pics of the drive were attached (images not reproduced here).
Ueda Ichitaka
(81 rep)
Aug 18, 2016, 09:20 AM
• Last activity: Jun 25, 2025, 04:03 AM
7
votes
1
answers
428
views
Recovering data from incomplete ddrescue iso
I've got a 2.5-inch 5200 RPM 320 GB HDD to recover data from. As I've been told, a child stepped on the laptop and broke it. They gave me the laptop, and the motherboard seems to be completely fine; there are no signs of it being stepped on.
The HDD also seems fine at first glance. I connected it over a SATA-USB cable and started ddrescue (without specifying a mapfile). It took 13 days to complete the first stage and go to trimming. At that point, ddrescue reported that 99.39% of the disk was rescued, but unfortunately the disk moved, the fragile connection broke, and I was left with I/O errors and this message:
ipos: 8623 MB, non-trimmed: 837763 kB, current rate: 0 B/s
opos: 8623 MB, non-scraped: 1095 MB, average rate: 280 kB/s
non-tried: 0 B, bad-sector: 2449 kB, error rate: 13824 B/s
rescued: 318137 MB, bad areas: 4784, run time: 13d 3h 34m
pct rescued: 99.39%, read errors: 45_212, remaining time: 19d 16h 16m
time since last successful read: 6s
Trimming failed blocks... (forwards)
ddrescue: /dev/sda: Unaligned read error. Is sector size correct?
For now, I've started ddrescue again with the same outfile, but as far as I know it will take another 13 days to scan through.
I'm aware that it's a bad idea to connect the drive over such a cable and to run without a mapfile, but I thought the process would be faster. Anyway, I have a backup of the outfile for experiments, and I think 99.39% is pretty much enough, so I'd like to try to mount the file and take a look at the data inside. Unfortunately, I cannot:
[root@foxserver ~]# mount -o loop sda.iso /mnt/iso
mount: /mnt/iso: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
What I have tried so far:
1. ntfsfix. It couldn't recover the filesystem and told me to use chkdsk.
2. fsck. I couldn't even run it; it just always prints the help message no matter what options I pass.
I've got another 2.5-inch 320 GB HDD, and I could write this image to that disk, boot into a Windows installation, and try chkdsk, but I don't really want to, because I'd like to use free, libre and open source software to do my job.
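A hedged possibility: if /dev/sda was imaged as a whole disk, the NTFS filesystem does not start at byte 0 of sda.iso, so a plain loop mount will always fail. Finding the partition offset first may be all that's missing:
sudo fdisk -l sda.iso                 # note the NTFS partition's start sector S
S=206848                              # hypothetical value from fdisk
sudo mount -o ro,loop,offset=$((S*512)) sda.iso /mnt/iso
# If mounting still fails, The Sleuth Kit can pull files out without mounting:
tsk_recover -o "$S" sda.iso /recovered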
Leca
(117 rep)
May 25, 2025, 09:22 AM
• Last activity: Jun 8, 2025, 01:25 PM
0
votes
1
answers
45
views
Can I use grep or strings to search for a deleted file name on btrfs?
Months ago, one of my systemd journal files was purged from my btrfs hdd. Because I couldn't use file carving to check if the content is still on the hdd because unfortunately the format is binary and not text, I can at least look for the metadata of this file (inode is called?). For this, can I jus...
Months ago, one of my systemd journal files was purged from my btrfs HDD. I couldn't use file carving to check whether the content is still on the HDD because, unfortunately, the format is binary rather than text, but I can at least look for the metadata of this file (the inode, is that what it's called?). For this, can I just run grep or strings over my sda1 and search for the filename? If I find something, I will see which blocks it points to and try to recover the actual content of the journal file... I know there are btrfs-undelete scripts, but I don't see the point of using them, at least in this case.
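Yes in principle; a hedged sketch (the journal filename below is a stand-in, substitute the real one):
# -a treats the device as text, -b prints the byte offset of each hit,
# -o prints only the matched string:
sudo grep -a -b -o 'system@00061234abcd.journal' /dev/sda1 | head
# or survey everything that looks like a journal name, with decimal offsets:
sudo strings -t d /dev/sda1 | grep '\.journal' | head
A caveat worth hedging: on btrfs the filename hits will mostly be metadata (directory items and inode refs), so the offsets locate tree blocks, not the file contents themselves.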
user324831
(113 rep)
May 31, 2025, 09:22 PM
• Last activity: Jun 1, 2025, 07:40 AM
3
votes
1
answers
5024
views
Repairing or Recovering files from a Corrupted F2FS partition?
I have an f2fs partition with a missing superblock. I made a disk image so I would not destroy the original partition while trying to fix it.
sd-repair# fsck.f2fs sd128.img
Info: Segments per section = 1
Info: Sections per zone = 1
Info: sector size = 512
Info: total sectors = 249737216 (121942 MB)
Can't find a valid F2FS superblock at 0x0
Can't find a valid F2FS superblock at 0x1
Testdisk doesn't support F2FS.
I don't know if there is a way to rewrite the superblocks; I would like to recover my files or repair the filesystem.
Here is a hex dump of what I believe is the F2FS superblock, from a good partition:
10 20 F5 F2 01 00 07 00 09 00 00 00 03 00 00 00 0C 00 00 00 09 00
00 00 01 00 00 00 01 00 00 00 00 00 00 00 00 00 20 00 00 00 00 00
E1 0F 00 00 FF 0F 00 00 02 00 00 00 02 00 00 00 12 00 00 00 08 00
00 00 E1 0F 00 00 00 02 00 00 00 02 00 00 00 06 00 00 00 0A 00 00
00 2E 00 00 00 3E 00 00 03 00 00 00 01 00 00 00 02 00 00 00 31 8B
E4 FB 13 D1 42 26 A5 07 EA 8A B6 70 A9 45
Here is the hex I found on the bad partition:
10 20 F5 F2 01 00 07 00 09 00 00 00 03 00 00 00 0C 00 00 00 09 00
00 00 01 00 00 00 01 00 00 00 00 00 00 00 00 46 DC 01 00 00 00 00
31 ED 00 00 22 EE 00 00 02 00 00 00 06 00 00 00 72 00 00 00 77 00
00 00 31 ED 00 00 00 02 00 00 00 02 00 00 00 06 00 00 00 12 00 00
00 F6 00 00 00 E4 01 00 03 00 00 00 01 00 00 00 02 00 00 00 16 CD
C2 62 53 10 46 17 A5 B7 41 C6 8E AA 33 D5 73 00 64 00 2D 00 65 00
78 00 74 00
The superblock seems OK; the differences are because one is a 128 GB partition and the other is an 8 GB partition. I don't know how to tell whether the superblock is in the right location on the bad partition. Their offsets don't match, from what I can tell, but I'm not that good with hex editors, so I don't know how to compare their offsets.
Update:
The offset for the superblock was wrong: it was at 0x600, i.e. sector 3. I removed the first 512 bytes from the disk image. Now fsck.f2fs shows:
sd-repair# fsck.f2fs -f trim_sd.img
Info: Force to fix corruption
Info: Segments per section = 1
Info: Sections per zone = 1
Info: sector size = 512
Info: total sectors = 249704447 (121925 MB)
Info: MKFS version
"Linux version 3.4.0-CM-g87d27dd (Adam@TheKeurig) (gcc version 4.9 20150123 (prerelease)
(GCC) ) #6 SMP PREEMPT Sat Dec 17 21:28:57 CET 2016"
Info: FSCK version
from "Linux version 4.9.0-3-amd64 (debian-kernel@lists.debian.org)
(gcc version 6.3.0 20170516 (Debian 6.3.0-18) )
#1 SMP Debian 4.9.30-2 (2017-06-12)"
to "Linux version 4.9.0-3-amd64 (debian-kernel@lists.debian.org)
(gcc version 6.3.0 20170516 (Debian 6.3.0-18) ) #1 SMP Debian 4.9.30-2 (2017-06-12)"
Info: superblock features = 0 :
Info: superblock encrypt level = 0, salt = 00000000000000000000000000000000
Info: total FS sectors = 249704448 (121926 MB)
[f2fs_crc_valid: 477] CRC validation failed: cal_crc = 4076150800, blk_crc = 0 buff_size = 0x0
[f2fs_crc_valid: 477] CRC validation failed: cal_crc = 4076150800, blk_crc = 0 buff_size = 0x0
[f2fs_do_mount:1945] Can't find valid checkpoint
From what I can tell the partition has shifted; it might be an issue with the partition table. All the data seems to be intact. It is using an MS-DOS partition table.
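A hedged tip for the offset experiments described above: a loop device with an offset avoids rewriting the image for every guess:
sudo losetup -f --show -o 512 sd128.img   # -o shifts the start by N bytes; prints e.g. /dev/loop0
sudo fsck.f2fs /dev/loop0                 # test this offset
sudo losetup -d /dev/loop0                # detach, then retry with another -o value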
Jcfunk
(101 rep)
Jun 17, 2017, 01:30 AM
• Last activity: Jun 1, 2025, 03:04 AM
1
votes
1
answers
55
views
Forensics to recover the second-to-last access timestamp of a file on btrfs on HDD
I searched online, to no avail. Is there some way to recover the access timestamp my file had on btrfs before the access timestamp that appears currently? I'm using an HDD (not an SSD). Please let me know. Is this question better suited for Super User? I made no snapshots (willingly); I'm using Fedora, and the metadata change dates back some two weeks... To be precise, I'm actually interested in the timestamp from two accesses ago; the two accesses happened in rapid succession.
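A hedged long shot, assuming the relevant metadata generation hasn't been overwritten in the two weeks since: btrfs keeps older tree roots around for a while, and they can be inspected read-only on the unmounted device:
sudo btrfs-find-root /dev/sdX    # list candidate older tree roots (device is a placeholder)
# dump a candidate generation and look for the file's INODE_ITEM, which
# carries atime/ctime/mtime as of that generation; bytenr is hypothetical:
sudo btrfs inspect-internal dump-tree -b 123456789 /dev/sdX | grep -A6 'INODE_ITEM'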
user324831
(113 rep)
May 23, 2025, 06:15 PM
• Last activity: May 23, 2025, 08:35 PM
0
votes
0
answers
29
views
Recover deleted txt file which was stored to btrfs right before another one that I still have
I used my terminal emulator to run a command and redirect its output to a text file on my btrfs. Right after, I did the same with another command. I have since deleted the first text file but not the second. I'd like to recover part of the first text: photorec seems overkill, but I'm familiar with that tool; using strings seems more suitable, but I'm not familiar with that method. How about looking at the actual blocks on the fs: maybe the two files were written sequentially?
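A hedged sketch of the strings/grep route, assuming you remember a phrase from the deleted output (device name and offset below are placeholders):
sudo grep -a -b 'phrase you know was in the first file' /dev/sdXn
off=123456789    # hypothetical byte offset reported by grep
# dump a window around the hit and sift it for text:
sudo dd if=/dev/sdXn bs=4096 skip=$((off/4096 - 1)) count=64 status=none | strings | less
The sequential-write intuition is plausible but not guaranteed on btrfs (CoW may place the two files far apart), so searching by content is the more reliable of the two approaches.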
user324831
(113 rep)
May 20, 2025, 06:50 PM
• Last activity: May 20, 2025, 06:55 PM
0
votes
0
answers
43
views
Data recovery from a damaged partition during resize
I use Fedora KDE and I have separate partitions for my Linux home and root. Recently, when both of these were low on storage, I decided to shrink my other partition and extend each of them by 5 GB.
After I cut 10 GB off my extra partition, I booted a live CD and, in KDE Partition Manager, moved the home partition up (since the unallocated space sat above it), added 5 GB to it, and extended the root partition by the remaining 5 GB.
Applying went successfully, except that plasmashell crashed for some reason during the process; I believe that didn't affect the operation. After I restarted and booted from my SSD, I couldn't log in to my account. I was confused, and at first thought that maybe the fstab entry mounting the home drive was broken. I logged in to a TTY and started checking drives, and saw that my home partition was mounted correctly but almost empty. I went back to the live CD and started checking different ways to recover the data.
I tried testdisk, photorec, dmde, R-Studio and a few other options. I was able to recover some of my data, but in a completely unstructured way. Is there a way to recover the data with folders and other metadata intact?
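No answer is recorded here; as a hedged idea, if the old ext4 superblock survived the move, the intact directory tree may still be reachable at the partition's pre-move offset, which preserves names and folders, unlike carving. On a clone of the disk:
sudo testdisk /dev/sdX          # Analyse -> Deeper Search hunts for old ext4 superblocks
sudo mke2fs -n /dev/sdXn        # -n only *prints* where backup superblocks would sit
sudo e2fsck -b 32768 /dev/sdXn  # then try fsck against a backup superblock, on the clone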
A screenshot of my current drive situation (the damaged, empty partition highlighted) and fdisk output were attached (images not reproduced here).
iamawebgeek
(101 rep)
May 19, 2025, 11:20 AM
• Last activity: May 19, 2025, 02:38 PM
5
votes
1
answers
21467
views
ext2fs_open2: Bad magic number in super-block
I'm trying to resize a Linux partition, but after tweaking a lot with this disk I don't know if I have totally corrupted it.
Device Boot Start End Sectors Size Id Type
/dev/sdd1 * 64 5913631 5913568 2.8G 17 Hidden HPFS/NTFS
/dev/sdd2 5913632 5915039 1408 704K 1 FAT12
/dev/sdd3 5915040 17578125 11663086 5.6G 83 Linux
/dev/sdd4 17578126 28320312 10742187 5.1G 83 Linux
After using `dd`, deleting partitions, and creating new ones, I get:
Device Boot Start End Sectors Size Id Type
/dev/sdd1 * 64 5913631 5913568 2.8G 17 Hidden HPFS/NTFS
/dev/sdd2 5913632 5915039 1408 704K 1 FAT12
/dev/sdd3 5915040 40000000 34084961 16.3G 83 Linux
/dev/sdd4 40000001 62521343 22521343 10.8G 83 Linux
Then, following some tutorial, I do:
$ e2fsck -f /dev/sdd1
e2fsck 1.43.7 (16-Oct-2017)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdd1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
/dev/sdd1 contains a iso9660 file system labelled 'Kali Live'
(And so on for the remaining three partitions.)
Trying to resize makes the same effect:
$ resize2fs /dev/sdd3
resize2fs 1.43.7 (16-Oct-2017)
resize2fs: Bad magic number in super-block while trying to open /dev/sdd3
Couldn't find valid filesystem superblock.
I've followed a tutorial on the internet, titled "HOWTO: Repair a broken Ext4 Superblock in Ubuntu", but it is not working:
$ mke2fs -n /dev/sdd4
$ e2fsck -b block_number /dev/sdd4
$ e2fsck 1.43.7 (16-Oct-2017)
e2fsck: Bad magic number in super-block while trying to open /dev/sdd4
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193
or
e2fsck -b 32768
So I have definitely run out of ideas. Is the disk totally wasted, or shall I just reinstall everything from scratch?
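A hedged reading of the output above: "contains a iso9660 file system labelled 'Kali Live'" means the stick holds a hybrid ISO image, not ext4, so e2fsck and resize2fs will fail no matter what. It is worth confirming what each partition really is before any repair attempt:
sudo blkid /dev/sdd1 /dev/sdd2 /dev/sdd3 /dev/sdd4
sudo file -s /dev/sdd3
# If sdd3 never held a real ext4 filesystem, there is nothing to repair or
# resize; re-imaging the stick and starting over is the clean path.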
Manuela Perez Pimentel
(51 rep)
Jul 17, 2018, 02:10 PM
• Last activity: May 13, 2025, 06:06 PM
0
votes
0
answers
48
views
Deliberately marking bad sectors as "bad" after ddrescue process
New here, but hopefully my question will be helpful to others. I'm trying to recover a very old CD. There are 15 files in total (out of many, many) that are unreadable by a Windows copy. I've used a tool called Roadkil's Unstoppable Copier on 3 of those files, and it successfully reconstructed 2 of them, which is impressive (the 3rd file was really small, 1 KB, and its corresponding sectors were giving read errors according to Unstoppable Copier).
Equally impressive is ddrescue, which has so far managed to rescue 99.93% of the CD, so 415,774 bytes of bad sectors remain. I was thinking: hypothetically, if the actual physical CD had only ~415 KB of unreadable data (which I'm sure is distributed among those 15 files), then Unstoppable Copier's error-correction algorithm would be able to reconstruct those files with much higher probability.
Hence my question: suppose some bad sectors still remain after ddrescue's 3 retries; they get filled with zeroes in the resulting .iso file. If I mount that .iso image in a virtual drive and run Unstoppable Copier on it, it won't see the bad sectors as "bad", since they're just zeroes. Is there a way to force those residual sectors to be marked "bad" in the resulting .iso file, so that the Copier can try to reconstruct them with its error-correction algorithm?
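ddrescue itself has a feature aimed at exactly this (hedged on the details; see the manual's "fill mode" section): it can stamp a recognizable marker into every area its mapfile records as bad, instead of leaving silent zeroes:
printf 'BAD-SECTOR-MARKER ' > marker.bin
# '-' selects bad-sector areas from the mapfile; the marker file's contents
# are repeated across each such area in the existing image:
ddrescue --fill-mode=- marker.bin cd.iso cd.map
A tool scanning the mounted image can then treat any block containing the marker as unreadable. This does require having kept the mapfile from the rescue run.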
user9343456
(101 rep)
May 12, 2025, 07:56 PM
0
votes
1
answers
1946
views
Error: ext4magic Error 13 while opening filesystem
I'm trying to recover an accidentally removed directory (/home/garid/.gnupg) with ext4magic. However, it outputs the following error:
$ ext4magic /dev/nvme0n1p3 -f /home/garid/.gnupg/ -a $(date -d -5days +%s)
/dev/nvme0n1p3 Error 13 while opening filesystem
ext4magic : EXIT_SUCCESS
I'm not sure what I'm doing wrong. What is Error 13? And how can I make it work?
---------------------------------------
My distro is Arch Linux. My partition table:
nvme0n1 259:0 0 476.9G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /efi
├─nvme0n1p2 259:2 0 32G 0 part [SWAP]
└─nvme0n1p3 259:3 0 443.9G 0 part /
---------------------
As suggested below, running as root outputs the following:
$ sudo ext4magic /dev/nvme0n1p3 -f /home/garid/.gnupg/ -a $(date -d -5days +%s)
[sudo] password for garid:
Filesystem in use: /dev/nvme0n1p3
Using internal Journal at Inode 8
Activ Time after : Sun Feb 26 20:15:57 2023
Activ Time before : Fri Mar 3 20:15:58 2023
zsh: segmentation fault sudo ext4magic /dev/nvme0n1p3 -f /home/garid/.gnupg/ -a $(date -d -5days +%s)
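For the record, errno 13 is EACCES (permission denied), which is why the plain-user run fails. As for the segfault, a hedged workaround often suggested for ext4magic is to run it against an unmounted image plus a dumped journal, rather than the live root filesystem (paths below are placeholders on a second disk):
sudo dd if=/dev/nvme0n1p3 of=/mnt/ext/root.img bs=4M            # image the partition elsewhere
sudo debugfs -R 'dump <8> /mnt/ext/journal.img' /dev/nvme0n1p3  # save the journal (inode 8)
sudo ext4magic /mnt/ext/root.img -j /mnt/ext/journal.img \
     -f home/garid/.gnupg -a $(date -d -5days +%s) -r -d /mnt/ext/recovered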
Garid Z.
(552 rep)
Mar 3, 2023, 10:59 AM
• Last activity: May 8, 2025, 12:04 AM
3
votes
1
answers
2315
views
How to find orphan directory entries in a ext4 disk
This comes from a long story of trying to recover a TrueCrypt volume after a hardware failure (thanks, WD). I ended up with an unencrypted 3 TB image that holds the files I want to recover.
Unfortunately, after using testdisk and extundelete, I am guessing that the directory entry leading to the descriptors (of the additional directories) I want to recover has been overwritten.
However, I think its subdirectories may still have their entries intact. I would like to know how I can search the disk image for directory entries in unallocated blocks, in order to recover their files (with their proper names, which would be much better than using foremost, photorec and the like).
I know that extundelete with the default --recover-all doesn't look further than the tree that grows from the root directory. OK, but what if one of the branches is broken and I know that the subfolder entries are still somewhere on disk?
Just in case I didn't express myself clearly, imagine that the lost entry is [root]/information. The root directory has the 'information' entry, but it points to overwritten data. Its directory entry is gone, but I want to scan for its subdirectories, [root]/information/personal, [root]/information/business, and so on. (The names of those subdirectories were in the 'information' entry; I don't care about the names, but about their whole structure.)
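A hedged, do-it-yourself signature scan (not from the question): every classic ext4 directory block begins with an entry for '.', whose on-disk encoding after the 4-byte inode number is fixed (rec_len 12, name_len 1, type 2, name "." plus zero padding), so orphaned directory blocks can be hunted by that byte pattern and then inspected:
# offsets are bytes; (offset - 4) / 4096 gives the block number on a 4K fs:
LANG=C grep -obUaP '\x0c\x00\x01\x02\.\x00\x00\x00' disk.img | head
debugfs -R 'block_dump 123456' disk.img   # 123456 = hypothetical block from the scan
debugfs -R 'ncheck 7890' disk.img         # map a found inode number back to path(s)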
huff
(131 rep)
Mar 4, 2014, 12:14 AM
• Last activity: May 7, 2025, 03:07 AM
1
votes
2
answers
3915
views
Failed NVMe M.2 SSD, broken filesystem, unwriteable; can I wipe it anyway?
My Samsung 970 EVO M.2 500 GB SSD (MZ-V7E500BW) suddenly failed yesterday during a power outage.
I now get a warning during POST ("WARNING! Please back up your data and replace your hard disk drive. WARNING! Your HDD/SSD might crash at any moment."). The last time I rebooted before this was about 5 days earlier, and the warning was not present then.
By booting a live USB stick I managed to check the SMART log:
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning : 0x8
temperature : 49 C
available_spare : 29%
available_spare_threshold : 10%
percentage_used : 0%
endurance group critical warning summary: 0
data_units_read : 4,948,748
data_units_written : 20,573,476
host_read_commands : 100,316,217
host_write_commands : 357,643,056
controller_busy_time : 1,790
power_cycles : 24
power_on_hours : 4,570
unsafe_shutdowns : 11
media_errors : 41
num_err_log_entries : 70
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 49 C
Temperature Sensor 2 : 74 C
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
Messages from the kernel mentioning nvme during startup of the live USB OS:
Oct 26 19:18:58 ubuntu kernel: [ 1.233479] nvme nvme0: pci function 0000:06:00.0
Oct 26 19:18:58 ubuntu kernel: [ 1.243303] nvme nvme0: missing or invalid SUBNQN field.
Oct 26 19:18:58 ubuntu kernel: [ 1.243323] nvme nvme0: Shutdown timeout set to 8 seconds
Oct 26 19:18:58 ubuntu kernel: [ 1.252449] nvme nvme0: 4/0/0 default/read/poll queues
Oct 26 19:18:58 ubuntu kernel: [ 1.254855] nvme0n1: p1 p2 p3
Oct 26 19:18:58 ubuntu kernel: [ 3.629244] EXT4-fs (nvme0n1p2): INFO: recovery required on readonly filesystem
Oct 26 19:18:58 ubuntu kernel: [ 3.629246] EXT4-fs (nvme0n1p2): write access will be enabled during recovery
Oct 26 19:18:58 ubuntu kernel: [ 3.674861] blk_update_request: critical medium error, dev nvme0n1, sector 124928 op 0x1:(WRITE) flags 0x800 phys_seg 4 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.674893] Buffer I/O error on dev nvme0n1p2, logical block 0, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.674913] Buffer I/O error on dev nvme0n1p2, logical block 1, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.674931] Buffer I/O error on dev nvme0n1p2, logical block 2, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.674949] Buffer I/O error on dev nvme0n1p2, logical block 3, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.674967] blk_update_request: critical medium error, dev nvme0n1, sector 133200 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.674995] Buffer I/O error on dev nvme0n1p2, logical block 1034, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.675013] blk_update_request: critical medium error, dev nvme0n1, sector 133384 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675040] Buffer I/O error on dev nvme0n1p2, logical block 1057, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.675059] blk_update_request: critical medium error, dev nvme0n1, sector 147176 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675086] Buffer I/O error on dev nvme0n1p2, logical block 2781, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.675105] blk_update_request: critical medium error, dev nvme0n1, sector 4319360 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675132] Buffer I/O error on dev nvme0n1p2, logical block 524304, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.675151] blk_update_request: critical medium error, dev nvme0n1, sector 4319488 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675178] Buffer I/O error on dev nvme0n1p2, logical block 524320, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.675197] blk_update_request: critical medium error, dev nvme0n1, sector 4319544 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675224] Buffer I/O error on dev nvme0n1p2, logical block 524327, lost async page write
Oct 26 19:18:58 ubuntu kernel: [ 3.675243] blk_update_request: critical medium error, dev nvme0n1, sector 4319816 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675270] blk_update_request: critical medium error, dev nvme0n1, sector 4320256 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.675297] blk_update_request: critical medium error, dev nvme0n1, sector 4320936 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Oct 26 19:18:58 ubuntu kernel: [ 3.729319] EXT4-fs (nvme0n1p2): error loading journal
Oct 26 19:18:58 ubuntu kernel: [ 3.743157] EXT4-fs (nvme0n1p3): INFO: recovery required on readonly filesystem
Oct 26 19:18:58 ubuntu kernel: [ 3.743158] EXT4-fs (nvme0n1p3): write access will be enabled during recovery
Oct 26 19:18:58 ubuntu kernel: [ 3.806113] EXT4-fs (nvme0n1p3): error loading journal
Oct 26 19:19:04 ubuntu kernel: [ 30.724414] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:19:04 ubuntu kernel: [ 30.752254] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:19:05 ubuntu kernel: [ 31.346630] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:19:05 ubuntu kernel: [ 31.365831] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:19:29 ubuntu kernel: [ 55.502099] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:19:29 ubuntu kernel: [ 55.516704] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:24:44 ubuntu kernel: [ 370.116101] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Oct 26 19:24:44 ubuntu kernel: [ 370.130330] blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Thanks to ddrescue I managed to clone all of its partitions to a different machine over the network. There were I/O errors while extracting both ext4 partitions, but with enough retries it eventually got everything.
After that I was able to run e2fsck on the images, which appeared to succeed, and now I can mount them as read-only loop devices. The data appears to be intact.
I suppose the first question is **is there anything I can do to fix whatever the problem is, and keep using this drive?** I'm assuming not, but I'm definitely open to suggestions.
If I try to run fsck on one of the partitions from the live USB, this is what happens. I tried all combinations of answers to the questions, as you'll see below. I can't understand enough of the manual pages, and I don't know enough about filesystems or drives to know what options, if any, might help me.
ubuntu@ubuntu:~$ sudo fsck /dev/nvme0n1p3
fsck from util-linux 2.36.1
e2fsck 1.46.3 (27-Jul-2021)
/dev/nvme0n1p3: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Run journal anyway? yes
fsck.ext4: Input/output error while recovering journal of /dev/nvme0n1p3
fsck.ext4: unable to set superblock flags on /dev/nvme0n1p3
/dev/nvme0n1p3: ********** WARNING: Filesystem still has errors **********
ubuntu@ubuntu:~$ sudo fsck /dev/nvme0n1p3
fsck from util-linux 2.36.1
e2fsck 1.46.3 (27-Jul-2021)
/dev/nvme0n1p3: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Run journal anyway? no
Clear journal? no
fsck.ext4: Input/output error while recovering journal of /dev/nvme0n1p3
fsck.ext4: unable to set superblock flags on /dev/nvme0n1p3
/dev/nvme0n1p3: ********** WARNING: Filesystem still has errors **********
ubuntu@ubuntu:~$ sudo fsck /dev/nvme0n1p3
fsck from util-linux 2.36.1
e2fsck 1.46.3 (27-Jul-2021)
/dev/nvme0n1p3: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Run journal anyway? no
Clear journal? yes
fsck.ext4: Input/output error while recovering journal of /dev/nvme0n1p3
fsck.ext4: unable to set superblock flags on /dev/nvme0n1p3
/dev/nvme0n1p3: ********** WARNING: Filesystem still has errors **********
ubuntu@ubuntu:~$
I believe the drive is still under warranty, and I'm trying to get in contact with Samsung support to try to get a replacement or refund.
If they ask me to send it back, that's going to pose a problem since there's sensitive data on this drive.
The drive resists all attempts to write to it. I can't mount it and write to it normally. The kernel emits I/O errors if I try to write to it at the block level. Even Samsung's secure erase tool (their Windows-only software offers to produce a bootable USB drive with such a tool) fails.
**Is there some way to force secure erasure of this device?**
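One hedged avenue: NVMe has its own erase commands that act below the block layer (they may well fail on a drive this unwell, but they are worth trying before an RMA). From the nvme-cli package:
sudo nvme id-ctrl /dev/nvme0 -H | grep -iE 'format|sanitize|crypto'  # what the controller claims to support
sudo nvme format /dev/nvme0n1 --ses=1    # Secure Erase Setting 1 = user-data erase, 2 = crypto erase
sudo nvme sanitize /dev/nvme0 --sanact=2 # block-erase sanitize, if supported
If even these fail, the remaining options are the vendor's RMA process (explaining the data sensitivity) or physical destruction.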
tremby
(563 rep)
Oct 26, 2021, 07:52 PM
• Last activity: Apr 30, 2025, 05:03 PM
4
votes
0
answers
2462
views
Recovering an overwritten file on ZFS
Is there any way to recover a deleted or overwritten file on ZFS?
I accidentally overwrote a JPG file with a scanned image. Unfortunately, I didn’t take a snapshot beforehand. However, since ZFS uses the Copy-on-Write (CoW) mechanism, I believe the overwritten data might still exist in some form.
Does anyone know if there is a way to restore the overwritten file on ZFS?
I tried using photorec. As a result, I recovered some JPG files; however, my target file was not among them. Strangely, photorec couldn't properly recover even most of the JPG files that were never deleted. Then I remembered that, unfortunately, my pool has lz4 compression enabled.
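Two hedged notes, not from the thread. First, the lz4 compression also explains photorec's trouble: on-disk blocks are lz4 frames, not JPG bytes, so signature-based carving can't see most files. Second, a last-ditch, unsupported idea, only against a clone of the devices: ZFS's copy-on-write means older transaction groups may survive until their blocks are reused, and the pool can sometimes be imported read-only at an earlier state:
sudo zdb -ul /dev/sdX1 | grep -A2 txg          # inspect label uberblocks and their txgs
sudo zpool import -o readonly=on -F -X mypool  # extreme rewind; pool name is a placeholder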
kou
(41 rep)
Sep 25, 2020, 08:05 AM
• Last activity: Mar 26, 2025, 08:01 PM
9
votes
2
answers
5167
views
Why is ddrescue slow when it could be faster on error free areas?
This question addresses the **first pass** of ddrescue on the device to be rescued.
I had to rescue a 1.5TB hard disk.
The command I used is:
# ddrescue /dev/sdc1 my-part-img my-part-map
When the rescue is started (with no optional parameters) on a good area of the disk, the read rate ("current rate") stays around 18 MB/s. It occasionally slows a bit, but then comes back to this speed.
However, when it encounters a bad area of the disk, it may slow down significantly, and then it never comes back to 18 MB/s, but stays around 3 MB/s, even after reading 50 GB of good disk with no problem.
The strange part is that, when it is scanning a good disk area at 3 MB/s, if I stop ddrescue and restart it, it restarts at the higher reading rate of 18 MB/s. I actually saved about 2 days by stopping and restarting ddrescue whenever it was going at 3 MB/s, which I had to do 8 times to finish the first pass.
My question is: why does ddrescue not try to go back to the highest speed on its own? Given the policy, explicitly stated in the documentation, of doing the easy areas first and fast, that is what should happen, and the behavior I observed seems to me to be a bug.
I have been wondering whether this can be dealt with using the option -a (--min-read-rate=…), but the manual is so terse that I was not sure. Besides, I do not understand on what basis one should choose a read rate for this option. Should it be the 18 MB/s above?
Still, even with an option to specify it, I am surprised this is not done by default.
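For what it's worth, a hedged example of the option in question; the manual leaves the threshold choice open, and a common heuristic is a sizeable fraction of the healthy rate, so that areas reading far below it are set aside:
# skip (for now) any area reading slower than 9 MB/s, half the healthy 18 MB/s:
ddrescue -a 9M /dev/sdc1 my-part-img my-part-map
Areas skipped as slow stay recorded in the mapfile and are revisited in later passes, which is the behavior the question hoped for by default.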
Meta note
---------
Two users have voted to close the question for being primarily opinion
based.
I would appreciate knowing in what sense it is?
I describe with some numerical precision the behavior of an
important piece of software on an actual example, showing clearly that
it does not meet a major design objective stated in its documentation
(doing the easy parts as quickly as possible), and that very simple
reasoning could improve that.
The software is well known, from a very trusted source, with precise algorithms, and I expect that most defects were weeded out long ago.
So I am asking experts for a possible known reason for this unexpected
behavior, not being an expert myself on this issue.
Furthermore, I ask whether one of the options of the software should
be used to resolve the issue, which is even more a very precise
question. And I ask for a detailed aspect (how to choose the parameter
for this option) since I did not find documentation for that.
I am asking for facts that I need for my work, not opinions. And I
motivate it with experimental facts, not opinions.
babou
(878 rep)
Aug 5, 2018, 09:55 PM
• Last activity: Mar 24, 2025, 03:35 AM