
Ask Different (Apple)

Q&A for power users of Apple hardware and software

Latest Questions

0 votes
1 answer
283 views
Apple fsck.hfs borks on corrupted journal
I got the message "Unrecognized disk" and a choice of eject, initialize, or ignore. I chose ignore, then used "repair disk". It made no difference. Can this be fixed without reinitializing?
Sherwood Botsford (1730 rep)
Jan 9, 2022, 08:58 PM • Last activity: May 24, 2025, 03:04 PM
4 votes
1 answer
241 views
Any way to configure fsck_hfs to use more memory to speed up verification of Time Capsule images?
Periodically Time Machine verifies sparsebundle backups with fsck_hfs. When the sparsebundle is on a Time Capsule (TC) it does this by creating a partial and much smaller representation of the sparsebundle on the TC, transferring the result to the Mac into
/private/var/db/com.apple.backupd.backupVerification
and then running fsck_hfs on it locally (it mounts the local sparsebundle, as can be seen with diskutil list). The problem is that fsck_hfs takes a very long time (e.g. >24 hrs) to verify this representation of very large TC sparsebundles. [This leaves the Time Machine menu item apparently stuck at 'verifying', with the real Time Capsule disk no longer mounted under /Volumes, which confuses many into thinking the process has died.] fsck_hfs is launched by its parent process backupd with the parameters -f -n -x -E. There are various posts about fsck_hfs working much more efficiently when allowed to use more memory (the -c option); by default (at least on my system) it seems to be limited to 3 GB. My question: is there any way to pass a configuration that would cause backupd to launch fsck_hfs with the additional -c parameter and so run faster?
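There is no documented backupd preference for this, but once the verification image is staged, one option is to cancel the automatic check and run the same command by hand with -c added. The sketch below only composes the command line for inspection; the flag set -f -n -x -E is the one observed above, while the device path and the 8g cache size are placeholder assumptions.

```shell
# Hypothetical manual re-run of backupd's check with a larger cache.
# build_cmd only prints the command; the /dev/rdiskNs2 path is a placeholder
# for whatever device the mounted verification sparsebundle appears as.
build_cmd() {
  local cache=$1 dev=$2
  printf 'fsck_hfs -f -n -x -E -c %s %s\n' "$cache" "$dev"
}
build_cmd 8g /dev/rdiskNs2   # run the printed command yourself, with sudo
```

Find the actual device with `diskutil list` while the verification image is attached, then substitute it for the placeholder.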
timjph (41 rep)
May 10, 2021, 04:00 PM • Last activity: Feb 25, 2024, 10:04 AM
0 votes
1 answer
1087 views
Repair options of a (really) Broken HFS+ "Invalid extent entry" volume?
I have an unmountable external 2 TB hard disk, formatted HFS+, with one "usable" partition containing a set of my backups. It won't show in Finder, it won't repair in Disk Utility, and it won't repair on the command line:
$ diskutil verifyVolume /dev/disk2s2
Started file system verification on disk2s2 zuhauseBackup
Verifying file system
Volume is already unmounted
Performing fsck_hfs -fn -x /dev/rdisk2s2
Journal needs to be replayed but volume is read-only
Checking Journaled HFS Plus volume
Invalid extent entry
The volume   could not be verified completely
File system check exit code is 8
Restoring the original state found as unmounted
Error: -69845: File system verify or repair failed
Underlying error: 8

$ diskutil list /dev/disk2
/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk2
   1:                        EFI EFI                     209.7 MB   disk2s1
   2:                  Apple_HFS NameNotShown            2.0 TB     disk2s2
$
(The above NameNotShown is, in reality, shown correctly.) Linux hfsplus-fsck also gives the same error as the macOS tools (it appears to be exactly the same tool):
$ sudo fsck_hfs /dev/rdisk2s2
** /dev/rdisk2s2
   Executing fsck_hfs (version hfs-522.100.5).
** Checking Journaled HFS Plus volume.
   Invalid extent entry
(4, 0)
** The volume   could not be verified completely.
$
Mounting it on a Linux PC I can see some files in the backup directories, yet most "important" directories only show up as files of size 0. The above **Invalid extent entry** seems to be the culprit. Any ideas how to get around this, possibly fix the volume, or at least recover the files?
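Whatever repair is attempted next, it is safer to try it against an image rather than the failing disk itself. A minimal sketch of the copy step, demonstrated here against a scratch file (for real use, substitute the sick device for SRC and a destination with 2 TB free for DST); conv=noerror,sync makes dd continue past unreadable sectors, padding them with zeros:

```shell
# Copy-before-repair sketch. SRC/DST are scratch files standing in for the
# damaged volume and the image you would actually run repairs on.
SRC=/tmp/demo_src.img
DST=/tmp/demo_copy.img
dd if=/dev/zero of="$SRC" bs=1024 count=64 2>/dev/null        # stand-in for /dev/rdisk2s2
dd if="$SRC" of="$DST" bs=1024 conv=noerror,sync 2>/dev/null  # keep going past read errors
cmp -s "$SRC" "$DST" && echo "copy OK"
```

Repair attempts (fsck_hfs -r, recovery tools) can then be run against the copy without risking further damage to the original.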
Christian (121 rep)
Feb 9, 2021, 05:52 PM • Last activity: Feb 9, 2021, 06:02 PM
0 votes
0 answers
163 views
Restore a partition on an external drive after enclosure failure
I had a very freak electrical accident with my iMac 5K a few weeks ago. One of my external drives (my backup drive) shorted out and would no longer mount. I decided to buy a 3.5" enclosure, as I figured it was just the power supply that died. Now, when connected, I get a "This drive could not be read…" message. I've run numerous terminal commands and some third-party software, and as far as I can tell the data is all intact, but it looks like the drive configuration and headers are messed up. I see errors such as:
"Check the harddisk size: HD jumper settings, BIOS detection..."
"…no primary or secondary GPT headers, can't recover…"
"…unrecognized file system (-69846)…"
---
I'm talking out of my butt here, but it seems like the drive headers(?) could be reset correctly and the partition would magically show up. Here are the results of several scans. The drive in question is disk2, and disk2s1 appears to be the partition in question:
> diskutil list

/dev/disk2 (external, physical):
   #:         TYPE NAME                    SIZE          IDENTIFIER
   0:         FDisk_partition_scheme      *4.0 TB        disk2
   1:         0xEE                         500.1 GB      disk2s1
> sudo gpt show disk2

       start             size        index  contents
           0                1         PMBR
           1       7813971632
> testdisk > analyze >

Disk /dev/rdisk2 - 4000 GB / 3725 GiB - 7813971633 sectors
Current partition structure:
     Partition                  Start        End    Size in sectors

Bad GPT partition, invalid signature.
Trying alternate GPT
Bad GPT partition, invalid signature.
Data Rescue's scan returns what looks like a clean filesystem.
Tomasch (1 rep)
Aug 25, 2020, 03:31 PM • Last activity: Aug 28, 2020, 04:32 PM
1 vote
1 answer
397 views
Can I speed up Disk Utility First Aid by running “fsck_hfs” with the “-c” option?
I ran Disk Utility’s First Aid on a portable hard disk I use for Time Machine, which took around 13 hours (with no repairs being needed). I was wondering whether there’s a way to speed that up. Disk Utility’s output shows that it ran fsck_hfs -fy -x /dev/rdisk3, and so it doesn’t use the -c option, which according to the command’s man page “can result in better performance”:
-c size    Specify the size of the cache used by fsck_hfs internally.
           Bigger size can result in better performance but can result
           in deadlock when used with -l option.  Size can be specified
           as a decimal, octal, or hexadecimal number.  If the number
           ends with a `k', `m', or `g', the number is multiplied by
           1024 (1K), 1048576 (1M), or 1073741824 (1G), respectively.
I was wondering whether anyone has experience using the -c option. Does fsck_hfs not use a cache at all when the option is omitted, or does it use a default size? How large would I need to set the cache to see significant, or even just any, performance improvement? I assume this depends on the size of the disk; mine is 1 TB, with Disk Utility showing it contains around 15,500,000 files.
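As a small sanity check on the man page excerpt above, the suffix arithmetic can be reproduced in the shell (this only mirrors the documented multiplication, not anything fsck_hfs does internally):

```shell
# Multiply a size ending in k/m/g by 1024, 1048576, or 1073741824,
# per the fsck_hfs man page's description of -c.
to_bytes() {
  case $1 in
    *k) echo $(( ${1%k} * 1024 ));;
    *m) echo $(( ${1%m} * 1048576 ));;
    *g) echo $(( ${1%g} * 1073741824 ));;
    *)  echo "$1";;
  esac
}
to_bytes 4g   # 4294967296 bytes, i.e. what `fsck_hfs -c 4g` requests
```

So `-c 4g` asks for a 4 GiB cache, noticeably above the 3 GB ceiling other posters report seeing by default.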
Rinzwind (3751 rep)
Jun 19, 2019, 08:11 AM • Last activity: Aug 18, 2020, 07:18 AM
0 votes
1 answer
614 views
Time Machine Sparse Bundle failed to verify after catalog rebuild
After reading the fsck_hfs man page, I felt the need to rebuild the catalogs. When I ran a repair before the rebuild it succeeded, but the rebuild itself printed a lot of errors, and then a verify after the rebuild printed none. Here's the output of rebuilding a local disk (Seagate Backup 5T):
Last login: Thu Aug 15 17:23:10 on ttys010
Welcome to fish, the friendly interactive shell
root@Joys-MBP /p/v/root# fsck_hfs -c 4294967296 -d -D 
fsck_hfs: option requires an argument -- D
usage: fsck_hfs [-b [size] B [path] c [size] e [mode] ESdfglx m [mode] npqruy] special-device
  b size = size of physical blocks (in bytes) for -B option
  B path = file containing physical block numbers to map to paths
  c size = cache size (ex. 512m, 1g)
  e mode = emulate 'embedded' or 'desktop'
  E = exit on first major error
  d = output debugging info
  f = force fsck even if clean (preen only) 
  g = GUI output mode
  x = XML output mode
  l = live fsck (lock down and test-only)
  m arg = octal mode used when creating lost+found directory 
  n = assume a no response 
  p = just fix normal inconsistencies 
  q = quick check returns clean, dirty, or failure 
  r = rebuild catalog btree 
  S = Scan disk for bad blocks
  u = usage 
  y = assume a yes response 
root@Joys-MBP /p/v/root# fsck_hfs -c 4294967296 -d -D /dev/disk2
fsck_hfs: invalid debug development argument.  Assuming zero
fsck_hfs: missing special-device
usage: fsck_hfs [-b [size] B [path] c [size] e [mode] ESdfglx m [mode] npqruy] special-device
  b size = size of physical blocks (in bytes) for -B option
  B path = file containing physical block numbers to map to paths
  c size = cache size (ex. 512m, 1g)
  e mode = emulate 'embedded' or 'desktop'
  E = exit on first major error
  d = output debugging info
  f = force fsck even if clean (preen only) 
  g = GUI output mode
  x = XML output mode
  l = live fsck (lock down and test-only)
  m arg = octal mode used when creating lost+found directory 
  n = assume a no response 
  p = just fix normal inconsistencies 
  q = quick check returns clean, dirty, or failure 
  r = rebuild catalog btree 
  S = Scan disk for bad blocks
  u = usage 
  y = assume a yes response 
root@Joys-MBP /p/v/root# fsck_hfs -c 4294967296 -d -D /dev/disk2s2
fsck_hfs: invalid debug development argument.  Assuming zero
fsck_hfs: missing special-device
usage: fsck_hfs [-b [size] B [path] c [size] e [mode] ESdfglx m [mode] npqruy] special-device
  b size = size of physical blocks (in bytes) for -B option
  B path = file containing physical block numbers to map to paths
  c size = cache size (ex. 512m, 1g)
  e mode = emulate 'embedded' or 'desktop'
  E = exit on first major error
  d = output debugging info
  f = force fsck even if clean (preen only) 
  g = GUI output mode
  x = XML output mode
  l = live fsck (lock down and test-only)
  m arg = octal mode used when creating lost+found directory 
  n = assume a no response 
  p = just fix normal inconsistencies 
  q = quick check returns clean, dirty, or failure 
  r = rebuild catalog btree 
  S = Scan disk for bad blocks
  u = usage 
  y = assume a yes response 
root@Joys-MBP /p/v/root# fsck_hfs -c 4294967296 -d -D 0x0001 /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 4294967296 -d -D 0x0002 /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 4294967296 -d -D 0x0002 0x0001 /dev/disk2s2

0x0001: No such file or directory
Can't stat 0x0001
Can't stat 0x0001: No such file or directory
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 17592186044415M -d -D 0x0002 0x0001 0x0010 0x0020 /dev/disk2s2
0x0001: No such file or directory
Can't stat 0x0001
Can't stat 0x0001: No such file or directory
0x0010: No such file or directory
Can't stat 0x0010
Can't stat 0x0010: No such file or directory
0x0020: No such file or directory
Can't stat 0x0020
Can't stat 0x0020: No such file or directory
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 17592186044415M -d -D 0x0002 -D 0x0001 -D 0x0010 -D 0x0020 /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 1759218604441M -d -D 0x0002 -D 0x0001 -D 0x0010 -D 0x0020 /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 4g -d -D 0x0002 -D 0x0001 -D 0x0010 -D 0x0020 /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
fsck_hfs: Volume is journaled.  No checking performed.
fsck_hfs: Use the -f option to force checking.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 4g -d -D 0x0002 -D 0x0001 -D 0x0010 -D 0x0020 -f /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
   The volume name is Seagate 5T (Joy Jin)
** Checking extents overflow file.
** Checking catalog file.
** Checking multi-linked files.
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking volume bitmap.
** Checking volume information.
** The volume Seagate 5T (Joy Jin) appears to be OK.
	CheckHFS returned 0, fsmodified = 0
root@Joys-MBP /p/v/root# fsck_hfs -c 4g -d -D 0x0002 -D 0x0001 -D 0x0010 -D 0x0020 -f -R ace /dev/disk2s2
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
   The volume name is Seagate 5T (Joy Jin)
** Checking extents overflow file.
** Checking catalog file.
** Rebuilding extents overflow B-tree.
hfs_UNswap_BTNode: invalid node height (1)
** Rebuilding catalog B-tree.
hfs_UNswap_BTNode: invalid node height (1)
** Rebuilding extended attributes B-tree.
hfs_UNswap_BTNode: invalid node height (1)
** Rechecking volume.
** Checking Journaled HFS Plus volume.
   The volume name is Seagate 5T (Joy Jin)
** Checking extents overflow file.
** Checking catalog file.
** Checking multi-linked files.
** Checking catalog hierarchy.
** Checking extended attributes file.
   Unused node is not erased (node = 6360)
   (message repeat from 6360 to 49663) --> I exceeded character limit (1916033/30000) so cut this part off
   Unused node is not erased (node = 49663)
** Checking volume bitmap.
** Checking volume information.
   Invalid volume file count
   (It should be 1143585 instead of 872171)
   Invalid volume directory count
   (It should be 201272 instead of 168319)
   Invalid volume free block count
   (It should be 310948865 instead of 350464208)
	invalid VHB nextCatalogID 
   Volume header needs minor repair
(2, 0)
   Verify Status: VIStat = 0x8000, ABTStat = 0x0004 EBTStat = 0x0000
                  CBTStat = 0x0000 CatStat = 0x00000000
** Repairing volume.
** Rechecking volume.
** Checking Journaled HFS Plus volume.
   The volume name is Seagate 5T (Joy Jin)
** Checking extents overflow file.
** Checking catalog file.
** Checking multi-linked files.
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking volume bitmap.
** Checking volume information.
** The volume Seagate 5T (Joy Jin) was repaired successfully.
	CheckHFS returned 0, fsmodified = 1
root@Joys-MBP /p/v/root#
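A side note on the transcript above: the trial and error shows that each -D value needs its own -D flag. Since the values look like a bitmask (an assumption based on the 0x... usage; the usage text does not document it), they could presumably also be OR'ed into a single value:

```shell
# Combine the four debug masks tried above into one value (assumes -D takes a
# bitmask; the transcript passes them as separate -D flags, which also worked).
printf '0x%04x\n' $(( 0x0001 | 0x0002 | 0x0010 | 0x0020 ))
```

This prints 0x0033, so `-D 0x0033` would be the single-flag equivalent under that assumption.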
And then I decided to do the same to my Time Machine sparsebundle on a Synology NAS (local 1 Gbps network). It failed. What should I do now?
Last login: Fri Aug 16 14:40:08 on ttys010
Welcome to fish, the friendly interactive shell
root@Joys-MacBook-Pro /p/v/root# fsck_hfs -df /dev/disk2s2
^C⏎                                                                             root@Joys-MacBook-Pro /p/v/root# 
date ; fsck_hfs -df /dev/disk2s2 ; date ; fsck_hfs -c 4g -d -D 0x0002 -D 0x0001 -D 0x0010 -D 0x0020 -f -R ace /dev/disk2s2 ; date
2019年 8月16日 星期五 14时40分44秒 CST
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Using cacheBlockSize=32K cacheTotalBlock=65536 cacheSize=2097152K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
** Detected a case-sensitive volume.
   The volume name is 时间机器备份
** Checking extents overflow file.
** Checking catalog file.
** Checking multi-linked files.
** Checking catalog hierarchy.
** Checking extended attributes file.
** Checking multi-linked directories.
	privdir_valence=17552, calc_dirlinks=104189, calc_dirinode=17552
** Checking volume bitmap.
** Checking volume information.
** The volume 时间机器备份 appears to be OK.
	CheckHFS returned 0, fsmodified = 0
2019年 8月16日 星期五 15时44分22秒 CST
journal_replay(/dev/disk2s2) returned 0
** /dev/rdisk2s2
	Cache size should be greater than 32M and less than 17592186044415M
	Using cacheBlockSize=32K cacheTotalBlock=98304 cacheSize=3145728K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
** Detected a case-sensitive volume.
   The volume name is 时间机器备份
** Checking extents overflow file.
** Checking catalog file.
** Rebuilding extents overflow B-tree.
hfs_UNswap_BTNode: invalid node height (1)
** Rebuilding catalog B-tree.
hfs_UNswap_BTNode: invalid node height (1)
** Rebuilding extended attributes B-tree.
hfs_UNswap_BTNode: invalid node height (1)
** Rechecking volume.
** Checking Journaled HFS Plus volume.
   Invalid extent entry
(4, 0)
	CheckExtRecord: id=4 1:(16384,0), (blockCount == 0)
** The volume   could not be verified completely.
	volume check failed with error 7 
	volume type is pure HFS+ 
	primary MDB is at block 0 0x00 
	alternate MDB is at block 0 0x00 
	primary VHB is at block 2 0x02 
	alternate VHB is at block 22476447294 0x53bb35e3e 
	sector size = 512 0x200 
	VolumeObject flags = 0x07 
	total sectors for volume = 22476447296 0x53bb35e40 
	total sectors for embedded volume = 0 0x00 
	CheckHFS returned -1317, fsmodified = 1
2019年 8月16日 星期五 16时12分03秒 CST
root@Joys-MacBook-Pro /p/v/root# fsck_hfs -df /dev/disk2s2
/dev/disk2s2: No such file or directory
Can't stat /dev/disk2s2
Can't stat /dev/disk2s2: No such file or directory
root@Joys-MacBook-Pro /p/v/root# fsck_hfs -df /dev/disk2s2
Unable to open block device /dev/disk2s2: Resource busyjournal_replay(/dev/disk2s2) returned 16
** /dev/rdisk2s2
	Using cacheBlockSize=32K cacheTotalBlock=65536 cacheSize=2097152K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
   Invalid extent entry
(4, 0)
	CheckExtRecord: id=4 1:(16384,0), (blockCount == 0)
** The volume   could not be verified completely.
	volume check failed with error 7 
	volume type is pure HFS+ 
	primary MDB is at block 0 0x00 
	alternate MDB is at block 0 0x00 
	primary VHB is at block 2 0x02 
	alternate VHB is at block 22476447294 0x53bb35e3e 
	sector size = 512 0x200 
	VolumeObject flags = 0x07 
	total sectors for volume = 22476447296 0x53bb35e40 
	total sectors for embedded volume = 0 0x00 
	CheckHFS returned -1317, fsmodified = 0
root@Joys-MacBook-Pro /p/v/root# fsck_hfs -df /dev/disk2s2
Unable to open block device /dev/disk2s2: Resource busyjournal_replay(/dev/disk2s2) returned 16
** /dev/rdisk2s2
	Using cacheBlockSize=32K cacheTotalBlock=65536 cacheSize=2097152K.
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
   Invalid extent entry
(4, 0)
	CheckExtRecord: id=4 1:(16384,0), (blockCount == 0)
** The volume   could not be verified completely.
	volume check failed with error 7 
	volume type is pure HFS+ 
	primary MDB is at block 0 0x00 
	alternate MDB is at block 0 0x00 
	primary VHB is at block 2 0x02 
	alternate VHB is at block 22476447294 0x53bb35e3e 
	sector size = 512 0x200 
	VolumeObject flags = 0x07 
	total sectors for volume = 22476447296 0x53bb35e40 
	total sectors for embedded volume = 0 0x00 
	CheckHFS returned -1317, fsmodified = 0
root@Joys-MacBook-Pro /p/v/root#
Joy Jin (3043 rep)
Aug 17, 2019, 10:27 AM • Last activity: Jul 25, 2020, 04:39 AM
0 votes
0 answers
71 views
Seemingly unprovoked software corruption on external drive. Can I salvage?
I have an external disk on my Mojave system whose volume name was seemingly overwritten with another volume's name; now it won't mount, and Disk Utility First Aid won't help.
I have an external enclosure containing two SATA drives, and I also have a "toaster" for dropping in other drives I keep on my shelf. In the enclosure I have volumes "Drone Raw" and "Masters", both mounted. I pulled an older drive off the shelf, also called "Masters", and dropped it into the toaster. It mounted fine and I had two "Masters". I've done this before without incident. Then something went awry, and I am not exactly sure of the sequence. I think I unmounted the "Masters" in the enclosure. Maybe both. I can't be sure how I got here. But suddenly the Masters in the enclosure would not mount at all. Disk Utility thinks its volume name is now "Drone Raw" and it won't mount. The true "Drone Raw" mounts fine. If I try to mount Masters, it then shows two "Drone Raw" volumes, one of them unmounted. The results of a First Aid read:
Repairing file system.
Volume is already unmounted.
Performing fsck_hfs -fy -x /dev/rdisk6s2
Checking Journaled HFS Plus volume.
Checking extents overflow file.
Checking catalog file.
Invalid sibling link
Rebuilding catalog B-tree.
The volume Drone Raw could not be repaired.
File system check exit code is 8.
Restoring the original state found as unmounted.
File system verify or repair failed.
Operation failed…
I tried command-line fsck_hfs -r and -f to no avail. Is this drive hopelessly scrozzed? If so, more academically, why did this happen? I really wasn't doing anything creative or risky at the time. Thanks for any help. Bill
BSartist (1 rep)
May 13, 2020, 01:50 PM
1 vote
0 answers
485 views
How do I rebuild the catalog on time machine backup hard drive?
My Time Machine backup drive has started giving me these errors on doing a fsck. How can I rebuild the indexes on this drive? I just upgraded to macOS Mojave 10.14.6. The drive has 2 partitions: 1.9 TB for Time Machine (HFS+) and 100 GB for other files (HFS+). Logs:
/dev/rdisk2s2: fsck_hfs started at Sat Feb 15 08:37:39 2020
/dev/rdisk2s2: /dev/rdisk2s2: ** /dev/rdisk2s2
/dev/rdisk2s2:    Executing fsck_hfs (version hfs-407.200.4).
/dev/rdisk2s2: ** Checking Journaled HFS Plus volume.
/dev/rdisk2s2:    The volume name is 2TB
/dev/rdisk2s2: ** Checking extents overflow file.
/dev/rdisk2s2: ** Checking catalog file.
/dev/rdisk2s2: ** Checking multi-linked files.
/dev/rdisk2s2: ** Checking catalog hierarchy.
/dev/rdisk2s2: ** Checking extended attributes file.
/dev/rdisk2s2: ** Checking multi-linked directories.
/dev/rdisk2s2:    Invalid parent for directory inode (id = 11770766)
/dev/rdisk2s2:    (It should be 19 instead of 18)
/dev/rdisk2s2:    Invalid name for directory inode (id = 11770766)
/dev/rdisk2s2:    (It should be dir_11770766 instead of temp11770766)
/dev/rdisk2s2:    Incorrect number of directory hard links
/dev/rdisk2s2: ** The volume 2TB could not be verified completely.
/dev/rdisk2s2: fsck_hfs completed at Sat Feb 15 09:02:06 2020
On an Ubuntu VM, I am seeing:
$ sudo fsck.hfsplus -dfy /dev/sda2
** /dev/sda2
	Using cacheBlockSize=32K cacheTotalBlock=1024 cacheSize=32768K.
** Checking HFS Plus volume.
** Checking Extents Overflow file.
** Checking Catalog file.
** Checking multi-linked files.
** Checking Catalog hierarchy.
** Volume check failed.
	volume check failed with error 5
	volume type is pure HFS+
	primary MDB is at block 0 0x00
	alternate MDB is at block 0 0x00
	primary VHB is at block 2 0x02
	alternate VHB is at block 3711042830 0xdd32050e
	sector size = 512 0x200
	VolumeObject flags = 0x07
	total sectors for volume = 3711042832 0xdd320510
	total sectors for embedded volume = 0 0x00
I tried to rebuild the catalog using the -r option (on Ubuntu), but that failed:
$ sudo fsck.hfsplus -dfyr /dev/sda2
** /dev/sda2
	Using cacheBlockSize=32K cacheTotalBlock=1024 cacheSize=32768K.
	could not get volume block 2, err 5
	could not get alternate volume header at 3711042830, err 5
	unknown volume type
	primary MDB is at block 0 0x00
	alternate MDB is at block 0 0x00
	primary VHB is at block 0 0x00
	alternate VHB is at block 0 0x00
	sector size = 512 0x200
	VolumeObject flags = 0x01
	total sectors for volume = 3711042832 0xdd320510
	total sectors for embedded volume = 0 0x00
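One cross-check on the debug output above: HFS+ keeps its alternate volume header 1024 bytes (two 512-byte sectors) before the end of the volume, which is exactly where fsck reports it:

```shell
# alternate VHB sector = total sectors - 2, since the backup volume header
# sits 1024 bytes from the end of an HFS+ volume (sector size 512 here).
TOTAL=3711042832
echo $(( TOTAL - 2 ))   # matches "alternate VHB is at block 3711042830"
```

The fact that the -r run can read neither block 2 nor block 3711042830 suggests both volume header copies (or the device itself) are unreadable, which is why the rebuild fails before it starts.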
user674669 (2666 rep)
Feb 15, 2020, 05:20 PM • Last activity: Feb 15, 2020, 06:36 PM
1 vote
0 answers
664 views
fsck_hfs invalid key length
The HDD is a brand new WD My Passport 4 TB. I recently used it as my main backup, and before I had time to reformat another drive for a second backup (which means that some data on this drive is crucial to recover), the drive stopped mounting on my MacBook Pro running Mojave, where it was always attached (I don't know what happened). I tried Disk Utility, which failed. I tried fsck_hfs in Recovery and Single User Mode on 2 different Apple computers, as well as on Linux Mint; all failed. The returned errors are Invalid Key Length and Catalog File not found for extent. I tried Stellar Data Recovery, which ran for a full day and is now stuck at the beginning of the yellow phase 3 of 4; it has not crashed, and no error message is displayed. I tried their new Stellar Volume Repair, which failed. I tried iBoySoft Data Recovery, which crashes every time I do a deep scan. I do not understand why such a disk failure is so difficult to repair! What can I do? Advice will be deeply appreciated.
Paul Godard (131 rep)
Jan 5, 2020, 11:47 AM • Last activity: Jan 5, 2020, 12:08 PM
3 votes
1 answer
1951 views
Why is fsck_hfs slow on *journaled* HFS+
Sometimes my external drives get uncleanly detached, and at the next connection a disk check is necessary. What surprises me now is seeing a slow disk check process *on a journaled filesystem*. According to the manual, a full check of a journaled volume requires the -f option, yet a check ran without it, so the manual seems incorrect. Isn't the point of journaling that you *never* need a full disk check? In detail, macOS started fsck automatically with -y:
$ ps auxw|grep fsck
root              3792   1.1  2.3  6429668 389400   ??  U     2:09pm   0:05.10 /System/Library/Filesystems/hfs.fs/Contents/Resources/./fsck_hfs -y /dev/disk3s2
Then I interrupted the process and re-ran it by hand without -y for greater control, as I usually do:
sudo fsck_hfs /dev/disk3s2
** /dev/rdisk3s2
   Executing fsck_hfs (version hfs-407.50.6).
** Checking Journaled HFS Plus volume.
   The volume name is MyPass4T-TM2
** Checking extents overflow file.
** Checking catalog file.
** Checking multi-linked files.
FWIW, this is on a journaled HFS+ sparsebundle, that I use for Time Machine, hosted inside an external filesystem (ExFAT).
Blaisorblade (676 rep)
Jan 15, 2019, 02:17 PM • Last activity: Apr 14, 2019, 04:53 PM
3 votes
0 answers
1942 views
"No space left on device" despite 70GB free; can't create any files larger than 8.0MiB on iPad
iPad Pro 9.7" (1st gen) 256GB, iOS 10.2.1. **Problem #1**: **I can't create files larger than 2-8MB** (it varies upon reboot). This renders the iPad **virtually unusable.** Many apps won't launch, apps won't install, etc. It **reports "no space left on device" when you try to create a file larger than the bizarre 2-8MB limit, despite having gigs of free space.** **Problem #2**: **Disk space constantly keeps disappearing**. I kept uninstalling apps (before this "no space left on device" issue began), and no matter how many I deleted, it would act full a few days later. At first it acted full at 1GB free. Then over several weeks it eventually became 2GB, then 3...4...6...8... and eventually, even with 9GB free, the device still acted like it was full! So I knew a HUGE amount of disk space was unaccounted for, because I had uninstalled dozens of gigs of apps. **Precipitating Incident**: Something catastrophic happened a few months ago when I was legitimately very low on disk space and tried updating several apps at once. The iPad froze, several system databases were corrupted, and the iPad began asking me to set up certain passwords again, etc. Ever since then I've had various issues with it, but was mostly able to use it. Until last week! I ended up jailbreaking the iPad because I'm at the end of my wits and am going to have to erase the device if I can't solve this, and I was absolutely DYING to run a **"du -h -d 1"** to see just WHAT was consuming roughly 60GB of missing space!! I ran fsck_hfs on the drive (which was incredibly difficult to do!!) and SURE ENOUGH, it said something like **2 million blocks free - should be 16 million**, and I did the math and it made perfect sense! The fsck completed and rebooted and BAM! Suddenly my missing space is back and I've got **71GB free!** But that's around the time the problem got so bad that I can't create any files bigger than 2-8MB.
I literally ran: `dd if=/dev/zero of=testfile.bin bs=1M count=10` ..and it will fail at a certain size that is almost always a perfect MiB power of 2 (like 2, 4, or 8MiB) with "No space left on device". BUT I CAN ALWAYS WRITE AS MANY MORE FILES OF THAT SIZE AS I WANT! Let's say the limit is 4.0MiB today. I can run that dd command with incremental filenames over and over; I've done it 7 times in a row, creating 7 files, and every time it worked perfectly. If I make it 4.1MiB, it fails, even though I just created 7x4 (28MiB) of files! And STILL, the disk space CONTINUES to shrink on its own; this morning it's down to 39GB free. **If I fsck_hfs it again, it will go back to the ~70GB free mark, and slowly begin dwindling once again.** I'm at a loss. Just **HOW can the device give "No space left on device" errors when there are dozens of GB free?** The iPad only has 1 disk, divided into a 4GB /System partition and the rest on /private/var. My System partition is only 75% full, which is normal for any iOS device. I even checked the inodes with df and there are something like 4 billion inodes free on the Data disk (/dev/disk0s1s2).
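The size probe described above can be scripted. A minimal sketch (the directory name is made up for illustration; on the iPad this would run somewhere under /var/root):

```shell
# Sketch of the probe: write zero-filled files of doubling size until
# dd reports "No space left on device", then report the last size that worked.
probe_dir=$(mktemp -d)
for mb in 1 2 4 8 16; do
  if dd if=/dev/zero of="$probe_dir/test_${mb}M.bin" bs=1M count="$mb" 2>/dev/null; then
    echo "wrote ${mb} MiB OK"
  else
    echo "write failed at ${mb} MiB"
    break
  fi
done
rm -rf "$probe_dir"
```

On a healthy volume every size succeeds; on the affected iPad the loop would stop at the day's 2-8MiB ceiling.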
Here are some relevant printouts (from various days):

    iPad:/private root# df
    Filesystem     512-blocks      Used Available Capacity   iused      ifree %iused  Mounted on
    /dev/disk0s1s1    9316200   6795912   2427128    74%   125137 4294842142    0%   /
    devfs                  99        99         0   100%      172          0  100%   /dev
    /dev/disk0s1s2  486135960 476137152   9998808    98%  1217291 4293749988    0%   /private/var

    iPad:/private root# df -h
    Filesystem      Size   Used  Avail Capacity   iused      ifree %iused  Mounted on
    /dev/disk0s1s1  4.4Gi  3.2Gi  1.2Gi    74%   125137 4294842142    0%   /
    devfs           50Ki   50Ki    0Bi   100%      172          0  100%   /dev
    /dev/disk0s1s2  232Gi  227Gi  4.8Gi    98%  1217291 4293749988    0%   /private/var

    iPad-Pro-256GB:/sbin root# mount
    /dev/disk0s1s1 on / (hfs, local, journaled, noatime)
    devfs on /dev (devfs, local, nobrowse)
    /dev/disk0s1s2 on /private/var (hfs, local, nodev, nosuid, journaled, noatime, protect)

    iPad-Pro-256GB:~ root# pwd
    /var/root
    iPad-Pro-256GB:~ root# dd if=/dev/zero of=test3.bin bs=1M count=20
    dd: error writing 'test3.bin': No space left on device
    9+0 records in
    8+0 records out
    8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.671137 s, 12.5 MB/s

Excerpt from one of the first fsck_hfs runs, when the device had about 9GB free but should've had 70GB free:

    ** Checking volume bitmap.
       Volume bitmap needs minor repair for orphaned blocks
       Volume bitmap needs repair for under-allocation
    ** Checking volume information.
       Invalid volume free block count
       (It should be 16884367 instead of 2063604)

A complete, successful fsck_hfs:

    iPad-Pro-256GB:/ root# umount -f /private/var && killall backboardd && fsck_hfs -f -y /dev/disk0s1s2
    umount: /private/var: not currently mounted
    iPad-Pro-256GB:/ root# fsck_hfs -f -y /dev/disk0s1s2
    ** /dev/rdisk0s1s2
       Executing fsck_hfs (version hfs-366.30.3).
    ** Checking Journaled HFS Plus volume.
    ** Detected a case-sensitive volume.
       The volume name is Data
    ** Checking extents overflow file.
    ** Checking catalog file.
       Incorrect size for file MediaLibrary.sqlitedb
       (It should be 1343488 instead of 1564672)
    ** Checking multi-linked files.
    ** Checking catalog hierarchy.
    ** Checking extended attributes file.
    ** Checking volume bitmap.
       Volume bitmap needs minor repair for orphaned blocks
    ** Checking volume information.
       Invalid volume free block count
       (It should be 16972349 instead of 14633343)
    ** Repairing volume.
       Limited repair mode, not all repairs available
    ** Rechecking volume.
    ** Checking Journaled HFS Plus volume.
    ** Detected a case-sensitive volume.
       The volume name is Data
    ** Checking extents overflow file.
    ** Checking catalog file.
    ** Checking multi-linked files.
    ** Checking catalog hierarchy.
    ** Checking extended attributes file.
    ** Checking volume bitmap.
    ** Checking volume information.
    ** Trimming unused blocks.
    ** The volume Data was repaired successfully.

**Notes:**
A. Nothing relevant shows up in the syslog when large files fail to create.
B. Device: iPad Pro 9.7" 256GB, iOS 10.2.1, HFS+ (not APFS, which was introduced later in iOS 10.3). Never jailbroken UNTIL long after this problem started.
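For what it's worth, the "did the math" step above checks out. A quick sketch, assuming the volume uses 4 KiB allocation blocks (the HFS+ default for a volume this size; the real value would need to be confirmed on the device):

```python
# Sanity-check the missing-space arithmetic from the first fsck_hfs excerpt.
# ALLOC_BLOCK = 4096 is an assumption (HFS+ default for large volumes).
ALLOC_BLOCK = 4096

should_be_free = 16884367  # free blocks fsck_hfs says the bitmap should show
actually_free = 2063604    # free blocks the corrupted bitmap actually showed

missing_bytes = (should_be_free - actually_free) * ALLOC_BLOCK
print(f"unaccounted-for space: {missing_bytes / 1e9:.1f} GB")  # ~60.7 GB
```

That lands right on the "roughly 60GB of missing space" figure, which supports the 4 KiB assumption.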
Syclone0044 (1375 rep)
Aug 27, 2018, 10:31 PM • Last activity: Aug 30, 2018, 08:04 AM
0 votes
1 answers
4667 views
Interpreting fsck_hfs result
While trying to understand why my Mac's Spotlight performance dropped off, I tried a suggestion that had me check my HD's status in Disk Utility. I can't press the Unmount button even after using First Aid, which is weird. So this led me to this link - an old one but informative - and I tried the commands there. For the two other commands I got the expected result. Running sudo fsck_hfs -f /dev/disk0s2 or -fy, however (-y meaning "always attempt to repair any damage that is found"), while this is expected: [screenshot of the expected output] I got the following result, the same whether I typed -f or -fy: [screenshot of the fsck_hfs errors] This seemed troublesome. But shamefully I don't have a clue what the issue is, nor how to deal with it. Could anyone share some insight, please?
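When Disk Utility's Unmount button is greyed out, the check can still be scripted from Terminal, since fsck_hfs needs the volume unmounted (or it silently runs read-only). A hedged sketch, macOS-only; `check_volume` is a made-up helper name, and note the startup volume can't be unmounted from a running system at all (use Recovery for that):

```shell
# Hypothetical helper: force-unmount a non-boot volume, check it, remount it.
check_volume() {
  local disk="$1"                           # e.g. /dev/disk0s2
  sudo diskutil unmount force "$disk" || return 1
  sudo fsck_hfs -fy "$disk"                 # -f force a check, -y repair what is found
  local status=$?
  sudo diskutil mount "$disk"               # remount regardless of the result
  return "$status"
}
# usage: check_volume /dev/disk0s2
```

A nonzero return from fsck_hfs here means the volume could not be fully verified or repaired.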
dia (241 rep)
Jan 18, 2018, 03:01 PM • Last activity: Jan 18, 2018, 06:46 PM
1 votes
0 answers
53 views
Is there a way to figure out which file is stored at a particular block?
I've just checked my SSD with `sudo fsck_hfs -fy -S /dev/disk2s2` and it threw some errors, for example: `block 2760096: *** NO MATCH ***` What I would like to know is which file is located at block 2760096.
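There is no stock tool that does the reverse mapping, but the first step is plain arithmetic. A sketch, assuming the reported number is an allocation block and that the volume uses the common 4 KiB allocation block size (the real size should be taken from `diskutil info` for that volume; `block_to_byte_offset` is a hypothetical helper name):

```python
# Hypothetical helper: convert an fsck_hfs block number into a byte offset
# on the partition, so the region can be inspected with e.g. dd.
# The 4096-byte allocation block size is an assumption; substitute the
# real value reported by `diskutil info` for the volume.
def block_to_byte_offset(block: int, alloc_block_size: int = 4096) -> int:
    return block * alloc_block_size

offset = block_to_byte_offset(2760096)
print(offset)  # byte offset into /dev/disk2s2 for the NO MATCH block
```

With the offset in hand, the surrounding bytes can be dumped (read-only) with dd's `skip=` to see what, if anything, lives there; NO MATCH itself means fsck_hfs found no file whose extents cover that block.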
Lenar Hoyt (843 rep)
Nov 11, 2017, 12:13 PM
2 votes
1 answers
1947 views
MacBook Pro (Late2011) fails to boot and repair HDD volume:
I have a MacBook Pro ("late 2011" / MD313LL/A) (i5, 4GB, 500GB Toshiba HDD) with Mac OS X 10.9.x on it. The MacBook won't boot. In verbose mode I see the following: during boot, after mounting the drive, fsck_hfs is started, and it states "*Incorrect number of thread records (4,23121)*". Then the machine enters a repair-check-fail loop that never ends successfully; it aborts after 3 or 4 attempts and the machine shuts down. (I have also started recovery mode and ran Disk Utility's check from there, but with the same result: the volume could not be repaired.) I booted a Linux live system and checked the HDD with a tool that reads S.M.A.R.T. data. According to this, there is no issue with the drive at all. What does *Incorrect number of thread records (4,23121)* mean? Any ideas about what to do will be appreciated.
Matt (23 rep)
Sep 22, 2017, 01:40 PM • Last activity: Sep 22, 2017, 02:13 PM
1 votes
1 answers
708 views
Strange messages from fsck_hfs
I am trying to check an entire USB external disk that is not erased. For that reason I typed `fsck_hfs -fy -l -S -d /dev/disk1s2` This is the result: [screenshot of the fsck_hfs output] Several messages I found strange here:

1. NO WRITE - how can that be? If I am trying to fix the disk, how can it be no-write?
2. OK, it found a bad block at 137228189184, but at the end it says the disk is OK.
3. What is "block 26023808 NO MATCH"?
4. At the end it says fsmodified = 0??? No modification performed? I don't understand this.

Has fsck_hfs fixed my disk? By "fixed" I mean marked the block as unusable, or reformatted it, or whatever? Thanks
Duck (2572 rep)
Sep 9, 2017, 03:38 AM • Last activity: Sep 9, 2017, 07:29 AM