
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

32 votes
8 answers
124566 views
UUID of a drive that won't show up in /dev/disk/by-uuid or blkid
I have a USB drive that is not receiving a UUID. When I look at the contents of /dev/disk/by-uuid, it isn't there. The device node the partition lives on is /dev/sdb, and I am able to see sdb under /dev/disk/by-path. Also, blkid gives zero output, so I'm assuming it returned an error code. Is there a way to get a UUID for this partition?

Result of fdisk -l /dev/sdb:

Disk /dev/sdb: 320.1 GB, 320072932352 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142446 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082145

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   625141759   312569856   83  Linux

The partition table and partition were created with gparted, and the partition was then formatted with mkfs.ext3.

Output of fsck -n /dev/sdb1:

fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
fsck.ext2: Superblock invalid, trying backup blocks...
zwei was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
zwei: 11/19537920 files (0.0% non-contiguous), 1275097/78142464 blocks

It was formatted as an ext3 drive. Why is it showing up as ext2?
monksy (773 rep)
Feb 11, 2013, 08:32 PM • Last activity: Mar 31, 2025, 03:40 PM
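For the question above, the fsck transcript itself hints at the cause: the primary superblock is invalid (the filesystem was only found via a backup superblock), which would also explain why blkid reports nothing. A hedged sketch of things to try; the backup superblock location depends on block size, so 32768 is only a guess here:

blkid /dev/sdb1                    # probe the partition node directly, not the whole disk
e2fsck -b 32768 /dev/sdb1          # repair from a backup superblock (note: without -n)
tune2fs -l /dev/sdb1 | grep UUID   # afterwards, read the UUID straight from the superblock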
1 vote
0 answers
1013 views
How do I get ext3/ext4 filesystem features to apply in mke2fs?
As many of you know, ext3/ext4 have filesystem features that provide their special functionalities. Some of these features can be retrieved using dumpe2fs, for example this output:

Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

So my question is: where in the system, or with which command, can I get the full listing of filesystem features available to apply when using mke2fs? If anyone happens to know a useful website/link, I'd appreciate it too.
strkIV (56 rep)
May 8, 2017, 08:22 PM • Last activity: Mar 17, 2025, 08:32 AM
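For reference, the named features are documented in the ext4 man page (man 5 ext4 in newer e2fsprogs releases; the mke2fs man page in older ones), and the set that mke2fs enables by default comes from /etc/mke2fs.conf. A sketch, with the device name a placeholder:

grep -A3 '\[defaults\]' /etc/mke2fs.conf         # base_features= lists the defaults
mkfs.ext4 -O ^has_journal,^huge_file /dev/sdXN   # -O enables/disables named features (^ disables)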
7 votes
4 answers
3677 views
How to deliberately fragment a file
I am looking for a way to fragment an existing file in order to evaluate the performance of some tools. I found a solution for the NTFS file system called MyFragmenter, as described in this thread. However I can't find anything for ext2/3/4... I guess I could develop my own file fragmenter, but due to time constraints I would like to find a faster solution. I found some tools like HJ-Split which split a file into smaller bits, but I doubt this will simulate file fragmentation. Is there any solution available for my problem?
Flanfl (965 rep)
Mar 18, 2012, 05:00 PM • Last activity: Nov 3, 2024, 11:35 PM
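Lacking a dedicated tool, one rough way to provoke fragmentation on ext2/3/4 is to grow two files in lock-step so each append claims the blocks the other would have taken next, then measure with filefrag from e2fsprogs. A sketch only; ext4's delayed allocation may partially defeat it, hence the sync each round:

for i in $(seq 1 1000); do
    dd if=/dev/urandom bs=4k count=1 >> victim 2>/dev/null
    dd if=/dev/urandom bs=4k count=1 >> filler 2>/dev/null
    sync    # force block allocation before the next interleaved append
done
filefrag -v victim    # report how many extents were actually allocated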
9 votes
1 answer
16406 views
How to resize ext3 image files
I have created a 200MB ext3 image using the following commands:

dd if=/dev/zero of=./system.img bs=1000000 count=200
mkfs.ext2 ./system.img
tune2fs -j ./system.img

How can I resize it to 50MB and 300MB? The problem is I have only some binaries on my system. They are: badblocks, e2fsck, mke2fs, mke2fs.bin, parted, resize2fs, tune2fs
Binoy Babu (1065 rep)
Apr 10, 2012, 08:14 AM • Last activity: Feb 22, 2024, 01:48 PM
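resize2fs, which is in the asker's list of available binaries, operates on image files directly. A sketch, assuming the image is not mounted:

e2fsck -f ./system.img        # resize2fs requires a clean, forced check first
resize2fs ./system.img 50M    # shrink the filesystem to 50MB (the file itself stays 200MB)
dd if=/dev/zero of=./system.img bs=1M seek=300 count=0   # extend the file to 300MB (sparse)
resize2fs ./system.img        # grow the filesystem to fill the enlarged file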
14 votes
1 answer
16204 views
Mounting an ext3 filesystem with user privileges
I'm trying to mount an ext3 file system from another Linux installation so that the user, not root, will have full access to all the files. (I really do need the user to have access to those files, because I would like to use them from another computer via sshfs, and sshfs will only give the user's access rights to the files.) If I run

mount /dev/sda1 /mnt/whatever

all files are only accessible by root. I've also tried

mount -o nosuid,uid=1000,gid=1000 /dev/sda1 /mnt/whatever

as instructed by a SuperUser question discussing ext4, but that fails with an error, and dmesg reports:

EXT3-fs: Unrecognized mount option "uid=1000" or missing value

How can I mount the filesystem?
Ilari Kajaste (335 rep)
Jun 9, 2011, 10:18 AM • Last activity: Feb 2, 2024, 09:54 AM
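ext3 stores ownership on disk, so it has no uid=/gid= mount options (hence the dmesg complaint above). The usual workarounds are to chown the tree once, or to present a remapped view with bindfs (a FUSE tool), if installing one is acceptable. A sketch reusing the question's mountpoint and UID; /mnt/asuser is a hypothetical second mountpoint:

mount /dev/sda1 /mnt/whatever
chown -R 1000:1000 /mnt/whatever    # permanent: rewrites ownership on the filesystem itself
# or, leaving on-disk ownership untouched:
bindfs -u 1000 -g 1000 /mnt/whatever /mnt/asuser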
9 votes
1 answer
4010 views
What is a fragment size in an ext3 filesystem?
This is an output from dumpe2fs:

root: ~/# dumpe2fs -h /dev/sdb3 | grep -i 'fragment|block size'
dumpe2fs 1.39 (29-May-2006)
Block size:               4096
Fragment size:            4096
Fragments per group:      32768

Is this related to disk fragmentation?
JBraganza (91 rep)
Jul 8, 2011, 11:48 PM • Last activity: Jan 8, 2024, 09:53 AM
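A side note on the quoted command: with grep -i and a basic regexp, the | is literal, so the alternation needs -E (or egrep); something like the following, assuming GNU grep:

dumpe2fs -h /dev/sdb3 2>/dev/null | grep -Ei 'fragment|block size'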
25 votes
9 answers
37073 views
How to search for files with immutable attribute set?
For config auditing reasons, I want to be able to search my ext3 filesystem for files which have the immutable attribute set (via `chattr +i`). I can't find any options for `find` or similar that do this. At this point, I'm afraid I'll have to write my own script to parse `lsattr` output for each directory. Is there a standard utility that provides a better way?
depquid (3981 rep)
May 31, 2014, 02:34 AM • Last activity: Nov 17, 2023, 07:00 AM
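A common workaround, since find has no test for ext2/3/4 attributes, is to recurse with lsattr and match the lowercase i in the flags column. A sketch; it is not robust against filenames containing whitespace:

lsattr -R / 2>/dev/null | awk '$1 ~ /i/ { print }'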
2 votes
3 answers
2515 views
Besides the journal, what are the differences between ext2 and ext3?
I just saw [an answer to a question about filesystems for embedded hardware](https://raspberrypi.stackexchange.com/a/1169) on another Stack Exchange site. The question was "What file system format should I use on flash memory?" and the answer suggested the ext2 filesystem, or the ext3 filesystem with journaling disabled, à la

tune2fs -O ^has_journal /dev/sdbX

This made me wonder... What would the advantage be of using ext3 (with journaling disabled) over ext2? As far as I understood, the only real difference between the two was the journal. What other differences between ext2 and ext3 are there?
Josh (8728 rep)
Jul 20, 2012, 12:37 PM • Last activity: Nov 16, 2023, 10:02 AM
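The difference is visible directly in the feature flags: converting between the two is just adding or removing has_journal, which can be watched with dumpe2fs. A sketch, device name a placeholder:

tune2fs -j /dev/sdXN                  # ext2 -> ext3: add a journal
tune2fs -O ^has_journal /dev/sdXN     # and back again
dumpe2fs -h /dev/sdXN | grep -i features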
143 votes
12 answers
226337 views
How do I know if a partition is ext2, ext3, or ext4?
I just formatted stuff. One disk I formatted as ext2. The other I want to format as ext4. I want to test how they perform. Now, how do I know the kind of file system in a partition?
user4951 (10749 rep)
Jan 9, 2013, 10:24 AM • Last activity: Nov 8, 2023, 02:19 AM
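The usual probes, each of which reads the superblock and distinguishes ext2/ext3/ext4 directly; a sketch with a placeholder device name:

blkid /dev/sda1       # prints TYPE="ext2"/"ext3"/"ext4"
lsblk -f              # FSTYPE column for every block device
file -s /dev/sda1     # decodes the superblock (mentions journal, extents, etc.)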
0 votes
0 answers
363 views
hdparm causes drive spin up after spindown
I have followed the manual of hdparm to try to make the drive spin down after a few minutes of inactivity, but as soon as it spins down it turns back on again within 30 seconds, though I don't hear any disk activity. Someone reported a bug with hdparm parameters here: https://unix.stackexchange.com/questions/107165/hard-disk-spins-down-and-up-too-frequently-when-on-battery So I was trying different values. I notice this setting:

hdparm -q -a 1 -B 128 -S 120 /dev/sda

...causes the hard drive to go slower, as if it sometimes powers down then immediately powers up before completely powering down (so like tiny knocks). Apparently this command fixes that, yet keeps the drive always on even when I never touch the computer:

hdparm -q -a 1 -B 255 -S 120 /dev/sda

So I looked to see what could be doing I/O at idle times and noticed a kworker thread doing it every second or so. I'm not sure if udisks, upowerd, or udev is the cause behind this. I have also made a RAM drive (/dev/ramX) and stored frequently accessed files there so that I/O to disk is minimized. I don't know if I'm using ideal sysctl settings, but these are the values I have for the /proc/sys/vm folder:
admin_reserve_kbytes=8192
block_dump=0
compact_unevictable_allowed=1
dirty_background_bytes=0
dirty_background_ratio=54
dirty_bytes=0
dirty_expire_centisecs=10
dirty_ratio=55
dirty_writeback_centisecs=300
dirtytime_expire_seconds=43200
drop_caches=0
extfrag_threshold=500
highmem_is_dirtyable=0
laptop_mode=1
legacy_va_layout=0
lowmem_reserve_ratio=256        32      32
max_map_count=65530
min_free_kbytes=43196
mmap_min_addr=98304
nr_pdflush_threads=0
oom_dump_tasks=0
oom_kill_allocating_task=1
overcommit_kbytes=0
overcommit_memory=0
overcommit_ratio=50
page-cluster=128
panic_on_oom=0
percpu_pagelist_fraction=0
swappiness=0
user_reserve_kbytes=64233
vdso_enabled=1
vfs_cache_pressure=10000
For the /proc/sys/fs folder:
aio-max-nr=65536
aio-nr=0
dentry-state=36019      23519   45      0       0       0
dir-notify-enable=1
file-max=205612
file-nr=4056    0       205612
inode-nr=27643  72
inode-state=27643       72      0       0       0       0       0
lease-break-time=45
leases-enable=1
nr_open=1048576
overflowgid=65534
overflowuid=65534
pipe-max-size=1048576
pipe-user-pages-hard=0
pipe-user-pages-soft=16384
protected_hardlinks=0
protected_symlinks=0
suid_dumpable=0
I don't know if the filesystem matters much, but the drive has multiple partitions of ext3 and vfat. I also do not use a swap partition, as I have 2GB of memory installed. How do I go about ensuring that the disk can spin down from inactivity and stay down until I'm ready to use the computer again? I'm running Slackware 14 with Xfce on an older 32-bit PC. I also checked the power settings, both in Xfce and KDE, and I didn't see anywhere I could specify a timeout to spin down a drive. And no, I don't have money to invest in an SSD, and I recall someone recently having issues with one of them in a Mac.
mike_s (11 rep)
Dec 1, 2022, 04:42 AM
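Since the sysctl listing above shows block_dump available (it was removed only in much newer kernels), one way to identify what keeps waking the drive is to log every block I/O, with process names, to the kernel ring buffer for a while. A sketch:

echo 1 > /proc/sys/vm/block_dump    # log block reads/writes/dirtying per process
dmesg | tail -n 50                  # repeat while idle to see who touches sda
echo 0 > /proc/sys/vm/block_dump    # turn logging back off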
1 vote
1 answer
1294 views
What happens if you never ever run e2fsck?
We have servers which have been running for a long time. When they reboot, we see this message:

kernel: EXT4-fs (sda3): warning: maximal mount count reached, running e2fsck is recommended

My question is: what if you never ever run e2fsck? The man page does not shed enough light. The warning message says running it "is recommended", but does not say it is mandatory. What are the consequences of not running it? What does it mean to have reached the maximal mount count?
Pawan Singh (13 rep)
Jul 16, 2022, 01:51 AM • Last activity: Jul 16, 2022, 06:49 AM
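Both counters behind that warning are readable and adjustable with tune2fs. Skipping the periodic check forgoes early detection of silent corruption (from hardware, power loss, or kernel bugs); the journal only keeps metadata consistent across crashes, it does not guard against bit rot. A sketch:

tune2fs -l /dev/sda3 | grep -iE 'mount count|check'   # current count vs. maximum
tune2fs -c 0 -i 0 /dev/sda3                           # disable count- and interval-based checks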
30 votes
3 answers
45140 views
Difference between block size and cluster size
I've got a question concerning the block size and cluster size. Regarding what I have read about them, I assume the following:

* The block size is the physical size of a block, mostly 512 bytes. There is no way to change this.
* The cluster size is the minimal size of a block that is readable and writable by the OS. If I create a new filesystem, e.g. ext3, I can specify this minimal block size with the switch -b.

Almost all programs like dumpe2fs and mke2fs use "block size" as a name for the cluster size. If I have got the following output:

$ stat test
  File: `test'
  Size: 13     Blocks: 4     IO Block: 2048   regular file
Device: 700h/1792d     Inode: 15     Links: 1

is it correct that the size is the actual space in bytes, blocks are the physically used blocks (512 bytes each), and IO Block relates to the block size specified when creating the FS?
pluckyDuck (473 rep)
Jun 4, 2011, 04:19 PM • Last activity: Jun 25, 2022, 09:26 AM
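The quoted numbers are self-consistent under that reading: 4 blocks × 512 bytes = 2048 bytes, i.e. the 13-byte file occupies exactly one 2048-byte IO block. GNU stat can label the units explicitly; a sketch:

stat -c 'size=%s bytes, %b blocks of %B bytes, io-block=%o' test
# -> size=13 bytes, 4 blocks of 512 bytes, io-block=2048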
1 vote
0 answers
483 views
How to locate the root cause of a corrupted ext3 filesystem
I have a 4G Compact Flash card with a 2.5G ext3 partition. The file system has become corrupted. I am not interested in fixing the file system so much as in identifying exactly what is corrupted in it. Aside from knowing outright that the filesystem is corrupted: when running GNOME's Disks utility, if I select the partition and choose to 'repair the filesystem', it eventually errors with:

Error repairing filesystem on /dev/sdb3: Process reported exit code 1: e2fscf 1.42.9 (28-Dec-2013) (udisks-error-quark,0)

I got the idea from here to use badblocks to try to identify bad blocks, get inode numbers, and find corrupted files. However, when I run this, badblocks doesn't find any bad blocks (I ran it a few times):

sudo badblocks -v /dev/sdb3 -b 4096 -s
Checking blocks 0 to 622517
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)

I thought this was odd, so I ran fsck, but that reported no errors either:
sudo e2fsck -vcck /dev/sdb3
e2fsck 1.42.9 (28-Dec-2013)
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done                                                 
/dev/sdb3: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/sdb3: ***** FILE SYSTEM WAS MODIFIED *****

      243306 inodes used (78.16%, out of 311296)
         134 non-contiguous files (0.1%)
         121 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 2469/39/0
      583752 blocks used (93.77%, out of 622518)
           0 bad blocks
           0 large files

      210273 regular files
       28804 directories
         638 character device files
          12 block device files
           1 fifo
        2646 links
        3569 symbolic links (3480 fast symbolic links)
           0 sockets
------------
      245943 files
Now I am confused. What does it mean that I cannot repair the filesystem, yet the filesystem checker and badblocks tell me nothing is wrong? Is it just a mechanically failed CF card then? Anticlimactic... it is new, after all.
Jared Sanchez (31 rep)
Apr 29, 2022, 08:10 PM • Last activity: Apr 29, 2022, 08:45 PM
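When badblocks and e2fsck both pass yet the udisks repair fails, it can help to capture what the kernel logs at the moment of failure, and to ask the device itself, though many CF-to-USB bridges expose no SMART data. A sketch:

dmesg | grep -iE 'sdb|i/o error'   # kernel-level errors around the failed repair
smartctl -a /dev/sdb               # only if the bridge passes SMART through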
1 vote
0 answers
1411 views
How did my ext3 superblocks get so messed up? And how to fix them?
I've recently found an old external hard disk and I have no idea what's on it. There was only one partition, encrypted with [dm-crypt](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-crypt.html). Luckily I still remembered some of the passphrases that I used back in the day, and one of them actually worked. With the right passphrase, cryptsetup revealed an **ext3 filesystem** inside. Unfortunately, I couldn't mount it:
$ sudo mount --read-only --types ext3 /dev/mapper/mystery-disk /mnt/mystery-disk/
mount: /mnt/mystery-disk: wrong fs type, bad option, bad superblock on /dev/mapper/mystery-disk, missing codepage or helper program, or other error.
Same when I omitted the --types option. A quick fsck run did not help either.

Let's Image, Check, Dump
------------------------

At this point I decided to pull an image before any more fixing attempts. I've tried both dd and ddrescue several times and neither [gave me](https://github.com/meeque/mystery-disk/blob/master/out/mystery-disk.dd.log) [any errors](https://github.com/meeque/mystery-disk/blob/master/out/mystery-disk.ddrescue.log). So I think the **disk itself is fine**. With the image, I tried a couple more runs of e2fsck, but it didn't get me anywhere:
$ e2fsck mystery-disk.img
e2fsck 1.45.5 (07-Jan-2020)
ext2fs_open2: The ext2 superblock is corrupt
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: The ext2 superblock is corrupt while trying to open mystery-disk.img
e2fsck: Trying to load superblock despite errors...
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
e2fsck: Group descriptors look bad... trying backup blocks...
Error reading block 1198463829 (Attempt to read block from filesystem resulted in short read).  Ignore error? yes
Force rewrite? yes
Superblock has an invalid journal (inode 8).
Clear? yes
*** journal has been deleted ***

Corruption found in superblock.  (r_blocks_count = 3755560972).

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>


mystery-disk.img: ***** FILE SYSTEM WAS MODIFIED *****
$ e2fsck mystery-disk.img
e2fsck 1.45.5 (07-Jan-2020)
ext2fs_open2: The ext2 superblock is corrupt
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: The ext2 superblock is corrupt while trying to open mystery-disk.img
e2fsck: Trying to load superblock despite errors...
ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap
e2fsck: Group descriptors look bad... trying backup blocks...
Superblock has an invalid journal (inode 8).
Clear? yes
*** journal has been deleted ***

Corruption found in superblock.  (r_blocks_count = 3755560972).

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>


mystery-disk.img: ***** FILE SYSTEM WAS MODIFIED *****
The dumpe2fs tool didn't help either. It just complained about a corrupt superblock, but didn't say why it's corrupt:
$ dumpe2fs -o superblock=32768 -o blocksize=4096 mystery-disk.img
dumpe2fs 1.45.5 (07-Jan-2020)
dumpe2fs: The ext2 superblock is corrupt while trying to open mystery-disk.img
Couldn't find valid filesystem superblock.
Superblocks
-----------

So apparently the superblock was messed up. But I had no idea how to find an alternative one. Eventually I stumbled across this other question here, about [recovering ext4 superblocks](https://unix.stackexchange.com/questions/33284/recovering-ext4-superblocks). (And I think that most of this applies to my ext3, too.) I tried the trick with mke2fs -n, but I was uncertain which block size my ext3 fs might have used. So I tried the usual suspects: 1024, 2048, 4096. This is what I got for 4096, which later turned out to be the correct one:
$ mke2fs -n -b 4096 mystery-disk.img 
[...]
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424
I then tried to pass all of these to e2fsck, with or without specifying a block size. But it always complained about corruption in the superblock. Outputs were pretty much the same as when using the default superblock. And I had no idea if mke2fs -n had even produced useful results. Some other sources said that it only works if called with the same parameters as when the filesystem was created. But what parameters could I have used over a decade ago? I'm not even sure that the mke2fs from back then would be compatible with the one that I use today.

More Superblocks?
-----------------

I searched the web for other methods of finding ext3 superblocks. I found surprisingly little, but eventually I stumbled across some technical documentation of ext4 superblock data structures in the [Linux kernel docs](https://www.kernel.org/doc/html/latest/filesystems/ext4/globals.html). These mentioned magic bytes and some enumerated values, so I came up with a regexp based on the superblock fields s_magic, s_state, and s_errors:
$ LANG=C grep --only-matching --byte-offset --binary --text --perl-regexp '\x53\xEF[\x00-\x07]\x00[\x01-\x03]\x00' mystery-disk.img
This gave me a reasonable number of hits. So I wrote [a script](https://github.com/meeque/mystery-disk/blob/master/find-super.sh) around it, to calculate and print superblock numbers, sizes, etc. Some of the hits were clearly false positives, e.g. because they indicated unlikely block sizes or offsets. But the script also confirmed all the superblocks that I had found with mke2fs -n earlier. Here's an excerpt of the [script outputs](https://github.com/meeque/mystery-disk/blob/master/out/mystery-disk.find-super.log):
$ ./find-super.sh mystery-disk.img
[...]

Scan for superblocks complete. Found 21 candidate superblocks.
Printing superblock meta data...

Processing candidate superblock with magic bytes at 1080...
Superblock offset:     1024   (at 1024 in block 0)
Block size:            4096   (2**(10+2))
Filesystem size:       11525729222656 bytes   (2813898736 blocks, ~10734 GiB)

Processing candidate superblock with magic bytes at 26870840...
Superblock offset:     26870784   (at 1024 in block 6560)
Block size:            4096   (2**(10+2))
Filesystem size:       2889757667328 bytes   (705507243 blocks, ~2691 GiB)

[...]

Processing candidate superblock with magic bytes at 134217784...
Superblock offset:     134217728   (at 0 in block 32768)
Block size:            4096   (2**(10+2))
Filesystem size:       6483460464640 bytes   (1582876090 blocks, ~6038 GiB)

[...]

Processing candidate superblock with magic bytes at 37909356558...
Superblock offset:     37909356502   (at 982 in block 37020855)
Block size:            1024   (2**(10+0))
Filesystem size:       0 bytes   (0 blocks, ~0 GiB)

Processing candidate superblock with magic bytes at 46036680760...
Superblock offset:     46036680704   (at 0 in block 11239424)
Block size:            4096   (2**(10+2))
Filesystem size:       10750096064512 bytes   (2624535172 blocks, ~10011 GiB)
Still no luck with e2fsck or dumpe2fs with any of these superblocks. They always say "superblock is corrupt", but never say why. So I've hex-dumped some of the superblocks, see [here](https://github.com/meeque/mystery-disk/blob/master/out/mystery-disk.hexdump-superblock-0.log) and [here](https://github.com/meeque/mystery-disk/blob/master/out/mystery-disk.hexdump-superblock-32768.log). Not sure if they are plausible. They are a little heavy on the zeros. In particular, all the **checksums are zero** (see the field s_checksum at the end of each superblock). What puzzles me most is that each of these superblocks indicates a **different filesystem size** (as calculated from the s_blocks_count_lo and s_log_block_size fields). Ignoring outliers like 0, the alleged sizes range from **~712 GiB** to **~15957 GiB**. But my disk image is only **77G**, the same as the external hard disk. Is there any chance to figure out **which of the superblocks** might be suited best for further rescue attempts? If so, what would be the **next steps**? Any **other ideas**? Am I missing **something trivial**?
meeque (21 rep)
Apr 15, 2022, 09:47 PM
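One further angle for the situation above: debugfs will open an image with a forced superblock and block size even where e2fsck gives up, and its catastrophic mode skips the (possibly bogus) bitmaps; whether the root directory then lists sanely is a quick plausibility test for each candidate superblock. A sketch using the 32768/4096 pair from the question:

debugfs -c -b 4096 -s 32768 mystery-disk.img
# at the debugfs prompt, try:
#   ls -l /      (a readable root listing suggests this superblock is usable)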
0 votes
0 answers
554 views
statvfs returns invalid information
My eMMC partition is ext4 formatted and has about 100MB. When I issue statvfs() on that partition, I get a huge return value for the f_blocks field of struct statvfs. For your reference, this is struct statvfs:

struct statvfs {
    unsigned long  f_bsize;    /* file system block size */
    unsigned long  f_frsize;   /* fragment size */
    fsblkcnt_t     f_blocks;   /* size of fs in f_frsize units */
    fsblkcnt_t     f_bfree;    /* # free blocks */
    fsblkcnt_t     f_bavail;   /* # free blocks for non-root */
    fsfilcnt_t     f_files;    /* # inodes */
    fsfilcnt_t     f_ffree;    /* # free inodes */
    fsfilcnt_t     f_favail;   /* # free inodes for non-root */
    unsigned long  f_fsid;     /* file system ID */
    unsigned long  f_flag;     /* mount flags */
    unsigned long  f_namemax;  /* maximum filename length */
};

The value printed for the f_blocks field is 18446744073659310077, which is a huge one considering my partition is 100MB. The df command also reports a used value and size of 16Z!

Filesystem      Size  Used Avail Use% Mounted on
/dev/mmcblk2p4   16Z   16Z   79M 100% /data

Any idea what could be wrong? fsck on the partition succeeds. Not sure if there's a problem in the superblock or what. Below is the code for your reference:

if (statvfs_works ()) {
    struct statvfs vfsd;
    if (statvfs (file, &vfsd) < 0)
        ...
    fsu_blocks = PROPAGATE_ALL_ONES (vfsd.f_blocks);
    ....
}

I am badly stuck and need to understand why the used and size fields in the df output show 16Z. Any pointers would be greatly appreciated.

Update with more info
=====================

Output from /proc/partitions is as follows:

179        3     102400 mmcblk2p4

strace reports the below output:

statfs64("/data", 88, {f_type="EXT2_SUPER_MAGIC", f_bsize=1024, f_blocks=18446744073659310077, f_bfree=87628, f_bavail=80460, f_files=25688, f_ffree=25189, f_fsid={-1446355608, 1063639410}, f_namelen=255, f_frsize=1024, f_flags=4128}) = 0

As can be seen, f_blocks is a huge number. What could this mean? My kernel is 4.9.31. How do I get past this issue and make df report the correct output? dumpe2fs returns the following:

Block count:              102400
Reserved block count:     5120
Overhead blocks:          50343939

The overhead blocks value is a huge number. What could this mean? Regards
Embedded Enthusiast (21 rep)
Mar 21, 2022, 03:45 PM • Last activity: Mar 22, 2022, 01:15 PM
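The numbers in the question are consistent with an unsigned underflow: the kernel derives f_blocks by subtracting the filesystem overhead from the block count, and the corrupt Overhead value (50343939) exceeds the 102400-block partition. The arithmetic, using bash's 64-bit arithmetic to show the two's-complement value:

printf '%u\n' $(( 102400 - 50343939 ))
# -> 18446744073659310077, exactly the f_blocks value in the strace output

So the suspect is the superblock's overhead field rather than the block counts; clearing it so the kernel recomputes it (debugfs can set superblock fields) would be the direction to investigate.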
3 votes
1 answer
1519 views
ext3 vs ext4 - same disk size, less usable disk space under ext4 comparing to ext3. why?
Files copied from an ext3 filesystem do not fit on a same-size ext4 filesystem. How to reproduce: create two 1GB files, set up a loop device for each, create a filesystem on each (one ext3, one ext4), mount them, rsync files from /usr/lib into the ext3 folder until it is full, then try to rsync the files from the ext3 folder into the ext4 one; this fails due to lack of disk space. I would expect all files from the ext3 folder to fit into the ext4 folder (see the log: the next file to copy is larger than the available space, and there are still some more files left to copy). Can someone explain why? Or better: what can I do to get the same or more usable disk space using ext4? This is a Debian 8 system.

root@blackoil:~# uname -a
Linux blackoil 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt7-1 (2015-03-01) x86_64 GNU/Linux
root@blackoil:~# dd if=/dev/zero of=e3fs bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.07722 s, 173 MB/s
root@blackoil:~# dd if=/dev/zero of=e4fs bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.21495 s, 169 MB/s
root@blackoil:~# losetup /dev/loop3 e3fs
root@blackoil:~# losetup /dev/loop4 e4fs
root@blackoil:~# mkfs.ext3 -m 0 /dev/loop3
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 256000 4k blocks and 64000 inodes
Filesystem UUID: 8871cd27-5c82-4fa9-acaa-11c2ca200d08
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
root@blackoil:~# mkfs.ext4 /dev/loop4
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 256000 4k blocks and 64000 inodes
Filesystem UUID: 71b69de9-0858-4189-8e1f-907efd61f51d
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
root@blackoil:~# mkdir -p /mnt/e3
root@blackoil:~# mkdir -p /mnt/e4
root@blackoil:~# dumpe2fs /dev/loop3
dumpe2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          8871cd27-5c82-4fa9-acaa-11c2ca200d08
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              64000
Block count:              256000
Reserved block count:     0
Free blocks:              247557
Free inodes:              63989
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      62
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8000
Inode blocks per group:   500
Filesystem created:       Mon Jul 20 15:21:53 2015
Last mount time:          n/a
Last write time:          Mon Jul 20 15:21:53 2015
Mount count:              0
Maximum mount count:      -1
Last checked:             Mon Jul 20 15:21:53 2015
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      a35b09fe-b755-49b6-a94d-eb8f2960e5a6
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             16M
Journal length:           4096
Journal sequence:         0x00000001
Journal start:            0
root@blackoil:~# dumpe2fs /dev/loop4
dumpe2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          71b69de9-0858-4189-8e1f-907efd61f51d
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              64000
Block count:              256000
Reserved block count:     12800
Free blocks:              247562
Free inodes:              63989
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      62
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8000
Inode blocks per group:   500
Flex block group size:    16
Filesystem created:       Mon Jul 20 15:22:16 2015
Last mount time:          n/a
Last write time:          Mon Jul 20 15:22:16 2015
Mount count:              0
Maximum mount count:      -1
Last checked:             Mon Jul 20 15:22:16 2015
Check interval:           0 (<none>)
Lifetime writes:          16 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      33846c93-9d1a-4117-a7c8-1566e96e8e73
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             16M
Journal length:           4096
Journal sequence:         0x00000001
Journal start:            0
root@blackoil:~# mount /dev/loop3 /mnt/e3
root@blackoil:~# mount /dev/loop4 /mnt/e4
root@blackoil:~# rsync -a /usr/lib /mnt/e3
rsync: recv_generator: mkdir "/mnt/e3/lib/python2.6/xml/sax" failed: No space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: write failed on "/mnt/e3/lib/python2.6/httplib.py": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
root@blackoil:~# df |grep '/mnt/e'
/dev/loop3        991512  991512       0 100% /mnt/e3
/dev/loop4        991512    1264  922664   1% /mnt/e4
root@blackoil:~# cd /mnt/e3
root@blackoil:/mnt/e3# rsync -a . /mnt/e4
rsync: write failed on "/mnt/e4/lib/pepperflashplugin-nonfree/libpepflashplayer.so": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
root@blackoil:/mnt/e3# df | grep '/mnt/e'
/dev/loop3        991512  991512       0 100% /mnt/e3
/dev/loop4        991512  959372       0 100% /mnt/e4
root@blackoil:/mnt/e3# df -i | grep '/mnt/e'
/dev/loop3         64000    8939   55061   14% /mnt/e3
/dev/loop4         64000    8531   55469   14% /mnt/e4
root@blackoil:/mnt/e3# rsync -av . /mnt/e4
sending incremental file list
lib/pepperflashplugin-nonfree/
lib/pepperflashplugin-nonfree/libpepflashplayer.so
lib/pepperflashplugin-nonfree/manifest.json
lib/pepperflashplugin-nonfree/pubkey-google.txt
lib/pkgconfig/dbus-python.pc
lib/pkgconfig/geoclue-2.0.pc
lib/pkgconfig/gnome-system-tools.pc
lib/pkgconfig/keybinder.pc
lib/pkgconfig/libgdiplus.pc
lib/pkgconfig/libquvi-scripts.pc
lib/pkgconfig/libwnck-1.0.pc
lib/pkgconfig/libxfce4menu-0.1.pc
lib/pkgconfig/libxklavier.pc
lib/pkgconfig/notify-python.pc
lib/pkgconfig/pm-utils.pc
lib/pkgconfig/tomboy-addins.pc
lib/pkgconfig/unique-1.0.pc
lib/pkgconfig/xkbcomp.pc
lib/pkgconfig/xorg-wacom.pc
lib/pm-utils/
( ...... some removed due to 30000 lines limit posting question)
rsync: write failed on "/mnt/e4/lib/pepperflashplugin-nonfree/libpepflashplayer.so": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
root@blackoil:/mnt/e3# df |grep 'mnt/e'
/dev/loop3        991512  991512       0 100% /mnt/e3
/dev/loop4        991512  959372       0 100% /mnt/e4
root@blackoil:/mnt/e3# ls -l /mnt/e4/lib/pepperflashplugin-nonfree/libpepflashplayer.so
ls: cannot access /mnt/e4/lib/pepperflashplugin-nonfree/libpepflashplayer.so: No such file or directory
root@blackoil:/mnt/e3# ls -l lib/pepperflashplugin-nonfree/libpepflashplayer.so
-rw-r--r-- 1 root root 17370752 Mar 14 09:08 lib/pepperflashplugin-nonfree/libpepflashplayer.so
root@blackoil:/mnt/e3# mount |grep 'mnt/e'
/dev/loop3 on /mnt/e3 type ext3 (rw,relatime,data=ordered)
/dev/loop4 on /mnt/e4 type ext4 (rw,relatime,data=ordered)
lepoitr (31 rep)
Jul 20, 2015, 02:05 PM • Last activity: Mar 10, 2022, 10:02 PM
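Note that the two dumpe2fs outputs in the question differ in one directly relevant line: the ext3 image was created with -m 0 (Reserved block count: 0) while the ext4 image was not (Reserved block count: 12800, the default 5% root reserve, which is 50MB here). That alone plausibly accounts for most of the missing space. A sketch:

mkfs.ext4 -m 0 /dev/loop4    # create without the root reserve, matching the ext3 run
tune2fs -m 0 /dev/loop4      # or reclaim it on the existing filesystem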
1 vote
0 answers
152 views
Cannot mount USB External HD with ext3, possible hardware failure
I'm a Software Engineer with limited SysEng experience. I have a few-weeks-old "WD 4 TB Elements Portable External Hard Drive", that is, a USB 4TB external HD. I installed it on a Raspberry Pi and created a single ext3 partition. The HD was almost full when, last night during an rsync, it stopped working. I tried a quick reboot, after which I could not mount it. The drive, when on, periodically makes a non-encouraging "clank" noise. This is the error when mounting:
ubuntu@ubuntu:~$ sudo mount /dev/sdb1 /mnt/nfts/
mount: /mnt/nfts: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.
I've googled a bit and here below some commands I've run against the drive: From dmesg
[  147.507499] sd 1:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[  147.507519] sd 1:0:0:0: [sdb] tag#0 Sense Key : Aborted Command [current]
[  147.507533] sd 1:0:0:0: [sdb] tag#0 Add. Sense: No additional sense information
[  147.507550] sd 1:0:0:0: [sdb] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00
[  147.507567] blk_update_request: I/O error, dev sdb, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
More from dmesg
[  278.832149] Buffer I/O error on dev sdb1, logical block 0, lost sync page write
[  278.839728] EXT4-fs (sdb1): I/O error while writing superblock
[  278.845737] EXT4-fs (sdb1): no journal found
From tune2fs
ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sdb1
tune2fs 1.45.5 (07-Jan-2020)
Filesystem volume name:   <none>
Last mounted on:          /mnt/nfts
Filesystem UUID:          e4e0f609-7beb-4610-bb43-6a807f4f88b5
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         unsigned_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              244187136
Block count:              976745728
Reserved block count:     48837286
Free blocks:              116576385
Free inodes:              242495519
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      791
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Wed Jan 12 13:25:22 2022
Last mount time:          Wed Feb 23 13:56:10 2022
Last write time:          Wed Feb 23 14:07:39 2022
Mount count:              8
Maximum mount count:      -1
Last checked:             Wed Jan 12 13:25:22 2022
Check interval:           0 (<none>)
Lifetime writes:          2976 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      13549940-8c27-4c2a-87e2-e3f23ba4daa3
Journal backup:           inode blocks
FS Error count:           279
First error time:         Sun Jan 16 11:37:40 2022
First error function:     ext4_mb_generate_buddy
First error line #:       744
First error inode #:      0
First error block #:      0
Last error time:          Wed Feb 23 14:07:39 2022
Last error function:      ext4_mb_generate_buddy
Last error line #:        744
Last error inode #:       0
Last error block #:       0
I tried fsck but got no significant output after one night, so I stopped and rebooted. The HD is obviously experiencing a hardware failure, and I guess there are tons of bad sectors. Luckily the contents of the HD are thousands of small files, and my hope is to recover as many of them as possible. Can someone point me in the right direction? What can I do to investigate the issue further and try to restore access to whatever is still stored on healthy sectors? Thanks in advance. Giovanni
Giovanni G (11 rep)
Feb 27, 2022, 08:38 AM • Last activity: Mar 9, 2022, 11:46 AM
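With a drive making mechanical noises, the standard advice is to stop mounting it and image whatever is still readable, then recover from the copy; GNU ddrescue keeps a map file so interrupted runs can resume. A sketch; the output paths are placeholders and must live on a different, healthy disk:

ddrescue -d -r3 /dev/sdb1 /healthy/rescue.img /healthy/rescue.map
e2fsck -y /healthy/rescue.img                  # repair the copy, never the dying disk
mount -o loop,ro /healthy/rescue.img /mnt/recovered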
23 votes
2 answers
2705 views
How to compact a directory
Every so often, some application runs wild and fills a directory with a huge number of files. Once we fix the bug and clean up the files, the directory stays big (>50MB) even though there are only 20-30 files in it. Is there some command that compacts a directory without having to recreate it? Bonus points: does a huge empty directory affect access performance of that directory? I'm assuming it does, but maybe it's not worth bothering. It seems slower to do ls on such a directory.
Mathieu Longtin (331 rep)
May 14, 2012, 01:48 PM • Last activity: Jan 31, 2022, 04:12 PM
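On ext3 a directory never shrinks while the filesystem is mounted; the usual routes are an offline e2fsck -D pass (which rehashes and compacts directories) or rebuilding the directory by hand. A sketch, device name a placeholder; note the mv pattern below misses dotfiles unless the shell is configured to match them:

umount /dev/sdXN && e2fsck -fD /dev/sdXN    # offline: -D optimizes (compacts) directories
# online alternative: rebuild the directory
mkdir big.new && mv big/* big.new/ && rmdir big && mv big.new big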
1 vote
1 answer
2067 views
How to resize root ext3 file system without LVM
My Debian VMware image has run out of space. I've expanded the disk image but now need to increase my root partition to see the additional space. My volume is set up as follows:

Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x37ce2932

Device     Boot    Start      End  Sectors Size Id Type
/dev/sda1  *        2048 48236543 48234496  23G 83 Linux
/dev/sda2       48238590 52426751  4188162   2G  5 Extended
/dev/sda5       48238592 52426751  4188160   2G 82 Linux swap / Solaris

I understand that in order to expand sda1, any new space has to be directly after it. All the examples I've read either a) use LVM or b) don't have an extended sda2 partition directly after sda1. Can anyone point me to a reference that will show me how to expand sda1 in this scenario? I know I will have to switch off/remove swap on sda5, but what do I do about sda2?
Matt (13 rep)
Dec 7, 2021, 10:04 PM • Last activity: Dec 7, 2021, 10:33 PM
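Without LVM the usual sequence is: drop the swap and extended partitions, recreate sda1 with the same start sector but a larger end, then grow the filesystem and recreate swap at the new end of the disk. A sketch, not a recipe; it assumes a current backup and that sda1 keeps its start at sector 2048:

swapoff /dev/sda5
fdisk /dev/sda         # delete 5, 2, 1; recreate 1 at start 2048 with the new size;
                       # recreate 2/5 in the remaining space at the end
partprobe /dev/sda
resize2fs /dev/sda1    # grow the ext3 filesystem into the enlarged partition
mkswap /dev/sda5       # new swap UUID: update /etc/fstab accordingly
swapon /dev/sda5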
8 votes
3 answers
4836 views
Optimize ext4 for always full operation
Our application writes data to disk as a huge ring buffer (30 to 150TB); writing new files while deleting old files. As such, by definition, the disk is always "near full". The *writer* process creates various files at a net input speed of about 100-150 Mbits/s. Data files are a mixture of 1GB 'data' files and several smaller metadata files. (The input speed is constant, but note new file sets are created only once per two minutes). There is a separate *deleter* process which deletes the "oldest" files every 30s. It keeps deleting until it reaches 15GB of free-space headroom on the disk. So in stable operation, all data partitions have only 15GB free space. On this SO question relating to file system slowdown, DepressedDaniel commented:

> Sync hanging just means the filesystem is working hard to save the latest operations consistently. It is most certainly trying to shuffle data around on the disk in that time. I don't know the details, but I'm pretty sure if your filesystem is heavily fragmented, ext4 will try to do something about that. And that can't be good if the filesystem is nearly 100% full. The only reasonable way to utilize a filesystem at near 100% of capacity is to statically initialize it with some files and then overwrite those same files in place (to avoid fragmenting). Probably works best with ext2/3.

Is ext4 a bad choice for this application? Since we are running live, what tuning can be done to ext4 to avoid fragmentation, slowdowns, or other performance limitations? Changing from ext4 would be quite difficult... (and rewriting statically created files means rewriting the entire application) Thanks!

**EDIT I**

The server has 50 to 100 TB of disks attached (24 drives). The Areca RAID controller manages the 24 drives as a RAID-6 raid set. From there we divide into several partitions/volumes, with each volume being 5 to 10TB. So the size of any one volume is not huge. The "writer" process finds the first volume with "enough" space and writes a file there. After the file is written the process is repeated. For a brand new machine, the volumes are filled up in order. If all volumes are "full" then the "deleter" process starts deleting the oldest files until "enough" space is available. Over a long time, because of the action of other processes, the time sequence of files becomes randomly distributed across all volumes.

**EDIT II**

Running fsck shows very low fragmentation: 1 - 2%. However, in the meantime, slow filesystem access has been traced to various system calls like fclose(), fwrite(), ftello() etc. taking a very long time to execute (5 to 60 seconds!). So far there is no solution to this problem. See more details at this SO question: How to debug very slow (200 sec) fwrite()/ftello()/fclose()? I've disabled sysstat and raid-check to see if there is any improvement.
Danny (653 rep)
Dec 5, 2016, 12:52 PM • Last activity: Sep 15, 2021, 05:23 PM
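Given the fixed ~1GB data files, one tactic that fits the existing write pattern is to preallocate each data file's full size before writing, so the allocator can reserve a mostly contiguous extent while larger free runs still exist; filefrag shows whether it helps. A sketch; the path is a placeholder:

fallocate -l 1G /data/ring/chunk-000042.dat   # reserve the full extent up front
filefrag /data/ring/chunk-000042.dat          # fewer extents = less fragmentation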