Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
2
answers
2272
views
I/O errors and undeletable directories
For some unknown reason, there are 2 directories I can't delete. The first directory, called **sw.old**, is empty and can only be deleted with `rm`, as `rmdir` won't recognize it. However, even after `rm`, it still shows up:
```
[02:11:36] user@user:/media/user/exthdd/docs$ ls -il
total 1072064
1456 drwx------ 1 user user 0 Aug 12 10:04 1old.or.probably.unfinished
5717 drwx------ 1 user user 8192 Jan 27 22:58 videos
6528 -rw------- 1 user user 1097779088 Nov 5 16:15 release_Remix_OS_for_PC_Android_M_64bit_B2016112101.zip
8008 drwx------ 1 user user 4096 Jan 28 00:55 txt
64 drwx------ 1 user user 0 Dec 25 22:15 sw.old
[02:12:03] user@user:/media/user/exthdd/docs$ rmdir sw.old/
rmdir: failed to remove ‘sw.old/’: No such file or directory
[02:12:57] user@user:/media/user/exthdd/docs$ rm -rf sw.old/
[02:13:15] user@user:/media/user/exthdd/docs$ ls -il
total 1072064
1456 drwx------ 1 user user 0 Aug 12 10:04 1old.or.probably.unfinished
5717 drwx------ 1 user user 8192 Jan 27 22:58 videos
6528 -rw------- 1 user user 1097779088 Nov 5 16:15 release_Remix_OS_for_PC_Android_M_64bit_B2016112101.zip
8008 drwx------ 1 user user 4096 Jan 28 00:55 txt
64 drwx------ 1 user user 0 Dec 25 22:15 sw.old
```
The second one, called **misc**, has a corrupted file inside it:
```
[02:24:32] user@user:/media/user/exthdd/docs/txt$ ls -il
total 0
22607 drwx------ 1 user user 0 Dec 31 16:09 misc
[02:24:36] user@user:/media/user/exthdd/docs/txt$ ls -il misc/
ls: cannot access misc/patterns.mp4: Input/output error
total 0
? -????????? ? ? ? ? ? patterns.mp4
[02:24:54] user@user:/media/user/exthdd/docs/txt$ rm -rf misc/
rm: cannot remove ‘misc/patterns.mp4’: Input/output error
```
How can I remove those directories (and corrupted file inside one of them) without formatting?
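Not part of the original question, but with phantom directory entries and I/O errors the usual first step is an offline filesystem check (a sketch; the device name is an example, and the right fsck variant depends on the drive's filesystem):

```
# unmount the external drive, then check and repair the raw device
sudo umount /media/user/exthdd
sudo fsck -f /dev/sdb1
```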
PKM
(131 rep)
Jan 28, 2018, 12:40 AM
• Last activity: Jul 28, 2025, 07:02 PM
3
votes
1
answers
70
views
Is it a good idea to have inode size of 2048 bytes + inline_data on an ext4 filesystem?
I only recently "discovered" the `inline_data` feature of ext4, although it seems to have been around for 10+ years.
I ran a few statistics on several of my systems (desktop/notebook + server), specifically on the root filesystems, and found that:
- Around 5% of all files are < 60 bytes in size. The 60 byte threshold is relevant because that's how much inline data you can fit in a standard 256 byte inode.
- Another ~20-25% of files are between 60 and 896 bytes in size. Again, the "magic number" 896 is how much you fit in a 1KB inode.
- A further 20% are in the 896-1920 byte range (you guessed it: 1920 is what you fit into a 2KB inode).
- The percentages are even more stunning for directories: 30-35% are below 60 bytes, and a further 60% are below 1920 bytes.
This means that with an inode size of 2048 bytes you can ***inline roughly half of all files and 95% of all directories on an average root filesystem***! This came as quite a shocker to me...
Now, of course, since inodes are preallocated and fixed for the lifetime of a filesystem, large inodes lead to a lot of "wasted" space if you have a lot of them (i.e. a low `inode_ratio` setting). But then again, allocating a 4KB block for a 5 byte file is also a waste of space. And according to the statistics above, half of all files on the filesystem and virtually all directories can't even fill half of a 4KB block, so that wasted space is not insignificant. The only difference between wasting that space in the inode table and in the data blocks is that you have one more level of indirection, plus the potential for fragmentation, etc.
The advantages I see in that setup are:
- When the kernel loads an inode, it reads at least one page size (4KB) from disk, no matter if the inode is 128 bytes or 2KB, so you have zero overhead in terms of raw disk IO...
- ... but you have the data preloaded as soon as you `stat` the file; no additional IO is needed to read the contents
- The kernel caches inodes more aggressively than data blocks, so inlined data is more likely to stay longer in cache
- Inodes are stored in a fixed, contiguous region of the partition, so you can't ever have fragmentation there
- Inlining is especially useful for directories, a) since such a high portion of them are small, and b) because you're very likely to need the contents of the directory, so having it preloaded makes a lot of sense
What do you think about this setup? Am I missing something here, and are there some potential risks I don't see?
I stress again that I'm talking about a root filesystem, hosting basically the operating system, config files, and some caches and logs. Obviously the picture would be quite different for a `/home` partition hosting user directories, and even more different for a fileserver, webserver, mailserver, etc.
(I know there are a few threads describing some corner cases where `inline_data` does not play well with journaling, but those are 5+ years old, so I hope those issues have been sorted out.)
**EDIT**: Since doubts were expressed in the comments about whether directory inlining works: it does. I have already implemented the setup described here, and the machine I'm writing on right now is actually running on a root filesystem with 2KB inodes with inlining. Here's what `/usr` looks like in `ls`:
```
# ls -l /usr
total 160
drwxr-xr-x 2 root root 36864 Jul 1 00:35 bin
drwxr-xr-x 2 root root 60 Mar 4 13:20 games
drwxr-xr-x 4 root root 1920 Jun 16 21:32 include
drwxr-xr-x 64 root root 1920 Jun 25 21:16 lib
drwxr-xr-x 2 root root 1920 Jun 9 01:48 lib64
drwxr-xr-x 16 root root 4096 Jun 22 02:58 libexec
drwxr-xr-x 11 root root 1920 Jun 9 00:10 local
drwxr-xr-x 2 root root 12288 Jun 26 20:22 sbin
drwxr-xr-x 191 root root 4096 Jun 26 20:22 share
drwxr-xr-x 2 root root 60 Mar 4 13:20 src
```
And if you dive even deeper and use `debugfs` to examine those directories, the ones with a 60 or 1920 byte size have 0 allocated data blocks, while those with 4096 and more do have data blocks.
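For reference, a filesystem like the one described here could be created along these lines (a sketch, not from the post; the device name is an example):

```
# 2 KiB inodes, with inline_data enabled at mkfs time
mkfs.ext4 -I 2048 -O inline_data /dev/sdXN
```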
Mike
(477 rep)
Jul 1, 2025, 02:05 PM
• Last activity: Jul 2, 2025, 04:28 PM
112
votes
14
answers
260777
views
How do I count all the files recursively through directories
I want to see how many files are in subdirectories, to find out where all the inode usage is on the system. Kind of like I would do this for space usage: `du -sh /*`, which will give me the space used in the directories off of root, but in this case I want the number of files, not the size.
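A common approach (a sketch, not part of the original question):

```
# count regular files under each top-level directory, largest last
for d in /*/; do
  printf '%8d %s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
done | sort -n
```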
xenoterracide
(61203 rep)
Nov 16, 2010, 11:02 AM
• Last activity: May 27, 2025, 01:23 PM
1
votes
1
answers
55
views
Forensics to recover the second-to-last access timestamp of a file on btrfs on HDD
I searched online, to no avail. Is there some way to recover the access timestamp my file had on BTRFS before the access timestamp that currently appears? I'm using an HDD (not an SSD). Is this question better suited for Super User? I made no snapshots (willingly); I'm using Fedora, and the metadata change dates back some two weeks. In fact, to be precise, I'm interested in the timestamp from two accesses ago; the two accesses happened in rapid succession.
user324831
(113 rep)
May 23, 2025, 06:15 PM
• Last activity: May 23, 2025, 08:35 PM
0
votes
0
answers
73
views
Not able to modify inode metadata
I am working on an ext4 file-system tool that aims to delete files such that they are not recoverable later on. My goal is to clear the inode metadata of a deleted file, but despite my efforts, the changes I make to the inode metadata are not reflected when I inspect the inode later using `stat`.
Here's the process I follow to modify the inode:
1. Fetch the inode using `ext2fs_read_inode`.
2. Modify the inode metadata by setting values like `i_mode`, `i_links_count`, etc., to zero or NULL.
3. Write the modified inode back to the filesystem using `ext2fs_write_inode`.
```cpp
// Requires libext2fs from e2fsprogs; note that ext2fs_write_inode only
// persists if the fs was opened with EXT2_FLAG_RW and later flushed
// via ext2fs_flush() or ext2fs_close().
#include <ext2fs/ext2fs.h>
#include <cstring>
#include <string>
using namespace std;

void clear_inode_metadata(ext2_filsys &fs, ext2_ino_t inode_num)
{
    ext2_inode inode;
    ext2fs_read_inode(fs, inode_num, &inode);
    inode.i_mode = 0;
    inode.i_links_count = 0;
    inode.i_size = 0;
    inode.i_size_high = 0;
    inode.i_blocks = 0;
    memset(inode.i_block, 0, sizeof(inode.i_block));
    ext2fs_write_inode(fs, inode_num, &inode); // write the cleared inode back to disk
}

void delete_file(const string &path, ext2_filsys &fs, const string &device)
{
    // fetch_file() and overwrite_blocks() are my own helpers
    ext2_ino_t inode_num = fetch_file(path, fs, device);
    if (inode_num != 0)
    {
        overwrite_blocks(fs, device, path); // traverses extents and writes random data
        clear_inode_metadata(fs, inode_num);
    }
}
```
The inode metadata stays the same even after invoking `clear_inode_metadata()`.
Also, I have a file recovery module that uses a disk snapshot to recover files, and to my surprise it is able to recover the file even after I overwrite it using `overwrite_blocks()`. The snapshot stores the extents used by an inode and the total size of the file.
My questions:
1. Why aren't the changes to the inode metadata reflected after calling `ext2fs_write_inode`?
2. Why is the file recovery tool still able to recover the file after I overwrite its blocks?
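As an aside (not from the original code; the inode number and device name are examples), the same zeroing can be done from the shell on an unmounted filesystem with `debugfs`, whose `clri` command wipes an inode in place:

```
# destructive: zeroes inode 12 on /dev/sdb1 (run only on an unmounted fs)
debugfs -w -R 'clri <12>' /dev/sdb1
```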
Dhruv
(1 rep)
Apr 27, 2025, 08:14 AM
• Last activity: Apr 27, 2025, 08:16 AM
0
votes
0
answers
29
views
Unexpected network namespace inode when accessing /var/run/netns/ from pod in host network namespace
I'm running a Kubernetes cluster with RKE2 v1.30.5+rke2r1 on Linux nixos 6.6.56 amd64, using Cilium CNI.
Here's the setup:
I have two pods (yaml manifests at the bottom):
Pod A (xfrm-pod) is running in the default network namespace.
Pod B (charon-pod) is running in the host network namespace (hostNetwork: true).
On Pod A, I check the inode of its network namespace using:

```
readlink /proc/$$/ns/net
```

This gives the expected value, e.g. `net:[…]`.

Then I mount `/var/run/netns` on Pod B, e.g. to `/netns`, and run `ls -li /netns`; the inode for Pod A's network namespace is a strange value, like 53587. Permissions show this is the only file there that I have write access to (I can delete it).

However, when I run `ls -li /var/run/netns` directly on the host, the inode and file name are what I expect: the correct namespace symlink and inode number.

Why is the inode different inside the host-network pod? And why does it appear writable, unlike the other netns files? Any idea why this happens, and how I can get consistent behavior inside host-network pods?
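For comparison (a sketch, not from the original post), both views can be reduced to namespace inode numbers:

```
# inside Pod A: inode of the pod's own network namespace
readlink /proc/$$/ns/net
# on the host: inodes behind the bind-mounted netns files
stat -Lc '%i %n' /var/run/netns/*
```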
Pod YAML manifests (fetched with `kubectl get pod -o yaml`, since I create them in a controller in Go):
Pod A:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-04-24T14:57:55Z"
  name: xfrm-pod
  namespace: ims
  resourceVersion: "7200524"
  uid: dd08aa88-460f-4bdd-8019-82a433682825
spec:
  containers:
  - command:
    - bash
    - -c
    - while true; do sleep 1000; done
    image: ubuntu:latest
    imagePullPolicy: Always
    name: xfrm-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /netns
      name: netns-dir
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-cszxx
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: nixos
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    sysctls:
    - name: net.ipv4.ip_forward
      value: "1"
    - name: net.ipv4.conf.all.rp_filter
      value: "0"
    - name: net.ipv4.conf.default.rp_filter
      value: "0"
    - name: net.ipv4.conf.all.arp_filter
      value: "1"
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /var/run/netns/
      type: Directory
    name: netns-dir
  - name: kube-api-access-cszxx
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
```
Pod B:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-04-24T14:57:45Z"
  labels:
    ipserviced: "true"
  name: charon-pod
  namespace: ims
  resourceVersion: "7200483"
  uid: 1c5542ba-16c8-4105-9556-7519ea50edef
spec:
  containers:
  - image: someimagewithstrongswan
    imagePullPolicy: IfNotPresent
    name: charondaemon
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - NET_BIND_SERVICE
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/
      name: charon-volume
    - mountPath: /etc/swanctl
      name: charon-conf
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jjkpm
      readOnly: true
  - image: someimagewithswanctl
    imagePullPolicy: Always
    name: restctl
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/
      name: charon-volume
    - mountPath: /etc/swanctl
      name: charon-conf
    - mountPath: /netns
      name: netns-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jjkpm
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostIPC: true
  hostNetwork: true
  hostPID: true
  initContainers:
  - command:
    - sh
    - -c
    - "echo 'someconfig' > /etc/swanctl/swanctl.conf"
    image: busybox:latest
    imagePullPolicy: Always
    name: create-conf
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/swanctl
      name: charon-conf
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jjkpm
      readOnly: true
  nodeName: nixos
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: charon-volume
  - emptyDir: {}
    name: charon-conf
  - hostPath:
      path: /var/run/netns/
      type: Directory
    name: netns-dir
  - name: kube-api-access-jjkpm
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
```
rrekaF
(1 rep)
Apr 25, 2025, 07:07 AM
2
votes
0
answers
53
views
HDD Write Performance: Pre-allocate vs. Append (Inode Impact?)
When writing a large (10GB) file to an HDD, I'm considering two approaches for writing the data:
1. Pre-allocate: create a 10GB file upfront and then write data sequentially (e.g., using `fallocate` on Linux or similar methods on other OSes).
2. Append: start with an empty file and append data until it reaches 10GB.

I'm curious about the performance differences and whether inode management plays a significant role, especially considering the potential for contiguous space allocation with pre-allocation versus repeated metadata updates with appending.
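For reference, the pre-allocate approach on Linux might look like this (a sketch; the path is an example):

```
# reserve 10 GiB up front, then write into it without truncating
fallocate -l 10G /mnt/hdd/big.file
dd if=/dev/zero of=/mnt/hdd/big.file bs=1M count=10240 conv=notrunc
```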
Are there performance best practices for large file writes to HDDs, and do different OS/file systems behave differently in this regard? Any insights or experiences are appreciated.
Long Bùi Hải
(21 rep)
Apr 10, 2025, 02:20 AM
2
votes
1
answers
111
views
Why does inode usage go from 1% to 100% on a single file creation?
Inode usage goes from 1% to 100% on a single file creation in a RAID array on Debian.
First, clean boot, then:
```sh
sudo cryptsetup luksOpen /dev/RaidVG/LVMVol CVol
sudo mount /dev/mapper/CVol /mnt/raid/
```
Checking inode usage:

```
$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/CVol 117M 11 117M 1% /mnt/raid
```
Then, doing any `touch` on `/mnt/raid` fails, saying the disk is full. My inode usage has ramped up to 100%:
```
$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/CVol 117M 117M 0 100% /mnt/raid
```
Counting files inside `/mnt/raid` returns:
```
$ find | cut -d/ -f2 | uniq -c | sort -n
      1 .
   6033 d1
  14070 d2
  31211 d3
 145866 d4
 184352 d5
```
`fsck` can't seem to finish:
```
$ sudo fsck /dev/mapper/CVol
fsck from util-linux 2.33.1
e2fsck 1.44.5 (15-Dec-2018)
/dev/mapper/CVol contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create? no
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Signal (6) SIGABRT si_code=SI_TKILL
```
Also, `df -h` returns wrong values; in reality more than 1T is in use:
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/CVol 1.8T 77M 1.7T 1% /mnt/raid
```
I don't really know what to do or where to look. My file system is now "read only", but is there a risk of losing data here? How can I fix the problem and be able to write to this disk again?
**EDIT**
`smartctl -a`:

```
=== START OF INFORMATION SECTION ===
Vendor: WD
Product: My Passport
Revision: 1028
Compliance: SPC-4
User Capacity: 2,000,365,289,472 bytes [2.00 TB]
Logical block size: 512 bytes
LU is resource provisioned, LBPRZ=0
Rotation Rate: 5400 rpm
Serial number: WX22A30FX287
Device type: disk
Local Time is: Fri Mar 28 12:10:55 2025 GMT
SMART support is: Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 0 C
Drive Trip Temperature: 0 C
Error Counter logging not supported
No self-tests have been logged
```
`mdadm --examine-badblocks`:

```
Bad-blocks list is empty in /dev/sda1
Bad-blocks list is empty in /dev/sdb1
```
`fdisk -l /dev/sda /dev/sdb`:

```
Disk /dev/sda: 1.8 TiB, 2000365289472 bytes, 3906963456 sectors
Disk model: My Passport
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1dfd4f21

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 3906963455 3906961408 1.8T fd Linux raid autodetect

Disk /dev/sdb: 1.8 TiB, 2000365289472 bytes, 3906963456 sectors
Disk model: My Passport
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9f2cb37d

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 3906963455 3906961408 1.8T fd Linux raid autodetect
```
`pvs`:

```
PV       VG     Fmt  Attr PSize PFree
/dev/md0 RaidVG lvm2 a--
```
Alicya Ambre
(21 rep)
Mar 28, 2025, 09:03 AM
• Last activity: Mar 29, 2025, 07:44 AM
0
votes
2
answers
269
views
Searching whole system for files with specific inode
I technically know how to do each of these things, but combining them is problematic. The inode number is saved in the first line of a text file (I can eventually read it directly from the file), and I need the results saved to the same file. How can I do this?
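A minimal sketch (the file name spec.txt is an example, not from the question):

```
# the inode number is on the first line of the file
inode=$(head -n 1 spec.txt)
# -inum matches by inode; -xdev stays on one filesystem, since inode
# numbers are only unique within a single filesystem
find / -xdev -inum "$inode" >> spec.txt 2>/dev/null
```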
Corporal Girrafe
(13 rep)
May 24, 2019, 12:06 PM
• Last activity: Mar 19, 2025, 06:03 PM
1
votes
0
answers
1013
views
How do I get ext3/ext4 filesystem features to apply in mke2fs?
As many of you know, ext3/ext4 have filesystem features that provide their special functionalities. Some of these features can be retrieved using `dumpe2fs`; for example, this output:

**Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize**
So my question is: where in the system, or with which command, can I get the full listing of filesystem features to apply when using `mke2fs`?
If anyone happens to know a useful website/link I'd appreciate it too.
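Two places that typically enumerate them (a sketch, assuming e2fsprogs is installed):

```
# the ext4(5) man page documents the feature flags mke2fs understands
man 5 ext4
# the defaults mke2fs applies per filesystem type live in its config file
cat /etc/mke2fs.conf
```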
strkIV
(56 rep)
May 8, 2017, 08:22 PM
• Last activity: Mar 17, 2025, 08:32 AM
248
votes
10
answers
532659
views
Find where inodes are being used
So I received a warning from our monitoring system on one of our boxes that the number of free inodes on a filesystem was getting low. `df -i` output shows this:

```
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 524288 422613 101675 81% /
```
As you can see, the root partition has 81% of its inodes used.
I suspect they're all being used in a single directory. But how can I find where that is at?
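A common way to narrow it down (a sketch, not from the original question; requires GNU du):

```
# count inodes instead of bytes, stay on one filesystem, one level deep
du --inodes -x -d 1 / 2>/dev/null | sort -n | tail
```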
phemmer
(73711 rep)
Feb 26, 2014, 05:55 PM
• Last activity: Mar 15, 2025, 02:34 AM
1
votes
0
answers
204
views
Kernel BUG at fs/inode.c:613 during installation of Debian 12.5.0
I used the Debian 12.5.0 ISO DVD image from the official website and wrote it to a USB flash drive via Rufus. The "Load installer components from installation media" step got stuck, and by switching to the command line via CTRL+ALT+Fn I could look at the log, where I found that it had triggered a kernel BUG. Are there any relevant clues?

Edited: I've tried many times and the problem occurs at different steps of the installation, but the error remains the same, and 95% of the time it occurs at the "Load installer components from installation media" step.

There are two paragraphs in the log; the first is:
```
[ 20.009517] WARNING: CPU: 0 PID: 876 at mm/shmem.c:1193 shmem_evict_inode+0x26e/0x2b0
[ 20.009521] Modules linked in: storage_ssd_mod intel_spi_cr_platform fat isofs hid_generic usbhid hid ...
[ 20.009523] CPU: 0 PID: 876 Comm: kworker/u8:1 Not tainted 6.1.0-10-amd64 #1 Debian 6.1.76-1
[ 20.009524] Hardware name: ASUSTeK COMPUTER INC. System Product Name/Pro WS W680-ACE IPMI, BIOS 3501 04/19/2024
[ 20.009525] RIP: 0010:shmem_evict_inode+0x26e/0x2b0
```
The second is:
```
[ 20.009567] kernel BUG at fs/inode.c:613!
[ 20.009568] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
[ 20.009569] CPU: 0 PID: 876 Comm: debconf Tainted: G W 6.1.0-18-amd64 #1 Debian 6.1.76-1
[ 20.009571] Hardware name: ASUSTeK COMPUTER INC. System Product Name/Pro WS W680-ACE IPMI, BIOS 3501 04/12/2024
[ 20.009572] RIP: 0010:clear_inode+0x7z/0x80
```
(See the screenshots attached to the original post for the full log.)
Redstone1024
(11 rep)
Jun 6, 2024, 08:25 AM
• Last activity: Feb 27, 2025, 01:27 PM
1
votes
2
answers
96
views
Two files with different contents in the linux overlay file system have the same inode
I'm learning about the Linux overlay file system and I've run into a problem that's beyond my knowledge. Can anyone explain the technical rationale behind this?
```bash
mkdir ./{merged,work,upper,lower}
echo "message from lower" >> ./lower/h
sudo mount -t overlay overlay -o lowerdir=./lower,upperdir=./upper,workdir=./work ./merged
# copy-up of lower/h on write; the result is saved to upper/h
echo "message from merged" >> ./merged/h
# check file contents: merged/h and upper/h have the same content
cat ./lower/h
cat ./merged/h
cat ./upper/h
# this shows merged/h and lower/h have the same inode; why don't upper/h and merged/h share one?
stat ./lower/h ./upper/h ./merged/h
```
I think merged/h and upper/h should have the same inode, and lower/h a different one. However, the experimental results above show otherwise.
user25075193
(11 rep)
Dec 12, 2024, 01:42 AM
• Last activity: Dec 12, 2024, 07:59 AM
0
votes
0
answers
41
views
Is there a way to use linux "find" and filter if specific process was the only one accessed it?
I am trying to find a way to do an incremental antivirus scan. My current approach under evaluation uses `find`. You can see a relevant question here: https://unix.stackexchange.com/questions/787860/is-there-a-reason-why-i-cant-use-find-to-scan-modified-files-for-viruses-and-ma/

Please note: the question posted above uses modified time, but I found out that clamav scans files even if they are accessed or opened, not just modified.

The antivirus in question is clamav. Clamav, just like every other AV, has an option for on-access scanning, i.e. it scans a file as soon as it's **accessed** or **opened**. That means I have to change/use the command:
```
find /test/ -type f -atime -1 1>./find_ctime.out 2>./find.errors
```
Unfortunately I cannot use clamav's "On Access Scanning" feature, one reason being that it's too heavy on the system.

If I use `find`, this is the problem I run into:
- I start the first scan at 12:00 midnight on 10 files.
- When I use `find` the next day at 12:00 midnight, I have to give a start time of 12:00 midnight the previous day. I cannot give `find` a start time of when the scan finished, because that would leave the system vulnerable.
- The problem with this approach is that it would include the 10 files clamav (clamscan) accessed yesterday.
- I cannot blindly exclude yesterday's list, __even if__ the access timestamp was in yesterday's scan window, because that would make the system vulnerable, even though the chances are very small.
- And the cycle would repeat the next day and so on, until it would include ALL files on the system.
So I want to exclude a file if **only** clamscan/clamdscan has accessed it since the last scan. I used `stat` and do not see any relevant field in the output. I also searched the `find` documentation but could not find anything.

Is there a way to use Linux `find` and filter out a file if a specific process was the only one that accessed it?
Thanks in advance!
user1578026
(161 rep)
Dec 11, 2024, 04:49 AM
0
votes
1
answers
185
views
Determine bytes per inode ratio of existing ext4 filesystem
I created a filesystem following this guide: https://wiki.archlinux.org/title/Ext4#Create_a_new_ext4_filesystem
However it looks like I made a mistake because the ratio for /dev/sdc doesn't add up:
```
asdf@pve:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda2 2953572160 622647 2952949513 1% /mnt/my8tb
/dev/sdb1 183148544 102711 183045833 1% /mnt/my3tb
/dev/sdc 1907840 3584 1904256 1% /mnt/my2tb
asdf@pve:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 7814008828 5995014812 1818994016 77% /mnt/my8tb
/dev/sdb1 2883129520 1237379688 1499220196 46% /mnt/my3tb
/dev/sdc 1951774200 1277927944 576154144 69% /mnt/my2tb
```
I created it to be a largefile4 drive, but it didn't seem to work: [2 TB for this amount of inodes comes to around 1 MB per inode](https://www.wolframalpha.com/input?i=2+TB+%2F+1907840) (and [again](https://www.wolframalpha.com/input?i=%28%281854082088+*+1KB%29+%2F+1907840%29+in+bytes)). I would expect ~4MB per inode with largefile4. Am I interpreting these results correctly?
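The ratio of an existing filesystem can also be read straight off the superblock (a sketch, not from the original question; run against the device in question):

```
# bytes-per-inode ≈ block count * block size / inode count
blocks=$(sudo tune2fs -l /dev/sdc | awk -F: '/^Block count/ {print $2}')
bsize=$(sudo tune2fs -l /dev/sdc | awk -F: '/^Block size/ {print $2}')
inodes=$(sudo tune2fs -l /dev/sdc | awk -F: '/^Inode count/ {print $2}')
echo $(( blocks * bsize / inodes ))
```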
Atomic Tripod
(103 rep)
Oct 15, 2024, 06:15 PM
• Last activity: Oct 20, 2024, 09:20 AM
0
votes
0
answers
74
views
e2fsck prompts for inode optimization: safe to proceed?
I am trying to utilize `e2fsck`, but it produces the following:

```
sudo e2fsck -f /dev/vgtrisquel/home
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Inode extent tree (at level 1) could be shorter. Optimize?
```
It continues to show many other inodes that could be shorter. Should I say yes to optimizing all of them? Does this have any potential issues? The data on this drive has been backed up.
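Since the data is backed up, one option (a sketch, not advice from the post itself) is to let e2fsck accept every prompt automatically:

```
# -f forces the check, -y answers yes to all prompts (run on an unmounted volume)
sudo e2fsck -fy /dev/vgtrisquel/home
```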
Thank you so much for any support you can provide.
Kitty Cat
(157 rep)
Sep 21, 2024, 06:11 AM
• Last activity: Sep 21, 2024, 08:21 AM
0
votes
1
answers
497
views
reconstructing ext4 inode structure after folder deletion
My `ext4` partition that had my WhatsApp data had its entire folder deleted (that is, a file/folder delete, no `partition` damage).

As I understand it, `inodes` hold the `metadata tree` for files, including their names, locations, etc. The problem I face is that the raw files (without original filenames and paths) can be recovered, *but the original file names with paths are missing*. It was the app that deleted the folder. This folder was on a `mounted` `ext4` partition on an Android phone. A partition image has already been copied to backup.

**What I did & tried:**
- `Testdisk`: no valid partition found.
- `Extundelete` and `ext4magic`: no luck. `Inode` data came out empty, and `foremost` got some data but the `metadata` is missing (explained further down).
- 3rd-party tools (EaseUS Data Recovery / Hetman / R-Studio): please see the screenshot. The folder and files I am looking for show as 0 bytes. Running an entire deep scan takes some 8 hours, and I get RAW files without original file names (e.g. filexxx.jpg), which is of NO USE. I need file names with metadata, e.g. original filenames and their paths; i.e., somehow the raw files recovered should be mapped to the 0-byte file names, or the original file names should be recovered some other way.

See the attached picture.

Is there any utility that can reconstruct the deleted `inodes`? Or any methodology / approach to do it?
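One direction sometimes tried for this (a sketch, not from the original post; the device, time window, and path filter are examples) is ext4magic's journal-based recovery, which can restore names and paths when the old directory blocks still survive in the journal:

```
# -a: only files changed after this epoch; -f: path filter; -r: recover; -d: destination
ext4magic /dev/sdXN -a $(date -d "-14 days" +%s) -f "WhatsApp" -r -d ./recovered
```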
user1874594
(133 rep)
Mar 13, 2024, 11:44 AM
• Last activity: Jul 18, 2024, 04:10 PM
1
votes
1
answers
461
views
resize2fs: New size results in too many block group descriptors
I want to replace the NVMe drive in an embedded computer that runs Linux. The original drive is a 256 GB gen 3 with 5 partitions. After cloning to a larger drive, partition 5, the `data` partition, should grow to utilize all available space.
I used `gparted` to clone the original NVMe to a 4 TB drive, and I used `gparted` to attempt to expand the `data` partition to use all available space. Although the partition did grow, the filesystem did not. I got this error:

```
resize2fs: New size results in too many block group descriptors
```

The clone currently boots and the device works. However, if the filesystem is not enlarged, the larger capacity will not be used. Here is some helpful information:
```
$ sudo tune2fs -l /dev/sdb5
tune2fs 1.47.0 (5-Feb-2023)
Filesystem volume name: data
Last mounted on: /data
Filesystem UUID: 64f5365e-edfc-4781-a285-9a053e6937fa
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr dir_index filetype meta_bg extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 27583488
Block count: 220661060
Reserved block count: 8826764
Overhead clusters: 6962786
Free blocks: 122392011
Free inodes: 27535064
First block: 1
Block size: 1024
Fragment size: 1024
Group descriptor size: 64
Blocks per group: 8192
Fragments per group: 8192
Inodes per group: 1024
Inode blocks per group: 256
First meta block group: 258
Flex block group size: 16
Filesystem created: Fri Jan 13 14:00:05 2023
Last mount time: Fri Jun 14 11:27:22 2024
Last write time: Fri Jun 14 11:34:14 2024
Mount count: 0
Maximum mount count: -1
Last checked: Fri Jun 14 11:34:14 2024
Check interval: 0 (<none>)
Lifetime writes: 266 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: ef21ef18-27b0-4363-a679-6e709cc7027b
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x4649ffc4
```
Perhaps the block size is too small?
```
$ sudo blockdev --setbsz 4096 /dev/sdb5
$ sudo blockdev --getbsz /dev/sdb5
4096
```
I understand that the result from `blockdev --getbsz` cannot be trusted after a `blockdev --setbsz` unless the file system is mounted.
After mounting `/dev/sdb5`:

```
$ sudo blockdev --getbsz /dev/sdb5
1024
```
Hmm, the block size is unchanged. What should I do to complete the resizing?
Mike Slinn
(263 rep)
Jun 14, 2024, 05:53 PM
• Last activity: Jun 22, 2024, 10:21 AM
1
votes
1
answers
177
views
ext2 How to choose bytes/inode ratio
How do I approximately calculate the bytes-per-inode for ext2? I have 7.3GB of storage (15320519 sectors of 512B each). I made an ext2 filesystem with block size 4096:
```
mke2fs /dev/sda2 -i 524288 -m 0 -L "SSD" -F -b 4096 -U 11111111-2222-3333-4444-555555555555 -O none,filetype,sparse_super,large_file
Filesystem label=SSD
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
15104 inodes, 1915064 blocks
0 blocks (0%) reserved for the super user
First data block=0
Maximum filesystem blocks=4194304
59 block groups
32768 blocks per group, 32768 fragments per group
256 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
```
Usually all my files are about 100kB in size (and about 5 files can be 400MB). I tried to read this and this, but it's still not clear how to approximately calculate bytes-per-inode. The current 524288 is not enough; for now I can't make new files on sda2 but still have a lot of free space.
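A rough sizing rule (a sketch, not from the original post; -i 65536 is an example value): pick bytes-per-inode close to the expected average file size, so the inode count roughly matches the number of files that would fit. With the numbers above:

```
# capacity: 1915064 blocks * 4096 B ≈ 7.8 GB; typical file ≈ 100 kB
# → at most ≈ 78000 files, so -i 65536 (64 KiB/inode) leaves headroom
echo $(( 1915064 * 4096 / 65536 ))   # ≈ 119691 inodes
mke2fs /dev/sda2 -i 65536 -m 0 -L "SSD" -F -b 4096
```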
P.S. Extra info
```
# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/root ext4 146929 84492 59365 59% /
devtmpfs devtmpfs 249936 0 249936 0% /dev
tmpfs tmpfs 250248 0 250248 0% /dev/shm
tmpfs tmpfs 250248 56 250192 0% /tmp
tmpfs tmpfs 250248 116 250132 0% /run
/dev/sda2 ext2 7655936 653068 7002868 9% /mnt/sda2
```

```
# df -h
Filesystem Size Used Available Use% Mounted on
/dev/root 143.5M 82.5M 58.0M 59% /
devtmpfs 244.1M 0 244.1M 0% /dev
tmpfs 244.4M 0 244.4M 0% /dev/shm
tmpfs 244.4M 56.0K 244.3M 0% /tmp
tmpfs 244.4M 116.0K 244.3M 0% /run
/dev/sda2 7.3G 637.8M 6.7G 9% /mnt/sda2
```

```
# fdisk -l
Disk /dev/sda: 7.45 GiB, 8001552384 bytes, 15628032 sectors
Disk model: 8GB ATA Flash Di
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0a19a8af

Device Boot Start End Sectors Size Id Type
/dev/sda1 309 307508 307200 150M 83 Linux
/dev/sda2 307512 15628030 15320519 7.3G 83 Linux
```
Андрей Тернити
(303 rep)
May 22, 2024, 04:31 AM
• Last activity: May 22, 2024, 01:24 PM
3
votes
1
answers
268
views
mkfs ext2 ignore number-of-inodes
I want to make an ext2 file system, and I want to set the "number-of-inodes" option to a specific number. I tried several values:

- if *-N 99000* then Inode count: 99552
- if *-N 3500* then Inode count: 3904
- if *-N 500* then Inode count: 976

But the resulting value is **not the same** as what I asked for. Why?
I call mkfs this way:

```
sudo mkfs -q -t ext2 -F /dev/sda2 -b 4096 -N 99000 -O none,sparse_super,large_file,filetype
```
I check the results this way:

```
$ sudo tune2fs -l /dev/sda2
tune2fs 1.46.5 (30-Dec-2021)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 11111111-2222-3333-4444-555555555555
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: filetype sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 99552
Block count: 1973720
Reserved block count: 98686
Overhead clusters: 6362
Free blocks: 1967353
Free inodes: 99541
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 1632
Inode blocks per group: 102
Filesystem created: Thu Apr 6 20:00:45 2023
Last mount time: n/a
Last write time: Thu Apr 6 20:01:49 2023
Mount count: 0
Maximum mount count: -1
Last checked: Thu Apr 6 20:00:45 2023
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Default directory hash: half_md4
Directory Hash Seed: 61ff1bad-c6c8-409f-b334-f277fb29df54
```
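The rounding can be checked against the numbers above (a worked sketch, not from the original post): mke2fs spreads inodes evenly across block groups and pads each group's inode table to whole blocks:

```
# block groups: ceil(1973720 blocks / 32768 blocks per group)
echo $(( (1973720 + 32767) / 32768 ))   # 61 groups
# 99000 inodes over 61 groups ≈ 1623 per group; rounded up to 1632 so the
# inode table fills whole 4 KiB blocks: 1632 * 256 B = 102 blocks exactly
echo $(( 61 * 1632 ))                   # 99552, the count tune2fs reports
```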
Андрей Тернити
(303 rep)
Apr 6, 2023, 11:24 AM
• Last activity: May 22, 2024, 04:21 AM