
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
1 answer
1872 views
mount: overlapping loop device
I get this error when I try to mount a partition from a DOS/MBR disk image:

# file bag.vhd
bag.vhd: DOS/MBR boot sector; partition 1 : ID=0x83, active, start-CHS (0x0,1,1), end-CHS (0x9,254,63), startsector 63, 160587 sectors
# sudo mount -t auto -o ro,loop,offset=82252288 bag.vhd /mnt/floppy/
mount: /mnt/floppy/: overlapping loop device exists for /home/ffha/Documents/descon/bag.vhd.

I have already created /mnt/floppy.
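One thing worth checking: the offset passed to mount should normally be the partition's start sector times the sector size, and for the partition shown by file (start sector 63, 512-byte sectors) that comes to 32256, not 82252288. A minimal sketch of the arithmetic, using the values from the file output above:

```shell
# compute the mount offset from the partition-table values printed by file(1)
sector_size=512      # DOS/MBR images almost always use 512-byte sectors
start_sector=63      # "startsector 63" from the file output above
offset=$((start_sector * sector_size))
echo "$offset"       # value to pass as mount -o loop,offset=...
```

A stale loop device left over from an earlier failed attempt can also produce the "overlapping loop device" message; `losetup -j bag.vhd` lists any loop devices still attached to the image, and `losetup -d /dev/loopN` detaches them.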
fica (21 rep)
Oct 6, 2019, 01:42 PM • Last activity: Aug 5, 2025, 11:04 PM
7 votes
2 answers
466 views
Btrfs read-only file system and corruption errors
# Goal

I am trying to figure out why my file system has become read-only, so I can address any potential hardware or security issues (my main concern) and maybe fix the issue without having to reinstall everything and migrate my files from backup (I might lose some data, but probably not much). According to the manual of btrfs check:

> Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. E.g. some other software or hardware bugs can fatally damage a volume.

I am thinking of trying the --repair option or btrfs scrub, but I want input from a more experienced user.

# What I've tried

I first noticed the read-only file system when trying to update my system in the terminal. I was told:

Cannot open log file: (30) - Read-only file system [/var/log/dnf5.log]

I have run basic checks of my SSD (using at least three different programs) without finding anything obviously wrong. The SSD and everything else in my computer is about six and a half years old, so maybe something is failing. Here is the SMART Data section of the output from sudo smartctl -a /dev/nvme0n1:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0x1)
Critical Warning: 0x00
Temperature: 31 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 33,860,547 [17.3 TB]
Data Units Written: 31,419,841 [16.0 TB]
Host Read Commands: 365,150,063
Host Write Commands: 460,825,882
Controller Busy Time: 1,664
Power Cycles: 8,158
Power On Hours: 1,896
Unsafe Shutdowns: 407
Media and Data Integrity Errors: 0
Error Information Log Entries: 4,286
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 31 Celsius
Temperature Sensor 2: 30 Celsius

Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

Self-test Log (NVMe Log 0x06, NSID 0xffffffff)
Self-test status: No self-test in progress
No Self-tests Logged
I tried the following, I think from a live disk: sudo mount -o remount,rw /mount/point, but that output an error along the lines of "cannot remount: read-only file system". sudo btrfs device stats /home **and** sudo btrfs device stats / both output:
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].write_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].read_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].flush_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].corruption_errs 14
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].generation_errs 0
**This seems to suggest that corruption is only in the /home directory.**

**However, sudo btrfs check /dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d stops at [5/8] checking fs roots, with the end of the output shown at the top of this screenshot:**

[screenshot: output of sudo btrfs check /dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d]

**Some of these files may be in the / directory, but I'm not sure without looking into it further.**

sudo btrfs fi usage / provides:

[screenshot: output of sudo btrfs fi usage /]

**I think that Data,single, Metadata,DUP, and System,DUP might mean I can repair the corruption if it's only in metadata or system, but not if it's in the actual file data. Might be something to explore more.**

Here is /etc/fstab (viewed with vi):

[screenshot: /etc/fstab]

sudo dmesg | grep -i "btrfs" states:

[screenshot: output of sudo dmesg | grep -i "btrfs"]

The file system is indeed unstable. Once, I wasn't able to list any files in my /home directory, but I haven't run into this issue again across several reboots.

# What I think might be causing this

I suspect that recently changing my username, hostname, and display name (shown on the login screen) may have caused problems, because my file system became read-only about a week to a week and a half after doing so. I followed some tutorials online, but I noticed that many of my files still had the group, and possibly user, belonging to the old username. So I created a symbolic link at the top of my home directory pointing the old username to the new one, and it seemed like everything was fine until the read-only issue.
There may have been more that I did, but I don't remember exactly, as it's been a few weeks now. I have a history of most or all of the commands I ran, if that might be helpful. I think it may be something hardware related; something I did; software bugs (maybe introduced by a recent update; I have a picture of the packages affected in my most recent dnf upgrade transaction, but I was unable to roll back or undo the upgrade because of the read-only file system); improper shutdowns (I may have done this while making changes to the username, hostname, and display name); or a security issue.
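One detail worth noting: both stats commands report the same /dev/mapper device, which suggests / and /home are subvolumes of one LUKS-backed btrfs filesystem, so the 14 corruption_errs are likely filesystem-wide rather than confined to /home. As a quick sanity check, the counters from btrfs device stats can be totalled with awk; a small sketch using the sample output pasted above (so it runs anywhere):

```shell
# sum all btrfs device-stats counters; a non-zero total means some error class was recorded
stats='[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].write_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].read_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].flush_io_errs 0
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].corruption_errs 14
[/dev/mapper/luks-7215db73-54d1-437e-875d-f82fae508b5d].generation_errs 0'
printf '%s\n' "$stats" | awk '{total += $2} END {print total}'
```

With DUP metadata, a read-only `btrfs scrub` is generally the safer first step than `btrfs check --repair`, since scrub can transparently repair metadata from the duplicate copy while reporting (not fixing) damage to single-copy data.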
Growing My Roots (351 rep)
Aug 1, 2025, 10:38 PM • Last activity: Aug 3, 2025, 02:37 AM
2 votes
1 answer
2619 views
"structure needs cleaning", hardware failure?
The drive that my /home folder lives on is showing signs of failing, and I'm trying to migrate to a new drive.  I purchased a 4TB SSD, formatted it with ext4, mounted it as an external drive with a USB/SATA connector, and rsync’ed my /home folder over. So far, so good. But when I swapped it in place of the failing drive and rebooted, my OS reported:
unable to mount local folders
structure needs cleaning
That sounds like a corrupt file system, but fsck reported no errors. Maybe the new hardware is faulty, but I ran badblocks on it, and it also came back with no errors. I formatted it and tried again, and got the same error. Weirdly, if I log in as root and manually mount the new /home drive, it mounts okay and seems to accept reads/writes. However, dmesg did show some errors for /dev/sdb (that's the /home drive on this system). I've copied them below, although I'm not fluent enough to parse them myself. Any ideas? For context, I'm running Gentoo Linux.
[    0.914006] sd 6:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    0.914052] sd 6:0:0:0: [sdb] Write Protect is off
[    0.914074] sd 6:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    0.914117] sd 6:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.914224] sd 6:0:0:0: [sdb] Preferred minimum I/O size 512 bytes
[    0.915929]  sdb: sdb1
[    0.916093] sd 6:0:0:0: [sdb] Attached SCSI disk
[    5.012731] sd 6:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[    5.012740] sd 6:0:0:0: [sdb] tag#0 Sense Key : Illegal Request [current] 
[    5.012747] sd 6:0:0:0: [sdb] tag#0 Add. Sense: Unaligned write command
[    5.012753] sd 6:0:0:0: [sdb] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 08 10 00 00 00 08 00 00
[    5.012757] I/O error, dev sdb, sector 2064 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[    5.012786] sd 6:0:0:0: [sdb] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[    5.012792] sd 6:0:0:0: [sdb] tag#1 Sense Key : Illegal Request [current] 
[    5.012797] sd 6:0:0:0: [sdb] tag#1 Add. Sense: Unaligned write command
[    5.012802] sd 6:0:0:0: [sdb] tag#1 CDB: Read(16) 88 00 00 00 00 00 00 00 08 18 00 00 00 08 00 00
[    5.012805] I/O error, dev sdb, sector 2072 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[    5.012817] sd 6:0:0:0: [sdb] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[    5.012822] sd 6:0:0:0: [sdb] tag#31 Sense Key : Illegal Request [current] 
[    5.012827] sd 6:0:0:0: [sdb] tag#31 Add. Sense: Unaligned write command
[    5.012832] sd 6:0:0:0: [sdb] tag#31 CDB: Read(16) 88 00 00 00 00 00 00 00 08 08 00 00 00 08 00 00
[    5.012836] I/O error, dev sdb, sector 2056 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[   35.852468] sd 6:0:0:0: [sdb] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=30s
[   35.852476] sd 6:0:0:0: [sdb] tag#13 Sense Key : Illegal Request [current] 
[   35.852483] sd 6:0:0:0: [sdb] tag#13 Add. Sense: Unaligned write command
[   35.852490] sd 6:0:0:0: [sdb] tag#13 CDB: Read(16) 88 00 00 00 00 00 00 00 08 28 00 00 05 40 00 00
[   35.852494] I/O error, dev sdb, sector 2088 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 2
[   35.852574] sd 6:0:0:0: [sdb] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=30s
[   35.852581] sd 6:0:0:0: [sdb] tag#14 Sense Key : Illegal Request [current] 
[   35.852586] sd 6:0:0:0: [sdb] tag#14 Add. Sense: Unaligned write command
[   35.852591] sd 6:0:0:0: [sdb] tag#14 CDB: Read(16) 88 00 00 00 00 00 00 00 0d 68 00 00 05 40 00 00
[   35.852595] I/O error, dev sdb, sector 3432 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 2
[   35.852672] sd 6:0:0:0: [sdb] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=30s
[   35.852677] sd 6:0:0:0: [sdb] tag#15 Sense Key : Illegal Request [current] 
[   35.852682] sd 6:0:0:0: [sdb] tag#15 Add. Sense: Unaligned write command
[   35.852687] sd 6:0:0:0: [sdb] tag#15 CDB: Read(16) 88 00 00 00 00 00 00 00 12 a8 00 00 03 f0 00 00
[   35.852690] I/O error, dev sdb, sector 4776 op 0x0:(READ) flags 0x80700 phys_seg 126 prio class 2
[   36.858014] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 18880 failed (53845!=52774)
[   36.858017] EXT4-fs (sdb1): group descriptors corrupted!
One further experiment: I tried installing another drive into the bay, and it also wouldn't auto-mount as /home. I could not even mount it manually after logging in as root. As far as I can tell, there's nothing wrong with this third drive, and I can mount it just fine via a USB/SATA adapter. Both of the new drives are SSDs, while the old failing drive that still mounts is a hard disk. This SATA port is on a SATA/PCIe adapter, so I suppose the problem could be in the adapter. In that case, though, it's weird that the old hard drive still works.
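A common culprit for exactly this pattern (filesystem created while the disk hung off a USB/SATA adapter, then unreadable on a direct SATA attachment) is that some USB bridges expose 4096-byte logical sectors, so every on-disk address the filesystem recorded is off by a fixed factor once the disk is reattached with native 512-byte sectors. The arithmetic behind the mismatch is just:

```shell
# if the USB bridge exposed 4096-byte logical sectors but the SATA port
# exposes native 512-byte sectors, every recorded LBA is scaled by this factor
usb_sector=4096
sata_sector=512
echo $((usb_sector / sata_sector))
```

Comparing `cat /sys/block/sdb/queue/logical_block_size` with the disk in the bay versus behind the adapter would confirm or rule this out; if they differ, partitioning and formatting the drive while it is attached the way it will actually be used is the usual fix.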
jyoung (131 rep)
Sep 24, 2023, 11:12 PM • Last activity: Aug 3, 2025, 01:07 AM
1 vote
1 answer
52 views
How can I see how much space was freed by trim on an SSD?
In my current setup, I have three different filesystems on two different SSDs: a FAT partition and a Btrfs partition on one drive, and ext4 on a second drive. When running fstrim, [the output is apparently](https://www.reddit.com/r/linuxquestions/comments/vaahg7/comment/ic1es8n/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) [not very usable](https://superuser.com/a/1251947/277646) , and each of those filesystems basically reports some meaningless value for the amount that got trimmed. Since truly [free space on an SSD contributes to its performance](https://cdn.mos.cms.futurecdn.net/3XW98AqWgfM956j5FGcodL.png) , at least for QLC NAND modules that use an SLC cache, I wanted to see if I could determine the impact of running fstrim. I know that utilities like df and duf, as well as lsblk, provide usage information based on the filesystem, but are there any utilities that can show the drive sectors that are in use vs. free? If my understanding of how TRIM on an SSD works is correct, the filesystem will show reduced space immediately upon deleting a file, but those sectors are still considered in use by the SSD controller. After TRIM, those sectors would be freed. I'm hoping for a way to see the extent of that.
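For what it's worth, `fstrim -v` does print a per-mount figure ("X bytes were trimmed"), though it reflects the size of the discard ranges sent to the device, an upper bound, not what the controller actually reclaimed. A sketch of pulling that number out for scripting (the sample line below is hypothetical, standing in for live fstrim output):

```shell
# extract the byte count from an `fstrim -v` result line (sample line, not live output)
line='/home: 63.5 GiB (68160241664 bytes) trimmed'
bytes=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')
echo "$bytes"
```

For the controller-side view (which NAND blocks the SSD itself considers free), there is no generic Linux interface; at best, vendor utilities or drive-specific SMART attributes hint at it.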
Hari (130 rep)
Jul 27, 2025, 05:24 AM • Last activity: Jul 29, 2025, 09:28 AM
0 votes
1 answer
2148 views
Firewalld: Error: Invalid_Zone
I got an error I cannot solve while setting up a default zone in firewalld. I added the interface with firewall-cmd --zone=public --change-interface=ens3, and then I saw the default public zone active. So then I ran firewall-cmd --reload:

*Error: COMMAND_FAILED: '/usr/sbin/ip6tables-restore -w -n' failed: ip6tables-restore v1.8.2 (nf_tables): line 4: RULE_REPLACE failed (No such file or directory): rule in chain INPUT*

So ip6tables-restore is trying to do something upon restart of firewalld. Yet when I run iptables -L, I get "bash: iptables: command not found". firewall-cmd --list-all gives:

*Error: INVALID_ZONE*

But the zone showed moments ago...
mister mcdoogle (505 rep)
Sep 5, 2021, 01:44 AM • Last activity: Jul 25, 2025, 03:01 PM
9 votes
3 answers
3145 views
Converting ext2 to ext4
I have a file server with three disks that are ext2 file systems. Is it possible to change/convert these to ext4, which has much improved characteristics, while the data is on the disks and without data loss? If so, how is that accomplished? My system is Debian Wheezy, and I use LVM. I've found this, but I don't know if it is relevant for ext2 to ext4; does this work for me?
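The usual in-place route is tune2fs to switch on the ext4 feature set, followed by a forced fsck, run only on an unmounted filesystem and only with backups in hand. A sketch against a throwaway image file rather than a real disk (on the real system the target would be the unmounted /dev/... or LVM volume instead of $img; older guides spell the extent feature "extents", so both spellings are tried):

```shell
PATH="$PATH:/sbin:/usr/sbin"   # tune2fs/mkfs often live outside a user PATH

# demo on a scratch image file, safe to run anywhere e2fsprogs is installed
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mkfs.ext2 -Fq "$img"

# enable the main ext4 on-disk features on the existing ext2 filesystem
tune2fs -O extent,uninit_bg,dir_index "$img" >/dev/null 2>&1 ||
  tune2fs -O extents,uninit_bg,dir_index "$img" >/dev/null

# a full fsck is mandatory after changing feature flags
e2fsck -fy "$img" >/dev/null 2>&1 || true

# the filesystem now carries the "extent" feature, i.e. it mounts as ext4
tune2fs -l "$img" | grep -q extent && echo converted
rm -f "$img"
```

Note that existing files keep their old indirect-block mapping; only files written after the conversion use extents, so the full performance benefit arrives gradually (or after copying data off and back on).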
somethingSomething (6209 rep)
May 3, 2015, 12:25 AM • Last activity: Jul 24, 2025, 06:43 AM
0 votes
0 answers
37 views
"mnt": Numerical result out of range and wrong fs type issue while connecting shared folder from local system
I have a Mac with an M4 chip, running Ubuntu and CentOS Stream 10 on UTM. When I try to connect the shared folder from my Mac, I keep getting a "wrong fs type" error. [screenshot: wrong fs type] After many reboots this worked, but when I try to open the folder in the Ubuntu file manager, it throws an error stating that the location could not be displayed: "mnt": Error when getting information for file "/mnt/kku": Numerical result out of range. This is the prompt that I am getting.
N Karthik (1 rep)
Jul 10, 2025, 01:29 PM • Last activity: Jul 23, 2025, 06:18 PM
5 votes
1 answer
5464 views
using overlay2 on CentOS 7.4
How do I install and enable the overlay2 storage driver on CentOS 7? I have done many Google searches on this, and I see that version 7.4 is required. So I typed the following commands to confirm that the intended server is running CentOS 7.4:

[sudoUser@localhost ~]$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)
[sudoUser@localhost ~]$ rpm --query centos-release
centos-release-7-4.1708.el7.centos.x86_64
[sudoUser@localhost ~]$

But there does not seem to be any yum install overlay2 or yum install overlayfs.

>**So what specific steps are required in order to install and enable overlay2 on CentOS 7.4?**
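A point of context that explains the missing package: overlayfs ships as a kernel module in the CentOS 7.4 kernel, so there is nothing to yum install, and "overlay2" is the name of Docker's storage driver built on top of it, selected in Docker's configuration rather than installed separately. A sketch of the relevant fragment (assuming the goal is Docker; restart dockerd afterwards and verify with `docker info`):

```
# /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
```

One caveat: with the default XFS root filesystem, overlay2 requires the filesystem to have been created with ftype=1 (`xfs_info / | grep ftype` shows the current setting).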
CodeMed (5357 rep)
Apr 9, 2018, 10:47 PM • Last activity: Jul 23, 2025, 11:00 AM
1 vote
1 answer
3111 views
How to change default permission for usb devices filesystem
On Debian, when automounted, all files and directories on USB drives have 777 permissions. I don't like that very much. I know a bit about udev rules, and I think I could write a rule of my own to override the default behaviour. But I would also like to know which system rules are involved in this mechanism; can you help me?
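Background that narrows the search: FAT-family filesystems have no Unix permissions at all, so the 777 comes from the mount options the automounter (udisks2 on modern Debian) chooses, not from a udev rule as such. udev rules can control the device node's permissions, but the permissions of files inside a vfat mount are set by umask/uid/gid mount options. One way to override them for a known stick is an fstab entry (a sketch; the label and uid/gid are assumptions):

```
# /etc/fstab - hypothetical stick labelled USBSTICK, owned by uid/gid 1000
LABEL=USBSTICK  /media/usb  vfat  noauto,user,uid=1000,gid=1000,umask=022  0  0
```

With umask=022, files and directories on the stick appear as 755 instead of 777.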
Daniele (478 rep)
Apr 23, 2018, 02:13 PM • Last activity: Jul 19, 2025, 07:05 PM
2 votes
1 answer
3725 views
Volume group "vgubuntu" not found. My laptop won’t boot
Today, I rebooted my PC (a Dell G5 15) after an update from Ubuntu, you know, the kind of update that pops up on the screen and asks you to reboot at the end to complete it. Well, that's what I did, and maybe I shouldn't have. Now I get this on my screen. (I have put red marks where I executed the commands that were recommended above, to make this a little clearer for you.) [screenshot: boot messages] I don't know what my latest working version of Ubuntu was. I tried to search for similar problems, but I couldn't find anything that helped in my case. I tried selecting "Advanced options for Ubuntu" when I hard-restarted my computer, but none of the possibilities there seems to work. [screenshot: boot menu] How can I fix my computer?
Kowalski (21 rep)
May 28, 2022, 11:49 PM • Last activity: Jul 17, 2025, 04:24 PM
6 votes
1 answer
15675 views
How to find files that were most recently created on file system?
I am trying to track the completion of a silent installer by detecting the presence of the last file created by it, but in order to do that I need to find out which file that is. Is there any way to do this? I have found a lot of answers on how to find the most recently *modified* file, but that is not effective, since many of these files were modified by the original creator in a different order than they were added to the system by the installer.
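Creation time (crtime/birth time) is recorded separately from mtime on ext4 and friends, and recent GNU find/stat can expose it (find's %B@, stat's %W), though support depends on the kernel, filesystem, and coreutils version. A self-contained sketch of the sorting approach, using a toy directory with controlled mtimes so it runs deterministically:

```shell
# list the newest file by timestamp; %T@ is mtime here, but where statx/birth-time
# support exists, %B@ can be substituted to sort by creation time instead
d=$(mktemp -d)
touch -d '2024-01-01 00:00:01' "$d/first"
touch -d '2024-01-01 00:00:03' "$d/last"
touch -d '2024-01-01 00:00:02' "$d/middle"
find "$d" -type f -printf '%T@ %f\n' | sort -rn | head -n1 | cut -d' ' -f2
rm -rf "$d"
```

For the installer-tracking use case, watching the install prefix live with `inotifywait -m -e create` (from inotify-tools, if available) is usually more direct than timestamp archaeology after the fact.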
Pav (61 rep)
Nov 17, 2015, 03:52 PM • Last activity: Jul 8, 2025, 11:11 PM
1 vote
1 answer
2219 views
Git - how to add/link subfolders into one git-repository directory
Assuming I have a file structure like this:

├── Project-1/
│   ├── files/
│   └── special-files/
├── Project-2/
│   ├── files/
│   └── special-files/
└── Project-3/
    ├── files/
    └── special-files/

Now I want to create a Git repository including all the special-files folders. If these were files, I could create hard links (ln ./Project-1/special-files ./Git-Project/special-files-1, and so on), so I would get:

Git-Project/
├── .git
├── .gitignore
├── special-files-1/
├── special-files-2/
└── special-files-3/

However, hard links do not work with folders, and symlinks are stored by Git as links, not as their contents. **Is there a way to collect or link these folders into a Git repository folder?**
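One approach that avoids links entirely: put the repository at the directory that contains Project-1/ through Project-3/ and whitelist only the special-files trees in .gitignore. A sketch (the Project-* glob is an assumption about the naming):

```
# .gitignore at the repository root (the parent of Project-*/)
*
!*/
!Project-*/special-files/
!Project-*/special-files/**
!.gitignore
```

Git then tracks each Project-N/special-files in place, at the cost of the repository living one level up instead of in a separate Git-Project/ folder; bind mounts (`mount --bind`) are the other common workaround when the separate-folder layout is a hard requirement.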
nath (6094 rep)
Aug 5, 2021, 04:48 PM • Last activity: Jul 7, 2025, 01:01 PM
0 votes
1 answer
1910 views
CentOS-7 installation on two disks (SSD + HDD)
I am trying to install CentOS 7 (GNOME) on my Dell laptop, which has a 30 GB SSD, a 1 TB HDD, 8 GB of RAM, and an Intel i7 processor. I am doing manual partitioning. I have selected the SSD as the bootable drive, and my /boot is on the SSD with a size of 2 GB. On which drive should I put the leftover 28 GB of the SSD for better OS performance? I tried adding it to /, but the installer only allows this if I deselect the HDD. But then I am not able to put other partitions like /home, swap, /usr, /opt, and /var on the HDD, and they would all default to the SSD, which has only 30 GB in total, which I am trying to avoid. I am planning to use this laptop for my development activities, e.g. Java (with IntelliJ), Docker, Kubernetes, MongoDB, and Apache Flink, which at times are resource consuming, and I want to utilize the SSD as much as feasible. Does the CentOS installation always default to the disk with more space? Should I simply install to the SSD (deselect the HDD) and then, after installation, mount other partitions on the HDD? So far, I am trying to follow the instructions here. I'll appreciate it if someone could help.
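For reference, a layout commonly suggested for a small-SSD/large-HDD pair looks something like the sketch below (sizes and device names are judgment calls, not installer output): the OS stays on the fast disk, bulk data and swap go to the big one.

```
/boot   2 GiB     SSD (sda)   as already chosen
/       ~28 GiB   SSD (sda)   keeps the OS and /usr fast
swap    8 GiB     HDD (sdb)
/home   rest      HDD (sdb)   bulky data, VM/container images
```

In Anaconda's manual-partitioning screen, each mount point can be pinned to a specific disk via its device selection (the "Modify..." device list), so / can live on the SSD and /home on the HDD without deselecting either disk.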
Hamid (101 rep)
Jun 8, 2021, 08:22 AM • Last activity: Jun 27, 2025, 10:04 AM
5 votes
1 answer
2441 views
Disable writeback cache throttling - tuning vm.dirty_ratio
I have a workload with extremely high write burst rates for short periods of time. The target disks are rather slow, but I have plenty of RAM and am very tolerant of instantaneous data loss. I've tried tuning vm.dirty_ratio to maximize the amount of free RAM used for dirty pages.

# free -g
              total        used        free      shared  buff/cache   available
Mem:            251           7         213           3          30         239
Swap:             0           0           0

# sysctl -a | grep -i dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 90000
vm.dirty_ratio = 90

However, it seems I'm still encountering some writeback throttling based on the underlying disk speed. How can I disable this?

# dd if=/dev/zero of=/home/me/foo.txt bs=4K count=100000 oflag=nonblock
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 10.2175 s, 40.1 MB/s

As long as there is free memory and the dirty ratio has not yet been exceeded, I'd like to write at full speed to the page cache.
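One way to see whether the burst is really hitting the global dirty thresholds, or being throttled earlier, is to watch the pending-writeback counters while dd runs; a minimal sketch:

```shell
# snapshot the page-cache writeback counters (Linux-only; values are in kB)
grep -E '^(Dirty|Writeback|MemAvailable):' /proc/meminfo
```

If Dirty stays far below vm.dirty_ratio's share of memory while the write crawls, the limit being hit is something else, e.g. the kernel's per-device (BDI) writeback throttling, which paces writers against the measured bandwidth of the backing device well before the global ratio is reached; oflag=nonblock also has essentially no effect for regular files.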
Linux Questions (51 rep)
Nov 29, 2018, 04:55 AM • Last activity: Jun 25, 2025, 04:07 PM
1 vote
0 answers
65 views
How can I find multiple duplicates of media files,sort, backup them and delete the rest?
I have a 4 TB hard drive containing pictures, sounds, and videos from the last 15 years. These files were copied onto this drive from various sources, including hard drives, cameras, phones, CD-ROMs, DVDs, USB sticks, SD cards, and downloads. The files come in formats such as JPEG, PNG, GIF, SVG, VOB, MP4, MPEG, MOV, AVI, SWF, WMV, FLV, 3GP, WAV, WMA, AAC, and OGG. Over the years, the files have been copied back and forth between different file systems, including FAT, exFAT, NTFS, HFS+/APFS, and ext3/ext4. Currently, the hard drive uses the ext4 file system. There are folders and files that appear multiple times (duplicates, triplicates, or even more). The problem is that the folder and file names are not always identical. For example:

1. A folder named "bilder_2012" might appear elsewhere as "backup_bilder_2012" or "media_2012_backup_2016".
2. In some cases, newer folders contain additional files that were not present in the older versions.
3. The files themselves may have inconsistent names, such as "bild1", "bild2" in one folder and "bilder2018(1)", "bilder2018(2)" in another.

What I want to achieve:

1. Sort and clean up the files: remove all duplicates and copy the remaining files to a new hard drive.
2. Identify the original copies: is there a way to determine which version of a file is the earliest/original?
3. Preserve the original folder names: for example, I know that "bilder_2012" was the first name given to a folder, and I'd like to keep that name if possible.
4. Standardize file naming: after copying, I'd like the files to follow a consistent naming scheme, such as folder "bilder2012" with files "bilder2012(1).jpeg", "bilder2012(2).jpeg", etc.

Is there a way to automate this process while ensuring the oldest/original files are preserved and duplicates are safely removed?
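The content-based half of this is mechanical: hash every file and group identical digests, at which point name differences stop mattering. A self-contained sketch (it fabricates a tiny tree so it can run anywhere):

```shell
# build a toy tree with one duplicated file content, then count duplicate groups
d=$(mktemp -d)
printf 'same bytes'  > "$d/bild1"
printf 'same bytes'  > "$d/bilder2018(1)"
printf 'other bytes' > "$d/bild2"

# any digest that occurs more than once marks a set of duplicates
find "$d" -type f -exec sha256sum {} + |
  awk '{print $1}' | sort | uniq -d | wc -l
rm -rf "$d"
```

For the real 4 TB job, dedicated tools (fdupes, jdupes, rmlint) do this with delete/hardlink options and handle huge trees efficiently. The "which copy is the original" question is the genuinely hard part: round-trips through FAT/exFAT usually destroyed the original timestamps, so the oldest mtime is only a heuristic, not proof of origin.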
Bernd Kunze (11 rep)
Jun 21, 2025, 09:49 AM • Last activity: Jun 25, 2025, 07:26 AM
1 vote
1 answer
1923 views
Kernel support for HFS+ or APFS
What is the current status of the Linux kernel's support for APFS or HFS+ (filesystems used by macOS)? Does it support writing, journaling, etc.? I saw https://unix.stackexchange.com/questions/481949/is-an-apple-file-system-apfs-driver-for-linux-available-or-in-progress but the responses there mostly deal with FUSE solutions. For performance reasons¹, I am interested in kernel-based solutions. ¹ I am going to run it on an embedded device with a slow CPU, low memory, etc.
d-b (2047 rep)
Mar 15, 2021, 05:35 PM • Last activity: Jun 24, 2025, 02:02 AM
0 votes
0 answers
37 views
Is there any kernel filesystem extension that can recognize uri paths?
Is there any extension of Linux filesystems that can resolve URI paths in the terminal, so that instead of

curl -vs google.com 2>&1 | less

one could simply run things like:

cat https://example.com/file.txt
cat file:///some-file-path/file.txt
arthur.afarias (1 rep)
Jun 22, 2025, 04:39 PM
0 votes
1 answer
2046 views
best practice for mounting filesystems
Say I have two virtual disks (/dev/sda and /dev/sdb, added to my ESX CentOS 7 VM). I create two partitions (/dev/sda1 and /dev/sdb1). Then I create a physical volume for each, create a volume group named storage-vg, and two logical volumes, data and application. The file systems are created as well:

mkfs.ext3 /dev/storage-vg/data
mkfs.ext3 /dev/storage-vg/application

My question is: where do I mount them now? I mean, if I mount onto a directory that already contains data, that data will be hidden, so to say. What is the best practice? Should I create two empty directories under /? It might be a silly question, but for someone with almost only Windows experience it's hard to understand.
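The conventional answer is exactly that: create fresh, empty directories as mount points (directly under / or under an existing hierarchy such as /srv) and wire them up in /etc/fstab so they persist across reboots. A sketch for the two volumes above:

```
# run once:  mkdir /data /application
# /etc/fstab
/dev/storage-vg/data          /data          ext3  defaults  0  2
/dev/storage-vg/application   /application   ext3  defaults  0  2
```

Mounting over a non-empty directory doesn't destroy the existing files; it only hides them until the filesystem is unmounted, which is precisely why empty, dedicated mount points are the best practice.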
yesOrMaybeWhatever (121 rep)
Apr 12, 2018, 12:45 PM • Last activity: Jun 22, 2025, 06:09 AM
0 votes
0 answers
43 views
Cannot access files to delete
I currently use FreeFileSync on Ubuntu 24.04 to back up my data. A couple of months ago, I started to get errors on the external backup disk that a quick (and ineffective) look didn't resolve, so I just ignored them. Today I delved deeper. If I use Files to open one of the problem directories, it gives the error "This location could not be displayed" with a file name, stating: input/output error. Double Commander (my preferred file manager) opens the directory fine but shows no files. However, using ls -l in a terminal lists a number of files with reasonable file names, but where the attributes, owner, group, etc. should be listed, there are only question marks. Unsurprisingly, attempting to run chmod, chown, rm, etc. just results in an input/output error. I also tried to check the partition with GParted, but it crashed. Is there an easy way to resolve this issue? I suspect the best option is to bite the bullet, reformat the external disk, and back up my 2 TB of data from scratch. My only concern is that, despite knowing better, I have just the one external disk, so I would be without a backup for a short while. Any suggestions gratefully received. Keith
KeithW (1 rep)
Jun 20, 2025, 08:25 PM • Last activity: Jun 21, 2025, 05:01 AM
1 vote
1 answer
2086 views
Using rm --one-file-system to only delete files on the local filesystem
I have a FUSE mount located at /media/writable; however, sometimes this FUSE mount disconnects, but some programs will still attempt to write to /media/writable. When I restart the FUSE mount service, it fails to mount because the directory is non-empty. How can I use rm's --one-file-system argument to ensure that when I rm /media/writable, it only deletes files located on the **local** filesystem, as opposed to the **FUSE-mounted** filesystem? There are other folders located in /media/, so I am unable to run rm on the parent folder. Am I better off moving the folder one layer down (e.g. /media/writable/mount) so I can rm the parent folder, or is there a way of selecting which filesystem I wish to delete from? I'm running Ubuntu 18.04.1 LTS with coreutils 8.28.

Edit: My current method is this, but I'd like to see if there's a better way to do it:

ExecStartPre=-/bin/mountpoint /media/writable/ || /usr/bin/stat /media/writable/MOUNTED || /bin/rm --one-file-system -rf /media/writable/*
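A caveat about --one-file-system: it only skips subtrees that sit on a different filesystem from the command-line argument itself, so with /media/writable/* as the arguments it would not protect a still-mounted FUSE tree. Refusing to clean while anything is mounted there is more direct; /proc/mounts (or `mountpoint -q`) answers that. A sketch using a throwaway directory in place of /media/writable:

```shell
# guard: only clean the directory when nothing is mounted on it
dir=$(mktemp -d)   # stands in for /media/writable in this demo
if awk -v d="$dir" '$2 == d {found=1} END {exit !found}' /proc/mounts; then
  echo "mounted; leaving it alone"
else
  echo "not mounted; safe to clean"
  # rm -rf -- "$dir"/*    (the real cleanup would go here)
fi
rmdir "$dir"
```

In the unit file, that collapses to something like `ExecStartPre=/bin/sh -c 'mountpoint -q /media/writable || rm -rf /media/writable/*'`, which never touches the FUSE filesystem because it only runs when nothing is mounted.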
Connor Bell (23 rep)
Mar 9, 2019, 01:27 PM • Last activity: Jun 16, 2025, 06:02 PM