Sample Header Ad - 728x90

Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answers
32 views
How is an overlayfs different from just mounting another disk/partition over a directory?
I have OpenWRT installed on some of my routers and to add additional storage for settings as well as programs that might be installed on the router and maybe logs, OpenWRT recommends you plug storage into it and use an overlayfs. I also have a SBC where I just mount an external drive overtop of my h...
I have OpenWRT installed on some of my routers and to add additional storage for settings as well as programs that might be installed on the router and maybe logs, OpenWRT recommends you plug storage into it and use an overlayfs. I also have a SBC where I just mount an external drive overtop of my home directory on boot to store the home directory externally off of the SD Card that the bootloader and OS are installed on; since the storage on the external drive is more reliable than the SD Card, despite running slower. What is the difference between these two strategies? They are both basically Single Board computers with Linux, and when the external drive fails to mount, in both cases we're left with a directory full of the content of the original directory, where the drive would have been mounted before. The only think I can think of that is different, is that the settings directory for OpenWRT (/etc) is being mounted on the external drive, where this is not the case on the SBC.
leeand00 (4927 rep)
Aug 5, 2025, 08:58 PM • Last activity: Aug 6, 2025, 05:22 AM
6 votes
4 answers
6912 views
Unable to access/(auto)-mount SD card on Fedora 28
I am trying to access SD cards on Fedora 28, but do not have any success. System info is as follows: $ lsb_release -a LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4....
I am trying to access SD cards on Fedora 28, but do not have any success. System info is as follows: $ lsb_release -a LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch Distributor ID: Fedora Description: Fedora release 28 (Twenty Eight) Release: 28 Codename: TwentyEight I was not able to access different SD cards using two different card readers. Despite of being accessible on both macOS and Windows, none of them is shown in the Nautilus file browser, the desktop or elsewhere obvious. The card readers are recognized by the system as per lsusb output: $ lsusb -v # some other USB devices Bus 001 Device 005: ID 058f:6362 Alcor Micro Corp. Flash Card Reader/Writer Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x058f Alcor Micro Corp. idProduct 0x6362 Flash Card Reader/Writer bcdDevice 1.29 iManufacturer 1 iProduct 2 iSerial 3 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 250mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk-Only iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 # some other USB devices Bus 001 Device 006: ID 0dda:2027 Integrated Circuit Solution, Inc. USB 2.0 Card Reader Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0dda Integrated Circuit Solution, Inc. idProduct 0x2027 USB 2.0 Card Reader bcdDevice 1.6e iManufacturer 1 iProduct 2 iSerial 3 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 4 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk-Only iInterface 5 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 # some other USB devices I then had a look at the udev events while plugging a card in and out: $ udevadm monitor monitor will print the received events for: UDEV - the event which udev sends out after rule processing KERNEL - the kernel uevent KERNEL[701.434565] change /devices/pci0000:00/0000:00:1d.7/usb1/1-3/1-3:1.0/host4/target4:0:0/4:0:0:2/block/sde (block) UDEV [714.263816] change /devices/pci0000:00/0000:00:1d.7/usb1/1-3/1-3:1.0/host4/target4:0:0/4:0:0:2/block/sde (block) KERNEL[748.477184] change /devices/pci0000:00/0000:00:1d.7/usb1/1-3/1-3:1.0/host4/target4:0:0/4:0:0:2/block/sde (block) UDEV [761.338940] change /devices/pci0000:00/0000:00:1d.7/usb1/1-3/1-3:1.0/host4/target4:0:0/4:0:0:2/block/sde (block) In addition, I had a look at the kernel messages: $ dmesg [ 603.846840] usb-storage 1-3:1.0: USB Mass Storage device detected [ 603.847749] scsi host4: usb-storage 1-3:1.0 [ 605.703531] scsi 4:0:0:0: Direct-Access Generic CF 1.6E PQ: 0 ANSI: 0 CCS [ 605.704982] scsi 4:0:0:1: Direct-Access Generic MS 1.6E PQ: 0 ANSI: 0 CCS [ 606.509034] scsi 4:0:0:2: Direct-Access Generic MMC/SD 1.6E PQ: 0 ANSI: 0 CCS [ 606.510387] scsi 4:0:0:3: Direct-Access Generic SM 1.6E PQ: 0 ANSI: 0 CCS [ 606.511519] sd 4:0:0:0: Attached scsi generic sg4 type 0 [ 606.511943] sd 4:0:0:1: Attached scsi generic sg5 type 0 [ 606.512177] sd 4:0:0:2: Attached scsi generic sg6 type 0 [ 606.512408] sd 4:0:0:3: Attached scsi generic sg7 type 0 [ 608.924586] sd 4:0:0:1: [sdd] Attached SCSI removable disk [ 629.830776] sd 4:0:0:2: [sde] Attached SCSI removable disk [ 633.048754] sd 4:0:0:3: [sdf] Attached SCSI removable disk [ 639.490479] sd 4:0:0:0: [sdc] Attached SCSI removable disk Both the output of dmesg and udevadm monitor are telling that the card should be shown as sde. However, fdisk -l does not list sde. Besides that, trying to mount the device manually, raises an error: $ mount -t auto /dev/sde /mnt/ mount: /mnt: no medium found on /dev/sde. I am not sure, whether the needed driver module is loaded properly, since there is no mmc0-like entry in the dmesg output (as I am used to know from Debian-based systems). lsmod does not list the mmc0 kernel module either: $ lsmod | grep mm rtl8192c_common 61440 1 rtl8192cu rtlwifi 98304 3 rtl8192c_common,rtl_usb,rtl8192cu The only mmc-like modules which seems to be available but are not loaded are mmc_block and mmc_core: $ modprobe mm # listing suggestions using tab auto-completion mma7660 mmc_block mmc_core mms114 How could I solve this problem or at least narrow it down?
albert (191 rep)
Jul 16, 2018, 07:57 PM • Last activity: Aug 3, 2025, 09:48 AM
2 votes
1 answers
2619 views
"structure needs cleaning", hardware failure?
The drive that my `/home` folder lives on is showing signs of failing, and I'm trying to migrate to a new drive.  I purchased a 4TB SSD, formatted it with `ext4`, mounted it as an external drive with a USB/SATA connector, and `rsync`’ed my `/home` folder over. So far, so good. Bu...
The drive that my /home folder lives on is showing signs of failing, and I'm trying to migrate to a new drive.  I purchased a 4TB SSD, formatted it with ext4, mounted it as an external drive with a USB/SATA connector, and rsync’ed my /home folder over. So far, so good. But when I swapped it in place of the failing drive and rebooted, my OS reported:
unable to mount local folders
structure needs cleaning
That sounds like a corrupt file system, but fsck reported no errors. Maybe the new hardware is faulty, but I ran badblocks on it, and it also came back with no errors. I formatted it again and tried again, and came up with the same error. Weirdly, if I log in as root and manually mount the new /home drive, it mounts okay, and seems to accept read/writes. However, dmesg did show some errors for /dev/sdb (that's the /home drive on this system). I've copied them below, although I'm fluent enough myself to parse them. Any ideas? For context I'm running Gentoo Linux.
[    0.914006] sd 6:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[    0.914052] sd 6:0:0:0: [sdb] Write Protect is off
[    0.914074] sd 6:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    0.914117] sd 6:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.914224] sd 6:0:0:0: [sdb] Preferred minimum I/O size 512 bytes
[    0.915929]  sdb: sdb1
[    0.916093] sd 6:0:0:0: [sdb] Attached SCSI disk
[    5.012731] sd 6:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[    5.012740] sd 6:0:0:0: [sdb] tag#0 Sense Key : Illegal Request [current] 
[    5.012747] sd 6:0:0:0: [sdb] tag#0 Add. Sense: Unaligned write command
[    5.012753] sd 6:0:0:0: [sdb] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 08 10 00 00 00 08 00 00
[    5.012757] I/O error, dev sdb, sector 2064 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[    5.012786] sd 6:0:0:0: [sdb] tag#1 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[    5.012792] sd 6:0:0:0: [sdb] tag#1 Sense Key : Illegal Request [current] 
[    5.012797] sd 6:0:0:0: [sdb] tag#1 Add. Sense: Unaligned write command
[    5.012802] sd 6:0:0:0: [sdb] tag#1 CDB: Read(16) 88 00 00 00 00 00 00 00 08 18 00 00 00 08 00 00
[    5.012805] I/O error, dev sdb, sector 2072 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[    5.012817] sd 6:0:0:0: [sdb] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
[    5.012822] sd 6:0:0:0: [sdb] tag#31 Sense Key : Illegal Request [current] 
[    5.012827] sd 6:0:0:0: [sdb] tag#31 Add. Sense: Unaligned write command
[    5.012832] sd 6:0:0:0: [sdb] tag#31 CDB: Read(16) 88 00 00 00 00 00 00 00 08 08 00 00 00 08 00 00
[    5.012836] I/O error, dev sdb, sector 2056 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[   35.852468] sd 6:0:0:0: [sdb] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=30s
[   35.852476] sd 6:0:0:0: [sdb] tag#13 Sense Key : Illegal Request [current] 
[   35.852483] sd 6:0:0:0: [sdb] tag#13 Add. Sense: Unaligned write command
[   35.852490] sd 6:0:0:0: [sdb] tag#13 CDB: Read(16) 88 00 00 00 00 00 00 00 08 28 00 00 05 40 00 00
[   35.852494] I/O error, dev sdb, sector 2088 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 2
[   35.852574] sd 6:0:0:0: [sdb] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=30s
[   35.852581] sd 6:0:0:0: [sdb] tag#14 Sense Key : Illegal Request [current] 
[   35.852586] sd 6:0:0:0: [sdb] tag#14 Add. Sense: Unaligned write command
[   35.852591] sd 6:0:0:0: [sdb] tag#14 CDB: Read(16) 88 00 00 00 00 00 00 00 0d 68 00 00 05 40 00 00
[   35.852595] I/O error, dev sdb, sector 3432 op 0x0:(READ) flags 0x84700 phys_seg 168 prio class 2
[   35.852672] sd 6:0:0:0: [sdb] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=30s
[   35.852677] sd 6:0:0:0: [sdb] tag#15 Sense Key : Illegal Request [current] 
[   35.852682] sd 6:0:0:0: [sdb] tag#15 Add. Sense: Unaligned write command
[   35.852687] sd 6:0:0:0: [sdb] tag#15 CDB: Read(16) 88 00 00 00 00 00 00 00 12 a8 00 00 03 f0 00 00
[   35.852690] I/O error, dev sdb, sector 4776 op 0x0:(READ) flags 0x80700 phys_seg 126 prio class 2
[   36.858014] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 18880 failed (53845!=52774)
[   36.858017] EXT4-fs (sdb1): group descriptors corrupted!
One further experiment: I tried installing another drive into the bay, and it also wouldn't auto mount as /home.  I could not even mount it manually after logging in as root.  As far as I can tell, there's nothing wrong with this third drive, and I can mount it just fine via a USB/SATA adapter.  Both of the new drives are SSDs, while the old failing drive that still mounts is a hard disk. This SATA port is via a SATA/PCIE adapter, so I suppose the problem could be in the adapter.  In that case, though, it's weird that the old hard drive still works.
jyoung (131 rep)
Sep 24, 2023, 11:12 PM • Last activity: Aug 3, 2025, 01:07 AM
1 votes
1 answers
1927 views
Mounting nested ZFS filesystems exported via NFS
I have a linux (ubuntu) server with a zfs pool containing nested fileystem. E.g.: zfs_pool/root_fs/fs1 zfs_pool/root_fs/fs2 zfs_pool/root_fs/fs3 I have enabled NFS sharing on the root filesystem (via zfs, not by editing `/etc/exports`). Nested filesystems inherit this property. NAME PROPERTY VALUE S...
I have a linux (ubuntu) server with a zfs pool containing nested fileystem. E.g.: zfs_pool/root_fs/fs1 zfs_pool/root_fs/fs2 zfs_pool/root_fs/fs3 I have enabled NFS sharing on the root filesystem (via zfs, not by editing /etc/exports). Nested filesystems inherit this property. NAME PROPERTY VALUE SOURCE zfs_pool/root_fs sharenfs rw=192.168.1.0/24,root_squash,async local NAME PROPERTY VALUE SOURCE zfs_pool/root_fs/fs1 sharenfs rw=192.168.1.0/24,root_squash,async inherited from zfs_pool/root_fs On the client machines (linux, mostly ubuntu), the only filesystem I explicitly mount is the root filesystem. mount -t nfs zfsserver:/zfs_pool/root_fs /root_fs_mountpoint Nested filesystems are mounted automatically when they are accessed. I didn't need to configure anything to make this work. This is great, but I'd like to know who is providing this feature. Is it ZFS? Is it NFS? Is it something else on the client side (something like autofs, which isn't even installed). I'd like to change the timeout after which nested filesystems are unmounted, but I don't even know which configuration to edit and which documentation to read.
lgpasquale (291 rep)
Oct 15, 2018, 09:22 AM • Last activity: Jul 21, 2025, 04:08 PM
5 votes
1 answers
2832 views
Use fstab to mount luks encrypted drive to subfolder within home
Fresh install of Lubuntu 20.04 on system with Windows 10 and Lubuntu installed on 256GB NVME drive to dual boot. Boot drive is /dev/nvme0n1p2 Home folder is therefore /dev/nvme0n1p2/home/username I have a 1TB HDD with two partitions: /dev/sda1 736GB encrypted ext4/LUKS /dev/sda2 195GB ntfs For conte...
Fresh install of Lubuntu 20.04 on system with Windows 10 and Lubuntu installed on 256GB NVME drive to dual boot. Boot drive is /dev/nvme0n1p2 Home folder is therefore /dev/nvme0n1p2/home/username I have a 1TB HDD with two partitions: /dev/sda1 736GB encrypted ext4/LUKS /dev/sda2 195GB ntfs For context, the purpose of the ntfs partition is so that I can easily share files between my Lubuntu environment and Windows 10. My objective is to be able to: 1) Boot into Lubuntu 2) Log in 3) Open File Manager and navigate to /home/Filestore 4) Be prompted to enter password I have read this guide: https://www.linuxbabe.com/desktop-linux/how-to-automount-file-systems-on-linux And I can successfully automount the ntfs drive to /home/WindowsShare But I cannot mount the LUKS filesystem to /home/Filestore Using 'ext4' as the filesystem gives me this error message:
mount: /home/luke/Filestore: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.
The entry for the partition in blkid is:
/dev/sda1: UUID="redacted" TYPE="crypto_LUKS" PARTUUID="redacted"
So I therefore tried using "crypto_LUKS" as the filesystem in fstab and got this:
mount: /home/luke/Filestore: unknown filesystem type 'crypto_LUKS'.
I have looked for guides on automounting encrypted filesystems and found numerous. Here is one: https://blog.tinned-software.net/automount-a-luks-encrypted-volume-on-system-start/ Everything I have found involves using a shared key to auto-decrypt the filesystem on boot. I don't want to do this as I don't have an encrypted area on my boot drive in order to store the key. Is my stated aim possible?
Luke Richards (81 rep)
Jan 17, 2021, 10:45 AM • Last activity: Jul 20, 2025, 02:03 AM
14 votes
2 answers
13778 views
How to deal with freezes caused by autofs after network disconnect
I mount four servers (3 via `cifs`, 1 via `sshfs`) using `autofs`. `auto.master` /- /etc/auto.all --timeout=60 --ghost `auto.all ` /mnt \ /server1 -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server1/ \ /server2/ -fstype=cifs,rw,credentials=/etc...
I mount four servers (3 via cifs, 1 via sshfs) using autofs. auto.master /- /etc/auto.all --timeout=60 --ghost auto.all /mnt \ /server1 -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server1/ \ /server2/ -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server2/ \ /server3 -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server3/ \ /server4 -fstype=fuse,rw,allow_other,uid=1000,users,reconnect,cache=yes,kernel_cache,compression=no,large_read,Ciphers=arcfour :sshfs\#user@server\:/home ``` Everything is fine when I make a clean boot. I connect to my network (using a VPN) and autofs mounts everything. # Problem When there is a network disconnect, e.g. when I hibernate my laptop or connect to a different network, autofs causes my explorer (dolphin) to freeze because it tries to load the remote share infinitely. It becomes unresponsive and does not even react to SIGTERM commands. Sometimes I am lucky and calling sudo service autofs stop and sudo automount helps to resolve the issue. However, often it still stays freezed. Sometimes even, my whole dock freezes due to this making all applications unselectable. Then I have to make a full reboot.. I've searched for weeks now for solution how to deal with autofs in such situations. Before using autofs, I had everything mounted via /etc/fstab but that also required a manual remount after every network interruption. I thought autofs would help me here but it causes me even more trouble. # Questions 1. Is there any point I overlooked that could solve the freezing problem? 2. Is there a completely different approach that may be better suited for my situation than autofs? PS: I'm on Kubuntu 16.04
pat-s (348 rep)
Jan 9, 2018, 11:27 PM • Last activity: Jul 12, 2025, 11:09 AM
2 votes
1 answers
4392 views
How to solve failure in plugging external DVD driver in Ubuntu 20.04?
I have bought an [external DVD driver][1] but, after plugging (or booting with the unit already plugged) I see the unit listed in the resources of the computer but I cannot access it: [![enter image description here][2]][2] If I try to access the driver via VLC, I get the error `VLC is unable to ope...
I have bought an external DVD driver but, after plugging (or booting with the unit already plugged) I see the unit listed in the resources of the computer but I cannot access it: enter image description here If I try to access the driver via VLC, I get the error VLC is unable to open the MRL 'cdda:///dev/sr0'. The content of fstab is: $ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # # / was on /dev/nvme0n1p2 during installation UUID=341faa1b-4e49-49d7-85a4-e33acecb2212 / ext4 errors=remount-ro 0 1 # /boot/efi was on /dev/nvme0n1p1 during installation UUID=24D6-7429 /boot/efi vfat umask=0077 0 1 /swapfile none swap sw 0 0 What is the right way to plug a DVD in Ubuntu 20.04? Is it a problem with the driver (I need to buy another brand more Linux-prone)? Or do I need to change the permission of the driver with some sudo commands? After I plug the DVD driver I get: $ ls -lt | less | grep sr0 lrwxrwxrwx 1 root root 3 May 27 21:15 cdrom -> sr0 lrwxrwxrwx 1 root root 3 May 27 21:15 cdrw -> sr0 lrwxrwxrwx 1 root root 3 May 27 21:15 dvd -> sr0 lrwxrwxrwx 1 root root 3 May 27 21:15 dvdrw -> sr0 brw-rw----+ 1 root cdrom 11, 0 May 27 21:15 sr0 Thank you
Gigiux (557 rep)
May 23, 2021, 06:10 AM • Last activity: Jul 10, 2025, 06:00 AM
20 votes
4 answers
27619 views
How to auto mount / permanently mount external devices on NixOS
I have a USB stick and an NTFS hard drive partition that I'd want to use in NixOS. On some other distribution, I'd mount it using `ntfs-3g` in `/mnt`. But on NixOS, the directory doesn't exist; I suppose NixOS has some other canonical way and/or place of doing that. In NixOS, how should one set up a...
I have a USB stick and an NTFS hard drive partition that I'd want to use in NixOS. On some other distribution, I'd mount it using ntfs-3g in /mnt. But on NixOS, the directory doesn't exist; I suppose NixOS has some other canonical way and/or place of doing that. In NixOS, how should one set up automounting of external partitions, preferably using configuration.nix?
stefkin (338 rep)
Jun 30, 2015, 06:50 PM • Last activity: Jul 7, 2025, 01:56 PM
1 votes
1 answers
4563 views
Disk mount failed with result 'dependency'
I mount drive using command `mount` and modify `fstab` file (so disk should be visible after restart). Unfortunately, after system reboot, mount isn't visible in the system `lsblk -a` says, disk is configured but without mounting point. ``` NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 250G 0 di...
I mount drive using command mount and modify fstab file (so disk should be visible after restart). Unfortunately, after system reboot, mount isn't visible in the system lsblk -a says, disk is configured but without mounting point.
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  250G  0 disk 
vda    254:0    0  100G  0 disk 
├─vda1 254:1    0  100G  0 part /
└─vda2 254:2    0    2M  0 part
Here is a log from journalctl:
May 15 09:23:34 srv  systemd: dev-disk-by\x2duuid-XXX\XXX\XXXX\ZZZZZ\YYYY.device: Job dev-disk-by\x2duuid-XXX\XXX\XXXX\ZZZZZ\YYYY.device/start timed >
May 15 09:23:34 srv  systemd: Timed out waiting for device /dev/disk/by-uuid/XXXX-XXXX-XXX-XXX-XXXXXXX.
May 15 09:23:34 srv  systemd: Dependency failed for Mount DO Volume dev-volume.
May 15 09:23:34 srv  systemd: mnt-dev_volume.mount: Job mnt-dev_volume.mount/start failed with result 'dependency'.
How to fix it and mount disk automatically?
Mateusz Przybylek (111 rep)
May 15, 2023, 02:27 PM • Last activity: Jul 7, 2025, 08:09 AM
1 votes
4 answers
2773 views
Automatic Mount at Boot For Xubuntu
I'm trying to follow these instruction to mount a second hard drive: https://help.ubuntu.com/community/InstallingANewHardDrive For automatic mount at boot it says I need to enter this into the terminal: sudo nano -Bw /etc/fstab Then: Add this line to the end (for ext3 file system): /dev/sdb1 /media/...
I'm trying to follow these instruction to mount a second hard drive: https://help.ubuntu.com/community/InstallingANewHardDrive For automatic mount at boot it says I need to enter this into the terminal: sudo nano -Bw /etc/fstab Then: Add this line to the end (for ext3 file system): /dev/sdb1 /media/mynewdrive ext3 defaults 0 2 Add this line to the end (for fat32 file system): /dev/sdb1 /media/mynewdrive vfat defaults 0 2 I'm not really sure what file system I am working with. I'm also not sure what it means to add the line to the end. End of what? This is a screen shot of what happens when I type in "sudo nano -Bw /etc/fstab" Image
boy (75 rep)
Jun 9, 2016, 08:37 PM • Last activity: Jun 16, 2025, 07:05 PM
0 votes
0 answers
26 views
How to automount eCryptfs volume at boot (without login)?
We have two servers. Application sever A and NFS file server B. Server B is shared among multiple various applications and it's generic NFS storage host that we don't have access to and it's corporate shared storage. Application server A processes very sensitive data and then stores them on NFS shar...
We have two servers. Application sever A and NFS file server B. Server B is shared among multiple various applications and it's generic NFS storage host that we don't have access to and it's corporate shared storage. Application server A processes very sensitive data and then stores them on NFS shared with everyone. Since it's far from perfect situation, we need to store data from Application server A on NFS in encrypted form so that it can't be read/processed even if one would have full access to NFS server B. We've set this up with gocryptfs but we're suffering from severe performance issues so this time we decided to give ecryptfs a go. I tried to crawl through ecryptfs and encfs tutorials and docs but all of them seem to be focused on automounting filesystem at login. For us there will be no login. It's autonomous machine that is supposed to automatically boot after power failure and automatically mount encrypted volume at boot time, without human intervention. We need to provide passphrase via file stored on Application server A disk. How can I do that? We tried to use fstab with: /mnt/nfs_encrypted /mnt/nfs ecryptfs nofail,rw,relatime,ecryptfs_sig=5d6b2xxxxxxx35,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs 0 0 but it fails to mount at boot time since keyring is empty after each reboot.
Lapsio (1363 rep)
Jun 9, 2025, 03:23 PM
1 votes
1 answers
115 views
Why is this docker container process not triggering a mount for my systemd auto-mounted drive?
I've been struggling to make sense of something, so would appreciate some help. I am mounting a remote NFS drive onto my Debian system with the following fstab entry which uses the systemd automounter, and is set to auto-unmount after 120 seconds of inactivity: ``` 192.168.0.67:/mnt/SSD_240GB/backup...
I've been struggling to make sense of something, so would appreciate some help. I am mounting a remote NFS drive onto my Debian system with the following fstab entry which uses the systemd automounter, and is set to auto-unmount after 120 seconds of inactivity:
192.168.0.67:/mnt/SSD_240GB/backups/TIG_backups  /mnt/nfs/SSD_240GB/backups/TIG_backups   nfs auto,_netdev,bg,soft,x-systemd.automount,x-systemd.idle-timeout=120,timeo=14,nofail,noatime,nolock,tcp,actimeo=1800 0 0
Now on this Debian host system I am running a docker container (Telegraf ), to monitor some metrics of the Debian host. To facilitate this, I am bind-mounting the host filesystem and proc directory (as recommended here in the docs ), as well as bind-mounting the NFS drive. The docker run command looks like this:
docker run -d \
--name telegraf_container \
--user 1001:1001 \
--network docker_monitoring_network \
--mount type=bind,source=/,destination=/hostfs \
--mount type=bind,source=/mnt/nfs/SSD_240GB/backups/TIG_backups/telegraf_backups,destination=/mnt/nfs/SSD_240GB/backups/TIG_backups/telegraf_backups \
-e HOST_MOUNT_PREFIX=/hostfs \
-e HOST_PROC=/hostfs/proc \
telegraf:latest
I am using the Telegraf Disk Input plugin because I want to gather disk usage metrics once every hour for the NFS drive (used, free, total). The problem is that the disk is unmounted automatically 120s after system boot as expected, *but it is never remounted*. I would expect the telegraf container to trigger an automount every hour. The reason I expect this is because the container is essentially running a .go program (as seen here in the source code) which is querying the filesystem. I believe under the hood it is calling some .go libraries (here and here ), which are essentially calling statfs(). I was under the impression that statfs() should trigger a systemd automount. Here in the Debian host's logs, I can see the NFS drive mounting correctly at boot up, and then unmounting after a couple of minutes automatically (but then it never remounts):
root@docker-debian:/home/monitoring/docker_files/scripts# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.automount -b
Jun 05 13:54:12 docker-debian systemd[1] : Set up automount mnt-nfs-SSD_240GB-backups-TIG_backups.automount.
Jun 05 13:54:18 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.automount: Got automount request for /mnt/nfs/SSD_240GB/backups/TIG_backups, triggered by 893 (runc:[2:INIT])

root@docker-debian:/home/monitoring/docker_files/scripts# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
Jun 05 13:54:18 docker-debian systemd[1] : Mounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 05 13:54:18 docker-debian systemd[1] : Mounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
Jun 05 13:57:39 docker-debian systemd[1] : Unmounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 05 13:57:39 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.mount: Deactivated successfully.
Jun 05 13:57:39 docker-debian systemd[1] : Unmounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
After the drive has auto-unmounted, it is missing from the host as expected:
monitoring@docker-debian:/$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              983908       0    983908   0% /dev
tmpfs             201420     816    200604   1% /run
/dev/sda1       15421320 4779404   9836748  33% /
tmpfs            1007084       0   1007084   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             201416       0    201416   0% /run/user/1001
But it is present in the container:
monitoring@docker-debian:/$ docker exec -it telegraf_container df
Filesystem                                                       1K-blocks     Used Available Use% Mounted on
overlay                                                           15421320  4779404   9836748  33% /
tmpfs                                                                65536        0     65536   0% /dev
shm                                                                  65536        0     65536   0% /dev/shm
/dev/sda1                                                         15421320  4779404   9836748  33% /hostfs
udev                                                                983908        0    983908   0% /hostfs/dev
tmpfs                                                              1007084        0   1007084   0% /hostfs/dev/shm
tmpfs                                                               201420      820    200600   1% /hostfs/run
tmpfs                                                                 5120        0      5120   0% /hostfs/run/lock
192.168.0.67:/mnt/SSD_240GB/backups/TIG_backups/telegraf_backups 229608448 42336256 175535104  20% /mnt/nfs/SSD_240GB/backups/TIG_backups/telegraf_backups
tmpfs                                                              1007084        0   1007084   0% /proc/acpi
tmpfs                                                              1007084        0   1007084   0% /sys/firmware
tmpfs                                                               201416        0    201416   0% /hostfs/run/user/1001
In case it's relevant, the Telegraf config is here:
# GLOBAL SETTINGS
[agent]
  hostname = "docker-debian"
  flush_interval = "60m"
  interval = "60m"

# COLLECT DISK USAGE OF THIS VM
[[inputs.disk]]
  mount_points = ["/", "/mnt/nfs/SSD_240GB/backups/TIG_backups"]  # Only these will be checked
  fieldpass = [ "free", "total", "used", "used_percent" ]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

# VIEW COLLECTED METRICS
[[outputs.file]]
  files = ["stdout"]
Why is the container not triggering an automount, which leads to it not being able to collect the metrics on the drive? --- **EDIT** After the answer from @grawity, I did a simpler check: - I removed the idle timeout (by setting x-systemd.idle-timeout=0) - I removed explicit bind-mounts for these drives from the docker run command In this situation, I found the following: 1) Immediately after boot, an automount is set up, but nothing triggered it yet, as expected:
root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.automount -b
Jun 06 12:22:20 docker-debian systemd[1] : Set up automount mnt-nfs-SSD_240GB-backups-TIG_backups.automount.

root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
-- No entries --
2) I start a simple container up, with no explicit bind mounts for those drives (only the hostfs structure) :
docker run -d \
--name telegraf_container \
--mount type=bind,source=/,destination=/hostfs \
-e HOST_MOUNT_PREFIX=/hostfs \
-e HOST_PROC=/hostfs/proc \
telegraf:latest
This still does not trigger any automounts on the host. 3) Now I manually trigger an automount on the host by accessing the drive:
ls /mnt/nfs/SSD_240GB/backups/TIG_backups/
The automount is triggered and mounts the drive successfully:
root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.automount -b
Jun 06 12:22:20 docker-debian systemd[1] : Set up automount mnt-nfs-SSD_240GB-backups-TIG_backups.automount.
Jun 06 12:35:20 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.automount: Got automount request for /mnt/nfs/SSD_240GB/backups/TIG_backups, triggered by 936 (ls)

root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
Jun 06 12:35:21 docker-debian systemd[1] : Mounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 06 12:35:21 docker-debian systemd[1] : Mounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
Interestingly, the mounted drive now *automatically* appears inside the container (even though no bind-mounts have been used), but it appears under /hostfs instead:
monitoring@docker-debian:~$ docker exec -it telegraf_container df
Filesystem                                     1K-blocks    Used Available Use% Mounted on
overlay                                         15421320 4686888   9929264  33% /
tmpfs                                              65536       0     65536   0% /dev
shm                                                65536       0     65536   0% /dev/shm
/dev/sda1                                       15421320 4686888   9929264  33% /hostfs
udev                                              983908       0    983908   0% /hostfs/dev
tmpfs                                            1007084       0   1007084   0% /hostfs/dev/shm
tmpfs                                             201420     656    200764   1% /hostfs/run
tmpfs                                               5120       0      5120   0% /hostfs/run/lock
tmpfs                                             201416       0    201416   0% /hostfs/run/user/1001
tmpfs                                            1007084       0   1007084   0% /proc/acpi
tmpfs                                            1007084       0   1007084   0% /sys/firmware
192.168.0.67:/mnt/SSD_240GB/backups/TIG_backups  16337920 5799936   9682944  38% /hostfs/mnt/nfs/SSD_240GB/backups/TIG_backups
If I unmount the drive directly on the host (using umount), then it disappears from the container again. 4) I repeated this but instead using an idle timeout of 2mins now. What I found was that having the docker container running *prevents* the autounmount after 2 mins from happening (even though the container does not explicitly bind-mount in the drive, but instead appears automatically in the container under /hostfs). If I stop and remove the container, then the idle timeout unmounts the drive after the 2mins:
root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
Jun 06 12:49:40 docker-debian systemd[1] : Mounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 06 12:49:41 docker-debian systemd[1] : Mounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
Jun 06 13:10:28 docker-debian systemd[1] : Unmounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 06 13:10:28 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.mount: Deactivated successfully.
Jun 06 13:10:28 docker-debian systemd[1] : Unmounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
This makes me think a couple of things: - If I want to use telegraf to monitor drives that are mounted on the host, I don't need to bind mount them in explicitly, because they are present already due to the /hostfs bind-mount. - I should never see what I was originally expecting - namely, a drive automatically unmounting due to the idle timeout, and then the container triggering a remount. Because I observed above that once a drive has been mounted in (in my case at /hostfs), the container actually prevents it from ever being auto-unmounted.
teeeeee (305 rep)
Jun 5, 2025, 03:04 PM • Last activity: Jun 6, 2025, 01:08 PM
0 votes
1 answers
2637 views
Using /etc/auto.master.d together with wildcard `*` to mount `/home`
My problem is similar to that described in https://unix.stackexchange.com/q/375516/320598: I wanted to mount individual users' home directories to `/home/*user*` using both, `*` wildcard and directory `/etc/auto.master.d` in SLES15 SP4, but when I try to mount some directory via `ls -l /home/*user*`...
My problem is similar to that described in https://unix.stackexchange.com/q/375516/320598 : I wanted to mount individual users' home directories to /home/*user* using both, * wildcard and directory /etc/auto.master.d in SLES15 SP4, but when I try to mount some directory via ls -l /home/*user*, nothing happens (even when having activated automount debugging, I see no log messages related to my attempt to mount). I've created an /etc/auto.master.d/homes, containing /home /etc/auto.homes, and the latter file itself contains * -bg,rw,hard,intr,nfsvers=3 nfs.server.org:/exports/home/&. I can test-mount my test-user's home directory manually without a problem, however. I'm not quite sure I understood how to use /etc/auto.master.d correctly, so an answer explaining my error could also point me in the right direction.
U. Windl (1715 rep)
Feb 3, 2023, 09:35 AM • Last activity: Jun 5, 2025, 10:53 AM
2 votes
1 answers
2201 views
systemd: autofs containing autofs does not unmount
I'm trying to set up two directories, each automounted: * `/mnt/dir` * `/mnt/dir/subdir` In my case, these are: * `/mnt/btrfs-vol/rootfs` (read only) * `/mnt/btrfs-vol/rootfs/btrbk-snap` (RW for taking snapshots with [`btrbk`](https://github.com/digint/btrbk)) My `/etc/fstab` contains: LABEL=rootfs...
I'm trying to set up two directories, each automounted: * /mnt/dir * /mnt/dir/subdir In my case, these are: * /mnt/btrfs-vol/rootfs (read only) * /mnt/btrfs-vol/rootfs/btrbk-snap (RW for taking snapshots with [btrbk](https://github.com/digint/btrbk)) My /etc/fstab contains: LABEL=rootfs /mnt/btrfs-vol/rootfs btrfs ro,subvol=/,lazytime,compress=lzo,ssd,discard,noauto,x-systemd.automount,x-systemd.idle-timeout=2 LABEL=rootfs /mnt/btrfs-vol/rootfs/btrbk-snap btrfs rw,subvol=/btrbk-snap,lazytime,compress=lzo,ssd,discard,noauto,x-systemd.automount,x-systemd.idle-timeout=2,x-systemd.requires-mounts-for=/mnt/btrfs-vol/rootfs I do: svelte ~# systemctl daemon-reload && systemctl restart local-fs.target svelte ~# mount | grep btrfs-vol/rootfs systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct) Strangely, /mnt/btrfs-vol/rootfs, is already mounted. If I unmount /mnt/btrfs-vol/rootfs, it is immediately remounted: svelte ~# umount /mnt/btrfs-vol/rootfs svelte ~# mount | grep btrfs-vol/rootfs systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct) Now if I ping the subdirectory, it automounts: svelte ~# (cd /mnt/btrfs-vol/rootfs/btrbk-snap/ && mount | grep btrfs-vol/rootfs) systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct) /dev/mapper/vg_svelte-rootfs on /mnt/btrfs-vol/rootfs type btrfs (ro,relatime,lazytime,compress=lzo,ssd,discard,space_cache,subvolid=5,subvol=/) Note that the fstype of /dev/mapper/vg_svelte-rootfs has changed from autofs to btrfs. A few seconds later (I have timeout=2 for testing`): svelte ~# mount | grep btrfs-vol/rootfssystemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct) The subdirectory is unmounted, and the fstype of /dev/mapper/vg_svelte-rootfs reverts to autofs, *but it stays mounted*. **How do I get it to automatically unmount?** Possibly useful information: journal output: Feb 21 17:16:07 svelte systemd: Reloading. Feb 21 17:16:23 svelte systemd: Mounting /mnt/btrfs-vol/rootfs... Feb 21 17:16:23 svelte systemd: Set up automount mnt-btrfs\x2dvol-home-btrbk\x2dsnap.automount. Feb 21 17:16:23 svelte systemd: Mounted /mnt/btrfs-vol/rootfs. Feb 21 17:16:23 svelte systemd: mnt-btrfs\x2dvol-rootfs-btrbk\x2dsnap.automount: Directory /mnt/btrfs-vol/rootfs/btrbk-snap to mount over is not empty, mounting anyway. Feb 21 17:16:23 svelte systemd: Set up automount mnt-btrfs\x2dvol-rootfs-btrbk\x2dsnap.automount. Feb 21 17:16:23 svelte systemd: Reached target Local File Systems. Feb 21 17:16:25 svelte systemd: Stopped target Local File Systems. Feb 21 17:16:25 svelte systemd: Unset automount mnt-btrfs\x2dvol-rootfs-btrbk\x2dsnap.automount. Feb 21 17:16:25 svelte systemd: Unmounting /mnt/btrfs-vol/rootfs... Feb 21 17:16:25 svelte systemd: Unmounted /mnt/btrfs-vol/rootfs. Feb 21 17:17:44 svelte systemd: Unset automount mnt-btrfs\x2dvol-home-btrbk\x2dsnap.automount. Checking that nothing has the directory open: svelte ~# lsof /mnt/btrfs-vol/rootfs lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. svelte ~# ls -l /run/user/1000 | grep gvfs ls: cannot access '/run/user/1000/gvfs': Permission denied d????????? ? ? ? ? ? gvfs I've never seen ? where I'd expect the rwx placehoders to be before.
Tom Hale (32892 rep)
Feb 21, 2017, 10:04 AM • Last activity: May 30, 2025, 09:06 AM
2 votes
1 answers
3097 views
Problems with automounting NFS on RHEL8
I've run into some problems when configuring **autofs** on CentOS and I would really appreciate your help with it ^^ I have to VMs: 1. **CentOS Linux release 7.8.2003**, IP address: **10.110.163.10** Two NFS shares defined in */etc/exports*: /nfs-directory *(rw,no_root_squash) /additional-nfs *(rw,n...
I've run into some problems when configuring **autofs** on CentOS and I would really appreciate your help with it ^^ I have to VMs: 1. **CentOS Linux release 7.8.2003**, IP address: **10.110.163.10** Two NFS shares defined in */etc/exports*: /nfs-directory *(rw,no_root_squash) /additional-nfs *(rw,no_root_squash) 2. **Red Hat Enterprise Linux release 8.2** I'm trying to automount NFS from CentOS here showmount -e 10.110.163.10 gives me the following: Export list for 10.110.163.10: /additional-nfs * /nfs-directory * I've installed *autofs*, created the configuration file */etc/auto.master.d/nfs-directory.autofs* and defined the following: /nfs-directory /etc/nfs-mount.auto And in the file */etc/nfs-mount.auto* I defined: * -fstype=nfsv4,rw,sync 10.110.163.10:/nfs-directory/& I enabled and restarted autofs, it does create the */nfs-directory*, but it's empty, I can't see any files inside it... When I type mount 10.110.163.10:/nfs-directory /mnt, everything works fine, NFS mounts correctly and I can access file within the share, but I can't manage to do the same with the automounter.
Kioko Key (21 rep)
Nov 17, 2020, 10:20 AM • Last activity: May 25, 2025, 08:06 PM
5 votes
1 answers
2488 views
How to execute a script after every systemd automount?
I am trying to setup a system such that a script gets executed everytime any USB storage device is mounted (in this case, automounted by systemd). Based on a few references [here][1], [here][2] and [here][3], systemd allows for the execution of custom scripts after a specific device is mounted, but...
I am trying to setup a system such that a script gets executed everytime any USB storage device is mounted (in this case, automounted by systemd). Based on a few references here , here and here , systemd allows for the execution of custom scripts after a specific device is mounted, but these either: - Need a specific device or mountpoint. - Use udev, which triggers too early, and holds the mounting process. - Use audits or logs, which isn't very satisfying. Is there anyway to be *less* specific in systemd units, allowing for the use of ExecStart after any succesful (auto)mount?
John WH Smith (16408 rep)
Jul 14, 2017, 02:37 PM • Last activity: May 17, 2025, 04:05 AM
4 votes
2 answers
2429 views
unable to chmod inside shared folder of virtualbox
I have shared a folder from Windows on to my virtual machine. The shared folder is being mounted correctly, and I am able to read write within the folder, but unable to change permissions of any file within the shared folder. Below are the mount options of the shared folder myVM on /media/sf_myVM ty...
I have shared a folder from Windows on to my virtual machine. The shared folder is being mounted correctly, and I am able to read write within the folder, but unable to change permissions of any file within the shared folder. Below are the mount options of the shared folder myVM on /media/sf_myVM type vboxsf (rw,nodev,relatime,ttl=0,iocharset=utf8,uid=0,gid=999,dmode=0770,fmode=0770,tag=VBoxAutomounter) user is already part of vboxsf group uid=1000(vmuser) gid=1000(vmuser) groups=1000(vmuser),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),116(lpadmin),126(sambashare),999(vboxsf),1001(sftp) Below error is thrown when try to change permissions using chmod for any files inside shared folder. chmod: changing permissions of 'perm.txt': Operation not permitted
DarkKnight (193 rep)
Oct 27, 2019, 02:09 PM • Last activity: May 11, 2025, 02:02 PM
3 votes
2 answers
2215 views
NFS Share on Centos 7 Fails to automount
I have a fresh install of Centos 7. I cannot seem to auto mount an NFS share located on `192.168.254.105:/srv/nfsshare` from the Centos client. Mounting the share manually however, works perfectly. **/etc/auto.master** has been commented out completely to simplify the problem, save for the following...
I have a fresh install of Centos 7. I cannot seem to auto mount an NFS share located on 192.168.254.105:/srv/nfsshare from the Centos client. Mounting the share manually however, works perfectly. **/etc/auto.master** has been commented out completely to simplify the problem, save for the following line: /- /etc/auto.nfsshare **/etc/auto.nfsshare** holds the following line: /tests/nfsshare -fstype=nfs,credentials=/etc/credentials.txt 192.168.254.105:/srv/nfsshare **/etc/credentials.txt** holds: user=user password=password The expected behavior is that when I ls -l /tests/nfsshare, I will see a few files that my fileserver's **/srv/nfsshare** directory holds. It does not. Instead, it shows nothing. The logs from **sudo journalctl --unit=autofs.service** shows this when it starts (debug enabled): Nov 20 00:25:38 localhost.localdomain systemd: Starting Automounts filesystems on demand... Nov 20 00:25:38 localhost.localdomain automount: Starting automounter version 5.0.7-48.el7, master map auto.master Nov 20 00:25:38 localhost.localdomain automount: using kernel protocol version 5.02 Nov 20 00:25:38 localhost.localdomain automount: lookup_nss_read_master: reading master files auto.master Nov 20 00:25:38 localhost.localdomain automount: parse_init: parse(sun): init gathered global options: (null) Nov 20 00:25:38 localhost.localdomain automount: spawn_mount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount: spawn_umount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount: lookup_read_master: lookup(file): read entry /- Nov 20 00:25:38 localhost.localdomain automount: master_do_mount: mounting /- Nov 20 00:25:38 localhost.localdomain automount: automount_path_to_fifo: fifo name /run/autofs.fifo-- Nov 20 00:25:38 localhost.localdomain automount: lookup_nss_read_map: reading map file /etc/auto.nfsshare Nov 20 00:25:38 localhost.localdomain automount: parse_init: parse(sun): init gathered global options: (null) Nov 20 00:25:38 localhost.localdomain automount: spawn_mount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount: spawn_umount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount: mounted direct on /tests/nfsshare with timeout 300, freq 75 seconds Nov 20 00:25:38 localhost.localdomain automount: do_mount_autofs_direct: mounted trigger /tests/nfsshare Nov 20 00:25:38 localhost.localdomain automount: st_ready: st_ready(): state = 0 path /- Nov 20 00:25:38 localhost.localdomain systemd: Started Automounts filesystems on demand. The following appears in my logs when I attempt to force mounting of the nfs share via **ls -l /tests/nfsshare**: Nov 20 00:48:05 localhost.localdomain automount: handle_packet: type = 5 Nov 20 00:48:05 localhost.localdomain automount: handle_packet_missing_direct: token 21, name /tests/nfsshare, request pid 22057 Nov 20 00:48:05 localhost.localdomain automount: attempting to mount entry /tests/nfsshare Nov 20 00:48:05 localhost.localdomain automount: lookup_mount: lookup(file): looking up /tests/nfsshare Nov 20 00:48:05 localhost.localdomain automount: lookup_mount: lookup(file): /tests/nfsshare -> -fstype=nfs,credentials=/etc/credenti...fsshare Nov 20 00:48:05 localhost.localdomain automount: parse_mount: parse(sun): expanded entry: -fstype=nfs,credentials=/etc/credentials.tx...fsshare Nov 20 00:48:05 localhost.localdomain automount: parse_mount: parse(sun): gathered options: fstype=nfs,credentials=/etc/credentials.txt Nov 20 00:48:05 localhost.localdomain automount: [90B blob data] Nov 20 00:48:05 localhost.localdomain automount: dev_ioctl_send_fail: token = 21 Nov 20 00:48:05 localhost.localdomain automount: failed to mount /tests/nfsshare Nov 20 00:48:05 localhost.localdomain automount: handle_packet: type = 5 Nov 20 00:48:05 localhost.localdomain automount: handle_packet_missing_direct: token 22, name /tests/nfsshare, request pid 22057 Nov 20 00:48:05 localhost.localdomain automount: dev_ioctl_send_fail: token = 22 Additionally, **ls -l /tests/nfsshare** actually produces the error: ls: cannot access nfsshare/: No such file or directory How can I fix this issue? As stated before, manual mounting the share works fine. -------------------- EDIT: as requested, output of ls -la /etc/auto.nfsshare -rw-r--r--. 1 root root 99 Nov 20 00:25 /etc/auto.nfsshare
steelmonkey (53 rep)
Nov 19, 2015, 08:49 PM • Last activity: May 7, 2025, 02:10 AM
0 votes
1 answers
86 views
systemd nfs mount/automount unit with changing network environments
I have set up on my laptop a systemd mount and automount unit to mount a NFS share on demand. Naturally, this works as long as I am in the same network as the NFS server. If I leave my home network, any access to the mount fails (to be expected). Problem is, that the unit does not recover when I am...
I have set up on my laptop a systemd mount and automount unit to mount a NFS share on demand. Naturally, this works as long as I am in the same network as the NFS server. If I leave my home network, any access to the mount fails (to be expected). Problem is, that the unit does not recover when I am back in my home network. I.e., when I am back in my home network, I have to restart the unit explicitly. Is there a way to make a NFS mount/automount unit combination more resilient, that it can survice changing network conditions? One idea I had was to add a pre command as condition, that polls the NFS server - however, this would not allow for a recovery after the fact.
THX (327 rep)
Apr 12, 2025, 09:43 AM • Last activity: Apr 12, 2025, 11:33 AM
2 votes
0 answers
55 views
Change mount attribute, but how?
I'm on Automotive Grade Linux. My system has a partition `/dev/sda15` which is mounted to `/data`. I want to change the mount attribute "noexec" permanently, but 1. There is no directive for sda15 in `/etc/fstab`. 2. There is no corresponding systemd service in `/etc/systemd/system`. 3. GPT auto mou...
I'm on Automotive Grade Linux. My system has a partition /dev/sda15 which is mounted to /data. I want to change the mount attribute "noexec" permanently, but 1. There is no directive for sda15 in /etc/fstab. 2. There is no corresponding systemd service in /etc/systemd/system. 3. GPT auto mounter is off by kernel parameter systemd.gpt_auto=0. How can I find the subsystem which mounts the partition? Kernel console does not mention sda15, UUID, PARTUUID or PARTLABEL. Journald is not indicating anything here. Remounting manually works (mount -o remount,exec /data) but is obviously not persistent. Adding an explicit entry in fstab does not work, after reboot, sda15 is still mounted noexec.
Twonky (191 rep)
Apr 11, 2025, 08:45 AM • Last activity: Apr 11, 2025, 09:53 AM
Showing page 1 of 20 total questions