Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3
votes
1
answers
2682
views
Autofs fails to mount nfs-shared autofs bind mounted directory, but manual mount succeeds
Apologies for the long read below. Hopefully this will give enough context and traces so someone can help me.
# Context
I store user home directories on a zfs dataset (one per user), where each user has a different home directory per Linux distribution and version (e.g. Debian Buster), but where the XDG directories (e.g., Downloads or Desktop) of all these homes actually point to the same directory, so that the user has access to all their files regardless of the Linux distro run by the terminal they logged on to. I have the same kind of structure for binaries stored in home. See the example below (see below for meaning of numbers in square brackets):
/media/zfs/home/user1/
├── XDG-DIRS/
│   ├── Downloads
│   └── Videos
├── homes/
│   ├── debian_buster/
│   └── ubuntu_groovy/
├── usr/
│   ├── share/
│   └── include/
└── local/
    ├── linux-x86_64/
    │   ├── bin
    │   └── lib
    └── linux-armhf/
        ├── bin
        └── lib
Using specific mounts to the correct points in the structure above, one can reconstitute a home directory with all user settings files matching the desktop environment served by the distro and its version, while still accessing all personal files and even home-installed binaries compatible with the OS and architecture the user logged on to. For example, a user logging in to Debian Buster running on an x86_64 architecture should have the home directory structure below. A number in square brackets at the end of a directory name means the content of that directory should actually be the content of the directory with the same number at the beginning of its name in the structure above. I let you guess what the home directory of a user logged on to a Raspberry Pi running Ubuntu Groovy looks like.
/home/AD.EXAMPLE.COM/user1/
├── Downloads
├── Videos
└── .local/
    ├── bin
    └── lib
One can achieve this by running autofs on the terminal computer and using an executable map file that results in as many NFS mounts as necessary (5 per user in the example above, typically more because of additional XDG directories). The problem I anticipate with this is that if the user wants to move a file from "Downloads" to "Videos", then because these are two different NFS mount points, the file is actually downloaded to the terminal and then uploaded back to the server. I did not actually test whether that results in performance penalties. If you have any insight about this point, please let me know.
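To quantify that, one could time a cross-mount move against a same-filesystem rename. This is only a sketch of the measurement idea, using temporary directories as stand-ins for the real NFS mount points:

```shell
#!/bin/bash
# Sketch: mv(1) uses rename(2) within one filesystem, but falls back
# to copy+unlink when rename fails with EXDEV across mount points --
# which is what forces the download/upload round trip between two
# distinct NFS mounts. Timing both cases exposes the difference.
SRC=$(mktemp -d)   # stand-in for ~/Downloads
DST=$(mktemp -d)   # stand-in for ~/Videos
dd if=/dev/zero of="$SRC/testfile" bs=1M count=8 2>/dev/null
time mv "$SRC/testfile" "$DST/testfile"
```

On real NFS mounts, the cross-mount timing would grow with file size, while a same-mount rename stays near-instant.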
In order to limit the performance problem described above, I actually reconstitute home directories for each distro/version (on one side) and for each OS/architecture (on the other side) on the server using autofs, and then export the results through NFS. That means I build the following structure on the server using autofs bind mounts:
/media/user_data/unix/user1/
├── home/
│   ├── debian_buster/
│   │   ├── Downloads
│   │   ├── Videos
│   │   └── .local/
│   └── ubuntu_groovy/
│       ├── Downloads
│       ├── Videos
│       └── .local/
└── local/
    ├── linux-x86_64/
    │   ├── bin
    │   └── lib
    └── linux-armhf/
        ├── bin
        └── lib
/etc/auto.master.d/user_data.autofs
(server side)
/media/user_data/unix/ /etc/auto.AD.EXAMPLE.COM.unix --ghost --timeout=120
/etc/auto.AD.EXAMPLE.COM.unix
(server side, executable and read bits set for ugo)
#!/bin/bash
key=$1
echo '- /home -fstype=bind :/media/zfs/home/'$key'/unix/ browse \'
for i in $(ls /media/zfs/home/$key/unix)
do
    for j in $(ls /media/zfs/home/$key/XDG_DIRS)
    do
        echo ' /home/'$i'/'$j' -fstype=bind :/media/zfs/home/'$key'/XDG_DIRS/'$j' browse \'
    done
done
for i in $(ls /media/zfs/home/$key/usr)
do
    echo ' /local/'$i' -fstype=bind :/media/zfs/home/'$key'/local/ browse \'
    for j in $(ls /media/zfs/home/$key/usr/$i)
    do
        echo ' /local/'$i'/'$j' -fstype=bind :/media/zfs/home/'$key'/usr/'$i'/'$j' browse \'
    done
done
echo ''
Here is a sample output of the script shown above:
root@server:~# /etc/auto.AD.EXAMPLE.COM.unix user1
- /home -fstype=bind :/media/zfs/home/user1/unix/ browse \
/home/debian_buster/Downloads -fstype=bind :/media/zfs/home/user1/XDG_DIRS/Downloads browse \
/home/debian_buster/Videos -fstype=bind :/media/zfs/home/user1/XDG_DIRS/Videos browse \
/local/linux-armhf -fstype=bind :/media/zfs/home/user1/local/ browse \
/local/linux-armhf/bin -fstype=bind :/media/zfs/home/user1/usr/linux-armhf/bin browse \
/local/linux-armhf/lib -fstype=bind :/media/zfs/home/user1/usr/linux-armhf/lib browse \
/local/linux-armhf/sbin -fstype=bind :/media/zfs/home/user1/usr/linux-armhf/sbin browse \
/local/linux-x86_64 -fstype=bind :/media/zfs/home/user1/local/ browse \
/local/linux-x86_64/bin -fstype=bind :/media/zfs/home/user1/usr/linux-x86_64/bin browse \
/local/linux-x86_64/lib -fstype=bind :/media/zfs/home/user1/usr/linux-x86_64/lib browse \
root@server:~#
This works flawlessly so far, although a tad slow for my taste. I export /media/user_data/unix through NFS, using the export file below:
#
/media/user_data *(sec=krb5p,rw,crossmnt)
#
At this point, it is worth mentioning that one reason why the "user_data" step exists in this file hierarchy is that if I export /media/user_data/unix in /etc/exports (or equivalently /media/unix, although not demonstrated below), then I get the warnings below. This is not encouraging, but I try anyway with this extra "user_data" layer in the hierarchy, hoping that the crossmnt option will manage to also export what is mounted inside the hierarchy being exported. The system does not seem to complain about this attempt.
root@server:~# cat /etc/exports
# Other exports of unrelated directories
/media/user_data *(sec=krb5p,rw,crossmnt)
/media/user_data/unix *(sec=krb5p,rw,crossmnt)
# More unrelated exports
root@server:~# exportfs -ra; zfs share -a
exportfs: /etc/exports : Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/media/user_data".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /media/user_data/unix does not support NFS export
root@server:~# showmount -e
Export list for server:
/media/user_data *
/media/user_data/unix *
#
root@server:~#
In the following text, I removed the export of /media/user_data/unix from /etc/exports and ran exportfs -ra and zfs share -a, so that the NFS server does not complain about any directory that does not support NFS exports.
Finally, autofs on the terminal computer only needs to mount the home directory corresponding to the distro and version it runs, as well as the local subdirectory corresponding to the OS and architecture, resulting in the following hierarchy for user1 on Debian Buster x86_64.
/home/AD.EXAMPLE.COM/user1/
├── Downloads
├── Videos
└── .local
I attempt to achieve this with the autofs configuration below:
/etc/auto.master.d/home.autofs
(terminal side, Linux Debian Buster, x86_64)
/media/AD.EXAMPLE.COM /etc/auto.AD.EXAMPLE.COM.home --timeout=120
/etc/auto.AD.EXAMPLE.COM.home
(terminal side, Linux Debian Buster, x86_64, executable and read bits set for ugo)
#!/bin/bash
key=$1
distributor()
{
    lsb_release -i | cut -f 2 -d : | xargs echo | tr '[:upper:]' '[:lower:]'
}
codename()
{
    lsb_release -c | cut -f 2 -d : | xargs echo | tr '[:upper:]' '[:lower:]'
}
architecture()
{
    uname -m | tr '[:upper:]' '[:lower:]'
}
os()
{
    uname -s | tr '[:upper:]' '[:lower:]'
}
echo '- / -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/unix/'$key'/home/'$(distributor)'_'$(codename)' \'
echo ' /.local -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/local/'$key'/local/'$(os)'-'$(architecture)' \'
echo ''
Here is a sample output of the script above:
root@terminal:~$ /etc/auto.AD.EXAMPLE.COM.exp user1
- / -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/unix/user1/home/debian_buster \
/.local -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/local/user1/local/linux-x86_64 \
root@terminal:~$
# The problem
Autofs on the terminal computer fails to mount the reconstituted home directory exported from the server. This is the trace I get from automount when trying to list the content of /home/AD.EXAMPLE.COM/user1:
root@terminal:~# automount -d -f -v
...
get_nfs_info: called with host server.example.com(192.168.80.101) proto 17 version 0x30
get_nfs_info: nfs v3 rpc ping time: 0.000000
get_nfs_info: host server.example.com cost 0 weight 0
prune_host_list: selected subset of hosts that support NFS3 over TCP
mount_mount: mount(nfs): calling mkdir_path /media/AD.EXAMPLE.COM/user1
mount_mount: mount(nfs): calling mount -t nfs -s -o vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/unix/user1/home/debian_buster /media/AD.EXAMPLE.COM/user1
>> mount.nfs: mounting server.example.com:/media/user_data/unix/user1/home/debian_buster failed, reason given by server: No such file or directory
mount(nfs): nfs: mount failure server.example.com:/media/user_data/unix/user1/home/debian_buster on /media/AD.EXAMPLE.COM/user1
do_mount_autofs_offset: mount offset /media/AD.EXAMPLE.COM/user1/.local at /media/AD.EXAMPLE.COM/user1
mount_autofs_offset: calling mount -t autofs -s -o fd=16,pgrp=20379,minproto=5,maxproto=5,offset automount /media/AD.EXAMPLE.COM/user1/.local
mounted offset on /media/AD.EXAMPLE.COM/user1/.local with timeout 120, freq 30 seconds
mount_autofs_offset: mounted trigger /media/AD.EXAMPLE.COM/user1/.local at /media/AD.EXAMPLE.COM/user1/.local
dev_ioctl_send_ready: token = 114
mounted /media/AD.EXAMPLE.COM/user1
And the listed content of /home/AD.EXAMPLE.COM/user1 is nothing:
root@terminal:~$ ls /home/AD.EXAMPLE.COM/user1
root@terminal:~$
Although the supposedly mounted directory from the server is full of files:
root@server:~# ls /media/user_data/unix/user1/home/debian_buster
file1 file2 file3
root@server:~#
The automount trace above hints that the directory it attempted to mount does not exist on the server, but this is strange: first, trying to list that exact directory on the server does show that it exists (see above), and second, I can mount this directory manually from the terminal anyway, as the trace below shows:
root@terminal:~$ mount -vvvv -t nfs server.example.com:/media/user_data/unix/user1/home/debian_buster /mnt
mount.nfs: timeout set for Sat Feb 13 22:37:06 2021
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.80.101,clientaddr=192.168.104.1'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.80.101'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.80.101 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.80.101 prog 100005 vers 3 prot UDP port 39874
root@terminal:~$ ls /mnt
file1 file2 file3
root@terminal:~$
# My attempt at a solution
I thought that autofs on the terminal may not find the directory to mount because it is not mounted (yet) on the server, therefore I attempted to use the --ghost and browse options (you can see them in the server's /etc/auto.master.d/user_data.autofs and /etc/auto.AD.EXAMPLE.COM.unix files shown above), but to no avail. I am running out of ideas to explore to find a permanent solution.
# Temporary workaround
The temporary workaround I use at the moment is to not use autofs on the server side, but instead manually bind mount all the directories to obtain the correct file hierarchy to export. I am not too satisfied with this solution, as it requires a lot of mounts to be permanently active, and it seems to leave the server in a somewhat unstable state, though I don't know why exactly.
# Remarks
* Both the server and the terminal run Debian Buster (Linux x86_64) in the tests that produced the traces above.
* NFS complaining that the autofs-reconstituted directory does not support NFS export hints that I should not try to export it at all by sneakily exporting its parent directory instead. I could not find any reference stating that it is not possible to NFS-export a directory having an autofs-mounted subdirectory, so it is still worth a try. Furthermore, it works fine when I mount --bind these subdirectories manually on the server side instead of using autofs, so there should be some hope.
* This is a rather complex (and fragile?) setup; if you have a simpler (and more robust) suggestion to achieve the same functionality, I am also interested :)
Nykau
(31 rep)
Feb 14, 2021, 10:29 AM
• Last activity: Jul 23, 2025, 06:05 AM
0
votes
1
answers
2429
views
Why are directories under /mnt not visible when mounting filesystem with NFS?
I set up an NFS share on my NFS server with /etc/exports containing:

    / *(rw,no_root_squash,no_subtree_check)

Then I do exportfs -a to activate the share and restart the NFS server.
I mount the share with autofs on the client machine, with /etc/auto.nfs containing:

    foo -fstype=nfs4,soft,rw,noatime,allow_other server.tld:/

My auto.master contains:

    /mnt/nfs /etc/auto.nfs --timeout=30 --ghost

I restart autofs (systemctl restart autofs.service).
Then I see all directories from the server. But when I try to navigate to the mounted server disks under /mnt/mounteddiskonserver, I can't see anything anymore: no files, no directories, no write permission through the nemo file browser on the client machine.
I can go to /home/user on the server and see and delete all my files on the server that have the same permissions as /mnt/mounteddiskonserver/fileshere.
When I set up the NFS server to share /mnt/mounteddiskonserver specifically, with /etc/exports containing:

    /mnt/mounteddiskonserver *(rw,no_root_squash,no_subtree_check)

I can see all files and directories under /mnt/mounteddiskonserver, and I can read and write.
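For what it's worth, this symptom (top-level directories visible, nested mounts empty) is usually about whether the export crosses mount boundaries: per exports(5), each filesystem mounted below an exported tree needs either its own export entry or the crossmnt option on the parent. A sketch of what that could look like here (crossmnt is the addition; the path and other options are from the question):

```
/ *(rw,no_root_squash,no_subtree_check,crossmnt)
```

With crossmnt, the server exports child filesystems such as /mnt/mounteddiskonserver automatically, which would match the observation that exporting that path explicitly also works.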
Michiel Bruijn
(1 rep)
Feb 13, 2022, 05:01 PM
• Last activity: Jul 12, 2025, 07:06 PM
14
votes
2
answers
13781
views
How to deal with freezes caused by autofs after network disconnect
I mount four servers (3 via cifs, 1 via sshfs) using autofs.
auto.master:

    /- /etc/auto.all --timeout=60 --ghost

auto.all:

    /mnt \
        /server1 -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server1/ \
        /server2/ -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server2/ \
        /server3 -fstype=cifs,rw,credentials=/etc/.smbcredentials.txt,uid=1000,file_mode=0775,dir_mode=0775,users ://server3/ \
        /server4 -fstype=fuse,rw,allow_other,uid=1000,users,reconnect,cache=yes,kernel_cache,compression=no,large_read,Ciphers=arcfour :sshfs\#user@server\:/home
Everything is fine when I make a clean boot.
I connect to my network (using a VPN) and autofs mounts everything.
# Problem
When there is a network disconnect, e.g. when I hibernate my laptop or connect to a different network, autofs causes my file explorer (Dolphin) to freeze because it tries to load the remote share indefinitely.
It becomes unresponsive and does not even react to SIGTERM.
Sometimes I am lucky, and calling sudo service autofs stop and sudo automount helps to resolve the issue.
However, often it still stays frozen.
Sometimes my whole dock even freezes because of this, making all applications unselectable. Then I have to do a full reboot.
I've searched for weeks now for a solution for how to deal with autofs in such situations. Before using autofs, I had everything mounted via /etc/fstab, but that also required a manual remount after every network interruption.
I thought autofs would help me here, but it causes me even more trouble.
# Questions
1. Is there any point I overlooked that could solve the freezing problem?
2. Is there a completely different approach that may be better suited to my situation than autofs?
PS: I'm on Kubuntu 16.04
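Not from the question, but one mitigation often suggested for this class of freeze is detaching the CIFS mounts before the network goes away, e.g. from a systemd sleep hook. A sketch, assuming the /usr/lib/systemd/system-sleep mechanism; the hook filename is hypothetical:

```shell
#!/bin/sh
# Sketch of /usr/lib/systemd/system-sleep/10-umount-cifs.sh
# (hypothetical filename). systemd invokes these scripts with
# $1 = pre|post around suspend/hibernate.
cifs_sleep_hook() {
    if [ "$1" = "pre" ]; then
        # Lazily detach all CIFS mounts so no process can block on a
        # server that will be unreachable after wakeup; autofs then
        # remounts them on next access.
        umount -a -l -t cifs 2>/dev/null || true
    fi
}
cifs_sleep_hook "${1:-}"
```

The lazy unmount (-l) detaches the mount immediately even if files are open, which is exactly what keeps Dolphin from blocking on a vanished server.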
pat-s
(348 rep)
Jan 9, 2018, 11:27 PM
• Last activity: Jul 12, 2025, 11:09 AM
0
votes
1
answers
108
views
Unable to install autofs on rhel9
When I try sudo dnf install autofs, I get this error:
[MIRROR] autofs-5.1.7-58.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/a/autofs-5.1.7-58.el9.x86_64.rpm (IP: 184.51.36.251)
[MIRROR] libsss_autofs-2.8.2-2.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/l/libsss_autofs-2.8.2-2.el9.x86_64.rpm (IP: 184.51.36.251)
[MIRROR] autofs-5.1.7-58.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/a/autofs-5.1.7-58.el9.x86_64.rpm (IP: 184.51.36.251)
[MIRROR] libsss_autofs-2.8.2-2.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/l/libsss_autofs-2.8.2-2.el9.x86_64.rpm (IP: 184.51.36.251)
[FAILED] libsss_autofs-2.8.2-2.el9.x86_64.rpm: No more mirrors to try - All mirrors were already tried without success
(2/2): autofs-5.1.7- 9% [=- ] 484 kB/s | 42 kB 00:00 ETA
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: Error downloading packages:
libsss_autofs-2.8.2-2.el9.x86_64: Cannot download, all mirrors were already tried without success
I attempted to unregister, register, and then refresh my subscription, but after that, attempting to install autofs just gives me this error, which is arguably worse:
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:54 ago on Fri 31 May 2024 02:02:08 PM CDT.
No match for argument: autofs
Error: Unable to find a match: autofs
I also tried:
wget http://mirror.centos.org/centos/9-stream/AppStream/x86_64/os/Packages/autofs-5.1.6-1.el9.x86_64.rpm
which gave me:
--2024-05-31 14:15:30-- http://mirror.centos.org/centos/9-stream/AppStream/x86_64/os/Packages/autofs-5.1.6-1.el9.x86_64.rpm
Resolving mirror.centos.org (mirror.centos.org)... 64.150.179.24, 2607:ff28:8005:5:225:90ff:fea8:cd64
Connecting to mirror.centos.org (mirror.centos.org)|64.150.179.24|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2024-05-31 14:15:31 ERROR 404: Not Found.
Ironically, I successfully installed autofs yesterday on another one of my Linux servers running under the same subscription, but installing does not work on this node today.
paul runner
(11 rep)
May 31, 2024, 08:24 PM
• Last activity: Jun 19, 2025, 04:30 PM
1
votes
1
answers
3265
views
AutoFS - Bash script to check if mount is ALIVE before proceeding or exiting
This is specific to autofs mounts.
I've found numerous ways to check a traditional mount; some use the /proc/mounts file. I can see my mount is still in that file even when the mount is not currently accessible, i.e. it _was_ accessible, but now the device is off or asleep.
These are just some of the methods I tried, which all seem to work for my fstab mounts, but not for my autofs mounts - they simply can't see that the autofs mount is not currently available. Commands like mount or findmnt seem to hang, and I kill them with CTRL+C.
https://unix.stackexchange.com/questions/38870/how-to-check-if-a-filesystem-is-mounted-with-a-script/39110
https://stackoverflow.com/questions/9422461/check-if-directory-mounted-with-bash
https://serverfault.com/questions/50585/whats-the-best-way-to-check-if-a-volume-is-mounted-in-a-bash-script
The ultimate goal is to script a check to see if the mount is alive before continuing the script or exiting as appropriate.
This is an example from one of the URLs that always thinks it's available, because the mount is listed in /proc/mounts even after the device is turned off:
if grep -qs '/mnt/Backups' /proc/mounts; then
echo "Destination reachable. Continuing..."
else
echo "Destination unreachable. Exiting."
exit 1
fi
echo "test done"
A different (better?) example in the URLs used findmnt, but this doesn't completely work either, e.g.:
if findmnt /mnt/Backups; then
echo "Destination reachable. Continuing..."
else
echo "Destination unreachable. Exiting."
exit 1
fi
echo "test done"
Findmnt hangs [UPDATE: I just did some more testing, and this didn't hang this time around - instead it reported as the grep method did: it thought the mount was still alive after I put the server to sleep. But earlier today, it froze]:
- If the share _was_ mounted, but isn't now (e.g. the server shuts down sometime after my desktop had a connection to it).
Works:
- If the share is mounted - findmnt has no problems (finds the mount).
- If the share hasn't been mounted since I booted my desktop (it correctly can't find the mount).
I found a RedHat reference. Maybe I need to define a more reliable SOURCE? I'm stuck.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/s2-sysinfo-filesystems-findmnt
Thanks.
UPDATE:
I've also tried just checking for a directory:
https://stackoverflow.com/questions/59838/check-if-a-directory-exists-in-a-shell-script
I looked for a directory I knew did not exist on my desktop (smeghead) and it responded as it should - "Destination unreachable. Exiting".
if [ -d "/home/Derek/Desktop/smeghead/" ]; then
echo "Destination reachable. Continuing..."
else
echo "Destination unreachable. Exiting."
exit 1
fi
But if I change the path to /mnt/Backups (when Backups disappears thanks to autofs), then the command freezes and I have to CTRL+C.
So it seems the problem is _something_ about autofs and/or the /mnt location that screws it up?
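One way to sidestep the hang itself, sketched under the assumption that GNU coreutils timeout(1) is available: wrap the probe so a dead autofs mount can only stall the script for a bounded time.

```shell
#!/bin/sh
# A dead NFS/CIFS mount makes ls/stat block inside the kernel, which
# is why the mount/findmnt checks appear to freeze. Bounding the
# probe with timeout(1) turns "hangs forever" into "fails fast".
mount_alive() {
    # $1 = mount point; succeeds only if it can be listed within 5s
    timeout 5 ls "$1" >/dev/null 2>&1
}

if mount_alive /tmp; then   # /tmp here stands in for /mnt/Backups
    echo "Destination reachable. Continuing..."
else
    echo "Destination unreachable. Exiting."
fi
```

A side effect worth noting: probing the path also triggers the automount, so on a healthy network this check mounts the share if it wasn't mounted yet.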
EldestBizarre
(35 rep)
Dec 30, 2019, 11:18 PM
• Last activity: Jun 16, 2025, 12:08 PM
0
votes
1
answers
2639
views
Using /etc/auto.master.d together with wildcard `*` to mount `/home`
My problem is similar to that described in https://unix.stackexchange.com/q/375516/320598 :
I wanted to mount individual users' home directories to /home/*user* using both the * wildcard and the directory /etc/auto.master.d in SLES15 SP4, but when I try to mount some directory via ls -l /home/*user*, nothing happens (even with automount debugging activated, I see no log messages related to my mount attempt).
I've created an /etc/auto.master.d/homes, containing /home /etc/auto.homes, and the latter file itself contains * -bg,rw,hard,intr,nfsvers=3 nfs.server.org:/exports/home/&.
I can test-mount my test user's home directory manually without a problem, however.
I'm not quite sure I understood how to use /etc/auto.master.d correctly, so an answer explaining my error could also point me in the right direction.
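One detail that may matter here, per auto.master(5): automount only reads files in /etc/auto.master.d whose names end in .autofs, so a drop-in simply named homes would be ignored without any log message. A sketch of the layout that should be picked up (contents copied from the question, only the filename differs):

```
# /etc/auto.master.d/homes.autofs   <- note the .autofs suffix
/home /etc/auto.homes

# /etc/auto.homes
* -bg,rw,hard,intr,nfsvers=3 nfs.server.org:/exports/home/&
```

After renaming, a systemctl restart autofs (or service autofs restart) should make the wildcard map active under /home.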
U. Windl
(1715 rep)
Feb 3, 2023, 09:35 AM
• Last activity: Jun 5, 2025, 10:53 AM
2
votes
1
answers
2201
views
systemd: autofs containing autofs does not unmount
I'm trying to set up two directories, each automounted:
* /mnt/dir
* /mnt/dir/subdir
In my case, these are:
* /mnt/btrfs-vol/rootfs (read only)
* /mnt/btrfs-vol/rootfs/btrbk-snap (RW, for taking snapshots with [btrbk](https://github.com/digint/btrbk))
My /etc/fstab contains:
LABEL=rootfs /mnt/btrfs-vol/rootfs btrfs ro,subvol=/,lazytime,compress=lzo,ssd,discard,noauto,x-systemd.automount,x-systemd.idle-timeout=2
LABEL=rootfs /mnt/btrfs-vol/rootfs/btrbk-snap btrfs rw,subvol=/btrbk-snap,lazytime,compress=lzo,ssd,discard,noauto,x-systemd.automount,x-systemd.idle-timeout=2,x-systemd.requires-mounts-for=/mnt/btrfs-vol/rootfs
I do:
svelte ~# systemctl daemon-reload && systemctl restart local-fs.target
svelte ~# mount | grep btrfs-vol/rootfs
systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct)
Strangely, /mnt/btrfs-vol/rootfs is already mounted.
If I unmount /mnt/btrfs-vol/rootfs, it is immediately remounted:
svelte ~# umount /mnt/btrfs-vol/rootfs
svelte ~# mount | grep btrfs-vol/rootfs
systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct)
Now if I ping the subdirectory, it automounts:
svelte ~# (cd /mnt/btrfs-vol/rootfs/btrbk-snap/ && mount | grep btrfs-vol/rootfs)
systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct)
/dev/mapper/vg_svelte-rootfs on /mnt/btrfs-vol/rootfs type btrfs (ro,relatime,lazytime,compress=lzo,ssd,discard,space_cache,subvolid=5,subvol=/)
Note that the fstype of /dev/mapper/vg_svelte-rootfs has changed from autofs to btrfs.
A few seconds later (I have timeout=2 for testing):
svelte ~# mount | grep btrfs-vol/rootfs
systemd-1 on /mnt/btrfs-vol/rootfs type autofs (rw,relatime,fd=32,pgrp=1,timeout=2,minproto=5,maxproto=5,direct)
The subdirectory is unmounted, and the fstype of /dev/mapper/vg_svelte-rootfs reverts to autofs, *but it stays mounted*.
**How do I get it to automatically unmount?**
Possibly useful information:
journal output:
Feb 21 17:16:07 svelte systemd: Reloading.
Feb 21 17:16:23 svelte systemd: Mounting /mnt/btrfs-vol/rootfs...
Feb 21 17:16:23 svelte systemd: Set up automount mnt-btrfs\x2dvol-home-btrbk\x2dsnap.automount.
Feb 21 17:16:23 svelte systemd: Mounted /mnt/btrfs-vol/rootfs.
Feb 21 17:16:23 svelte systemd: mnt-btrfs\x2dvol-rootfs-btrbk\x2dsnap.automount: Directory /mnt/btrfs-vol/rootfs/btrbk-snap to mount over is not empty, mounting anyway.
Feb 21 17:16:23 svelte systemd: Set up automount mnt-btrfs\x2dvol-rootfs-btrbk\x2dsnap.automount.
Feb 21 17:16:23 svelte systemd: Reached target Local File Systems.
Feb 21 17:16:25 svelte systemd: Stopped target Local File Systems.
Feb 21 17:16:25 svelte systemd: Unset automount mnt-btrfs\x2dvol-rootfs-btrbk\x2dsnap.automount.
Feb 21 17:16:25 svelte systemd: Unmounting /mnt/btrfs-vol/rootfs...
Feb 21 17:16:25 svelte systemd: Unmounted /mnt/btrfs-vol/rootfs.
Feb 21 17:17:44 svelte systemd: Unset automount mnt-btrfs\x2dvol-home-btrbk\x2dsnap.automount.
Checking that nothing has the directory open:
svelte ~# lsof /mnt/btrfs-vol/rootfs
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
svelte ~# ls -l /run/user/1000 | grep gvfs
ls: cannot access '/run/user/1000/gvfs': Permission denied
d????????? ? ? ? ? ? gvfs
I've never seen ? where I'd expect the rwx placeholders to be before.
Tom Hale
(32892 rep)
Feb 21, 2017, 10:04 AM
• Last activity: May 30, 2025, 09:06 AM
2
votes
1
answers
3097
views
Problems with automounting NFS on RHEL8
I've run into some problems when configuring **autofs** on CentOS and I would really appreciate your help with it ^^
I have two VMs:
1. **CentOS Linux release 7.8.2003**, IP address: **10.110.163.10**
Two NFS shares defined in */etc/exports*:
/nfs-directory *(rw,no_root_squash)
/additional-nfs *(rw,no_root_squash)
2. **Red Hat Enterprise Linux release 8.2**
I'm trying to automount the NFS shares from CentOS here.
showmount -e 10.110.163.10
gives me the following:
Export list for 10.110.163.10:
/additional-nfs *
/nfs-directory *
I've installed *autofs*, created the configuration file */etc/auto.master.d/nfs-directory.autofs* and defined the following:
/nfs-directory /etc/nfs-mount.auto
And in the file */etc/nfs-mount.auto* I defined:
* -fstype=nfsv4,rw,sync 10.110.163.10:/nfs-directory/&
I enabled and restarted autofs; it does create */nfs-directory*, but it's empty - I can't see any files inside it...
When I type mount 10.110.163.10:/nfs-directory /mnt, everything works fine: NFS mounts correctly and I can access files within the share. But I can't manage to do the same with the automounter.
Kioko Key
(21 rep)
Nov 17, 2020, 10:20 AM
• Last activity: May 25, 2025, 08:06 PM
3
votes
1
answers
10956
views
autofs shares not updated after reload
What's the best way to make autofs aware of any changes to its map files (e.g. changes in auto.home below) without the need to stop the service on RHEL 6.7?
According to the autofs man page:
> If a map is modified then the change will become effective immediately. If the auto.master map is modified then the autofs script must be rerun to activate the changes.
However, if I change my auto.home, the changes are not automatically seen by autofs. In addition, if I then run service autofs reload, the changes are still not seen. The changes become effective only after I run service autofs restart. However, this would require all the users to stop working on any NFS-shared folders until the restart of the service completes.
Shouldn't the changes take effect automatically, or at least after I run service autofs reload? What am I doing wrong here?
See below for the configuration I use:
----------
I have the following simple configuration on two RedHat Linux 6.7 machines, one is acting as the NFS server and the second as the client.
NFS Server:
$ cat /etc/exports
/home/user1/NFS-test *(rw,sync)
/home/user2/NFS-test *(rw,sync)
NFS Client:
$ cat /etc/auto.master
/misc /etc/auto.misc
/net -hosts
/- /etc/auto.home --temeout=300
+auto.master
$ cat /etc/auto.home
/home/user1/NFS-test -ro,soft,intr server:/home/user1/NFS-test
/home/user2/NFS-test -ro,soft,intr server:/home/user2/NFS-test
This works fine, and both users (user1 and user2) are able to see their own NFS-test directory under their home folder on the client machine.
Now the second line is removed from auto.home, such that:
$ cat /etc/auto.home
/home/user1/NFS-test -ro,soft,intr server:/home/user1/NFS-test
Then I run service autofs reload
in order to update the shares. However, the change in auto.home
is not seen and /home/user2/NFS-test
continues to be accessible from the client machine.
If on the other hand I run service autofs restart
then the mapping is correctly updated /home/user2/NFS-test
is not visible on the client.
I would like to be able to refresh the NFS shares in response to changes in auto.home
without needing to stop autofs, so as to avoid asking all the users to log out. Is this possible with reload
? Is there another way of doing this?
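For what it's worth, on RHEL 6 the autofs init script's reload action essentially signals the running daemon; a rough manual equivalent (a sketch, assuming a single automount process) is:

```
# roughly what "service autofs reload" does under the hood (sketch):
# send SIGHUP to the automount daemon so it re-reads its maps
kill -HUP "$(pidof automount)"
```

If even this has no effect on the file map, it suggests the daemon is caching entries for mounted paths rather than failing to re-read the file.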
----------
**UPDATE**
Since my setup is relatively small (1 server and 3 clients) and with only two folders being exported (one read-only and one read-write), I decided to drop the use of autofs
and use directly NFS by editing /etc/fstab
on each client. For such a small setup I hope that there won't be any noticeable difference in performance compared to autofs
. If someone thinks otherwise, please let me know.
In case someone is interested, here is the setup I went for:
The server exports the following folders:
- /export
: where all the software will be located (read-only)
- /home/shared_homes
: here each user has a folder which is exported to all clients and which is automatically linked into their home directory. For example, the user bob101
will have a folder /home/shared_homes/bob101
which will be linked to /home/bob101/mySharedWorkspace
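In case it helps a future reader, the fstab-based replacement described above might look like this on each client (a sketch; "server" is a placeholder hostname):

```
# /etc/fstab on a client (sketch)
server:/export             /export             nfs  ro,soft  0 0
server:/home/shared_homes  /home/shared_homes  nfs  rw,soft  0 0
```

with /home/shared_homes/bob101 then symlinked to /home/bob101/mySharedWorkspace as described.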
Zots
(31 rep)
Aug 25, 2015, 10:56 PM
• Last activity: May 16, 2025, 03:02 AM
3
votes
2
answers
2215
views
NFS Share on Centos 7 Fails to automount
I have a fresh install of Centos 7. I cannot seem to auto mount an NFS share located on `192.168.254.105:/srv/nfsshare` from the Centos client. Mounting the share manually however, works perfectly. **/etc/auto.master** has been commented out completely to simplify the problem, save for the following...
I have a fresh install of Centos 7. I cannot seem to auto mount an NFS share located on
192.168.254.105:/srv/nfsshare
from the Centos client.
Mounting the share manually, however, works perfectly.
**/etc/auto.master** has been commented out completely to simplify the problem, save for the following line:
/- /etc/auto.nfsshare
**/etc/auto.nfsshare** holds the following line:
/tests/nfsshare -fstype=nfs,credentials=/etc/credentials.txt 192.168.254.105:/srv/nfsshare
**/etc/credentials.txt** holds:
user=user
password=password
The expected behavior is that when I ls -l /tests/nfsshare
, I will see a few files that my fileserver's **/srv/nfsshare** directory holds.
It does not. Instead, it shows nothing.
The logs from **sudo journalctl --unit=autofs.service** shows this when it starts (debug enabled):
Nov 20 00:25:38 localhost.localdomain systemd: Starting Automounts filesystems on demand...
Nov 20 00:25:38 localhost.localdomain automount: Starting automounter version 5.0.7-48.el7, master map auto.master
Nov 20 00:25:38 localhost.localdomain automount: using kernel protocol version 5.02
Nov 20 00:25:38 localhost.localdomain automount: lookup_nss_read_master: reading master files auto.master
Nov 20 00:25:38 localhost.localdomain automount: parse_init: parse(sun): init gathered global options: (null)
Nov 20 00:25:38 localhost.localdomain automount: spawn_mount: mtab link detected, passing -n to mount
Nov 20 00:25:38 localhost.localdomain automount: spawn_umount: mtab link detected, passing -n to mount
Nov 20 00:25:38 localhost.localdomain automount: lookup_read_master: lookup(file): read entry /-
Nov 20 00:25:38 localhost.localdomain automount: master_do_mount: mounting /-
Nov 20 00:25:38 localhost.localdomain automount: automount_path_to_fifo: fifo name /run/autofs.fifo--
Nov 20 00:25:38 localhost.localdomain automount: lookup_nss_read_map: reading map file /etc/auto.nfsshare
Nov 20 00:25:38 localhost.localdomain automount: parse_init: parse(sun): init gathered global options: (null)
Nov 20 00:25:38 localhost.localdomain automount: spawn_mount: mtab link detected, passing -n to mount
Nov 20 00:25:38 localhost.localdomain automount: spawn_umount: mtab link detected, passing -n to mount
Nov 20 00:25:38 localhost.localdomain automount: mounted direct on /tests/nfsshare with timeout 300, freq 75 seconds
Nov 20 00:25:38 localhost.localdomain automount: do_mount_autofs_direct: mounted trigger /tests/nfsshare
Nov 20 00:25:38 localhost.localdomain automount: st_ready: st_ready(): state = 0 path /-
Nov 20 00:25:38 localhost.localdomain systemd: Started Automounts filesystems on demand.
The following appears in my logs when I attempt to force mounting of the nfs share via **ls -l /tests/nfsshare**:
Nov 20 00:48:05 localhost.localdomain automount: handle_packet: type = 5
Nov 20 00:48:05 localhost.localdomain automount: handle_packet_missing_direct: token 21, name /tests/nfsshare, request pid 22057
Nov 20 00:48:05 localhost.localdomain automount: attempting to mount entry /tests/nfsshare
Nov 20 00:48:05 localhost.localdomain automount: lookup_mount: lookup(file): looking up /tests/nfsshare
Nov 20 00:48:05 localhost.localdomain automount: lookup_mount: lookup(file): /tests/nfsshare -> -fstype=nfs,credentials=/etc/credenti...fsshare
Nov 20 00:48:05 localhost.localdomain automount: parse_mount: parse(sun): expanded entry: -fstype=nfs,credentials=/etc/credentials.tx...fsshare
Nov 20 00:48:05 localhost.localdomain automount: parse_mount: parse(sun): gathered options: fstype=nfs,credentials=/etc/credentials.txt
Nov 20 00:48:05 localhost.localdomain automount: [90B blob data]
Nov 20 00:48:05 localhost.localdomain automount: dev_ioctl_send_fail: token = 21
Nov 20 00:48:05 localhost.localdomain automount: failed to mount /tests/nfsshare
Nov 20 00:48:05 localhost.localdomain automount: handle_packet: type = 5
Nov 20 00:48:05 localhost.localdomain automount: handle_packet_missing_direct: token 22, name /tests/nfsshare, request pid 22057
Nov 20 00:48:05 localhost.localdomain automount: dev_ioctl_send_fail: token = 22
Additionally, **ls -l /tests/nfsshare** actually produces the error:
ls: cannot access nfsshare/: No such file or directory
How can I fix this issue? As stated before, manually mounting the share works fine.
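For contrast, the manual mount that succeeds is essentially (a sketch):

```
sudo mkdir -p /tests/nfsshare
sudo mount -t nfs 192.168.254.105:/srv/nfsshare /tests/nfsshare
```

Note that this command line carries no credentials option: credentials= is a mount.cifs (Samba) option, not an NFS one, so its presence in the autofs map entry above is a plausible suspect for the failed automount.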
--------------------
EDIT: as requested, output of ls -la /etc/auto.nfsshare
-rw-r--r--. 1 root root 99 Nov 20 00:25 /etc/auto.nfsshare
steelmonkey
(53 rep)
Nov 19, 2015, 08:49 PM
• Last activity: May 7, 2025, 02:10 AM
0
votes
1
answers
27
views
How to mount a root path to another root path using autofs?
auto.master: > /scratch auto.scratch --timeout 60 auto.scratch: > * -fstype=nfs my.server.org:/scratch The above configuration mounts by sub directory only e.g. `/scratch/dir1` , `/scratch/dir2`, etc. How do I mount without the sub directory i.e. `/scratch` to `/scratch`? In other words, how do I mo...
auto.master:
> /scratch auto.scratch --timeout 60
auto.scratch:
> * -fstype=nfs my.server.org:/scratch
The above configuration mounts by subdirectory only, e.g.
/scratch/dir1
, /scratch/dir2
, etc. How do I mount without the sub directory i.e. /scratch
to /scratch
? In other words, how do I mount a root path to another root path using autofs?
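For the record, the usual way to mount an export at a fixed path like /scratch itself is a direct map rather than an indirect one (a sketch, untested against this setup):

```
# /etc/auto.master
/-  /etc/auto.scratch  --timeout 60

# /etc/auto.scratch: a direct map, where the key is the absolute mount path
/scratch  -fstype=nfs  my.server.org:/scratch
```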
tash
(121 rep)
Apr 24, 2025, 06:18 PM
• Last activity: Apr 24, 2025, 06:31 PM
0
votes
0
answers
1079
views
can't delete folders on samba share
I am dealing with a really weird issue I can't identify the cause of. I have a samba share mounted using autofs in Rocky 9.2 with the flags `-fstype=cifs,rw,nounix,file_mode=0700,dir_mode=0700,multiuser,sec=krb5,user=username,cruid=username,gid=primarygroup,_netdev`. It mounts fine and I can add and...
I am dealing with a really weird issue I can't identify the cause of. I have a samba share mounted using autofs in Rocky 9.2 with the flags
-fstype=cifs,rw,nounix,file_mode=0700,dir_mode=0700,multiuser,sec=krb5,user=username,cruid=username,gid=primarygroup,_netdev
.
It mounts fine and I can add and delete files fine but folders are behaving very weirdly when I try to delete things. For example the following:
$ mkdir dir
$ mkdir dir/{a,b}
$ touch dir/{a,b}/f{1..5}
$ tree dir
dir
├── a
│ ├── f1
│ ├── f2
│ ├── f3
│ ├── f4
│ └── f5
└── b
├── f1
├── f2
├── f3
├── f4
└── f5
2 directories, 10 files
$ rm --recursive --force --verbose dir
removed 'dir/b/f2'
removed 'dir/b/f4'
removed 'dir/b/f5'
removed 'dir/b/f1'
removed 'dir/b/f3'
removed directory 'dir/b'
removed 'dir/a/f2'
removed 'dir/a/f4'
removed 'dir/a/f5'
removed 'dir/a/f1'
removed 'dir/a/f3'
removed directory 'dir/a'
rm: cannot remove 'dir': Directory not empty
$ tree dir
dir
├── a
└── b
2 directories, 0 files
$ rm --recursive --force --verbose dir
rm: cannot remove 'dir': Directory not empty
$ ls --all --recursive dir # in the output note the lack of . and .. in dir/a and dir/b
dir:
. .. a b
dir/a:
dir/b:
$ rmdir dir/a
rmdir: failed to remove 'dir/a': No such file or directory
$ rmdir dir/b
rmdir: failed to remove 'dir/b': No such file or directory
$ tree dir
dir
├── a
└── b
2 directories, 0 files
$ ls --all --recursive -l dir
dir:
total 0
drwx------. 2 username primarygroup 0 Jun 9 16:23 .
drwx------. 2 username primarygroup 0 Jun 9 16:23 ..
drwx------. 2 username primarygroup 0 Jun 9 16:23 a
drwx------. 2 username primarygroup 0 Jun 9 16:23 b
dir/a:
total 0
dir/b:
total 0
I've tried doing it as root, making the permissions 0777
for both files and directories, and mounting it manually rather than with autofs and I get the same behaviour.
No relevant messages seem to be turning up in the logs (not discounting I may not be looking at the right log).
Update 1: Switching off SELinux didn't help. Nor did turning off the firewall.
Update 2: It seems like restarting autofs will clear them (usually requires a restart). But it won't consistently remove all of them. After 2 restarts and with no extra calls to rm
or rmdir
the folders had completely disappeared. But I can still recreate the problem. So I guess the SMB server must have the correct information but it's not being represented locally maybe...?
Update 3: I turned on logging for cifs
. I did this with echo 7 | sudo tee /proc/fs/cifs/cifsFYI
. Then when I go through a simpler version of the steps again:
$ mkdir --parent h/a
at the same time in logs we get:
Jun 12 16:04:56 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
$ rm --recursive h
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0x80000006 STATUS_NO_MORE_FILES
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0x80000006 STATUS_NO_MORE_FILES
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:12 localhost kernel: CIFS: Status code returned 0xc0000101 STATUS_DIRECTORY_NOT_EMPTY
$ rm --recursive h
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0x80000006 STATUS_NO_MORE_FILES
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000056 STATUS_DELETE_PENDING
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000056 STATUS_DELETE_PENDING
Jun 12 16:05:27 localhost kernel: CIFS: Status code returned 0xc0000056 STATUS_DELETE_PENDING
$ tree h
h
└── a
1 directory, 0 files
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0x80000006 STATUS_NO_MORE_FILES
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0xc0000034 STATUS_OBJECT_NAME_NOT_FOUND
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0xc0000056 STATUS_DELETE_PENDING
Jun 12 16:07:28 localhost kernel: CIFS: Status code returned 0xc0000056 STATUS_DELETE_PENDING
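Given the repeated STATUS_DELETE_PENDING replies, one hedged thing to try is ruling out client-side attribute and directory caching by tightening the cifs cache options in the map entry (a sketch of the same options as above with caching disabled; untested against this server):

```
-fstype=cifs,rw,nounix,cache=none,actimeo=0,file_mode=0700,dir_mode=0700,multiuser,sec=krb5,user=username,cruid=username,gid=primarygroup,_netdev
```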
BumpySiren
(11 rep)
Jun 9, 2023, 04:30 AM
• Last activity: Mar 19, 2025, 11:41 AM
0
votes
2
answers
96
views
What is the purpose +auto.master in /etc/auto.master?
I looked into the default /etc/auto.master and I saw the following. ``` # # Include central master map if it can be found using # nsswitch sources. # # Note that if there are entries for /net or /misc (as # above) in the included master map any keys that are the # same will not be seen as the first...
I looked into the default /etc/auto.master and I saw the following.
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
May I know what +auto.master means and why it is there?
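For context: a master-map line beginning with + includes another master map looked up through the sources configured in nsswitch.conf (e.g. NIS or LDAP), merging its entries with this file; as the comment says, when keys collide the first entry read wins. A minimal sketch:

```
# /etc/auto.master (sketch)
/misc  /etc/auto.misc
/net   -hosts
+auto.master   # include a site-wide master map from NIS/LDAP, if one exists
```

On a standalone machine with no such directory source configured, the line is effectively a no-op, which is why it can sit in the default file harmlessly.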
caramel1995
(109 rep)
Feb 7, 2025, 03:53 AM
• Last activity: Feb 7, 2025, 07:17 AM
1
votes
0
answers
605
views
autofs nfs share served and mounted on same PC reporting permission denied when writing
I am using nfs between multiple Debian 10 machines in my local network, with the same basic setting across the board, and they are all working as expected. I am using autofs as root to mount and unmount the nfs shares on each machine, also using the same basic settings everywhere. Users on each clie...
I am using nfs between multiple Debian 10 machines in my local network, with the same basic setting across the board, and they are all working as expected.
I am using autofs as root to mount and unmount the nfs shares on each machine, also using the same basic settings everywhere. Users on each client machine can access the mounted shares because they are owned by
nobody:nogroup
.
So far, I have always been mounting shares from a server machine on a client machine, with no issues. So two separate machines.
I am now attempting to mount a share served from a machine by using autofs on the same machine. So server and client on the same machine.
autofs is mounting the share without issues, and non-root users can list folder contents and view files no problem, but when non-root users attempt to write to the share, they get "permission denied":
$ touch test.file
touch: cannot touch 'test.file': Permission denied
$ echo "content" > test.file
bash: test.file: Permission denied
This happens whether I use 127.0.0.1 on the loopback or 192.168.x.y on the ethernet or wifi interface to access the share. Other machines use autofs to mount these same shares with the same settings without issue, and this machine similarly uses autofs to mount shares from other machines with the same settings, again without issue.
These are the nfs server settings:
$ sudo exportfs -v
/exports/share 127.0.0.1/32(rw,wdelay,root_squash,all_squash,sec=sys,rw,secure,root_squash,all_squash)
/exports/share 192.168.0.0/16(rw,wdelay,root_squash,all_squash,sec=sys,rw,secure,root_squash,all_squash)
These are the autofs settings, which are loaded via a master map in another file:
$ cat /etc/auto.shares
share_loopback -fstype=nfs4,rw,retry=0,hard,noac,noexec,proto=tcp 127.0.0.1:/exports/share
share_network -fstype=nfs4,rw,retry=0,hard,noac,noexec,proto=tcp 192.168.x.y:/exports/share
When a user manually mounts the same nfs share on the same machine using the same basic settings, everything works fine, without permission problems.
$ sudo mount -t nfs4 -o rw,hard,noac,noexec,proto=tcp 127.0.0.1:/exports/share /media/share_loopback
$ sudo mount -t nfs4 -o rw,hard,noac,noexec,proto=tcp 192.168.x.y:/exports/share /media/share_network
It seems like autofs is doing something under the hood differently from when I mount manually.
So what is causing this mount through autofs to report "Permission denied", and how can I get it to work?
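One way to pin down what autofs does differently is to compare the effective kernel-level mount options of the autofs-triggered mount with those of the manual mount (a troubleshooting sketch; the autofs mount path is a placeholder for wherever the master map puts the share):

```
# trigger the autofs mount, then capture its effective options
ls /path/to/autofs/share_loopback >/dev/null
grep exports/share /proc/self/mountinfo > /tmp/via-autofs

# unmount, mount manually with the same options, capture again, and diff
sudo mount -t nfs4 -o rw,hard,noac,noexec,proto=tcp 127.0.0.1:/exports/share /media/share_loopback
grep exports/share /proc/self/mountinfo > /tmp/via-manual
diff /tmp/via-autofs /tmp/via-manual
```

Differences in fields such as vers=, sec=, or the id-mapping options are the usual suspects for this kind of permission mismatch.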
tompi
(292 rep)
Aug 1, 2021, 09:18 PM
• Last activity: Oct 18, 2024, 11:52 AM
0
votes
1
answers
2075
views
Autofs configuration on RHEL9: NFS directories appear empty despite successful mount
I am trying to set up my NFS system between my server node and my client node running on **rhel9**. They are both on different physical servers on the same network. The nfs-server daemon is properly running on my **server node** and showmount -e returns: ``` Export list for host: /mnt/name1 x.x.x.0/...
I am trying to set up my NFS system between my server node and my client node running on **rhel9**. They are both on different physical servers on the same network. The nfs-server daemon is properly running on my **server node** and showmount -e returns:
Export list for host:
/mnt/name1 x.x.x.0/16
/mnt/name2 x.x.x.0/16
/mnt/name3 x.x.x.0/16
Here is the /etc/exports file on the **server**:
/mnt/name1 x.x.x.0/16(rw,sync,no_root_squash,no_subtree_check)
/mnt/name2 x.x.x.0/16(rw,sync,no_root_squash,no_subtree_check)
/mnt/name3 x.x.x.0/16(rw,sync,no_root_squash,no_subtree_check)
Here is the status for the autofs daemon on the **client** node:
● autofs.service - Automounts filesystems on demand
Loaded: loaded (/usr/lib/systemd/system/autofs.service; enabled; preset: disabled)
Active: active (running) since Wed 2024-06-05 16:15:16 CDT; 5s ago
Main PID: 739859 (automount)
Tasks: 5 (limit: 98138)
Memory: 3.8M
CPU: 42ms
CGroup: /system.slice/autofs.service
└─739859 /usr/sbin/automount --systemd-service --dont-check-daemon
Jun 05 16:15:16 host systemd: Starting Automounts filesystems on demand...
Jun 05 16:15:16 host automount: setautomntent: lookup(sss): setautomountent: entry for map auto.master not found
Jun 05 16:15:16 host automount: setautomntent: lookup(sss): setautomountent: entry for map auto_master not found
Jun 05 16:15:16 host systemd: Started Automounts filesystems on demand.
Here is /etc/auto.master:
/- /etc/auto.nfs-shares
Here is auto.nfs-shares:
/mnt/name1 -fstype=nfs,rw,soft x.x.x.0:/mnt/name1
/mnt/name2 -fstype=nfs,rw,soft x.x.x.0:/mnt/name2
/mnt/name3 -fstype=nfs,rw,soft x.x.x.0:/mnt/name3
On the client node, when I execute mount, it returns among other things:
/etc/auto.nfs-shares on /mnt/name1 type autofs (rw,relatime,fd=7,pgrp=739859,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=1745483)
/etc/auto.nfs-shares on /mnt/name2 type autofs (rw,relatime,fd=7,pgrp=739859,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=1745483)
/etc/auto.nfs-shares on /mnt/name3 type autofs (rw,relatime,fd=7,pgrp=739859,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=1745483)
The directories name1, name2, and name3 show up when I use ls /mnt/, but I cannot access the files: I either get "no such file or directory" or the directories appear empty (they do contain test files on the server). Any suggestions would be appreciated.
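One detail worth a second look: in auto.nfs-shares the location host is written as x.x.x.0, which matches the network prefix used in /etc/exports rather than the server's own address. The map entries would normally name the server host (a sketch; "nfs-server" is a placeholder for the server's real hostname or IP):

```
# /etc/auto.nfs-shares (sketch)
/mnt/name1  -fstype=nfs,rw,soft  nfs-server:/mnt/name1
/mnt/name2  -fstype=nfs,rw,soft  nfs-server:/mnt/name2
/mnt/name3  -fstype=nfs,rw,soft  nfs-server:/mnt/name3
```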
paul runner
(11 rep)
Jun 5, 2024, 09:47 PM
• Last activity: Oct 14, 2024, 06:39 PM
0
votes
1
answers
244
views
Are there any implications of using a symbolic link or bind mount vs having the real files there?
I have an indirect wildcard autofs mount on home but would like a few local folders to remain in there. Thus I moved those local folders elsewhere and created a bind mount. As a backup, in case autofs dies, I created a symbolic link in /home to the folder as well. This seems to work fine, however, be...
I have an indirect wildcard autofs mount on home but would like a few local folders to remain in there. Thus I moved those local folders elsewhere and created a bind mount. As a backup, in case autofs dies, I created a symbolic link in /home to the folder as well.
This seems to work fine, however, before I push this configuration to the rest of the systems on my network, I'd like to know if there are any potential issues/drawbacks to doing this. Is this opaque to software accessing the /home paths? Or can this cause problems?
A workaround would be to leave the local folders there and manually have autofs do direct mounts for all the other home folders but it creates more work when adding/deleting accounts.
For example, I want /home/local to remain, so:
`
# mkdir /export
# mv /home/local /export/.
# ln -s /export/local /home/local
`
auto.master
`
...
/home auto.home
`
auto.home
`
local -bind :/export/local
* -fstype=nfs,rw,sync nfs-server:/home/&
`
when autofs is running, /home/local is available via the bind mount. If autofs is down, it is available via the symbolic link. I believe any software using anything in /home/local should not know the difference or be affected, but I'm not 100% sure. So my question: are there any disadvantages or implications? **This is really about symbolic links or bind mount vs the normal hard link. The autofs reference is just for context.**
eng3
(330 rep)
Sep 15, 2024, 02:05 PM
• Last activity: Sep 15, 2024, 06:56 PM
1
votes
1
answers
714
views
How to mount a local folder with autofs? bind doesnt seem to work
I want all the users to automount from a NFS server except for a few accounts that should use the local home folder. Thus each client computer (and the nfs server) has a few folders in /home, the rest mount from the nfs server into /home as well. Is there a way to accomplish this? As I understand it...
I want all the users to automount from a NFS server except for a few accounts that should use the local home folder. Thus each client computer (and the nfs server) has a few folders in /home, the rest mount from the nfs server into /home as well. Is there a way to accomplish this?
As I understand it, autofs basically will put a mount over /home, thus obscuring anything in the "real" /home. So I would need to move the local folders out and put them elsewhere (i.e. /export), then do a bind mount. I could also create a symbolic link in the real /home so I'm covered if autofs breaks down. (I know another workaround is to mount elsewhere instead of /home, but it is important that all the home folders are in /home due to some legacy software issues.)
auto.master
`
...
/- auto.local
/home auto.home
`
auto.home
`
* -fstype=nfs,rw,sync nfs-server:/home/&
`
auto.local
`
/home/local -fstype=bind :/export/local
`
Unfortunately this doesn't work. I get the error:
`
do_mount_autofs_direct: failed to create ioctl fd for /home/local
`
I found out this is because I have a /home/local symbolic link to /export/local. I'm not sure why this would cause a problem. Upon deleting it, it still gives me an error:
`
handle_packet_missing_direct: failed to create ioctl fd for /home/local
`
As a sanity check, I did confirm that manually doing the bind mount does succeed.
I'd appreciate any recommendations. I suppose another workaround would be to somehow set things up so that a symbolic link gets created whenever autofs starts up, but I'm not sure how to do that.
I am on an old system, Ubuntu 18; the autofs version is 5.1.2.
The only way I found I can do this is to create direct mounts for all my users. However, then it has to get updated every time a user is added or deleted, which is not ideal.
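A related sketch, for completeness: the bind entry can also live as a plain key inside the indirect auto.home map instead of a separate direct map (this is the layout the similar setup above uses; whether -fstype=bind behaves this way in an indirect map depends on the autofs version, so treat it as untested):

```
# /etc/auto.home (indirect map mounted on /home; sketch)
local  -fstype=bind         :/export/local
*      -fstype=nfs,rw,sync  nfs-server:/home/&
```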
eng3
(330 rep)
Sep 14, 2024, 02:12 PM
• Last activity: Sep 14, 2024, 09:12 PM
0
votes
0
answers
188
views
Mount a fs using autofs with fail-over
I have 2 source IPs and common mount /SAP_01, mount the file system from 1st source IP if not available mount with 2nd IP and vise versa (kind of fail-over). I create a script to check which IP is active #!/bin/bash primary="//192.168.10.200/SAP_01" secondary="//10.25.9.200/SAP_01" mount_point="/SAP...
I have 2 source IPs and a common mount /SAP_01: mount the file system from the 1st source IP and, if it is not available, mount with the 2nd IP, and vice versa (a kind of fail-over).
I created a script to check which IP is active:
#!/bin/bash
primary="//192.168.10.200/SAP_01"
secondary="//10.25.9.200/SAP_01"
mount_point="/SAP_01"
cifs_port=445
log_file="/var/log/cifs_events"
log_event() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $log_file
}
# Check if the mount point is stale
if mountpoint -q $mount_point && ! nc -z -w 1 192.168.10.200 $cifs_port && ! nc -z -w 1 10.25.9.200 $cifs_port; then
log_event "Mount point is stale. Forcing unmount."
umount -l $mount_point
fi
# Check if the primary server is available
if nc -z -w 1 192.168.10.200 $cifs_port; then
log_event "Primary server is available. Mounting $primary."
echo "$primary"
else
log_event "Primary server is down. Checking secondary server."
if nc -z -w 1 10.25.9.200 $cifs_port; then
log_event "Secondary server is available. Mounting $secondary."
echo "$secondary"
else
log_event "Neither CIFS server is available."
echo "Neither CIFS server is available" >&2
exit 1
fi
fi
It's working fine; it prints the UNC path **//192.168.10.200/SAP_01** or **//10.25.9.200/SAP_01**,
but I am not able to mount this via autofs.
I added /etc/auto.master
/- /etc/auto.cifs
and /etc/auto.cifs
/SAP_01 -fstype=cifs,rw,uid=30000,gid=4010,file_mode=0777,dir_mode=0777,_netdev,credentials=/etc/cifs_credentials :/usr/local/bin/cifs_failover.sh
But while running autofs I am getting error
mount_mount: mount(generic): calling mkdir_path /SAP_01
mount_mount: mount(generic): calling mount -t cifs -o rw,uid=30000,gid=4010,file_mode=0777,dir_mode=0777,_netdev,credentials=/etc/cifs_credentials /usr/local/bin/cifs_failover.sh /SAP_01
>> mount.cifs: bad UNC (/usr/local/bin/cifs_failover.sh)
mount(generic): failed to mount /usr/local/bin/cifs_failover.sh (type cifs) on /SAP_01
dev_ioctl_send_fail: token = 370
Any solution to fix this? I just want to fail over if the primary source is unavailable, and vice versa.
Please help me to resolve this issue.
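As a sketch of an alternative, autofs has program maps: the map itself is an executable that receives the lookup key as its argument and prints a complete map entry on stdout, which avoids passing a script path where mount.cifs expects a UNC. Roughly (untested; the script name is hypothetical):

```
# /etc/auto.master (sketch): look up keys under /mnt via a program map
/mnt  program:/usr/local/bin/cifs_failover_map.sh
```

```
#!/bin/bash
# /usr/local/bin/cifs_failover_map.sh (hypothetical, mode 0755):
# print a map entry for key "$1", picking whichever server answers on 445
key="$1"
opts="-fstype=cifs,rw,uid=30000,gid=4010,file_mode=0777,dir_mode=0777,credentials=/etc/cifs_credentials"
if nc -z -w 1 192.168.10.200 445; then
    echo "$opts ://192.168.10.200/$key"
elif nc -z -w 1 10.25.9.200 445; then
    echo "$opts ://10.25.9.200/$key"
else
    exit 1
fi
```

With this layout the share appears under /mnt/SAP_01 rather than /SAP_01, so a symlink or an adjusted path would be needed to keep the old location.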
Vija200
(1 rep)
Aug 24, 2024, 02:22 AM
• Last activity: Aug 24, 2024, 11:52 PM
1
votes
0
answers
50
views
rclone group permissions not working
im using rclone to mount google drive but the permissions are completely broken. im using debian 12 ```bash -> sudo namei -l /mnt/cloud/gdrive/gd1/3D f: /mnt/cloud/gdrive/gd1/3D drwxr-xr-x root root / drwxr-xr-x root root mnt drwxr-xr-x root root cloud drwxr-xr-x root root gdrive drwxrwx--- root gdr...
I'm using rclone to mount Google Drive, but the permissions are completely broken.
I'm using Debian 12.
-> sudo namei -l /mnt/cloud/gdrive/gd1/3D
f: /mnt/cloud/gdrive/gd1/3D
drwxr-xr-x root root /
drwxr-xr-x root root mnt
drwxr-xr-x root root cloud
drwxr-xr-x root root gdrive
drwxrwx--- root gdrive gd1
drwxrwx--- root gdrive 3D
-> sudo eza --icons -lg --time-style "+%s %Y-%m-%d %H:%M"
drwxrwx--- - root 937400039 1719190094 2024-06-23 17:48 gd1
-> eza --icons -lg --time-style "+%s %Y-%m-%d %H:%M"
[./gd1: Permission denied (os error 13)]
-> id
uid=937400003(irl_name) gid=937400003(irl_name) groups={19 others excluded for space},937400039(gdrive)
autofs map:
gd1 -fstype=rclone,rw,uid=0,gid=937400039,dir-perms=0770,file-perms=0770,allow-non-empty,umask=000,fast-list,vfs-cache-mode=writes,config=/etc/rclone/gd/rclone.conf,cache-dir=/var/cache/rclone :gd:
BPplays
(11 rep)
Jun 24, 2024, 01:00 AM
• Last activity: Jun 24, 2024, 10:45 AM
1
votes
0
answers
94
views
autofs shows resources but returns `file not found` when I try to access them
For some time now autofs stopped working properly, I've configure it to mount samba shared exposed on my raspberry running DietPi. My conf is really simple: `/etc/auto.master.d/dietpi-rpi1.autofs` : ```` /mnt/smb /etc/auto.smb --timeout=60 ```` The auto.smb is a script: ```` #!/bin/bash # This file...
For some time now autofs has stopped working properly. I've configured it to mount samba shares exposed on my raspberry running DietPi.
My conf is really simple:
/etc/auto.master.d/dietpi-rpi1.autofs
:
`
/mnt/smb /etc/auto.smb --timeout=60
`
The auto.smb is a script:
`
#!/bin/bash
# This file must be executable to work! chmod 755!
# Automagically mount CIFS shares in the network, similar to
# what autofs -hosts does for NFS.
# Put a line like the following in /etc/auto.master:
# /cifs /etc/auto.smb --timeout=300
# You'll be able to access Windows and Samba shares in your network
# under /cifs/host.domain/share
# "smbclient -L" is used to obtain a list of shares from the given host.
# In some environments, this requires valid credentials.
# This script knows 2 methods to obtain credentials:
# 1) if a credentials file (see mount.cifs(8)) is present
# under /etc/creds/$key, use it.
# 2) Otherwise, try to find a usable kerberos credentials cache
# for the uid of the user that was first to trigger the mount
# and use that.
# If both methods fail, the script will try to obtain the list
# of shares anonymously.
get_krb5_cache() {
cache=
uid=${UID}
for x in $(ls -d /run/user/$uid/krb5cc_* 2>/dev/null); do
if [ -d "$x" ] && klist -s DIR:"$x"; then
cache=DIR:$x
return
fi
done
if [ -f /tmp/krb5cc_$uid ] && klist -s /tmp/krb5cc_$uid; then
cache=/tmp/krb5cc_$uid
return
fi
}
key="$1"
opts="-fstype=cifs"
for P in /bin /sbin /usr/bin /usr/sbin
do
if [ -x $P/smbclient ]
then
SMBCLIENT=$P/smbclient
break
fi
done
[ -x $SMBCLIENT ] || exit 1
creds=/etc/creds/$key
if [ -f "$creds" ]; then
opts="$opts"',uid=$UID,gid=$GID,credentials='"$creds"
smbopts="-A $creds"
else
get_krb5_cache
if [ -n "$cache" ]; then
opts="$opts"',multiuser,cruid=$UID,sec=krb5i'
smbopts="-k"
export KRB5CCNAME=$cache
else
opts="$opts"',guest'
smbopts="-N"
fi
fi
$SMBCLIENT $smbopts -gL "$key" 2>/dev/null| awk -v "key=$key" -v "opts=$opts" -F '|' -- '
BEGIN { ORS=""; first=1 }
/Disk/ {
if (first)
print opts; first=0
dir = $2
loc = $2
# Enclose mount dir and location in quotes
print " \\\n\t \"/" dir "\"", "\"://" key "/" loc "\""
}
END { if (!first) print "\n"; else exit 1 }
'
`
I also have a /etc/creds/dietpi-rpi1
containing my credentials in the form
username=.......
password=.......
If I cd into /mnt/smb/dietpi-rpi1 I can actually see the resources:
`
drwxr-xr-x 2 root root 0 May 25 17:19 daniele
drwxr-xr-x 2 root root 0 May 25 17:19 'print$'
drwxr-xr-x 2 root root 0 May 25 17:19 shares
`
but if I try to cd into them I get a "No such file or directory" error (not even as root):
`
bash: cd: shares/: No such file or directory
`
I can manually mount and browse them.
Daniele
(478 rep)
May 24, 2024, 10:41 PM
• Last activity: May 25, 2024, 03:18 PM