Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3
votes
1
answers
2682
views
Autofs fails to mount nfs-shared autofs bind mounted directory, but manual mount succeeds
Apologies for the long read below. Hopefully this will give enough context and traces so someone can help me.
# Context
I store user home directories on a zfs dataset (one dataset per user), where each user has a different home directory per Linux distribution and version (e.g. Debian Buster), but where the XDG directories (e.g., Downloads or Desktop) of all these homes actually point to the same directories, so that the user has access to all their files regardless of the Linux distro run by the terminal they logged on to. I have the same kind of structure for binaries stored in home. See the example below (see below for the meaning of the numbers in square brackets):
/media/zfs/home/user1/XDG-DIRS/Downloads
/media/zfs/home/user1/XDG-DIRS/Videos
/media/zfs/home/user1/XDG-DIRS/...
/media/zfs/home/user1/homes/debian_buster/
/media/zfs/home/user1/homes/ubuntu_groovy/
/media/zfs/home/user1/usr/share/
/media/zfs/home/user1/usr/include/
/media/zfs/home/user1/local/linux-x86_64/bin
/media/zfs/home/user1/local/linux-x86_64/lib
/media/zfs/home/user1/local/linux-armhf/bin
/media/zfs/home/user1/local/linux-armhf/lib
Using specific mounts to the correct points in the structure above, one can reconstitute a home directory with all user setting files matching the desktop environment served by the distro and its version, while still accessing all personal files and even home-installed binaries compatible with the OS and architecture the user logged on to. For example, a user logging on to a Linux Debian Buster terminal running on an x86_64 architecture should have a home directory structured as below. A number in square brackets at the end of a directory name means the content of that directory should actually be the content of the directory with the same number at the beginning of its name in the structure above. I leave it to you to guess what the home directory of a user logged on to a Raspberry Pi running Ubuntu Groovy looks like.
/home/AD.EXAMPLE.COM/user1/Downloads
/home/AD.EXAMPLE.COM/user1/Videos
/home/AD.EXAMPLE.COM/user1/.local/bin
/home/AD.EXAMPLE.COM/user1/.local/lib
One can achieve this by running autofs on the terminal computer and using an executable map file that results in as many nfs mounts as necessary (5 per user in the example above, typically more because of more XDG directories). The problem I anticipate with this is that if the user wants to move a file from "Downloads" to "Videos", then because these are two different nfs mount points, the file is actually downloaded to the terminal and then uploaded back to the server. I did not actually test whether that results in performance penalties. If you have any insight about this point, please let me know.
In order to limit the performance problem described above, I actually reconstitute home directories for each distro/version (on one side) and for each os/architecture (on the other side) on the server using autofs, and then export the results through NFS. That means I build the following structure on the server using autofs bind mounts:
/media/user_data/unix/user1/home/debian_buster/Downloads
/media/user_data/unix/user1/home/debian_buster/Videos
/media/user_data/unix/user1/home/debian_buster/.local/
/media/user_data/unix/user1/home/ubuntu_groovy/Downloads
/media/user_data/unix/user1/home/ubuntu_groovy/Videos
/media/user_data/unix/user1/home/ubuntu_groovy/.local/
/media/user_data/unix/user1/local/linux-x86_64/bin
/media/user_data/unix/user1/local/linux-x86_64/lib
/media/user_data/unix/user1/local/linux-armhf/bin
/media/user_data/unix/user1/local/linux-armhf/lib
/etc/auto.master.d/user_data.autofs
(server side)
/media/user_data/unix/ /etc/auto.AD.EXAMPLE.COM.unix --ghost --timeout=120
/etc/auto.AD.EXAMPLE.COM.unix
(server side, executable and read bits set for ugo)
#!/bin/bash
key=$1
echo '- /home -fstype=bind :/media/zfs/home/'$key'/unix/ browse \'
for i in $(ls /media/zfs/home/$key/unix)
do
    for j in $(ls /media/zfs/home/$key/XDG_DIRS)
    do
        echo ' /home/'$i'/'$j' -fstype=bind :/media/zfs/home/'$key'/XDG_DIRS/'$j' browse \'
    done
done
for i in $(ls /media/zfs/home/$key/usr)
do
    echo ' /local/'$i' -fstype=bind :/media/zfs/home/'$key'/local/ browse \'
    for j in $(ls /media/zfs/home/$key/usr/$i)
    do
        echo ' /local/'$i'/'$j' -fstype=bind :/media/zfs/home/'$key'/usr/'$i'/'$j' browse \'
    done
done
echo ''
Here is a sample output of the script shown above
root@server:~# /etc/auto.AD.EXAMPLE.COM.unix user1
- /home -fstype=bind :/media/zfs/home/user1/unix/ browse \
/home/debian_buster/Downloads -fstype=bind :/media/zfs/home/user1/XDG_DIRS/Downloads browse \
/home/debian_buster/Videos -fstype=bind :/media/zfs/home/user1/XDG_DIRS/Videos browse \
/local/linux-armhf -fstype=bind :/media/zfs/home/user1/local/ browse \
/local/linux-armhf/bin -fstype=bind :/media/zfs/home/user1/usr/linux-armhf/bin browse \
/local/linux-armhf/lib -fstype=bind :/media/zfs/home/user1/usr/linux-armhf/lib browse \
/local/linux-armhf/sbin -fstype=bind :/media/zfs/home/user1/usr/linux-armhf/sbin browse \
/local/linux-x86_64 -fstype=bind :/media/zfs/home/user1/local/ browse \
/local/linux-x86_64/bin -fstype=bind :/media/zfs/home/user1/usr/linux-x86_64/bin browse \
/local/linux-x86_64/lib -fstype=bind :/media/zfs/home/user1/usr/linux-x86_64/lib browse \
root@server:~#
This works flawlessly so far, although a tad slow for my taste. I export /media/user_data/unix through NFS, using the exports file below:
#
/media/user_data *(sec=krb5p,rw,crossmnt)
#
At this point, it is worth mentioning that one reason the "user_data" step exists in this file hierarchy is that if I export /media/user_data/unix in /etc/exports (or, equivalently, /media/unix, although not demonstrated below), then I get the warnings below. This is not encouraging, but I try anyway with this extra "user_data" layer in the hierarchy, hoping that crossmnt will manage to also export what is mounted inside the exported hierarchy. The system does not seem to complain about this attempt.
root@server:~# cat /etc/exports
# Other exports of unrelated directories
/media/user_data *(sec=krb5p,rw,crossmnt)
/media/user_data/unix *(sec=krb5p,rw,crossmnt)
# More unrelated exports
root@server:~# exportfs -ra; zfs share -a
exportfs: /etc/exports : Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/media/user_data".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /media/user_data/unix does not support NFS export
root@server:~# showmount -e
Export list for server:
/media/user_data *
/media/user_data/unix *
#
root@server:~#
In the following text, I removed export of /media/user_data/unix
in /etc/exports
and ran exportfs -ra
and zfs share -a
so that the NFS server does not complain about any directory that does not support NFS exports.
Finally, autofs on the terminal computer only needs to mount the home directory corresponding to the distro and version it runs, as well as the local subdirectory corresponding to the OS and architecture, resulting in the following hierarchy for user1 on a Linux Debian Buster x86_64.
/home/AD.EXAMPLE.COM/user1/Downloads
/home/AD.EXAMPLE.COM/user1/Videos
/home/AD.EXAMPLE.COM/user1/.local
This is what I attempt to achieve with the autofs configuration below.
/etc/auto.master.d/home.autofs
(terminal side, Linux Debian Buster, x86_64)
/media/AD.EXAMPLE.COM /etc/auto.AD.EXAMPLE.COM.home --timeout=120
/etc/auto.AD.EXAMPLE.COM.home
(terminal side, Linux Debian Buster, x86_64, executable and read bits set for ugo)
#!/bin/bash
key=$1
distributor()
{
    lsb_release -i | cut -f 2 -d : | xargs echo | tr '[:upper:]' '[:lower:]'
}
codename()
{
    lsb_release -c | cut -f 2 -d : | xargs echo | tr '[:upper:]' '[:lower:]'
}
architecture()
{
    uname -m | tr '[:upper:]' '[:lower:]'
}
os()
{
    uname -s | tr '[:upper:]' '[:lower:]'
}
echo '- / -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/unix/'$key'/home/'$(distributor)'_'$(codename)' \'
echo ' /.local -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/local/'$key'/local/'$(os)'-'$(architecture)' \'
echo ''
Here is a sample output of the script above
root@terminal:~$ /etc/auto.AD.EXAMPLE.COM.exp user1
- / -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/unix/user1/home/debian_buster \
/.local -fstype=nfs,vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/local/user1/local/linux-x86_64 \
root@terminal:~$
# The problem
Autofs on the terminal computer fails to mount the reconstituted home directory exported from the server. This is the trace I get from automount when trying to list the content of /home/AD.EXAMPLE.COM/user1
:
root@terminal:~# automount -d -f -v
...
get_nfs_info: called with host server.example.com(192.168.80.101) proto 17 version 0x30
get_nfs_info: nfs v3 rpc ping time: 0.000000
get_nfs_info: host server.example.com cost 0 weight 0
prune_host_list: selected subset of hosts that support NFS3 over TCP
mount_mount: mount(nfs): calling mkdir_path /media/AD.EXAMPLE.COM/user1
mount_mount: mount(nfs): calling mount -t nfs -s -o vers=4.2,sec=krb5p,fsc server.example.com:/media/user_data/unix/user1/home/debian_buster /media/AD.EXAMPLE.COM/user1
>> mount.nfs: mounting server.example.com:/media/user_data/unix/user1/home/debian_buster failed, reason given by server: No such file or directory
mount(nfs): nfs: mount failure server.example.com:/media/user_data/unix/user1/home/debian_buster on /media/AD.EXAMPLE.COM/user1
do_mount_autofs_offset: mount offset /media/AD.EXAMPLE.COM/user1/.local at /media/AD.EXAMPLE.COM/user1
mount_autofs_offset: calling mount -t autofs -s -o fd=16,pgrp=20379,minproto=5,maxproto=5,offset automount /media/AD.EXAMPLE.COM/user1/.local
mounted offset on /media/AD.EXAMPLE.COM/user1/.local with timeout 120, freq 30 seconds
mount_autofs_offset: mounted trigger /media/AD.EXAMPLE.COM/user1/.local at /media/AD.EXAMPLE.COM/user1/.local
dev_ioctl_send_ready: token = 114
mounted /media/AD.EXAMPLE.COM/user1
And the listed content of /home/AD.EXAMPLE.COM/user1
is nothing:
root@terminal:~$ ls /home/AD.EXAMPLE.COM/user1
root@terminal:~$
Although the supposedly mounted directory from the server is full of files:
root@server:~# ls /media/user_data/unix/user1/home/debian_buster
file1 file2 file3
root@server:~#
The automount trace above hints that the directory it attempts to mount does not exist on the server, but this is strange: first because listing that exact directory on the server shows that it exists (see above), and second because I can mount this directory manually from the terminal anyway, as the trace below shows:
root@terminal:~$ mount -vvvv -t nfs server.example.com:/media/user_data/unix/user1/home/debian_buster /mnt
mount.nfs: timeout set for Sat Feb 13 22:37:06 2021
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.80.101,clientaddr=192.168.104.1'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.80.101'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.80.101 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.80.101 prog 100005 vers 3 prot UDP port 39874
root@terminal:~$ ls /mnt
file1 file2 file3
root@terminal:~$
# My attempt at a solution
I thought that autofs on the terminal may not find the directory to mount because it is not mounted (yet) on the server, therefore I attempted to use the --ghost and browse options (you can see them in the server's /etc/auto.master.d/user_data.autofs and /etc/auto.AD.EXAMPLE.COM.unix files shown above), but to no avail. I am running out of ideas to explore to find a permanent solution.
# Temporary workaround
The temporary workaround I use at the moment is to not use autofs on the server side, but instead manually bind mount all the directories to obtain the correct file hierarchy to export. I am not too satisfied with this solution as it requires a lot of mounts to be permanently active, and it seems to leave the server in a somewhat unstable state, though I don't know why exactly.
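For reference, the manual workaround amounts to bind mounts of this kind on the server (a sketch only, showing two of the many mounts needed for user1; the real list covers every distro home and every XDG directory):
mount --bind /media/zfs/home/user1/unix /media/user_data/unix/user1/home
mount --bind /media/zfs/home/user1/XDG_DIRS/Downloads /media/user_data/unix/user1/home/debian_buster/Downloads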
# Remarks
* Both the server and the terminal run Debian Buster (Linux x86_64) in the tests that produced the traces above.
* NFS complaining that the autofs-reconstituted directory does not support NFS export hints that I should not try to export it at all by sneakily exporting its parent directory instead. I could not find any reference stating that it is not possible to NFS-export a directory having an autofs-mounted subdirectory, so it is still worth a try. Furthermore, it works fine when I mount --bind these subdirectories manually on the server side instead of using autofs, so there should be some hope.
* This is a rather complex (and fragile?) setup; if you have a simpler (and more robust) suggestion to achieve the same functionality, I am also interested :)
Nykau
(31 rep)
Feb 14, 2021, 10:29 AM
• Last activity: Jul 23, 2025, 06:05 AM
1
votes
1
answers
1929
views
Mounting nested ZFS filesystems exported via NFS
I have a Linux (Ubuntu) server with a zfs pool containing nested filesystems.
E.g.:
zfs_pool/root_fs/fs1
zfs_pool/root_fs/fs2
zfs_pool/root_fs/fs3
I have enabled NFS sharing on the root filesystem (via zfs, not by editing
/etc/exports
). Nested filesystems inherit this property.
NAME PROPERTY VALUE SOURCE
zfs_pool/root_fs sharenfs rw=192.168.1.0/24,root_squash,async local
NAME PROPERTY VALUE SOURCE
zfs_pool/root_fs/fs1 sharenfs rw=192.168.1.0/24,root_squash,async inherited from zfs_pool/root_fs
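For completeness, the sharing itself was enabled roughly like this (a sketch; only the root filesystem gets the property set explicitly, the children inherit it):
zfs set sharenfs='rw=192.168.1.0/24,root_squash,async' zfs_pool/root_fs
zfs get -r sharenfs zfs_pool/root_fs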
On the client machines (linux, mostly ubuntu), the only filesystem I explicitly mount is the root filesystem.
mount -t nfs zfsserver:/zfs_pool/root_fs /root_fs_mountpoint
Nested filesystems are mounted automatically when they are accessed. I didn't need to configure anything to make this work.
This is great, but I'd like to know who is providing this feature.
Is it ZFS? Is it NFS? Is it something else on the client side (something like autofs, which isn't even installed)?
I'd like to change the timeout after which nested filesystems are unmounted, but I don't even know which configuration to edit and which documentation to read.
lgpasquale
(291 rep)
Oct 15, 2018, 09:22 AM
• Last activity: Jul 21, 2025, 04:08 PM
1
votes
1
answers
4058
views
nfs problems with jumbo (MTU=9000) but works with default (MTU=1500)
I have a local network set up between two servers running ubuntu 18.04 server. They are connected by a 10G network switch (actually 2 bonded connections 10G each). For performance reasons, in /etc/netplan, I have mtu=9000 for the corresponding interface (ethernet or bond). All machines on the subnet have MTU=9000 set. See my previous question and solution: https://unix.stackexchange.com/questions/469346/link-aggregation-bonding-for-bandwidth-does-not-work-when-link-aggregation-gro/469715#469715
I can ssh, copy files between machines, etc., at high bandwidth (>15 GBit/sec).
One server has an nfs (nfs4, I tried with nfs3 as well) export. I can mount and view some directories of the nfs from other machines on the subnet. Settings are identical to the HowTo: https://help.ubuntu.com/community/NFSv4Howto .
However, commands like "ls" or "cd" or even "df" will randomly hang infinitely on the client.
I tried changing the MTU to default (1500) on the client and host interfaces, while leaving jumbo frames "activated" on the switch. Oddly, this solved all the issues.
I am wondering if NFS(4) is incompatible with jumbo frames, or if anyone has any insight into this. I have found people "optimizing" nfs with different MTU sizes, and people mentioning hanging "ls" etc., but never in the same context...
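For what it's worth, a sanity check of the jumbo-frame path itself (a sketch, assuming iputils ping; 8972 = 9000 minus the IP and ICMP headers, and "otherserver" is a placeholder) would be:
ping -M do -s 8972 otherserver
Plain connectivity and large transfers between the machines work fine, so I assume the frames themselves are passing.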
rveale
(161 rep)
Apr 3, 2019, 12:34 PM
• Last activity: Jun 22, 2025, 11:22 PM
2
votes
1
answers
3797
views
Nfs4_setfacl reports error on files of mounted folder
I mounted an nfsv4 folder (both client and server are CentOS 7.4)
via command
$ sudo mount -t nfs -o v4.0,sec=krb5 ark-centos7-ker.qa.arkivio.com:/export/nfs1 /nfs4-mnt-dir
created a file via touch 11, then setting the file's ACL failed with the command
$ sudo nfs4_setfacl -a A::auto-stor@qa.arkivio.com:rxtncy /nfs4-mnt-dir/11
[sudo] password for auto-stor@qa.arkivio.com:
Failed setxattr operation: Invalid argument
it seems to be complaining that the parameter auto-stor@qa.arkivio.com is invalid, but this user is already recognized by both the nfs4 client and the server.
$ getent passwd auto-stor@qa.arkivio.com
auto-stor@qa.arkivio.com:*:1712401226:1712400513:auto-stor:/home/auto-stor@qa.arkivio.com:/bin/bash
$ id auto-stor@qa.arkivio.com
uid=1712401226(auto-stor@qa.arkivio.com) gid=1712400513(domain users@qa.arkivio.com) groups=1712400513(domain users@qa.arkivio.com),10(wheel),1712439592(autostoradmins@qa.arkivio.com),1712439438(certsvc_dcom_access@qa.arkivio.com),1712439896(passwordpropdeny@qa.arkivio.com),1712400512(domain admins@qa.arkivio.com),1712439802(ats_steph_testgroup@qa.arkivio.com)
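For comparison, this is how I would dump the ACL that is already on the file (a sketch using nfs4_getfacl from the same nfs4-acl-tools package):
$ nfs4_getfacl /nfs4-mnt-dir/11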
What is missing in my configuration?
xq10907
(95 rep)
Mar 6, 2018, 01:42 AM
• Last activity: Jun 11, 2025, 11:09 AM
0
votes
1
answers
3442
views
Restricting NFS share access to particular IPs or hosts and restricting others on suse
I have created a shared folder **/data01/share** on a SUSE GNU/Linux server and also made an entry for the host (client) machine in **/etc/exports**: **/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check)**. But I am getting this after **exportfs -a**:
exportfs: No options for /data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check) : suggest (sync) to avoid warning
exportfs: /etc/exports : Neither 'subtree_check' or 'no_subtree_check' specified for export ":/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check)".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: Failed to stat /data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check): No such file or directory
**cat /etc/os-release**
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"
**systemctl status nfs-server.service**
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/nfs-server.service.d
└─nfsserver.conf
/run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Wed 2019-07-24 02:32:03 EDT; 2h 34min ago
Main PID: 2562 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
CGroup: /system.slice/nfs-server.service
Jul 24 02:32:03 OPT001CORE0002 systemd: Starting NFS server and services...
Jul 24 02:32:03 OPT001CORE0002 systemd: Started NFS server and services.
**cat /etc/exports**
# See the exports(5) manpage for a description of the syntax of this file.
# This file contains a list of all directories that are to be exported to
# other computers via NFS (Network File System).
# This file used by rpc.nfsd and rpc.mountd. See their manpages for details
# on how make changes in this file effective.
/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check)
**ls -la /data01/share**
total 0
drwxrwxrwx 4 acmuser acmgrp 36 Jul 24 04:18 .
drwxr-xr-x 6 acmuser acmgrp 65 Jul 24 04:16 ..
drwxrwxrwx 3 acmuser acmgrp 18 Jul 24 04:18 support
drwxrwxrwx 5 acmuser acmgrp 45 Jul 24 04:17 upgrade
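For context, the end goal is an exports(5) entry restricted to specific client IPs, something along these lines (the second host below is hypothetical, just to show the pattern of allowing some clients and not others):
/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check) 10.241.200.54(ro,sync,no_subtree_check)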
Ravi Kumar
(21 rep)
Jul 24, 2019, 08:49 AM
• Last activity: Jun 6, 2025, 08:04 PM
2
votes
2
answers
91
views
Run NFSv4 w/RDMA on Rocky v9.5
I'm trying out RDMA on NFS and noticed that it does not seem to work with NFSv4:
[grant@host ~]$ sudo mount -t nfs -o rdma,proto=rdma,vers=4 10.99.99.98:/ifs/rdma-test /mnt/powerscale_rdma
mount.nfs: Protocol family not supported
[grant@host ~]$ sudo mount -t nfs -o rdma,proto=rdma,vers=3 10.99.99.98:/ifs/rdma-test /mnt/powerscale_rdma
nfsv3 loads up and runs just fine, but nfsv4 gets you mount.nfs: Protocol family not supported
. [This answer](https://unix.stackexchange.com/a/749996/240147) on [NFS4, insecure, port number, rdma contradiction help](https://unix.stackexchange.com/questions/749990/nfs4-insecure-port-number-rdma-contradiction-help) seems to indicate it could work, but it's not really clear how.
Is there a way to run NFSv4 with RDMA?
Grant Curell
(769 rep)
May 23, 2025, 01:20 PM
• Last activity: May 30, 2025, 07:39 PM
6
votes
3
answers
67758
views
Mount NFS - "operation not permitted" in Proxmox container
I'm trying to mount a simple NFS share, but it keeps saying "operation not permitted".
The NFS server has the following share.
/mnt/share_dir 192.168.7.101(ro,fsid=0,all_squash,async,no_subtree_check) 192.168.7.11(ro,fsid=0,all_squash,async,no_subtree_check)
The share seems to be active for both clients.
# exportfs -s
/mnt/share_dir 192.168.7.101(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
/mnt/share_dir 192.168.7.11(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
The client 192.168.7.101 can see the share.
$ sudo showmount -e 192.168.7.10
Export list for 192.168.7.10:
/mnt/share_dir 192.168.7.101
192.168.7.101's mount destination:
# ls -lah /mnt/share_dir/
total 8.0K
drwxr-xr-x 2 me me 4.0K Aug 28 19:21 .
drwxr-xr-x 3 root root 4.0K Aug 28 19:21 ..
When I try to mount the share, the client says "operation not permitted" with either
nfs
or nfs4
type.
$ sudo mount -vvv -t nfs 192.168.7.10:/mnt/share_dir /mnt/share_dir
mount.nfs: timeout set for Sun Aug 28 21:56:03 2022
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.7.10,clientaddr=192.168.7.101'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.7.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.7.10 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.7.10 prog 100005 vers 3 prot UDP port 46169
mount.nfs: mount(2): Operation not permitted
mount.nfs: Operation not permitted
I've set fsid=0
and insecure
to the export options, but it didn't work.
RPCInfo from the client's side:
# rpcinfo -p 192.168.7.10
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 59675 mountd
100005 1 tcp 37269 mountd
100005 2 udp 41354 mountd
100005 2 tcp 38377 mountd
100005 3 udp 46169 mountd
100005 3 tcp 39211 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049
100003 3 udp 2049 nfs
100227 3 udp 2049
100021 1 udp 46745 nlockmgr
100021 3 udp 46745 nlockmgr
100021 4 udp 46745 nlockmgr
100021 1 tcp 42571 nlockmgr
100021 3 tcp 42571 nlockmgr
100021 4 tcp 42571 nlockmgr
Using another client, *192.168.7.11*, I was able to mount that share with no issues.
I can not see any issue or misconfiguration, and could not find a fix anywhere.
There's no firewall in the way and both server and client are using Debian 11.
Any idea of what's going on?
markfree
(425 rep)
Aug 29, 2022, 01:32 AM
• Last activity: May 29, 2025, 04:46 PM
0
votes
1
answers
2079
views
NFS mount failed after upgrading server, no route to host, embedded, nfsvers=4
After hours of reading and a trial-and-error process, I'd like to explain my NFS mount problem and solution.
I was working for years on a virtual **debian 8.5** host system to develop software for multiple embedded devices, based on i.MX, Raspberry Pi, BeagleBoard and so on.
During the development process it's more than useful to mount the embedded root partition over nfs from the host machine. The configuration is normally straightforward.
# host configuration
# /etc/exports
/opt/tftpboot/rootfs *(rw,sync,insecure,no_subtree_check,no_root_squash)
- instead of a wildcard it's recommended to use specific IPs
- also remove the insecure option in a production environment
# client configuration
If the kernel supports the network file system, it's pretty easy to configure mounting of the root file system from the embedded/remote system.
# example part of the kernel command line
root=/dev/nfs nfsroot=10.0.102.247:/opt/tftpboot/rootfs,nolock
By the way, with the new version of **nfs-kernel-server** shipped with **debian 10.2** or **9.x**, it's impossible to mount the root file system. The boot process gets stuck, with no error log on the host device and no error log on the remote system.
# testing from shell
I've tried to boot the remote system from flash and mount the remote folder from our busybox shell, but failed.
$ mount -t nfs 10.0.102.247:/opt/tftpboot/rootfs /mnt/nfs
no route to host
Ping works fine ;-) Also the firewall on the host side was well configured.
After excluding any other problem, like networking issues, I changed the mount command to use NFSv4, and the mount command works as expected.
mount -t nfs -o nfsvers=4 10.0.102.247:/opt/tftpboot/rootfs /mnt/nfs
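For the NFS-root case, the equivalent should presumably be to force the version in the kernel command line as well (an untested sketch, assuming the kernel's nfsroot option parser accepts a vers= option; I have only verified the userspace mount above):
root=/dev/nfs nfsroot=10.0.102.247:/opt/tftpboot/rootfs,vers=4,nolock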
Thomas
(101 rep)
Jan 14, 2020, 01:54 PM
• Last activity: May 23, 2025, 12:01 PM
2
votes
1
answers
8040
views
"incorrect mount option was specified" when mounting krb5p nfs4 partition on Ubuntu
I have a NFS4 share running with krb5p.
I have no problems accessing it from CentOS clients, all that is required is:
yum install krb5-workstation
setup krb5 (edit krb5.conf, setup keytab)
systemctl enable nfs-secure.service && systemctl start nfs-secure.service
systemctl enable nfs-client.target && systemctl start nfs-client.target
mkdir /mnt/x
Add the following to fstab:
server.example.com:/srv/share/subdir /mnt/x nfs4 defaults,sec=krb5p,noexec,nosuid,_netdev,auto 0 0
This works great on CentOS; I've set up a dozen client hosts that way so far. However, on Ubuntu, I get:
mount.nfs4: an incorrect mount option was specified
I think the Ubuntu error has to do with nfs-secure.service; however, there seems to be no equivalent on Ubuntu that gets installed with the NFS client? (I am using Ubuntu 16.04.5 LTS.)
***UPDATE:***
I have tried:
systemctl enable rpc-gssd.service && systemctl start rpc-gssd.service
That launches OK:
# systemctl status rpc-gssd.service
● rpc-gssd.service - RPC security service for NFS client and server
Loaded: loaded (/lib/systemd/system/rpc-gssd.service; static; vendor preset: enabled)
Active: active (running) since Thu 2018-10-04 16:49:40 BST; 6min ago
Process: 51689 ExecStart=/usr/sbin/rpc.gssd $GSSDARGS (code=exited, status=0/SUCCESS)
Main PID: 51691 (rpc.gssd)
Tasks: 1
Memory: 516.0K
CPU: 13ms
CGroup: /system.slice/rpc-gssd.service
└─51691 /usr/sbin/rpc.gssd
But Ubuntu just hangs when trying to mount:
# mount -v -t nfs4 -o defaults,sec=krb5p,noexec,nosuid,_netdev,auto server.example.com:/srv/dir/example /mnt/example
mount.nfs4: timeout set for Thu Oct 4 16:54:40 2018
mount.nfs4: trying text-based options 'sec=krb5p,addr=10.10.10.10,clientaddr=10.10.10.9'
# NOTHING ELSE HAPPENS.....
Little Code
(491 rep)
Oct 4, 2018, 03:15 PM
• Last activity: May 20, 2025, 08:00 PM
0
votes
3
answers
226
views
nfs v4 export is adding additional options not specified in /etc/exports
This seems trivial but I've lost too much time searching and reading manuals.
RHEL 7.9 server.
I have a simple directory being exported on nfs v4, using
/etc/exports
, with specific options.
[ ~]# cat /etc/exports
/path/to/my/share/ remoteserver.host.com(rw,no_root_squash,sync,insecure)
[ ~]#
This is exported using exportfs -ra.
However, if I view the verbose export information, I'm seeing many more options, which are breaking the intended share operations.
Yes, I know I can be more explicit in /etc/exports, but I'm interested to understand where this is coming from, because it's a new issue that has crept up.
~]# exportfs -v
/path/to/my/share/
remoteserver.host.com(sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)
[ ~]#
You can see the additional options, and in my case, specifically hide
is creating trouble.
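(Being fully explicit would presumably look like the line below, with nohide and no_subtree_check spelled out, but my question is really about where the implicit defaults come from:)
/path/to/my/share/ remoteserver.host.com(rw,no_root_squash,sync,insecure,nohide,no_subtree_check)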
I've checked: /etc/nfsmount.conf
but it's fully commented out.
hsx
(1 rep)
Jan 15, 2025, 09:17 PM
• Last activity: May 2, 2025, 02:29 PM
2
votes
4
answers
3304
views
NFS4, insecure, port number, rdma contradiction help
- With RHEL 8.8 currently, and RHEL 9.x, the latest NFS version is 4.2.
- When NFS 4 was introduced, it did away with a few things in NFS3 one of which was multiple port numbers:
- *NFS4 **mandates** all traffic now exclusively TCP **uses the single well known port 2049**.*
- https://www.snia.org/sites/default/files/SNIA_An_Overview_of_NFSv4-3_0.pdf
- you can find more mostly reputable articles stating the same thing.
- I have confirmed this by having only TCP 2049 open in firewalld for NFS 4.1 in RHEL 7.9; it does not use port 111 or any other unless you change the default configurations in /etc/nfs.conf or /etc/sysconfig/nfs. And in fact, when I did get rdma working (over port 20049), I found that the rdma protocol specifically bypasses firewalld, an inherent aspect of why rdma saves cpu cycles and is faster, I suppose.
> The NFS **insecure** option in /etc/exports
sets the server to listen to a request from any port on the client. Changing it to 'secure' (default) makes sure that the server will listen to only requests originating from ports 1-1024 of the client. Thus an unauthorized user on the client is kept from starting an NFS dialogue. For reference : https://security.stackexchange.com/questions/246527/what-is-insecure-about-the-insecure-option-of-nfs-exports
The default is **secure** vs *insecure* when doing an NFS4 export if neither is mentioned in /etc/exports
.
With **security rules** it is oftentimes stated *The NFS server must not have the insecure file locking option enabled.*.
First, with the /etc/exports secure option in play by default, the claim that the server *will only operate on secure ports less than 1024* seems to be completely untrue, since NFS4 runs on port 2049. The number 2049 is greater than 1024... what am I missing?
With regards to RDMA, which by convention happens on port 20049, there seems to be a little-known fact that one needs to **explicitly** state the **insecure** option in /etc/exports if a mount -o rdma is to be used; otherwise the mount always happens as proto=tcp and not proto=rdma, with no indication why.
I did validate that, using MLNX_OFED_LINUX-23.04-1.1.3.0-rhel8.8-x86_64.iso installed in place of the Red Hat InfiniBand Support packages, a mount -o rdma,port=1023 does work, with mount on the client side showing proto=rdma.
**However**, one must also do (with MLNX only?) an echo rdma 20049 > /proc/fs/nfsd/portlist, or in the case of a secure export an *echo rdma 1023*. Does anyone know how/why these values are not in /proc/fs/nfsd/portlist in the first place and why I must add them manually? **And then what is the correct way to put those numbers there**, so that after boot my /etc/fstab nfs mounting of my data folder as rdma happens successfully? The MLNX instructional pdf falls short.
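The closest I have found to a persistent way of getting that port registered (untested on the MLNX stack, so treat it as a sketch of an assumption rather than a verified fix) is the rdma settings in /etc/nfs.conf on the server, so rpc.nfsd opens the RDMA listener at boot:
[nfsd]
rdma=y
rdma-port=20049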
I have been banging my head against the wall getting RDMA to work; there seem to be a lot of shortcomings with NFS overall, and I have paid-for cluster manager software that has RDMA placeholders for configuration, but all mounts are always proto=tcp. So if anyone can provide any information on anything described above, it would be helpful, and I will +1 any answer.
**Also:** I will end up doing /etc/exports with secure and choosing some port number 1023 or below to satisfy security rules. How do I choose a proper number in that range? As ron, a nobody, my understanding was that I should never use port numbers below 1000 or 1024 for stuff I set up?
**update:** it appears that the /etc/exports parameter of *secure* or *insecure* is inconsequential. What matters is having rdma 20049 in /proc/fs/nfsd/portlist on the nfs server. With that (or any number) in place, it appears to work with the *secure* export.
ron
(8647 rep)
Jun 27, 2023, 02:17 PM
• Last activity: Apr 19, 2025, 07:12 PM
0
votes
0
answers
43
views
Trouble changing retrans on NFS mount on Debian 12
I have mounted an exported filesystem from our NFS server (kernel NFS server) using fstab:
10.0.0.29:/admin /admin nfs sec=sys,rw,sync,soft 0 0
Because we have some curious errors in an application, I want to test some nfs options (for now retrans and timeo).
I checked the mounted filesystem using nfsstat -m
/admin from 10.0.0.29:/admin
Flags: rw,sync,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.31,local_lock=none,addr=10.0.0.29
Changed my /etc/fstab
10.0.0.29:/admin /admin nfs sec=sys,rw,sync,soft,retrans=3 0 0
reread the fstab in systemd
systemctl daemon-reload
remount the filesystem
mount -o remount /admin
I got an error
mount.nfs: an incorrect mount option was specified
When I umount /admin and mount /admin again, I get no error, but nfsstat shows that retrans was not changed:
nfsstat -m
/admin from 10.0.0.29:/admin
Flags: rw,sync,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.31,local_lock=none,addr=10.0.0.29
What have I missed here?
Installed packages:
apt list --installed | grep nfs
libnfsidmap1/stable,now 1:2.6.2-4 amd64 [Installiert,automatisch]
nfs-common/stable,now 1:2.6.2-4 amd64 [installiert]
I have done some tests on a second machine. It looks like this occurs if more than one filesystem is mounted over nfs (maybe from the same server?!?); in that case, you cannot change the retrans option on only one entry. Both entries need the same retrans option, and even then it looks like you cannot change it with -o remount; you need to umount and mount again after changing it.
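For reference, the check I repeat after each attempt is simply to mount by hand with the options spelled out and then re-read the effective flags (a sketch of that loop):
umount /admin
mount -t nfs -o sec=sys,rw,sync,soft,retrans=3 10.0.0.29:/admin /admin
nfsstat -m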
Thomas
(1 rep)
Mar 26, 2025, 10:51 AM
• Last activity: Mar 26, 2025, 01:10 PM
2
votes
0
answers
41
views
Wrong attributes bitmask in READDIR requests on NFSv4.1
I'm struggling with the following problem.
I have an NFS v4.1 mount, where I have a directory with a couple of thousand files. I'm trying to list their names and types. Even with a minimal example program taken from the getdents man page, I see strange behaviour from the NFS client:
- first few READDIR
RPCs have an attribute bitmask set to what you would expect, i.e. to all the attributes that the server supports
- after the second call to getdents
(or readdir
, doesn't matter), the NFS READDIR
RPCs change - attributes bitmask is set to just RDAttr_Error
and FileId
Can someone explain what could cause the change, and why ls seems not to cause it?
dmk
(21 rep)
Feb 17, 2025, 05:53 PM
• Last activity: Feb 17, 2025, 05:55 PM
2
votes
2
answers
1511
views
Is there any reason to use NFS 3 over version 4.2?
Consider a work environment with 100gbps type Infiniband switches, running on enterprise class servers having 512 GB of RAM and larger, with the operating system being RHEL 8.7 or very close to it.
When setting up NFS between servers, is the latest NFS vers=4.2 the best choice? Is there any case in which using NFSv3 with proto=udp is better than NFS vers=4.2 with proto=tcp/udp/rdma?
I am particularly interested in this regarding many cluster nodes mounting a
/data
folder from a head node for read/write; what is the logical nfs choice if the goal is performance?
I have found this, dated 2008, which is presumably for vers=4.0 and not 4.2, that makes the statement *there is no clear performance advantage to moving from NFSv3 to NFSv4*.
https://www.linux.com/news/benchmarking-nfsv3-vs-nfsv4-file-operation-performance/
also
https://www.techtarget.com/searchenterprisedesktop/definition/Network-File-System
*Some reviews of NFSv4 and NFSv4.1 suggest that these versions have limited bandwidth and scalability and that NFS slows down during heavy network traffic. The bandwidth and scalability issue is reported to have improved with NFSv4.2. This was last updated in April 2022*
ron
(8647 rep)
Feb 14, 2023, 05:01 PM
• Last activity: Feb 14, 2025, 05:29 AM
1
votes
3
answers
2691
views
NFS4 and remote clients: how to show info?
On Linux with nfsv3, the command showmount -d shows the remote clients that mount directories on my nfs server. With nfs4, with a directory remotely mounted, the showmount command displays nothing. How can I know which remote clients are using the nfs server on my local machine?
elbarna
(13690 rep)
Nov 1, 2022, 02:31 PM
• Last activity: Feb 13, 2025, 12:37 PM
0
votes
1
answers
399
views
NFS v4.2 tuning
https://www.youtube.com/watch?v=JXASmxGrHvY
at 5:30 the statement is made
> if you get NFS tuned just right it is incredibly fast for **ultra small file transfers**...
at 6:05
> I've heard of 4.0GB/sec using a sequential read... but you have to have all the infrastructure tuned just right.
What, where, and how do I tune NFS v4.2 in **RHEL-8.10 or later** to achieve what is claimed above? Is this claim true, given that this youtube vid seems to have been made 3 years ago?
Is there any good *NFS tuning* documentation as it pertains to NFS v4.2 in RHEL 8/9 or equivalent today?
*Is v4.2 the latest version of NFS? Is there any newer version of NFS proposed on the horizon?*
**are there any better settings than the default in
/etc/nfs.conf
and /etc/nfsmount.conf
?**
**if I can place a bounty on this I will --> what is the max transfer speed in GB/sec that should be had, in RHEL-8.10 or later, over 100gbps infiniband, on NFS v4.2 (assuming RDMA?) and all the "tuned" options ??** The only *tuning* I am aware of is putting rdma
into effect over infiniband; if someone else knows better/more let me know.
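For concreteness, the only tuning step I currently apply is an RDMA mount of roughly this form (the hostname is hypothetical; 20049 is the conventional NFS/RDMA port):
mount -t nfs -o vers=4.2,proto=rdma,port=20049 headnode:/data /data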
ron
(8647 rep)
Dec 17, 2024, 05:25 PM
• Last activity: Dec 17, 2024, 08:38 PM
3
votes
2
answers
8204
views
is FSID really needed for NFS export?
reference:
- https://unix.stackexchange.com/questions/427597/implications-of-using-nfsv4-fsid-0-and-exporting-the-nfs-root-to-entire-lan-or
- https://earlruby.org/2022/01/setting-up-nfs-fsid-for-multiple-networks/
- *When I’m setting up NFS servers I **like** to assign each exported volume with a unique FSID...*
- *If you use a different FSID for one of these entries, or if you declare the FSID for one subnet and not the other, your NFS server will slowly and mysteriously grind to a halt, sometimes over hours and sometimes over days.*
- seems like explicitly using
fsid
in /etc/exports
can cause problems?
> *from the earlruby link above:*
>
> If you don’t use FSID, there is a chance that when you reboot your NFS server the way that the server identifies the volume will change between reboots, and your NFS clients will hang with “stale file handle” errors. I say “a chance” because it depends on how your server stores volumes, what version of NFS it’s running, and if it’s a fault tolerant / high availability server, how that HA ability was implemented. Using a unique FSID ensures that the volume that the server presents is always identified the same way, and it makes it easier for NFS clients to reconnect and resume operations after an NFS server gets rebooted.
- https://linux.die.net/man/5/exports
In reading the man page and other resources, I do not understand the real need for specifying an explicit fsid= in /etc/exports. Can someone explain to me when and why you would do such a thing, in RHEL 8.9 or later with NFS vers=4.2, especially when I personally have never explicitly used fsid= and have never had a problem?
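For concreteness, the kind of entry I mean is something like this (a hypothetical example, not taken from a real server):
/data 192.168.1.0/24(rw,no_subtree_check,fsid=1)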
When one explicitly states fsid=
in /etc/exports
, how is that identifier chosen? How do you ensure it is unique?
If I am only nfs exporting folders that are local mounts such as /dev/sdb1
and /dev/sdc1
and I mount them via UUID in my /etc/fstab
and they are XFS file systems does that basically guarantee that I do not have to use fsid=
in /etc/exports
? What if any of those local mounted filesystems are NTFS-3g?
What file systems are problematic per the description in the nfs man page? *Not all filesystems are stored on devices*... what does this mean?
ron
(8647 rep)
Dec 6, 2023, 09:35 PM
• Last activity: Dec 11, 2024, 04:08 AM
0
votes
1
answers
33
views
Can't set NFS permissions after switching networks
(EndeavourOS host, running nfsv4-server.service)
Got a QEMU/KVM VM network that I'm trying to switch to VMware. For the most part, this has been a simple process, if not a time-consuming one. The only real hangup I'm having is with the NFSv4 share permissions.
Here is my working
/etc/exports
from the QEMU setup:
/media/host/shared-files 192.168.122.0/24(rw,sync,insecure,no_root_squash,no_subtree_check,crossmnt,fsid=0)
When I switched to VMware, the NAT network changed to 172.16.105.0, so I figured that by simply making that change in /etc/exports and re-exporting, I'd be good to go. However, the only way I can get this to work is to remove the IP network/subnet mask restrictions.
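To be concrete, the edited line looked like this (only the network part changed; I am assuming a /24 mask, which matches the old setup):
/media/host/shared-files 172.16.105.0/24(rw,sync,insecure,no_root_squash,no_subtree_check,crossmnt,fsid=0)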
This is not a firewall issue, as I can see/use the share just fine from the guests. I'm not as proficient with TCP/IP networks as I'd like, so I'm not sure what the problem is here.
Any ideas?
ajgringo619
(3584 rep)
Nov 18, 2024, 02:56 AM
• Last activity: Nov 18, 2024, 06:32 PM
1
votes
0
answers
79
views
preventing Linux session unresponsiveness during heavy NFS write
Given this scenario:
- on a 100gbps infiniband LAN (the same happens over the 1gbps LAN),
- a RHEL-8.10 host controller server having a SCSI cable connection to an enterprise type data storage unit that provides a 100TB /data folder, having an XFS file system and an "ADAPT" raid protection choice over 50+ disks (versus RAID-5, 0/1, etc), and where this is the nfs_server as 192.168.1.1,
- an NFS client (i.e. 192.168.1.2), also a RHEL-8.10 server, mounts
192.168.1.1:/data /data
with proto=rdma
and as NFS version 4.2
- a scp
from nfs client to nfs server writing a single ~100gb test.tar file achieves 1.0 GB/sec speeds,
- typically a copy over NFS runs around 400MB/sec or better,
The problem is that when a user does a basic copy such as cp /home/ron/bigdata.tar /data/run1/, where that data file is somewhere between 100-500 GB in size, all nfs clients mounting the /data folder from the nfs server become unresponsive with regards to the GNOME desktop and even any PuTTY ssh terminal connection. If you try to do various cd /data/run2/ or cd /data/anywhere on an nfs client, and **especially** if you hit tab to use bash completion to traverse the folder structure, your session hangs. Once that copy, which seems to be using 100% of the resources of the nfs_server host controller to the 100 TB storage array, finishes, things seemingly go back to normal and users get acceptable responsiveness in their existing sessions. But during that write, their login sessions become unresponsive, and that is unacceptable.
Is there a way to correct this? Is there a specific tuned
profile to choose? I would rather have the copy (write) process be lower priority and give priority to whatever processes are involved in making a pleasant and responsive user experience when logged in and simply traversing the folder paths under /data which is nfs mounted.
ron
(8647 rep)
Sep 19, 2024, 06:17 PM
• Last activity: Sep 19, 2024, 07:54 PM
0
votes
0
answers
51
views
What explains incomplete nfs4 foreground mounting?
I'm trying to troubleshoot an NFSv4 mounting issue on Rocky 9.4.
My
mount
command exits with status 0 and without printing any error message, but the filesystem is not successfully mounted. In particular, it is not listed by the df
command, and the mount point directory is empty (whereas the exported directory is not). On the other hand, the mount appears in /proc/mounts
, and if I attempt to repeat the same mount command in verbose mode, the last message it emits is "mount.nfs: mount(2): Device or resource busy". That appears to be associated with the mount point directory, not the remote filesystem, because I can try again (once) without getting that message if I use a different mount point.
Those "Device or resource busy" conditions persist as long as I've been willing to wait, at least tens of minutes, even though I'm performing a foreground mount, and I've set options that I would expect to make the mount attempt fail relatively quickly (and the mount
command does terminate quickly). For example (as root):
mount -v -o rw,fg,exec,nosuid,nodev,vers=4,retry=1,timeo=50 my.server.org:/remote/filesystem /mnt/tmp
I don't see anything relevant in the system log.
Also, I can successfully mount a different filesystem exported by the same server. If I'm grasping at straws, it may be relevant that krb5p security is being negotiated (both for the mount that fails and for the one that succeeds), which is as I expected.
**What are some plausible explanations for this problem?**
**What options do I have for further troubleshooting?**
John Bollinger
(632 rep)
Sep 13, 2024, 09:05 PM
• Last activity: Sep 13, 2024, 09:16 PM