Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
3
answers
96
views
Unable to mount NFS share from fstab, cmd line works fine
I have an NFS server installed on my PC running KDE Neon (24.04). I installed it so that I could share a folder with a Windows 11 tablet. It worked well, and I was able to create a persistent mount on the tablet. I have removed Windows from the tablet in favor of KDE Neon. Now I can mount the share from the command line without a problem, but I am not able to get it to mount from fstab. Can anyone suggest why one works and the other doesn't?
CMD mount...
sudo mount -t nfs 10.0.0.239:/srv/nfs/share /media/media0
In my fstab I have this...
/media/media0 /srv/nfs4/media/media0 none bind 0 0
In my exports file I have...
/srv/nfs4 10.0.0.0/255.255.255.0(rw,sync,fsid=0,crossmnt,no_subtree_check)
Can anyone help me figure out why the mount works from the command line, but not from fstab?
Thanks,
Rick
Thanks for the suggestions, everyone. Unfortunately I have yet to find a complete solution.
I've made the suggested changes to my /etc/exports file and the /etc/fstab file recommended in the last reply. I am still not able to mount the shares at boot. I can mount them manually, and I can mount them with $ sudo mount -a. I thought the issue was timing and my impatience, but after a reboot and waiting several minutes the share didn't mount unless I issued sudo mount -a. What am I missing here?
Rick
OK, the instructions say that to reply to a post I need to edit my post, so here goes...
Adding _netdev to the NFS mount commands in my fstab did not work. In fact, adding it caused the network to not start, but it had no effect on nfs-server.service, at least not that I could see.
Thanks, Rick
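For reference, a client-side NFS entry in /etc/fstab typically looks like the sketch below; the options mirror the command-line mount from the question and are assumptions, not a confirmed fix:
10.0.0.239:/srv/nfs/share /media/media0 nfs defaults,_netdev,x-systemd.automount 0 0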
Rick Knight
(19 rep)
Jul 11, 2025, 06:40 PM
• Last activity: Jul 25, 2025, 04:20 AM
7
votes
1
answers
10280
views
"Stale file handle" on certain directories occurring immediately after NFS mount; no file handles open
For some time I've been experiencing a strange issue with NFS where a seemingly random subset of directories (always the same ones) under / consistently show up with stale file handles immediately after NFS mount.
I've been able to correct the problem by explicitly exporting the seemingly-random set of problem directories, but I'd like to see if I can fix things more completely so I don't have to occasionally add random directories to the export table.
Below, I mount a filesystem, show that there are no open file handles, run ls, and rerun lsof. Empty lines added between commands for clarity:
# mount -t nfs -o vers=4,noac,hard,intr 192.168.0.2:/ /nfs -vvv
mount.nfs: trying text-based options 'vers=4,noac,hard,intr,addr=192.168.0.2,clientaddr=192.168.0.4'
192.168.0.2:/ on /nfs type nfs (rw,vers=4,noac,hard,intr)
# lsof | grep /nfs
# ls -lh /nfs
ls: cannot access /nfs/usr: Stale file handle
ls: cannot access /nfs/root: Stale file handle
ls: cannot access /nfs/etc: Stale file handle
ls: cannot access /nfs/home: Stale file handle
lrwxrwxrwx 1 root root 7 Mar 27 2017 bin -> usr/bin
drwxr-xr-x 6 root root 16K Jan 1 1970 boot
drwxr-xr-x 438 i336 users 36K Feb 28 12:12 data
drwxr-xr-x 2 root root 4.0K Mar 14 2016 dev
d????????? ? ? ? ? ? etc
d????????? ? ? ? ? ? home
lrwxrwxrwx 1 root root 7 Mar 27 2017 lib -> usr/lib
lrwxrwxrwx 1 root root 7 Mar 27 2017 lib64 -> usr/lib
drwxr-xr-x 15 root root 4.0K Oct 15 15:51 mnt
drwxr-xr-x 2 root root 4.0K Aug 9 2017 nfs
drwxr-xr-x 14 root root 4.0K Jan 28 17:00 opt
dr-xr-xr-x 2 root root 4.0K Mar 14 2016 proc
d????????? ? ? ? ? ? root
drwxr-xr-x 2 root root 4.0K Mar 14 2016 run
lrwxrwxrwx 1 root root 7 Mar 27 2017 sbin -> usr/bin
drwxr-xr-x 6 root root 4.0K Jun 22 2016 srv
dr-xr-xr-x 2 root root 4.0K Mar 14 2016 sys
drwxrwxrwt 2 root root 4.0K Dec 10 2016 tmp
d????????? ? ? ? ? ? usr
drwxr-xr-x 15 root root 4.0K May 24 2017 var
# lsof | grep /nfs
#
The subdirectories in question are not mount points; they seem completely normal:
$ ls -dlh /usr /root /etc /home
drwxr-xr-x 123 root root 12K Mar 3 13:34 /etc
drwxr-xr-x 7 root root 4.0K Jul 28 2017 /home
drwxrwxrwx 32 root root 4.0K Mar 3 13:55 /root
drwxr-xr-x 15 root root 4.0K Feb 24 17:48 /usr
There are no related errors in syslog about these directories. The only info that does show up mentions a different set of directories:
... rpc.mountd: Cannot export /proc, possibly unsupported filesystem or fsid= required
... rpc.mountd: Cannot export /dev, possibly unsupported filesystem or fsid= required
... rpc.mountd: Cannot export /sys, possibly unsupported filesystem or fsid= required
... rpc.mountd: Cannot export /tmp, possibly unsupported filesystem or fsid= required
... rpc.mountd: Cannot export /run, possibly unsupported filesystem or fsid= required
Here's what /etc/exports currently looks like:
/ *(rw,subtree_check,no_root_squash,nohide,crossmnt,fsid=0,sync)
The server side is running Arch Linux and is currently on kernel 4.10.3.
The client-side is Slackware 14.1 with kernel 4.1.6.
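For reference, the explicit-export workaround the poster mentions would look roughly like this in /etc/exports; this is a sketch, and the distinct fsid values are assumptions:
/      *(rw,fsid=0,crossmnt,no_subtree_check,sync)
/usr   *(rw,fsid=1,no_subtree_check,sync)
/etc   *(rw,fsid=2,no_subtree_check,sync)
/home  *(rw,fsid=3,no_subtree_check,sync)
/root  *(rw,fsid=4,no_subtree_check,sync)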
i336_
(1077 rep)
Mar 3, 2018, 08:18 AM
• Last activity: Jul 22, 2025, 02:09 PM
1
votes
1
answers
1927
views
Mounting nested ZFS filesystems exported via NFS
I have a Linux (Ubuntu) server with a ZFS pool containing nested filesystems.
E.g.:
zfs_pool/root_fs/fs1
zfs_pool/root_fs/fs2
zfs_pool/root_fs/fs3
I have enabled NFS sharing on the root filesystem (via zfs, not by editing /etc/exports). Nested filesystems inherit this property.
NAME PROPERTY VALUE SOURCE
zfs_pool/root_fs sharenfs rw=192.168.1.0/24,root_squash,async local
NAME PROPERTY VALUE SOURCE
zfs_pool/root_fs/fs1 sharenfs rw=192.168.1.0/24,root_squash,async inherited from zfs_pool/root_fs
On the client machines (linux, mostly ubuntu), the only filesystem I explicitly mount is the root filesystem.
mount -t nfs zfsserver:/zfs_pool/root_fs /root_fs_mountpoint
Nested filesystems are mounted automatically when they are accessed. I didn't need to configure anything to make this work.
This is great, but I'd like to know what is providing this feature.
Is it ZFS? Is it NFS? Is it something else on the client side (something like autofs, which isn't even installed)?
I'd like to change the timeout after which nested filesystems are unmounted, but I don't even know which configuration to edit and which documentation to read.
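For reference, one way to see which side is doing the extra mounting is to watch the client's mount table before and after touching a nested filesystem; a sketch using the paths from the question:
mount -t nfs zfsserver:/zfs_pool/root_fs /root_fs_mountpoint
grep root_fs /proc/mounts   # only the explicit mount is listed
ls /root_fs_mountpoint/fs1  # accessing a nested filesystem triggers the automatic submount
grep root_fs /proc/mounts   # a second NFS entry for fs1 now appears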
lgpasquale
(291 rep)
Oct 15, 2018, 09:22 AM
• Last activity: Jul 21, 2025, 04:08 PM
1
votes
2
answers
3767
views
What size blocks does NFS server write to disk?
I am trying to calculate required IOPS for a fileserver, and to do this I need to know the typical block size. I know that NFS clients can use rsize and wsize to specify how much data is sent over the network. Does the NFS server also use these same values to write the data to the disk, or is there some other way to configure that? I haven't found anything in the man pages.
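For context, rsize and wsize are negotiated per mount on the client side; a sketch of how they are specified (the server name, export and sizes are placeholders):
mount -t nfs -o rsize=1048576,wsize=1048576 server:/export /mnt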
Elliott B
(575 rep)
Sep 19, 2019, 07:15 PM
• Last activity: Jul 18, 2025, 08:03 PM
0
votes
1
answers
35
views
How do I share an Ubuntu Ext4 partition in a VMWare VM, with another VM?
I am running VMware Workstation 16 Pro on my W11 PC. I have several VMs, including an Ubuntu 22.04.4 LTS VM and a Solaris 10 VM (sadly still needed for legacy projects). I need to allow the Solaris VM access (full RW permissions) to my "/" partition (ext4, /dev/sda3) in the Ubuntu VM. First, is this even possible? If so, what are the steps necessary for this to happen?
On the Ubuntu VM, I went to Settings -> Sharing, enabled Sharing, then went into Media Sharing, shared the folder /, and enabled Wired connection 1. In the Solaris VM I created the folder /etc/nfs/ub22 (as root), su'd to root, and entered
mount -F nfs 192.168.18.107:/ /etc/nfs/ub22
(192.168.18.107 is the IP of the Ubuntu VM), but I get the error
nfs mount: 192.168.18.107: : RPC: Rpcbind failure - RPC: Unable to receive
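For reference, an NFS export on the Ubuntu side is normally provided by the nfs-kernel-server package rather than the desktop Sharing panel; a sketch, with the export path and client network as assumptions:
sudo apt install nfs-kernel-server
echo '/ 192.168.18.0/24(rw,no_root_squash,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo systemctl restart nfs-kernel-server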
Brie
(111 rep)
Jul 18, 2025, 03:10 PM
• Last activity: Jul 18, 2025, 06:58 PM
0
votes
1
answers
2429
views
Why are directories under /mnt not visible when mounting filesystem with NFS?
I set up an NFS share on my NFS server with /etc/exports containing
/ *(rw,no_root_squash,no_subtree_check)
Then I do exportfs -a to activate the share and restart the NFS server.
I mount the share with autofs on the client machine, with /etc/auto.nfs containing
foo -fstype=nfs4,soft,rw,noatime,allow_other server.tld:/
My auto.master contains
/mnt/nfs /etc/auto.nfs --timeout=30 --ghost
I restart autofs (systemctl restart autofs.service).
Then I see all directories from the server. But when I try to navigate to the mounted server disks under /mnt/mounteddiskonserver I can't see anything anymore. No files, no directories, no write permission through the Nemo file browser on the client machine.
I can go to /home/user on the server and see and delete all my files on the server that have the same permissions as /mnt/mounteddiskonserver/fileshere.
When I set up the NFS server to share /mnt/mounteddiskonserver specifically, with /etc/exports containing
/mnt/mounteddiskonserver *(rw,no_root_squash,no_subtree_check)
I can see all files and directories under /mnt/mounteddiskonserver, and I can read and write.
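For reference, nested mount points under an exported root are usually made visible to clients with the crossmnt export option; a sketch, not a confirmed fix for this setup:
/ *(rw,crossmnt,no_root_squash,no_subtree_check)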
Michiel Bruijn
(1 rep)
Feb 13, 2022, 05:01 PM
• Last activity: Jul 12, 2025, 07:06 PM
3
votes
2
answers
13322
views
mount.nfs: mount system call failed
I am trying to mount HDFS on my local machine running Ubuntu using the following command:
sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.170.52:/ /mnt/hdfs_mount/
But I am getting this error:
mount.nfs: mount system call failed
Output for rpcinfo -p 192.168.170.52 is
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 48435 status
100024 1 tcp 54261 status
100005 1 udp 4242 mountd
100005 2 udp 4242 mountd
100005 3 udp 4242 mountd
100005 1 tcp 4242 mountd
100005 2 tcp 4242 mountd
100005 3 tcp 4242 mountd
100003 3 tcp 2049 nfs
Output for showmount -e 192.168.170.52 is
Export list for 192.168.170.52:
/ *
I also tried adding
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
in my core-site.xml file located in /etc/hadoop/conf.pseudo. But it did not work.
Please help me with this.
Bhavya Jain
(341 rep)
Jun 28, 2017, 05:38 AM
• Last activity: Jul 9, 2025, 01:05 AM
1
votes
1
answers
1919
views
NFS client source port on reconnect
I am using an NFS client to connect to an NFS cluster. I have noticed that the default behaviour when the server is unavailable is to retry the TCP connections from the same source TCP port, which I have confirmed by tcpdump (many SYN packets, different seq numbers, but same source port). By default NFS uses privileged source ports (below 1024). When mounting with noresvport, each reconnect attempt is instead made from a different TCP port.
The NFS client is SLES12 SP4; the same behaviour also occurs on Oracle Linux 7.7.
The NFS server is an HAE cluster based on SLES12 SP4.
Is this behaviour documented somewhere? Why does it use the same port every time by default, but not when using noresvport?
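For reference, the option in question is passed like any other NFS mount option; a sketch with placeholder server and paths:
mount -t nfs -o noresvport server:/export /mnt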
Marki555
(2128 rep)
Dec 12, 2019, 01:25 PM
• Last activity: Jul 2, 2025, 04:15 AM
1
votes
1
answers
2852
views
nfs export to ipv6 client
My NFS exports are accessible via IPv4 to a number of hosts on my LAN.
I want to make these exports available via IPv6 so that I can mount them to my laptop when I'm away. When I'm away I can access these LAN hosts via their IPv6 address and from them can access my laptop via its IPv6 address.
So I assume that the blocking issue is in the NFS configuration somewhere.
So here is a line from the server's /etc/exports:
/export/test 2001:123:a:b:c:d:e:f(rw,nohide,insecure,no_subtree_check,async)
where that IPv6 address is the laptop's network device, through which I can ping from the NFS server, open an SSH session...
And here is the corresponding line from the client's /etc/fstab. For the IPv6 address I have tried [address], '[address]', and simply address.
But in all cases, attempting to mount returns the error
mount.nfs: access denied by server while mounting address:/export/test
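For reference, nfs(5) expects a raw IPv6 server address in an fstab entry to be enclosed in square brackets; a sketch in which the server address and mount point are hypothetical:
[2001:db8::1]:/export/test /mnt/test nfs defaults,_netdev 0 0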
Stephen Boston
(2526 rep)
Jul 31, 2019, 09:13 PM
• Last activity: Jul 1, 2025, 05:03 AM
2
votes
2
answers
6929
views
Many troubles trying to mount QNAP NFS server on Ubuntu mount point
I have a QNAP TS-251 NAS with a pair of 4 GB drives set up as RAID 1, running as a server, mainly for backups. It has the default operating system, QTS 4.2.0 (2016/01/19), which I think is current. It's a customized flavor of Linux with a GUI interface that I run from my Windows desktop. It has IP number 169.254.100.101 on its Ethernet #1 port and 169.254.100.102 on its Ethernet #2 port.
As a client, I have an old Dell (Core 2 Duo T8100) laptop running Ubuntu 14.04 LTS. It's connected to the QNAP's #2 Ethernet port, and its IP number is 169.254.100.99.
_Note:_ From here I'll include background details that can probably be skipped (but answer "Why would you do this?" questions I anticipate) in [brackets].
There's also a Windows desktop connected to the QNAP's #1 Ethernet port, which is involved with this only to run the QNAP's GUI. I can also use PuTTY from it to the QNAP if I need a command line
[The QNAP comes with several default shared folders: Download, Multimedia, Public, Recordings, Web, homes.]
I created a shared folder named CrashPlan on the QNAP server, and a mount point on the Ubuntu client named /mnt/QNAP-CrashPlan. I installed NFS client packages on the client with
sudo apt-get install portmap nfs-client
[and installed autofs with sudo apt-get install autofs in an unsuccessful attempt to diagnose problems].
Following advice in this question, I gave NFS access rights, host/IP/network 169.254.*, permission read/write, and squash option NO_ROOT_SQUASH. The anonymous boxes remained grayed out.
So, in the moment of truth, from the client, I tried
sudo mount 169.254.100.102:/CrashPlan /mnt/QNAP-CrashPlan
and
sudo mount -t nfs 169.254.100.102:/CrashPlan /mnt/QNAP-CrashPlan
and got mount.nfs: Connection timed out in both cases.
In an attempt to diagnose the problem, I tried showmount -e 169.254.100.102, but it replied
clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)
I've searched a lot and tried a lot, but I haven't found any further paths to diagnose the problem. Any ideas?
I will edit in more details as needed. Also, this might deserve a "qnap" tag, but I don't have create tag permission.
_[XY problem details:_ The reason I named the shared folder that I created CrashPlan is that I tried to run the QNAP as a CrashPlan server. I wasn't able to get that to work except by mounting the CrashPlan folder as a Windows drive, using net use to mount it, and running a separate user-mode client instance of CrashPlan on Windows, because Windows doesn't allow services access to net use drives. Running the CrashPlan server on my old Ubuntu laptop with the QNAP mounted through NFS was described as a configuration that avoided that issue.]
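For reference, a quick way to check from the client whether the QNAP's RPC services are reachable at all; a sketch using the address from the question:
rpcinfo -p 169.254.100.102    # should list portmapper, mountd and nfs
showmount -e 169.254.100.102  # should list the exported folders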
Steve
(176 rep)
Jun 30, 2016, 12:11 AM
• Last activity: Jun 29, 2025, 02:01 PM
2
votes
2
answers
6809
views
Permissions on a mounted NFS share
I am trying to make a PHP script on a webserver write into a folder /data on a fileserver.
Apache 2.2, PHP 5.x. It's just a test configuration, but I'd like to understand the thing somehow, as I am not very experienced regarding permissions when it comes to webservers.
I am sharing the folder /data on the fileserver by adding
/data 192.168.20.6(rw,sync,no_subtree_check)
Mount the folder by
sudo mount 192.168.20.5:/data /mnt/data
Create a link to the webroot (does that make sense at all?)
sudo ln -s /mnt/data /webroot/site1/share
Then I get this:
Warning: fopen(/webroot/site1/share/data/uploads/Fotoraum/Original/Bluehend/test.txt): failed to open stream: Permission denied
Where and how do I have to adjust permissions in a sane manner to allow the script to write into /data and its subfolders?
Thanks a lot!
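For reference, one common approach is to make the upload directory writable by the web server's user on the file server; a sketch assuming Apache runs as www-data and that the same numeric UID/GID applies across the NFS mount:
# on the fileserver
sudo chgrp -R www-data /data/uploads
sudo chmod -R g+rwX /data/uploads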
mammal
(21 rep)
Sep 9, 2014, 09:20 PM
• Last activity: Jun 29, 2025, 11:04 AM
0
votes
0
answers
45
views
OpenBSD: group access on NFS share?
I'm exporting a filesystem from Ubuntu 22.04.5 via /etc/exports:
/mnt/exportTest/ 192.168.129.8(rw,async,no_root_squash)
and mount this filesystem on OpenBSD 7.7 in /etc/fstab:
192.168.130.2:/mnt/exportTest/ /mnt/exportTest nfs rw,tcp,soft 0 0
the mount works well and I get:
root@bsdHost> ~# mount
...
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
Within the filesystem there is a folder ssl-cert:
root@ubuntuHost:/mnt/exportTest# ls -aln
total 11
drwxr-xr-x 3 0 0 3 Jun 23 08:09 .
drwxr-xr-x 10 0 0 10 Jun 23 08:07 ..
drwxrwx--- 2 0 114 2 Jun 23 08:09 ssl-cert
the folder looks like this on the OpenBSD host:
root@bsdHost> /mnt/exportTest# ls -aln
total 8
drwxr-xr-x 3 0 0 3 Jun 23 08:09 .
drwxr-xr-x 5 0 0 512 Jun 23 08:15 ..
drwxrwx--- 2 0 114 2 Jun 23 08:09 ssl-cert
There is a user _openldap on the OpenBSD host who is a member of the group 114:
root@bsdHost> /mnt/exportTest# id -G _openldap
544 114
In my opinion, this user should be able to read the folder /mnt/exportTest/ssl-cert/, but:
root@bsdHost> ~# su - _openldap
No home directory /nonexistent!
Logging in with home = "/".
bsdHost$ ls /mnt/exportTest/ssl-cert/
ls: /mnt/exportTest/ssl-cert/: Permission denied
What also confuses me is that the filesystem can be mounted multiple times:
root@bsdHost> ~# mount /mnt/exportTest/
nfs server 192.168.130.2:/mnt/exportTest/: not responding
nfs server 192.168.130.2:/mnt/exportTest/: is alive again
root@bsdHost> ~# mount /mnt/exportTest/
root@bsdHost> ~# mount /mnt/exportTest/
root@bsdHost> ~# mount /mnt/exportTest/
root@bsdHost> ~# mount /mnt/exportTest/
root@bsdHost> ~# mount /mnt/exportTest/
nfs server 192.168.130.2:/mnt/exportTest/: not responding
nfs server 192.168.130.2:/mnt/exportTest/: is alive again
root@bsdHost> ~# mount
...
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
192.168.130.2:/mnt/exportTest/ on /mnt/exportTest type nfs (v3, tcp, soft, timeo=100)
Thomas P
(23 rep)
Jun 22, 2025, 02:14 PM
• Last activity: Jun 24, 2025, 06:20 AM
1
votes
1
answers
4058
views
nfs problems with jumbo (MTU=9000) but works with default (MTU=1500)
I have a local network set up between two servers running ubuntu 18.04 server. They are connected by a 10G network switch (actually 2 bonded connections 10G each). For performance reasons, in /etc/netplan, I have mtu=9000 for the corresponding interface (ethernet or bond). All machines on the subnet have MTU=9000 set. See my previous question and solution: https://unix.stackexchange.com/questions/469346/link-aggregation-bonding-for-bandwidth-does-not-work-when-link-aggregation-gro/469715#469715
I can ssh, copy files between machines, etc., at high bandwidth (>15 GBit/sec).
One server has an NFS export (NFSv4; I tried NFSv3 as well). I can mount and view some directories of the NFS share from other machines on the subnet. Settings are identical to the HowTo: https://help.ubuntu.com/community/NFSv4Howto.
However, commands like "ls" or "cd" or even "df" will randomly hang infinitely on the client.
I tried changing the MTU to default (1500) on the client and host interfaces, while leaving jumbo frames "activated" on the switch. Oddly, this solved all the issues.
I am wondering if NFS(4) is incompatible with jumbo frames, or if anyone has any insight into this. I have found people "optimizing" nfs with different MTU sizes, and people mention hanging "ls" etc., but never in the same context...
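For reference, a quick way to check whether 9000-byte frames actually survive the full path is a non-fragmenting ping sized just under the jumbo MTU; a sketch (Linux ping syntax, server IP is a placeholder):
ping -M do -s 8972 <server-ip>   # 8972 bytes of payload + 28 bytes of headers = 9000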
rveale
(161 rep)
Apr 3, 2019, 12:34 PM
• Last activity: Jun 22, 2025, 11:22 PM
0
votes
1
answers
2301
views
Secondary DRBD node does not auto-start in Pacemaker+Corosync setup
I am trying to set up a 2-PC cluster with shared resources: ClusterIP, ClusterSamba, ClusterNFS, DRBD (a cloned resource), and a DRBDFS.
The beginning of the project followed the [Clusters from Scratch](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/index.html) guide. When everything in this guide is done, it works without problems.
So, I wanted to use parts of that guide and build my own setup:
I created one shared IP (ClusterIP) that is automatically assigned to one node, and (here is where it gets tricky) on that node, I mount my /dev/drbd1 device to /exports and then share this mount through **SAMBA** and **NFS**.
When I start the cluster, all resources come up as they should, _but DRBD does not go up on the secondary node_ (Primary/Unknown). If I bring it up manually, it syncs and works. Also, when I stop the cluster (or forcibly reboot the first node), all resources transfer to the other node and everything works, _except DRBD on the other node goes into an Unknown state_.
### So now, here is the problem:
**Why does DRBD go down on the secondary node when I stop the cluster? Or why doesn't it start in the Secondary role on the secondary node?**
Sorry if my description is bad.
---
## Here are the commands I used
# apt install -y pacemaker pcs psmisc policycoreutils-python-utils drbd-utils samba nfs-kernel-server
# systemctl start pcsd.service
# systemctl enable pcsd.service
# passwd hacluster
# pcs host auth alice bob
# pcs cluster setup myCluster alice bob --force
# pcs cluster start --all
# pcs property set stonith-enabled=false
# pcs property set no-quorum-policy=ignore
# modprobe drbd
# echo drbd >/etc/modules-load.d/drbd.conf
# drbdadm create-md r0
# drbdadm up r0
# drbdadm primary r0 --force
# mkfs.ext4 /dev/drbd1
# systemctl disable smbd
# systemctl disable nfs-kernel-server.service
# mkdir /exports
# vi /etc/samba/smb.conf
# vi /etc/exports
# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=10.1.1.30 cidr_netmask=24 op monitor interval=30s
# pcs resource defaults resource-stickiness=100
# pcs resource op defaults timeout=240s
# pcs resource create ClusterSamba lsb:smbd op monitor interval=60s
# pcs resource create ClusterNFS ocf:heartbeat:nfsserver op monitor interval=60s
# pcs resource create DRBD ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
# pcs resource promotable DRBD promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
# pcs resource create DRBDFS Filesystem device="/dev/drbd1" directory="/exports" fstype="ext4"
# pcs constraint order ClusterIP then ClusterNFS
# pcs constraint order ClusterNFS then ClusterSamba
# pcs constraint order promote DRBD-clone then start DRBDFS
# pcs constraint order DRBDFS then ClusterNFS
# pcs constraint order ClusterIP then DRBD-clone
# pcs constraint colocation ClusterSamba with ClusterIP
# pcs constraint colocation add ClusterSamba with ClusterIP
# pcs constraint colocation add ClusterNFS with ClusterIP
# pcs constraint colocation add DRBDFS with DRBD-clone INFINITY with-rsc-role=Master
# pcs constraint colocation add DRBD-clone with ClusterIP
# pcs cluster stop --all && sleep 2 && pcs cluster start --all
---
## Configs and stats
### /etc/drbd.d/r0.res
resource r0 {
device /dev/drbd1;
disk /dev/sdb;
meta-disk internal;
net {
allow-two-primaries;
}
on alice {
address 10.1.1.31:7788;
}
on bob {
address 10.1.1.32:7788;
}
}
---
### /etc/corosync/corosync.conf
totem {
version: 2
cluster_name: myCluster
transport: knet
crypto_cipher: aes256
crypto_hash: sha256
}
nodelist {
node {
ring0_addr: alice
name: alice
nodeid: 1
}
node {
ring0_addr: bob
name: bob
nodeid: 2
}
}
quorum {
provider: corosync_votequorum
two_node: 1
}
logging {
to_logfile: yes
logfile: /var/log/corosync/corosync.log
to_syslog: yes
timestamp: on
}
---
### pcs status
Cluster name: myCluster
Stack: corosync
Current DC: alice (version 2.0.1-9e909a5bdd) - partition with quorum
Last updated: Fri May 15 12:28:30 2020
Last change: Fri May 15 11:04:50 2020 by root via cibadmin on bob
2 nodes configured
6 resources configured
Online: [ alice bob ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started alice
ClusterSamba (lsb:smbd): Started alice
ClusterNFS (ocf::heartbeat:nfsserver): Started alice
Clone Set: DRBD-clone [DRBD] (promotable)
Masters: [ alice ]
Stopped: [ bob ]
DRBDFS (ocf::heartbeat:Filesystem): Started alice
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
---
### pcs constraint --full
Location Constraints:
Ordering Constraints:
start ClusterIP then start ClusterNFS (kind:Mandatory) (id:order-ClusterIP-ClusterNFS-mandatory)
start ClusterNFS then start ClusterSamba (kind:Mandatory) (id:order-ClusterNFS-ClusterSamba-mandatory)
promote DRBD-clone then start DRBDFS (kind:Mandatory) (id:order-DRBD-clone-DRBDFS-mandatory)
start DRBDFS then start ClusterNFS (kind:Mandatory) (id:order-DRBDFS-ClusterNFS-mandatory)
start ClusterIP then start DRBD-clone (kind:Mandatory) (id:order-ClusterIP-DRBD-clone-mandatory)
start ClusterIP then promote DRBD-clone (kind:Mandatory) (id:order-ClusterIP-DRBD-clone-mandatory-1)
Colocation Constraints:
ClusterSamba with ClusterIP (score:INFINITY) (id:colocation-ClusterSamba-ClusterIP-INFINITY)
ClusterNFS with ClusterIP (score:INFINITY) (id:colocation-ClusterNFS-ClusterIP-INFINITY)
DRBDFS with DRBD-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-DRBDFS-DRBD-clone-INFINITY)
DRBD-clone with ClusterIP (score:INFINITY) (id:colocation-DRBD-clone-ClusterIP-INFINITY)
Ticket Constraints:
---
### /proc/drbd
version: 8.4.10 (api:1/proto:86-101)
srcversion: 983FCB77F30137D4E127B83
1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
ns:0 nr:4 dw:8 dr:17 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:4
Miki
(31 rep)
May 15, 2020, 11:12 AM
• Last activity: Jun 19, 2025, 10:03 PM
0
votes
1
answers
108
views
Unable to install autofs on rhel9
When I try sudo dnf install autofs, I get this error:
[MIRROR] autofs-5.1.7-58.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/a/autofs-5.1.7-58.el9.x86_64.rpm (IP: 184.51.36.251)
[MIRROR] libsss_autofs-2.8.2-2.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/l/libsss_autofs-2.8.2-2.el9.x86_64.rpm (IP: 184.51.36.251)
[MIRROR] autofs-5.1.7-58.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/a/autofs-5.1.7-58.el9.x86_64.rpm (IP: 184.51.36.251)
[MIRROR] libsss_autofs-2.8.2-2.el9.x86_64.rpm: Status code: 403 for https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os/Packages/l/libsss_autofs-2.8.2-2.el9.x86_64.rpm (IP: 184.51.36.251)
[FAILED] libsss_autofs-2.8.2-2.el9.x86_64.rpm: No more mirrors to try - All mirrors were already tried without success
(2/2): autofs-5.1.7- 9% [=- ] 484 kB/s | 42 kB 00:00 ETA
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: Error downloading packages:
libsss_autofs-2.8.2-2.el9.x86_64: Cannot download, all mirrors were already tried without success
I attempted to unregister, register, and then refresh my subscription, but after that, attempting to install autofs just gives me this error which is arguably worse:
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:54 ago on Fri 31 May 2024 02:02:08 PM CDT.
No match for argument: autofs
Error: Unable to find a match: autofs
I also tried:
wget http://mirror.centos.org/centos/9-stream/AppStream/x86_64/os/Packages/autofs-5.1.6-1.el9.x86_64.rpm
which gave me:
--2024-05-31 14:15:30-- http://mirror.centos.org/centos/9-stream/AppStream/x86_64/os/Packages/autofs-5.1.6-1.el9.x86_64.rpm
Resolving mirror.centos.org (mirror.centos.org)... 64.150.179.24, 2607:ff28:8005:5:225:90ff:fea8:cd64
Connecting to mirror.centos.org (mirror.centos.org)|64.150.179.24|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2024-05-31 14:15:31 ERROR 404: Not Found.
Ironically, I successfully installed autofs yesterday on another one of my Linux servers running under the same subscription, but installing does not work on this node today.
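For reference, a common sequence to rule out stale subscription metadata before retrying the install; a sketch, not a guaranteed fix:
sudo subscription-manager refresh
sudo dnf clean all
sudo dnf makecache
sudo dnf install autofs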
paul runner
(11 rep)
May 31, 2024, 08:24 PM
• Last activity: Jun 19, 2025, 04:30 PM
0
votes
2
answers
3393
views
Mount a NFS partition from a NAS server
We have a NAS server in the lab and I can reach it through the web interface at 192.168.1.100, and I have enabled NFS on the admin's panel.
After that I did
sudo aptitude install nfs-common
sudo mount -t nfs4 192.168.2.254:/gwas_data /media/thecus
Result:
mount.nfs4: Connection timed out
OS is Ubuntu 14.04. Any ideas?
qed
(2749 rep)
Dec 11, 2014, 01:23 PM
• Last activity: Jun 17, 2025, 04:03 PM
0
votes
0
answers
34
views
CacheFiles when the cached system is unmounted, or alternatives
In my current setup, I have two machines, serverA and serverB, in different geographical areas. serverA has a limited amount of persistent memory (~256GB), while serverB can be considered to have enough that I will never use it all up (several TB).
serverA has a directory /data which is an NFS share from serverB, and also has CacheFiles enabled.
This setup achieves the following:
1. replication: if serverA's disks die, I can still recover the data from serverB
2. unlimited memory: I am not limited by serverA's small amount of persistent memory
3. fast access to data: the content of /data that is in the cache (basically the most recently accessed 200GB) can be accessed without a round-trip on the network
Note that a simple backing-up setup would not achieve 2. I'd like to achieve 1., 2. and 3. but also the following:
4. robustness: if serverB goes down temporarily, serverA can still work with the data that's been cached, without me having to manually intervene on serverA
5. encryption: /data is encrypted by serverA, so that someone with access to serverB cannot access the data
I'm mostly interested in 4., and 5. would only be a bonus. Here are my questions:
- I suppose CacheFiles does not achieve 4., is this correct?
- What are the simplest setups that would allow me to achieve 1., 2., 3. and 4., and possibly also 5.?
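For reference, the setup described (an NFS mount with CacheFiles) is normally enabled with the fsc mount option plus the cachefilesd daemon; a sketch, in which the export path on serverB is an assumption:
sudo systemctl enable --now cachefilesd
sudo mount -t nfs -o fsc serverB:/data /data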
Quentin
(25 rep)
Jun 8, 2025, 11:46 AM
• Last activity: Jun 8, 2025, 04:35 PM
0
votes
1
answers
3440
views
Restricting NFS share access to particular IPs or hosts and restricting others on suse
I have created a shared folder **/data01/share** on one SUSE GNU/Linux machine and also made an entry for the host (client) machine in **/etc/exports**: **/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check)**. But I am getting this after **exportfs -a**:
exportfs: No options for /data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check) : suggest (sync) to avoid warning
exportfs: /etc/exports : Neither 'subtree_check' or 'no_subtree_check' specified for export ":/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check)".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: Failed to stat /data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check): No such file or directory
**cat /etc/os-release**
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"
**systemctl status nfs-server.service**
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/nfs-server.service.d
└─nfsserver.conf
/run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Wed 2019-07-24 02:32:03 EDT; 2h 34min ago
Main PID: 2562 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
CGroup: /system.slice/nfs-server.service
Jul 24 02:32:03 OPT001CORE0002 systemd: Starting NFS server and services...
Jul 24 02:32:03 OPT001CORE0002 systemd: Started NFS server and services.
**cat /etc/exports**
# See the exports(5) manpage for a description of the syntax of this file.
# This file contains a list of all directories that are to be exported to
# other computers via NFS (Network File System).
# This file used by rpc.nfsd and rpc.mountd. See their manpages for details
# on how make changes in this file effective.
/data01/share 10.241.200.53(rw,sync,no_root_squash,no_subtree_check)
**ls -la /data01/share**
total 0
drwxrwxrwx 4 acmuser acmgrp 36 Jul 24 04:18 .
drwxr-xr-x 6 acmuser acmgrp 65 Jul 24 04:16 ..
drwxrwxrwx 3 acmuser acmgrp 18 Jul 24 04:18 support
drwxrwxrwx 5 acmuser acmgrp 45 Jul 24 04:17 upgrade
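For reference, after editing /etc/exports the active export list and its effective options can be checked with (a sketch):
sudo exportfs -ra   # re-read /etc/exports
sudo exportfs -v    # list what is actually exported, with effective options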
Ravi Kumar
(21 rep)
Jul 24, 2019, 08:49 AM
• Last activity: Jun 6, 2025, 08:04 PM
2
votes
2
answers
91
views
Run NFSv4 w/RDMA on Rocky v9.5
I'm trying out RDMA on NFS and noticed that it does not seem to work with NFSv4:
[grant@host ~]$ sudo mount -t nfs -o rdma,proto=rdma,vers=4 10.99.99.98:/ifs/rdma-test /mnt/powerscale_rdma
mount.nfs: Protocol family not supported
[grant@host ~]$ sudo mount -t nfs -o rdma,proto=rdma,vers=3 10.99.99.98:/ifs/rdma-test /mnt/powerscale_rdma
NFSv3 loads up and runs just fine, but NFSv4 gets you mount.nfs: Protocol family not supported. [This answer](https://unix.stackexchange.com/a/749996/240147) on [NFS4, insecure, port number, rdma contradiction help](https://unix.stackexchange.com/questions/749990/nfs4-insecure-port-number-rdma-contradiction-help) seems to indicate it could work, but it's not really clear how.
Is there a way to run NFSv4 with RDMA?
Grant Curell
(769 rep)
May 23, 2025, 01:20 PM
• Last activity: May 30, 2025, 07:39 PM
6
votes
3
answers
67745
views
Mount NFS - "operation not permitted" in Proxmox container
I'm trying to mount a simple NFS share, but it keeps saying "operation not permitted".
The NFS server has the following share.
/mnt/share_dir 192.168.7.101(ro,fsid=0,all_squash,async,no_subtree_check) 192.168.7.11(ro,fsid=0,all_squash,async,no_subtree_check)
The share seems to be active for both clients.
# exportfs -s
/mnt/share_dir 192.168.7.101(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
/mnt/share_dir 192.168.7.11(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
The client 192.168.7.101 can see the share.
$ sudo showmount -e 192.168.7.10
Export list for 192.168.7.10:
/mnt/share_dir 192.168.7.101
192.168.7.101's mount destination:
# ls -lah /mnt/share_dir/
total 8.0K
drwxr-xr-x 2 me me 4.0K Aug 28 19:21 .
drwxr-xr-x 3 root root 4.0K Aug 28 19:21 ..
When I try to mount the share, the client says "operation not permitted" with either the nfs or nfs4 type.
$ sudo mount -vvv -t nfs 192.168.7.10:/mnt/share_dir /mnt/share_dir
mount.nfs: timeout set for Sun Aug 28 21:56:03 2022
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.7.10,clientaddr=192.168.7.101'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.7.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.7.10 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.7.10 prog 100005 vers 3 prot UDP port 46169
mount.nfs: mount(2): Operation not permitted
mount.nfs: Operation not permitted
I've added fsid=0 and insecure to the export options, but it didn't work.
RPCInfo from the client's side:
# rpcinfo -p 192.168.7.10
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 1 udp 59675 mountd
100005 1 tcp 37269 mountd
100005 2 udp 41354 mountd
100005 2 tcp 38377 mountd
100005 3 udp 46169 mountd
100005 3 tcp 39211 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049
100003 3 udp 2049 nfs
100227 3 udp 2049
100021 1 udp 46745 nlockmgr
100021 3 udp 46745 nlockmgr
100021 4 udp 46745 nlockmgr
100021 1 tcp 42571 nlockmgr
100021 3 tcp 42571 nlockmgr
100021 4 tcp 42571 nlockmgr
Using another client, *192.168.7.11*, I was able to mount that share with no issues.
I can not see any issue or misconfiguration, and could not find a fix anywhere.
There's no firewall in the way and both server and client are using Debian 11.
Any idea of what's going on?
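For reference, when the client is an LXC container on Proxmox, mount() for NFS is often blocked unless the corresponding container feature is enabled on the host; a sketch, where the container ID 101 is hypothetical and this is not a confirmed cause here:
pct set 101 --features mount=nfs   # run on the Proxmox host, then restart the container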
markfree
(425 rep)
Aug 29, 2022, 01:32 AM
• Last activity: May 29, 2025, 04:46 PM
Showing page 1 of 20 total questions