nfs problems with jumbo (MTU=9000) but works with default (MTU=1500)
1 vote · 1 answer · 4058 views
I have a local network set up between two servers running Ubuntu 18.04 Server. They are connected through a 10G switch (actually two bonded 10G links). For performance reasons, I have set mtu: 9000 in /etc/netplan for the corresponding interface (Ethernet or bond). All machines on the subnet have MTU 9000 set. See my previous question and its solution: https://unix.stackexchange.com/questions/469346/link-aggregation-bonding-for-bandwidth-does-not-work-when-link-aggregation-gro/469715#469715
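For reference, this is the shape of the netplan config I mean (a minimal sketch; interface names, addresses, and bond mode are illustrative, not my exact file):

    network:
      version: 2
      ethernets:
        enp1s0f0:
          dhcp4: no
        enp1s0f1:
          dhcp4: no
      bonds:
        bond0:
          # bond the two 10G links and raise the MTU for jumbo frames
          interfaces: [enp1s0f0, enp1s0f1]
          addresses: [10.0.0.1/24]
          mtu: 9000
          parameters:
            mode: 802.3ad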
I can ssh, copy files between machines, etc., at high bandwidth (>15 Gbit/s).
One server has an NFS export (NFSv4; I tried NFSv3 as well). I can mount it and view some directories from other machines on the subnet. The settings are identical to the HowTo: https://help.ubuntu.com/community/NFSv4Howto
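The export and mount follow the pattern from that HowTo, roughly like this (paths and subnet are illustrative):

    # on the server, /etc/exports (NFSv4 root via fsid=0)
    /export       10.0.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
    /export/data  10.0.0.0/24(rw,nohide,insecure,no_subtree_check,async)

    # on the client
    sudo mount -t nfs4 -o proto=tcp,port=2049 server:/ /mnt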
However, commands like "ls", "cd", or even "df" will randomly hang indefinitely on the client.
I tried changing the MTU back to the default (1500) on the client and host interfaces, while leaving jumbo frames enabled on the switch. Oddly, this solved all the issues.
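To rule out a path-MTU problem, a check along these lines verifies whether full-size jumbo frames survive end to end with fragmentation forbidden (8972 bytes of ICMP payload = 9000-byte MTU minus 28 bytes of IP and ICMP headers; "bond0" and "server" are placeholders for my actual interface and host):

    # confirm the interface MTU on both ends
    ip link show bond0 | grep mtu

    # send unfragmentable 8972-byte pings from client to server
    ping -M do -s 8972 -c 4 server

    # if these fail while MTU=9000 is set everywhere, something in the
    # path (switch, bond, or driver) is not passing jumbo frames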
I am wondering whether NFS(v4) is incompatible with jumbo frames, or if anyone has any insight into this. I have found people "optimizing" NFS with different MTU sizes, and people mentioning hanging "ls" etc., but never in the same context...
Asked by rveale
(161 rep)
Apr 3, 2019, 12:34 PM
Last activity: Jun 22, 2025, 11:22 PM