Setting MTU size for performance "today"?
8 votes, 1 answer, 688 views
RHEL 8, when configuring network settings, has MTU set to "automatic", which results in the traditional **MTU=1500**. Is this good or bad? Note that
ip -d link list
reports a **maxmtu** value for your interfaces. On mine, with a quad-port NIC that has two 10GbE SFP ports and two 1Gbps RJ-45 ports, along with a Mellanox HDR InfiniBand adapter:
- 10GbE interfaces: maxmtu = 9600
- 1Gbps interfaces: maxmtu = 9000
- ib0 interface: maxmtu = 65520
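For completeness, this is the kind of loop I use to compare the current MTU against the driver's maxmtu across all interfaces (a sketch; maxmtu is only shown by `ip -d`, not exposed under `/sys/class/net`, and interface names will differ per machine):

```shell
# Print current mtu and driver-advertised maxmtu for every interface.
for dev in /sys/class/net/*; do
    iface=${dev##*/}
    cur=$(cat "$dev/mtu")
    # maxmtu appears in the detailed link output on kernels >= 4.10
    max=$(ip -d link show "$iface" 2>/dev/null |
          grep -o 'maxmtu [0-9]*' | awk '{print $2}')
    printf '%-12s mtu=%-6s maxmtu=%s\n' "$iface" "$cur" "${max:-n/a}"
done
```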
Running `ifconfig` or `ip a` will reveal the currently set MTU value for each interface.
Should I be going the extra step of manually/explicitly setting the MTU in the network settings GUI for the interfaces that are in use?
**Is there a way to validate this change from 1500 to 9000/9600/65520 for a given interface, and if so, how?** Would a simple `scp` (which reports transfer rate) of a single ~50GB tar file show a performance bump in transfer rate?
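The only validation approach I know of is a do-not-fragment ping sized to fill the new MTU (a sketch; 192.168.1.10 is a placeholder for the server to test against, and the 28 bytes subtracted cover the IPv4 and ICMP headers):

```shell
# Check that a jumbo-sized frame survives the path unfragmented.
# If any hop's MTU is smaller, ping reports a "message too long" /
# frag-needed error instead of normal replies.
MTU=9000                 # the value configured on the interface
PAYLOAD=$((MTU - 28))    # minus 20-byte IPv4 header + 8-byte ICMP header
echo "ICMP payload for MTU $MTU: $PAYLOAD bytes"
# Replace 192.168.1.10 with your NFS/Samba server:
#   ping -c 3 -M do -s "$PAYLOAD" 192.168.1.10
```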
**Is anybody messing with MTU settings these days, or did we all just forget about it and leave it at 1500 for the last however many years?** I ask because the articles you can currently find on the issue don't seem to account for current hardware and Linux (today's date, RHEL 8/9+, recent kernels) and just rehash the topic.
Any technical info on the topic would be appreciated. For example: why the extra 600 bytes on my 10GbE ports versus the 1Gbps ports on the same NIC? Who/what/where decides that value? Why can't it be 65520? How does the ib0 interface get a maxmtu of 65520; what makes it so special?
I am interested in increasing performance/throughput on interfaces where NFS v4.2 is in use to export and mount folders, as well as on a 1Gbps port running samba-server, which Win11 clients on a LAN/WAN connect to for a data share. Would a high MTU mess up Samba?
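For reference, on RHEL 8 the same change the GUI makes can be expressed persistently in a NetworkManager keyfile (a sketch; the connection name `eno1` and the MTU value are placeholders):

```ini
# /etc/NetworkManager/system-connections/eno1.nmconnection (name is a placeholder)
[ethernet]
# 0 means "automatic" (driver default, normally 1500); set explicitly for jumbo frames
mtu=9000
```

Applying it would then be `nmcli connection reload && nmcli connection up eno1` (again, the connection name is a placeholder).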
Asked by ron
(8647 rep)
Jun 23, 2025, 06:58 PM
Last activity: Jun 23, 2025, 07:23 PM