Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3 votes, 0 answers, 20 views
Doubling Etherchannel Throughput Over LACP Teamed Interfaces
I have an AlmaLinux 9 server with a quad-port BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller NIC. Two interfaces are teamed using LACP and connected to a Cisco 9336C-FX2 switch running NX-OS 7.0(3)I7(6). The two interfaces are connected at 25Gbps each. Can I aggregate them so that the total throughput is 50Gbps?
Here is what my network guys sent me regarding the interfaces and port channel:
SWITCH-ACCESS02-9336C# show int status | i ppg
Eth1/14/1 ... Sto connected trunk full 25G QSFP100G-4SFP25G-CU3M
Eth1/14/2 ... Sto connected trunk full 25G QSFP100G-4SFP25G-CU3M
Po160 ... Sto connected trunk full 25G --
He says, "It turns out that the server is sending LACP packets to the switch telling it that it can only load balance using MAC address and Layer 4 destination port. Those two modes do NOT support bundling the throughput."
How then do I bundle the throughput? Is that possible on the Linux side?
My bond state looks like this:
[root@linux-host ~]# teamdctl bond1 state
setup:
  runner: lacp
ports:
  bcom1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 5, Selected
      selected: yes
      state: current
  bcom2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 1
    runner:
      aggregator ID: 5, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: yes
Thanks.
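A note on the mechanics: LACP never splits a single conversation across members, so one TCP stream will always top out at one link's 25Gbps. What an L3/L4-aware transmit hash buys is spreading multiple concurrent flows across both members (the switch's own port-channel load-balance method matters independently for the return path). A minimal teamd sketch, assuming teamd is driven from a standalone JSON config (the file path is hypothetical; the device and port names are taken from the question):

```
# Hypothetical /etc/teamd/bond1.conf -- runner.tx_hash selects which header
# fields feed the per-flow hash (see teamd.conf(5)).
cat > /etc/teamd/bond1.conf <<'EOF'
{
  "device": "bond1",
  "runner": {
    "name": "lacp",
    "active": true,
    "fast_rate": true,
    "tx_hash": ["eth", "ipv4", "ipv6", "tcp", "udp"]
  },
  "link_watch": { "name": "ethtool" },
  "ports": { "bcom1": {}, "bcom2": {} }
}
EOF

# After restarting teamd, confirm the runner picked up the hash:
teamdctl bond1 config dump
```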
Mike S
(2732 rep)
Jul 25, 2025, 03:00 PM
1 vote, 1 answer, 939 views
Why did the MAC address of the bond change on CentOS 7 after reboot?
I have two interfaces bonded together in LACP mode (mode=4 802.3ad) in bond0.
The MAC address of bond0 before reboot:
$ cat /sys/class/net/bond0/address
xx:xx:xx:xx:xx:bf
The MAC address of bond0 after reboot:
$ cat /sys/class/net/bond0/address
xx:xx:xx:xx:xx:bd
Here is some information about the network configuration:
$ ip -o l | grep state.UP
2: em1: mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000\ link/ether xx:xx:xx:xx:xx:bd brd ff:ff:ff:ff:ff:ff
3: em2: mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000\ link/ether xx:xx:xx:xx:xx:bd brd ff:ff:ff:ff:ff:ff
7: bond0: mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000\ link/ether xx:xx:xx:xx:xx:bd brd ff:ff:ff:ff:ff:ff
$ grep BONDING_OPTS= /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
$ egrep "(MASTER|SLAVE)=" /etc/sysconfig/network-scripts/ifcfg-em?
/etc/sysconfig/network-scripts/ifcfg-em1:MASTER=bond0
/etc/sysconfig/network-scripts/ifcfg-em1:SLAVE=yes
/etc/sysconfig/network-scripts/ifcfg-em2:MASTER=bond0
/etc/sysconfig/network-scripts/ifcfg-em2:SLAVE=yes
$ egrep -v "^$|^#" /etc/NetworkManager/NetworkManager.conf
[main]
[logging]
Why did the MAC address of bond0 change?
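In 802.3ad mode the bond typically takes its MAC from the first port that gets enslaved, and device enumeration order can differ between boots, which would explain the :bf/:bd flip. A minimal sketch of one way to make it deterministic, assuming the initscripts MACADDR= option is acceptable here (the address below is the question's redacted placeholder, not a real value):

```
# Append to /etc/sysconfig/network-scripts/ifcfg-bond0 so the bond's MAC no
# longer depends on which slave is enslaved first at boot.
MACADDR=xx:xx:xx:xx:xx:bf
```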
SebMa
(2433 rep)
Sep 2, 2022, 09:27 AM
• Last activity: Mar 11, 2023, 12:56 AM
1 vote, 1 answer, 1549 views
LACP bond seems to be favouring one interface when multiple concurrent processes are transferring data
This is likely a really easy one (i.e. I've fundamentally missed something and it's just my fault / lack of knowledge / assumptions).
So I have a machine with 2x 25GbE fibres bonded into bond0 via LACP (Cisco switches) with two VLANs.
When I start 5 or 6 concurrent rsyncs transferring different data from one source to a destination path, I was slightly surprised to see the data basically favouring one physical interface almost exclusively (~900MiB/s). I was under the assumption that the load would have been somewhat split between the two interfaces that constitute the bond.
I am fully aware that packets are NOT split across interfaces for a single stream, but as my rsyncs are all separate processes I would have expected at least one or two to use the 2nd physical interface.
For reference, a 'rough' outline (i.e. with info I think is sensitive removed) of the netplan config in use:

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
      link-local: []
    ens5f0np0:
      dhcp4: false
      dhcp6: false
      optional: true
    ens5f1np1:
      dhcp4: false
      dhcp6: false
      optional: true
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces: [ens5f0np0, ens5f1np1]
      mtu: 9000
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
  vlans:
    bond0.xxx:
      id: xxx
      link: bond0
      addresses: [ip]
      gateway4: ip
      mtu: 1500
      nameservers:
        search: [domains]
        addresses: [ips]
    bond0.xxx:
      id: xxx
      link: bond0
      addresses: [ip]
      mtu: 9000
      routes:
        - to: random subnet
          via: local subnet ip
        - to: random subnet
          via: local subnet ip
        - to: random subnet
          via: local subnet ip
Is the issue that, although the rsyncs are different processes, the source and destination IPs are the same (each rsync is reading a large sub-folder in one location and copying to a common location), and the hashing being done at the bond basically means it sees it all as the same traffic? The source data lives on a server in one VLAN, and the destination server is on the other.
If it is my fault / improper assumptions, I would still like to learn all the same, as I would have thought the different rsyncs would constitute different 'streams' of data.
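That reading is plausible: with the default layer2 (or layer2+3) transmit hash, frames that share source/destination MAC and IP all hash to the same member, no matter how many rsync processes generate them. A hash that includes L4 ports can separate them, since each rsync/ssh connection uses a distinct source port. A sketch of the bond stanza using netplan's transmit-hash-policy key (interface names copied from the config above; whether the switch side also needs its load-balance method adjusted is a separate, switch-side question):

```
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces: [ens5f0np0, ens5f1np1]
      mtu: 9000
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4   # include TCP/UDP ports in the hash
```

The active policy can be checked afterwards with `sudo grep "Transmit Hash" /proc/net/bonding/bond0`.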
user279851
Jul 16, 2022, 07:30 PM
• Last activity: Jul 16, 2022, 09:09 PM
0 votes, 1 answer, 222 views
Network fails with 802.3ad after connecting second cable
I'm trying to set up bonding in Lubuntu 20.04 LTS. I've got the onboard NIC plus a PCI card with two more NICs.
All three ports should get connected to a Ubiquiti Switch US-8-60W; the three ports are already configured as Aggregate ports (which should support 802.3ad).
My configuration in /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback

auto enp0s31f6
iface enp0s31f6 inet manual
    bond-master bond0

auto enp6s0
iface enp6s0 inet manual
    bond-master bond0

auto enp7s0
iface enp7s0 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.11
    gateway 192.168.1.1
    netmask 255.255.255.0
    dns-nameservers 192.168.1.1
    bond-mode 4
    bond-miimon 100
    bond-xmit-hash-policy layer2+3
    bond-slaves enp0s31f6 enp6s0 enp7s0
If only one cable is connected to enp0s31f6 (the onboard NIC), everything works correctly*. As soon as I connect a second cable, the network starts failing after half a minute or so. It's a bit hard to describe: sometimes I can't access the internet anymore but can still ping the router, sometimes pinging the router doesn't work either. In all cases, I can't reach the machine 192.168.1.11 from any other machine anymore.
As soon as I disconnect the second port, everything goes back to normal.
* When I say "correctly", one thing is still odd when using one cable only. I've got a few virtual machines (VirtualBox) with static IPs and bridged networking. If I select bond0 as the network adapter, I can't reach the virtual machines from outside (traffic from the VM works). When I switch the adapter to enp0s31f6, I can reach the VM again.
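One hedged way to narrow this down from the Linux side: the symptom pattern (fine with one cable, broken shortly after the second is connected) is what you would see if the two ends never actually agree on 802.3ad, so the switch forwards frames for the bond out a port the bond isn't using. With the second cable plugged in, the negotiation state can be checked directly (bond0 per the config above):

```
# Both slaves should show the same Aggregator ID, a non-zero Partner Mac
# Address and no persistent churn; 00:00:00:00:00:00 as partner means no
# LACP peer was seen on that port.
grep -E 'Slave Interface|MII Status|Aggregator ID|Partner Mac Address|Churn State' \
    /proc/net/bonding/bond0
```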
pgruetter
(111 rep)
Oct 17, 2021, 08:46 PM
• Last activity: Oct 27, 2021, 05:24 AM
1 vote, 1 answer, 146 views
Reaching /proc LACP bond info as a normal user?
As a normal user I cannot see the "details actor lacp pdu" info that is in:
/proc/net/bonding/bond0
It looks like only root can see that.
But how could a regular user see this LACP info? Or can only root really see it?
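Two routes, as a sketch: read the proc file with elevated privileges, or pull the same 802.3ad attributes over netlink with iproute2, which should not need root:

```
# Root-only proc file (as observed above):
sudo cat /proc/net/bonding/bond0

# Unprivileged alternative: iproute2 shows the bond's ad_* details and, per
# slave, the aggregator ID and LACP port states.
ip -d link show bond0
ip -d link show eth0    # eth0 is a placeholder for one of the slave NICs
```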
HolcombSimons
(97 rep)
Jun 30, 2020, 01:01 PM
• Last activity: Jun 30, 2020, 04:19 PM
2 votes, 0 answers, 2953 views
systemd LACP bond messes up aggregation ID for NICs
Arch Linux with the LTS kernel.
systemd-networkd is configured to bond two NICs into an LACP trunk with a smart switch (DGS-1210-28). But it fails after OS startup: only one NIC is used for the bond connection, as shown in /proc/net/bonding/*:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 1000
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: **:**:**:**:**:ad
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 1
Actor Key: 9
Partner Key: 3
Partner Mac Address: **:**:**:**:**:6e
Slave Interface: enp7s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: **:**:**:**:**:67
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: **:**:**:**:**:ad
port key: 0
port priority: 255
port number: 1
port state: 69
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1
Slave Interface: enp2s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: **:**:**:**:**:ef
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: **:**:**:**:**:ad
port key: 9
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: **:**:**:**:**:6e
oper key: 3
port priority: 128
port number: 15
port state: 61
Notice that the bond has Aggregator ID 2 and the unused (churned) NIC has Aggregator ID 1, which is weird; I'd expect the bond to have Aggregator ID 1.
This can be "fixed" by physically reconnecting the churned NIC's port. After this, both NICs are used by the bond. This can be seen in the kernel log:
>> OS startup
>> churned NIC physical disconnect
>> churned NIC physical connect
Notice that NIC "enp7s0" initialized first but gets churned.
Config files:
enp-any.network
[Match]
Name=enp*
[Network]
Bond=Trunk0
Trunk0.netdev
[NetDev]
Name=Trunk0
Kind=bond
[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4
MIIMonitorSec=1s
LACPTransmitRate=slow
Trunk0.network
[Match]
Name=Trunk0
[Network]
VLAN=***
VLAN=***
VLAN=***
LinkLocalAddressing=no
BindCarrier=enp2s0 enp7s0
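A hedged observation: the physical re-plug described above just forces a fresh LACP negotiation on the churned member, which can also be triggered administratively; and systemd.netdev(5) exposes a couple of [Bond] knobs that influence aggregator selection, though whether they address the root cause here is an assumption:

```
# Bounce the churned member instead of re-plugging the cable:
ip link set enp7s0 down && ip link set enp7s0 up

# Candidate additions to the [Bond] section of Trunk0.netdev (values are
# suggestions to experiment with, not a confirmed fix):
#   AdSelect=count
#   MinLinks=2
```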
avi9526
(339 rep)
Jan 27, 2019, 04:38 PM
0 votes, 1 answer, 337 views
FreeBSD Ethernet Port Bonding
After reading about FreeBSD aggregation, we think we are set, but it's still not clear to me.
The switch is a Cisco 2960S (software 15.x); the client is a Mac Pro running Mavericks. I don't know if the BSD internals are the same, but I don't expect they have changed much over the years. The Mac Pro is a medical clinic client's server (required for their application).
Below is what I'm seeing on my switch. According to the article, one sets the switch ports to active; then the flags show the peer as SA, meaning the peer is Active, thus the contradiction. If the switch is set to active, I would expect the peer to be passive.
Switch#sh lacp neighbor
Flags: S - Device is requesting Slow LACPDUs
F - Device is requesting Fast LACPDUs
A - Device is in Active mode P - Device is in Passive mode
Channel group 1 neighbors
Partner's information:
LACP port Admin Oper Port Port
Port Flags Priority Dev ID Age key Key Number State
Gi0/47 SA 32768 003e.e1cb.71d4 24s 0x0 0x1 0x4 0x3D
Gi0/48 SA 32768 003e.e1cb.71d4 24s 0x0 0x1 0x5 0x3D
Switch#sh lacp nei de
Flags: S - Device is requesting Slow LACPDUs
F - Device is requesting Fast LACPDUs
A - Device is in Active mode P - Device is in Passive mode
Channel group 1 neighbors
Partner's information:
Partner Partner Partner
Port System ID Port Number Age Flags
Gi0/47 32768,003e.e1cb.71d4 0x4 25s SA
LACP Partner Partner Partner
Port Priority Oper Key Port State
32768 0x1 0x3D
Port State Flags Decode:
Activity: Timeout: Aggregation: Synchronization:
Active Long Yes Yes
Collecting: Distributing: Defaulted: Expired:
Yes Yes No No
Partner Partner Partner
Port System ID Port Number Age Flags
Gi0/48 32768,003e.e1cb.71d4 0x5 25s SA
LACP Partner Partner Partner
Port Priority Oper Key Port State
32768 0x1 0x3D
Port State Flags Decode:
Activity: Timeout: Aggregation: Synchronization:
Active Long Yes Yes
Collecting: Distributing: Defaulted: Expired:
Yes Yes No No
Switch#
Thank you
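On the flags themselves: SA on the switch just means the partner (the Mac Pro) is also sending Slow LACPDUs in Active mode, which is not a contradiction; 802.3ad only requires that at least one end be active, and active/active is the usual, working combination. For reference, the FreeBSD-style lagg setup the linked article describes looks roughly like the sketch below (em0/em1 and the address are placeholders, and macOS configures its bond through its own tooling rather than lagg):

```
# FreeBSD lagg(4) sketch: create an LACP aggregate from two members.
ifconfig em0 up
ifconfig em1 up
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1 192.0.2.10/24
```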
David Sain
(103 rep)
Oct 4, 2018, 04:29 PM
• Last activity: Oct 4, 2018, 04:36 PM
2 votes, 1 answer, 15748 views
How to make NIC team come up on boot in RHEL 7
To configure networking on RHEL 7, I created a JSON file for teaming, then ran these commands:
ip link set down eno1
ip link set down eno2
ip link set down eno3
ip link set down eno4
teamd -g -f lacp.conf -d
Also I created ifcfg files for VLANs and ran this command:
systemctl restart network
After that, everything works more or less as expected, but the problem is that this does not persist between reboots, so I have to do this every time after start-up.
How can I fix this problem? I expect these commands to be executed just once, and then I want these settings to persist between reboots.
==========================================================
I've tried to apply the suggested fix and these questions appeared:
1. I've created the ifcfg-team0 file, and its content is:
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
TEAM_CONFIG='{"device":"team0", "runner": { "name": "lacp"...
Should I remove the "device" part, i.e. change it to TEAM_CONFIG='{"runner": { "name": "lacp"..., because I already have DEVICE=team0? Or is it OK to mention it twice?
2. My ifcfg-eno1 file contains:
HWADDR=...
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
IPV4_FAILURE_FATAL=no
IPV6_INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=e656...
ONBOOT=no
Should I really remove almost everything from it? In particular, should I really remove the UUID and NAME entries and replace this file with what the documentation suggests below?
DEVICE=eth1
HWADDR=D4:85:64:01:46:9E
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=team0
TEAM_PORT_CONFIG='{"prio": 100}'
Or should I keep both files and just name them, for example, ifcfg-eno1 and ifcfg-eno1Team?
3. When executing systemctl start network.service I receive this error:
Failed to start LSB: Bring up/down networking. Unit network.service entered failed state.
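For context, the persistent setup the RHEL 7 documentation describes is one ifcfg file per device: a team master plus a TeamPort file for each NIC, with the network service (or NetworkManager) bringing them up at boot instead of a hand-started teamd. A sketch with placeholder values, not a drop-in config:

```
# /etc/sysconfig/network-scripts/ifcfg-team0  (IP settings omitted)
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
TEAM_CONFIG='{"runner": {"name": "lacp", "active": true, "fast_rate": true}}'

# /etc/sysconfig/network-scripts/ifcfg-eno1  (one such file per member;
# typically it replaces the standalone DHCP-based ifcfg-eno1 rather than
# coexisting with it)
DEVICE=eno1
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=team0
```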
Oleg Vazhnev
(459 rep)
Oct 13, 2014, 08:03 PM
• Last activity: Oct 20, 2014, 10:25 AM