Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2 votes • 2 answers • 3600 views
Identifying physical network devices using /sys/class/net/<iface>
I wanted to know if there is a way to differentiate physical and virtual network devices. `ip a` doesn't have an option for this, so I am trying `/sys/class/net/<iface>/`. There are two attributes, `addr_assign_type` and `type`, but `type` only tells me `Ethernet` or `loopback`; there is no way to tell whether the device is virtual.
Does `addr_assign_type` tell us the difference? As per my observation, `/sys/class/net/<iface>/addr_assign_type` gives `0` for the Ethernet and loopback devices and `1` or `3` for the virtual devices.
Is there something I can infer from this?
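One check that is commonly relied on for this (a hedged sketch, not something confirmed by the question): physical NICs have a `device` symlink in their sysfs directory pointing at the backing bus device, while virtual interfaces live under `/sys/devices/virtual/net/` and have no such link.
```
# Classify interfaces by their sysfs entry rather than by addr_assign_type.
for path in /sys/class/net/*; do
    iface=$(basename "$path")
    if [ -e "$path/device" ]; then
        echo "$iface: physical (backed by a bus device)"
    else
        echo "$iface: virtual"
    fi
done
```
`addr_assign_type` only describes how the MAC address was assigned (0 = permanent, 1 = random, 3 = set by userspace), which correlates with virtual devices but is not a reliable test on its own.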
Dinesh Gowda
(121 rep)
Jul 24, 2019, 08:05 AM
• Last activity: Jun 28, 2025, 05:06 AM
5 votes • 1 answer • 866 views
Is there a thing like "veth", but without link-level headers?
When I use a separate network namespace, I often set up networking there using veth:
ip link add type veth
ip link set veth0 netns 1
ifconfig veth1 192.168.60.2
ip route add default via 192.168.60.1
This includes unnecessary random MAC addresses for this "virtual Ethernet".
For example, another mechanism (TUN/TAP) has two modes: "tap" for Ethernet-like mode and "tun" for IP mode (i.e. without ARP, MAC addresses, neighbours, frame headers, promiscuous mode and other extra entities).
Maybe there is a similar "other mode" for veth?
| connects | networking level
----------------------------------------
tap | IF to userspace | Ethernet
tun | IF to userspace | IP
veth | two IFs together | Ethernet
I want | two IFs together | IP
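As far as I know, veth itself has no IP-only mode; one alternative that is sometimes suggested (an assumption on my part, and it needs a parent NIC, unlike veth) is ipvlan in `l3` mode, which presents an IP-level interface with no broadcast/multicast and no per-unit L2 handling. A hedged sketch, where `eth0` and `mynetns` are placeholders:
```
# L3-only virtual interface hung off a parent NIC, then moved into a namespace.
ip link add link eth0 name ipvl0 type ipvlan mode l3
ip link set ipvl0 netns mynetns
ip -n mynetns addr add 192.168.60.2/24 dev ipvl0
ip -n mynetns link set ipvl0 up
```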
Vi.
(5985 rep)
Apr 6, 2015, 11:01 PM
• Last activity: May 9, 2025, 10:28 PM
4 votes • 1 answer • 2322 views
I can ping across namespaces, but not connect with TCP
I'm trying to set up two network namespaces to communicate with each other. I've set up two namespaces, `ns0` and `ns1`, that each have a veth pair, where the non-namespaced side of the veth is linked to a bridge.
I set it up like this:
ip link add veth0 type veth peer name brveth0
ip link set brveth0 up
ip link add veth1 type veth peer name brveth1
ip link set brveth1 up
ip link add br10 type bridge
ip link set br10 up
ip addr add 192.168.1.11/24 brd + dev br10
ip netns add ns0
ip netns add ns1
ip link set veth0 netns ns0
ip link set veth1 netns ns1
ip netns exec ns0 ip addr add 192.168.1.20/24 dev veth0
ip netns exec ns0 ip link set veth0 up
ip netns exec ns0 ip link set lo up
ip netns exec ns1 ip addr add 192.168.1.21/24 dev veth1
ip netns exec ns1 ip link set veth1 up
ip netns exec ns1 ip link set lo up
ip link set brveth0 master br10
ip link set brveth1 master br10
As expected, I can ping the interface in `ns0` from `ns1`:
$ sudo ip netns exec ns1 ping -c 3 192.168.1.20
PING 192.168.1.20 (192.168.1.20) 56(84) bytes of data.
64 bytes from 192.168.1.20: icmp_seq=1 ttl=64 time=0.099 ms
64 bytes from 192.168.1.20: icmp_seq=2 ttl=64 time=0.189 ms
But I can't connect the two over TCP. For example, running a server in `ns0`:
$ sudo ip netns exec ns0 python3 -m http.server 8080
Serving HTTP on 0.0.0.0 port 8080 (http://0.0.0.0:8080/) ...
I would expect to be able to curl it from `ns1`, but that yields an error:
$ sudo ip netns exec ns1 curl 192.168.1.20:8080
curl: (7) Failed to connect to 192.168.1.20 port 8080: No route to host
Why is this happening?
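A hedged guess at a common culprit (not stated in the question): when the `br_netfilter` module is loaded (Docker loads it), frames bridged by `br10` are also passed through the iptables `FORWARD` chain, where typical rulesets let ICMP through but REJECT new TCP connections; curl's "No route to host" matches such an ICMP reject. Checks along these lines, assuming that is what is happening:
```
# Is bridged traffic being run through iptables at all?
sysctl net.bridge.bridge-nf-call-iptables     # only present if br_netfilter is loaded
# If it prints 1, either allow the bridge explicitly ...
iptables -I FORWARD -i br10 -o br10 -j ACCEPT
# ... or keep bridged frames out of iptables altogether:
sysctl -w net.bridge.bridge-nf-call-iptables=0
```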
Lee Avital
(203 rep)
Oct 11, 2019, 12:25 AM
• Last activity: Apr 14, 2025, 07:03 AM
17 votes • 4 answers • 19214 views
How to find the network namespace of a veth peer ifindex?
# Task
I need to unambiguously and without "holistic" guessing find the **peer** network interface of a veth end in another network namespace.
# Theory vs. Reality
Although a lot of documentation and also answers here on SO assume that the ifindex indices of network interfaces are globally unique per host across network namespaces, **this doesn't hold in many cases**: ifindex/iflink **are ambiguous**. Even the loopback already shows the contrary, having an ifindex of 1 in every network namespace. Also, depending on the container environment, **ifindex numbers get reused in different namespaces**, which makes tracing veth wiring a nightmare, especially with lots of containers and a host bridge with veth peers all ending in @if3 or so...
# Example: link-netnsid is 0
Spin up a Docker container instance, just to get a new veth pair connecting from the host network namespace to the new container network namespace...
$ sudo docker run -it debian /bin/bash
Now, in the host network namespace, list the network interfaces (I've left out those interfaces that are of no interest to this question):
$ ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
...
4: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:34:23:81:f0 brd ff:ff:ff:ff:ff:ff
...
16: vethfc8d91e@if15: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether da:4c:f7:50:09:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
As you can see, while the iflink is unambiguous, the link-netnsid is 0, despite the peer end sitting in a different network namespace.
For reference, check the netnsid in the unnamed network namespace of the container:
$ sudo lsns -t net
        NS TYPE NPROCS   PID USER COMMAND
...
4026532469 net       1 29616 root /bin/bash
$ sudo nsenter -t 29616 -n ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15: eth0@if16: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
So, for both veth ends, ip link show (and RTNETLINK, FWIW) tells us they're in the same network namespace with netnsid 0. Which is either wrong, or correct under the assumption that link-netnsids are local as opposed to global. I could not find any documentation that makes explicit what scope link-netnsids are supposed to have.
# /sys/class/net/... NOT to the Rescue?
I've looked into /sys/class/net/_if_/... but can only find the ifindex and iflink elements; these are well documented. "ip link show" also only seems to show the peer ifindex in the form of the (in)famous "@if#" notation. Or did I miss some additional network namespace element?
# Bottom Line/Question
Are there any syscalls that allow retrieving the missing network namespace information for the peer end of a veth pair?
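The kernel does expose this over RTNETLINK (the `IFLA_LINK_NETNSID` attribute combined with `RTM_GETNSID`), but resolving those IDs takes some plumbing. A hedged, practical workaround: the veth driver reports its peer's ifindex through ethtool statistics, which can then be matched against the interfaces visible in each candidate namespace.
```
# Read the peer's ifindex from the veth driver's ethtool stats ...
ethtool -S vethfc8d91e | grep peer_ifindex
#      peer_ifindex: 15
# ... then look for an interface with that index in each network namespace.
# (Indices can repeat across namespaces, so compare MACs if several match.)
sudo lsns -t net -o PID --noheadings | while read -r pid; do
    sudo nsenter -t "$pid" -n ip -o link show 2>/dev/null | grep -q '^15: ' &&
        echo "candidate: network namespace of PID $pid"
done
```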
TheDiveO
(1427 rep)
May 4, 2018, 08:41 PM
• Last activity: Apr 11, 2025, 01:39 AM
8 votes • 2 answers • 9879 views
Linux bridge: what does master mean in the "ip link set"?
In the following diagram, each color stands for a network namespace; the namespaces are connected by a Linux bridge `v-net-0`.
- `veth-red` and `veth-red-br` are a veth pair.
- `veth-blue` and `veth-blue-br` are a veth pair.
- `v-net-0` is a Linux bridge.
[diagram omitted]
What does "master" mean in this command?
ip link set veth-blue-br master v-net-0
I have checked the man page of `ip link set`, but I still don't understand the meaning of the `master` flag.
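For what it's worth, a hedged gloss: `master v-net-0` enslaves the interface to the bridge, i.e. it makes `veth-blue-br` a port of `v-net-0`, so frames arriving on it are switched by the bridge instead of being handled directly by the host's IP stack.
```
ip link set veth-blue-br master v-net-0   # attach the interface as a bridge port (slave)
bridge link show                          # list ports and the "master" bridge they belong to
ip -d link show veth-blue-br              # shows "master v-net-0" plus bridge_slave details
ip link set veth-blue-br nomaster         # detach the port again
```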
Ryan Lyu
(234 rep)
Apr 26, 2022, 03:09 PM
• Last activity: Apr 8, 2025, 02:25 AM
1 vote • 1 answer • 90 views
strange Docker veth interface (peer) names
On a Docker host (which I have not set up; I am not very familiar with Docker anyway) I noticed that I do not understand the interface names:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens18: mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 5e:44:5a:26:82:e7 brd ff:ff:ff:ff:ff:ff
8: docker0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether ae:b3:52:68:1d:5b brd ff:ff:ff:ff:ff:ff
12: br-7fef86ec14bd: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 76:d3:a0:d7:73:0a brd ff:ff:ff:ff:ff:ff
33: vethc35030f@if2: mtu 1500 qdisc noqueue master br-7fef86ec14bd state UP mode DEFAULT group default
link/ether 6e:b1:3e:85:88:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
ip -d link show dev vethc35030f
33: vethc35030f@if2: mtu 1500 qdisc noqueue master br-7fef86ec14bd state UP mode DEFAULT group default
link/ether 6e:b1:3e:85:88:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1 minmtu 68 maxmtu 65535
veth
bridge_slave [...]
So `vethc35030f` does not only sound like a veth, it actually is a veth.
How can it be `@if2`? The documentation says that veth interfaces are always created in pairs; the paired interface's name or (if it is in a different namespace) its number is the part after the `@`. I am not aware of any possibility to change the veth peer later, especially not to an interface of a different type.
`somename@if2` is something I would expect for a `macvlan` (or similar) interface, but that is not the case here.
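A hedged way to see what `@if2` actually refers to (assuming a running container owns the peer; the container name below is a placeholder): the number after `@if` is an ifindex in the peer's namespace, not in the host's, so it does not point at the host's `ens18` at all.
```
pid=$(docker inspect -f '{{.State.Pid}}' some_container)   # placeholder name
nsenter -t "$pid" -n ip link show
# expect the container side to look like: 2: eth0@if33: ... link-netnsid 0
# i.e. the container's eth0 has ifindex 2, and that is what @if2 refers to.
```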
Hauke Laging
(93688 rep)
Mar 22, 2025, 11:15 PM
• Last activity: Mar 23, 2025, 01:50 AM
1 vote • 0 answers • 71 views
Is it possible to use a veth created in a user namespace as a regular user in a practical way?
[This question](https://unix.stackexchange.com/questions/396175/how-do-i-connect-a-veth-device-inside-an-anonymous-network-namespace-to-one-ou) hints that it is possible to create a `veth` (which normally requires root) from inside a user and network namespace, and indeed:
user@host$ unshare --user --net -r bash
root@namespace# ip link add veth0 type veth peer name veth0 netns 1
root@namespace# ip link
1: lo: mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: veth0@if3: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 4a:b9:93:89:bd:d1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
The other end of the `veth` does appear on the host:
user@host$ ip link
...
3: veth0@if2: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 62:02:c7:8c:58:77 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Unfortunately, it does not seem possible to use it in a practical way as a regular user, because any modification requires root, including bringing the interface up:
user@host$ ip link set veth0 up
RTNETLINK answers: Operation not permitted
Is this actually possible, and did I miss something?
Container technologies like Podman make use of custom usermode TCP/IP stacks ([slirp4netns](https://github.com/rootless-containers/slirp4netns) or [passt/pasta](https://passt.top/passt/about/)) when run in rootless mode, which work _in addition_ to the normal kernel networking stack. Is there a documented reason why using such a feature (if it is possible) or developing it (if it is not) was not pursued when those alternative stacks were developed?
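A hedged note on where the permission boundary sits (my understanding, not something from the question): the new network namespace is owned by the user namespace, so the mapped root has CAP_NET_ADMIN over it and can configure the inner end; the peer that was pushed into the initial network namespace still requires CAP_NET_ADMIN there, which the plain user lacks.
```
# Inside the unshared user+net namespace: allowed, the netns is owned by this userns.
ip addr add 10.0.0.2/24 dev veth0
ip link set veth0 up
# On the host, as the plain user: denied, the init netns is not owned by that userns.
ip link set veth0 up
# RTNETLINK answers: Operation not permitted
```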
F.X.
(361 rep)
Aug 24, 2024, 11:39 AM
2 votes • 1 answer • 261 views
What is the reason why creating a veth requires root?
I recently became aware of solutions like [slirp4netns](https://github.com/rootless-containers/slirp4netns) or [passt/pasta](https://passt.top/passt/about/) which essentially work around the fact that you can't create a pair of [veth](https://www.man7.org/linux/man-pages/man4/veth.4.html) network interfaces without `root` (or `CAP_NET_ADMIN`). Before user namespaces became widely available, changing the network configuration was indeed originally restricted to the superuser.
Is there a documented reason why it was deemed "easier" to create an entire TCP/IP stack and/or complex abstraction layers rather than just allowing users to create their own pairs? Was it difficult to implement a user permission scheme on top of the networking configuration tools, or are there security reasons why allowing non-root users to modify the network configuration of interfaces they themselves created would be a bad idea?
F.X.
(361 rep)
Aug 18, 2024, 11:51 AM
• Last activity: Aug 18, 2024, 12:05 PM
0 votes • 1 answer • 386 views
Debian: Can you define veth in /etc/network/interfaces?
I found the following documentation for Ubuntu. Does Debian have a similar page? I'm looking for something comprehensive-- not just a few random examples. I can create **veth** using the **ip** command, so I know the support is there. Should something like this work?
auto h
iface h inet manual
link-type veth
veth-peer-name t
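I don't know of a Debian page equivalent to the Ubuntu one. For classic ifupdown, a hedged workaround (my own sketch, not from any Debian documentation) is to create and destroy the pair with hook commands, since plain ifupdown has no native veth link type:
```
auto h
iface h inet manual
    pre-up ip link add h type veth peer name t || true
    up ip link set h up
    post-down ip link del h
```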
David
(3 rep)
Jun 6, 2024, 05:50 PM
• Last activity: Jun 6, 2024, 05:54 PM
1 vote • 0 answers • 1310 views
Network routing through veth0 / bridge for userspace QEMU VM?
I would like to filter and capture traffic from a virtual machine. This VM must run in userspace. Capturing requires root, I know (though I hope to minimise root activity needed later).
The easiest way to capture and filter - as I understand - is having a dedicated, virtual network interface like `vnet0` used exclusively by the VM. Then I can run `tshark`, `tcpdump`, `iptables` etc. on it.
**How do I set up (create) the network interfaces (as root), and how do I connect to them (as non-root) with a KVM/QEMU virtual machine?**
I am looking for `ip` commands (iproute2 style) and qemu configuration options.
I have started working on interfaces like this:
(root 1) ip link add br0 type bridge
(root 2) ip addr add dev br0 10.10.0.1/24
(root 3) ip link set dev br0 up
(root 4) ip link add vm1-host type veth peer name vm1-net
(root 5) ip link set dev vm1-host master br0
(root 6) ip link set dev vm1-host up
(root 7) ip tuntap add vm1-tap mode tap
(root 8) ip addr add 10.10.0.2/24 dev vm1-net
(root 9) ip addr add 10.0.2.2/24 dev vm1-tap
(root 10) ip link set dev vm1-tap up
(root 10b) echo 1 > /proc/sys/net/ipv4/ip_forward
Then I tried to connect QEMU to the bridge, but could not do so as a non-root user.
I edited /etc/qemu/bridge.conf to allow the user to connect to the bridge via /usr/libexec/qemu-bridge-helper:
(root 11) grep -v # /etc/qemu/bridge.conf
allow veth0
allow vm1-tap
allow vm1-host
allow vm1-net
(root 12) ll /usr/libexec/qemu-bridge-helper
-rwsr-x--- 1 root kvm 312888 Aug 25 16:16 /usr/libexec/qemu-bridge-helper
The user is indeed a member of the `kvm` group (`id` shows it). However, I get the following error message when using virt-manager running as $USERNAME with a bridge interface:
Error starting domain: /usr/libexec/qemu-bridge-helper --use-vnet --br=vm1-tap --fd=34: failed to communicate with bridge helper: stderr=failed to add interface `tap0' to bridge `vm1-tap': Operation not supported: Transport endpoint is not connected
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 72, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 108, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 57, in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/domain.py", line 1425, in startup
self._backend.create()
File "/usr/lib64/python3.10/site-packages/libvirt.py", line 1362, in create
raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: /usr/libexec/qemu-bridge-helper --use-vnet --br=vm1-tap --fd=34: failed to communicate with bridge helper: stderr=failed to add interface `tap0' to bridge `vm1-tap': Operation not supported: Transport endpoint is not connected
Some **capabilities** may additionally be necessary. **How do I make a user have them only for starting the VM?**
Perhaps this will help?
(root 13) ip tuntap add vm1-tap mode tap user $USERNAME
ioctl(TUNSETIFF): Device or resource busy
I experimented with commands found online, but I could not find the correct way to do this. **Do I need tun/tap at all?**
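A hedged sketch of the path I would try (not a confirmed fix): `qemu-bridge-helper` must be pointed at an actual bridge, not at a tap or veth (the error above shows it being given `vm1-tap`), and it creates and enslaves the tap itself, so the manual tap/veth plumbing is not needed for this mode. With the real bridge `br0` added to `/etc/qemu/bridge.conf`, plain QEMU can then be started unprivileged roughly like this:
```
# The setuid helper (root:kvm) creates a tap and attaches it to br0 itself.
qemu-system-x86_64 -m 1024 -enable-kvm \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0 \
    disk.img
# Capturing and filtering can then be done on br0 or on the tapN the helper created.
```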
References (helpful but not solutions for my problem):
* [Howto setup a veth virtual network](https://superuser.com/questions/764986/howto-setup-a-veth-virtual-network)
* https://unix.stackexchange.com/questions/456449/qemu-network-bridge
* https://unix.stackexchange.com/questions/219952/routing-only-vm-traffic-through-vpn
Ned64
(9256 rep)
Sep 1, 2022, 04:43 PM
• Last activity: Jun 30, 2023, 05:21 PM
2 votes • 1 answer • 576 views
Root network namespace as transit between 2 other net namespaces
I am trying to communicate between two network namespaces that are connected through the root namespace using veth pairs, as seen in the diagram. I am unable to ping from netns A to netns B. Additionally, I can ping from the root namespace to both netns A (VA IP) and netns B (VB IP).
+-------+ +-------+
| A | | B |
+-------+ +-------+
| VA | VB
| |
| RA | RB
+-------------------------+
| |
| Root namespace |
| |
+-------------------------+
ip netns add A
ip netns add B
ip link add VA type veth peer name RA
ip link add VB type veth peer name RB
ip link set VA netns A
ip link set VB netns B
ip addr add 192.168.101.1/24 dev RA
ip addr add 192.168.102.1/24 dev RB
ip link set RA up
ip link set RB up
ip netns exec A ip addr add 192.168.101.2/24 dev VA
ip netns exec B ip addr add 192.168.102.2/24 dev VB
ip netns exec A ip link set VA up
ip netns exec B ip link set VB up
ip netns exec A ip route add default via 192.168.101.1
ip netns exec B ip route add default via 192.168.102.1
I have tried enabling IP forwarding, and there are no iptables rules blocking the traffic.
The same setup works when, instead of using the root namespace, I use another namespace called transit and connect it like below.
+-------+ VA RA +-------+ RB VB +-------+
| A |--------|transit|---------| B |
+-------+ +-------+ +-------+
+-------------------------+
| |
| Root namespace |
| |
+-------------------------+
Here I can successfully ping between namespaces A and B.
Why is it that the traffic gets dropped at the root namespace, but not when a third transit namespace is used instead?
There are a few iptables rules installed by Docker, but I do not see any conflict.
rahul@inception:~$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy DROP 2 packets, 168 bytes)
pkts bytes target prot opt in out source destination
2 168 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 168 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 168 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
2 168 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
The same ruleset in nft format:
rahul@inception:~$ sudo nft list ruleset
table ip nat {
chain DOCKER {
iifname "docker0" counter packets 0 bytes 0 return
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 1 bytes 90 masquerade
}
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
fib daddr type local counter packets 148 bytes 11544 jump DOCKER
}
chain OUTPUT {
type nat hook output priority -100; policy accept;
ip daddr != 127.0.0.0/8 fib daddr type local counter packets 3 bytes 258 jump DOCKER
}
}
table ip filter {
chain DOCKER {
}
chain DOCKER-ISOLATION-STAGE-1 {
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
counter packets 2 bytes 168 return
}
chain DOCKER-ISOLATION-STAGE-2 {
oifname "docker0" counter packets 0 bytes 0 drop
counter packets 0 bytes 0 return
}
chain FORWARD {
type filter hook forward priority filter; policy drop;
counter packets 2 bytes 168 jump DOCKER-USER
counter packets 2 bytes 168 jump DOCKER-ISOLATION-STAGE-1
oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
}
chain DOCKER-USER {
counter packets 2 bytes 168 return
}
}
ip route
rahul@inception:~$ ip route
default via 192.168.0.1 dev wlo1 proto dhcp metric 600
169.254.0.0/16 dev wlo1 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev wlo1 proto kernel scope link src 192.168.0.101 metric 600
192.168.101.0/24 dev RA proto kernel scope link src 192.168.101.1
192.168.102.0/24 dev RB proto kernel scope link src 192.168.102.1
Using tcpdump, I found that the packet is reaching the root namespace.
Is there any debugging tool that I can learn and can be used to see where the packet is traversing inside the namespace (like strace or ftrace)?
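A hedged reading of the dumps above (an interpretation, not something stated in the question): the filter `FORWARD` chain has policy DROP (installed by Docker), traffic between RA and RB is forwarded by the root namespace, and nothing accepts it, so it falls through to that policy; the `2 packets, 168 bytes` counters on the chain are consistent with this. The transit namespace has no Docker ruleset, which would explain why it works there. Something like the following should confirm it:
```
sysctl -w net.ipv4.ip_forward=1
iptables -I FORWARD -i RA -o RB -j ACCEPT
iptables -I FORWARD -i RB -o RA -j ACCEPT
# For tracing inside a namespace: an iptables TRACE rule (raw table), or an
# nftables "meta nftrace set 1" rule watched with "nft monitor trace", shows
# which rule a packet hits; tcpdump on RA/RB narrows down where it vanishes.
```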
Rahul Raj Purohit
(23 rep)
Apr 29, 2023, 09:19 PM
• Last activity: May 2, 2023, 12:26 AM
3 votes • 1 answer • 5401 views
Relationship between bridge and veth for Docker network
On my Ubuntu 22.04 host, I've created a Docker network with the bridge driver and started up a container within that network.
Running `ip addr` on my host, I see these two interfaces:
5: br-fc7599764562: mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d4:4f:b9:39 brd ff:ff:ff:ff:ff:ff
inet 172.21.0.1/16 brd 172.21.255.255 scope global br-fc7599764562
valid_lft forever preferred_lft forever
inet6 fe80::42:d4ff:fe4f:b939/64 scope link
valid_lft forever preferred_lft forever
6: vethe6879a0@if14: mtu 1500 qdisc noqueue master br-fc7599764562 state UP group default
link/ether e2:e8:0f:5b:37:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e0e8:fff:fe5b:37a0/64 scope link
valid_lft forever preferred_lft forever
Clearly, these two interfaces are related as the second lists the first as "master". What is the relationship?
Some context for the question: I actually have two Docker networks with one container inside each. Using iptables, I've [set up NAT between them](https://unix.stackexchange.com/questions/744165/set-up-nat-between-docker-networks) (or, at least, I think I have) and am trying to ping one container from the other. Running Wireshark on the host, I see the ICMP packet come in on the bridge interface and going out on the veth interface (instead of the other bridge).
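My own hedged gloss: `master br-fc7599764562` means the veth is enslaved to that bridge as a port; the bridge switches frames between its ports, and the other end of the veth is the container's `eth0` (the `@if14` index lives in the container's namespace, not the host's). This can be inspected with:
```
bridge link show              # ports and the "master" bridge they belong to
ip -d link show vethe6879a0   # shows the "veth" and "bridge_slave" details
sudo docker exec some_container ip addr   # placeholder name; works if the image ships iproute2
```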
Daniel Walker
(921 rep)
Apr 27, 2023, 02:30 PM
• Last activity: Apr 27, 2023, 06:59 PM
1 vote • 0 answers • 317 views
Which app created the veth in my OS?
Showing the network interface named `vethf2842c3`:
ifconfig | grep veth -A 7
vethf2842c3: flags=-28605 mtu 1500
inet 169.254.83.218 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 xxxxxxxxxx prefixlen 64 scopeid 0x20
ether xx:xx:xx:xx:xx:xx txqueuelen 0 (Ethernet)
RX packets 761 bytes 112388 (109.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1112 bytes 482321 (471.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I wondered: which app created `vethf2842c3`?
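A hedged way to trace it back (assuming a container runtime is the usual suspect): find which bridge the interface is enslaved to and which namespace holds the other end, then match that namespace to a process.
```
ip -d link show vethf2842c3                  # a "master docker0" / "master br-..." points at the runtime
ethtool -S vethf2842c3 | grep peer_ifindex   # ifindex of the other end, in its own namespace
# For Docker, list containers and their PIDs, then look inside each one:
docker ps -q | xargs -r -n1 docker inspect -f '{{.Name}} {{.State.Pid}}'
# nsenter -t <pid from the list> -n ip -o link   # look for that ifindex
```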
newview
(205 rep)
Oct 28, 2022, 02:06 AM
0 votes • 1 answer • 393 views
ipv4 forwarding breaks bridges and veths
I've successfully gotten the following to work:
ip netns add quarantine
ip link add eth0-q type veth peer name veth-q
ip link add br0 type bridge
ip link set veth-q master br0
ip link set br0 up
ip link set veth-q up
ip link set eth0-q netns quarantine
ip netns exec quarantine ip link set lo up
ip netns exec quarantine ip link set eth0-q up
ip netns exec quarantine ip address add 192.168.66.5/24 dev eth0-q
ip netns exec quarantine dnsmasq --interface=eth0-q --dhcp-range=192.168.66.10,192.168.66.50,255.255.255.0
ip link set eno1 master br0
This allows me to run an instance of dnsmasq without interfering with network-manager, and lets a device connecting through my default ethernet interface (eno1) get an IP in 192.168.66.0/24.
I then decided to grant internet access, which I did like so:
ip address add 192.168.66.1/24 dev br0
iptables -A FORWARD -i wlp58s0 -o br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -o wlp58s0 -o br0 -j ACCEPT
iptables -A FORWARD -j LOG
iptables -t nat -A POSTROUTING -o wlp58s0 -j MASQUERADE
sysctl -w net.ipv4.ipforward=1
sysctl -p
where wlp58s0 is my WiFi interface connected to my home WiFi. I also had to kill the dnsmasq described previously and replace it with:
ip netns exec quarantine dnsmasq --interface=eth0-q \
--dhcp-range=192.168.66.10,192.168.66.50,255.255.255.0 \
--dhcp-option=3,192.168.66.1 --dhcp-option=6,8.8.8.8
This way the device connected via eno1 knows to find the gateway and ask DNS queries to the Google DNS server 8.8.8.8.
All of this works perfectly fine, and after rebooting my machine, all the configuration is gone as expected, and things work consistently.
However: in an earlier attempt, I took advice found on the internet to enable packet forwarding, and instead of using sysctl, I did:
echo 1 > /proc/sys/net/ipv4/ip_forward
This had granted internet access after I had already connected my device on eno1 where it already had an IP.
But after rebooting my machine, that IP forwarding setting had become persistent. Moreover, writing a 0 where I had written a 1 was not persistent. Worse, the initial setup (no internet access, just hand out IPs) was broken: my device on eno1 could not get an IP anymore from the configuration I described in the beginning. I used Wireshark: requests for an IP could be seen on br0 but were gone from veth-q. Even more peculiar, only IPv6 traffic could be seen on veth-q; the IPv4 traffic was entirely gone. Manually disabling IP forwarding by writing a 0 to /proc/sys/net/ipv4/ip_forward did nothing to help. Eventually I reinstalled my Linux distribution (Ubuntu) and took care to never use that echo command again and to do things with sysctl, which causes no problems.
Why did this happen? It was very strange and peculiar behaviour, because everything else on my computer seemed to be working just fine: I could get internet access, everything seemed to be back to normal, but that one interaction between the bridge and veth had been corrupted.
Any light shed on this would be greatly appreciated!
Arno
(1 rep)
Jun 3, 2022, 06:44 AM
• Last activity: Jun 10, 2022, 03:36 PM
3 votes • 1 answer • 3547 views
Why can't two bridged veths ping each other?
I need to set up a network environment where two veth interfaces are attached to one bridge, and they need to be able to communicate with each other.
So I execute the following commands in a clean Ubuntu shell:
# Create Two veth and attach them to the bridge
sudo ip link add veth0 type veth peer name veth0p
sudo ip link add veth1 type veth peer name veth1p
sudo brctl addbr br0
sudo brctl addif br0 veth0p
sudo brctl addif br0 veth1p
# Set links up
sudo ip link set veth0 up
sudo ip link set veth1 up
sudo ip link set veth0p up
sudo ip link set veth1p up
sudo ip link set br0 up
# Give each veth an IP address
sudo ip addr add 10.0.0.1/24 dev veth0
sudo ip addr add 10.0.0.2/24 dev veth1
# Try to ping one from the other
ping 10.0.0.1 -I veth1
The ping does not work. Could anyone help me with this? What should I do to make `veth0` and `veth1` able to ping each other?
The output of `ip r s` is:
default via 192.168.0.1 dev ens160 proto dhcp src 192.168.0.119 metric 100
10.0.0.0/24 dev veth0 proto kernel scope link src 10.0.0.1
10.0.0.0/24 dev veth1 proto kernel scope link src 10.0.0.2
192.168.0.0/24 dev ens160 proto kernel scope link src 192.168.0.119
192.168.0.1 dev ens160 proto dhcp scope link src 192.168.0.119 metric 100
The purpose of this is to put those two veth interfaces into a VXLAN overlay network later. But for development purposes, I want to test the bridge and the two interfaces without the VXLAN being set up at this point. (But even without the VXLAN set up, they should be able to ping each other as long as they are on the same bridge, right?)
Thank you!
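A hedged sketch of what usually makes this test meaningful (my addition, not from the question): with both addresses configured in the same (root) namespace, the kernel delivers the traffic locally and never sends it out through veth1, so the bridge is not exercised at all. Moving each outer end into its own namespace forces the ping to actually cross the bridge:
```
sudo ip netns add ns0
sudo ip netns add ns1
sudo ip link set veth0 netns ns0
sudo ip link set veth1 netns ns1
sudo ip netns exec ns0 ip addr add 10.0.0.1/24 dev veth0
sudo ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
sudo ip netns exec ns0 ip link set veth0 up
sudo ip netns exec ns1 ip link set veth1 up
sudo ip netns exec ns1 ping -c 3 10.0.0.1
```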
Entropy Xu
(31 rep)
Jun 24, 2021, 06:15 AM
• Last activity: Oct 24, 2021, 04:30 PM
1 vote • 0 answers • 263 views
Why do I have so many veth interfaces in my firewalld zone?
I have a server that runs `docker` and `firewalld`. Everything works fine, but I **cannot** reload `firewalld` with the command
firewall-cmd --reload
One thing that I noticed is the result of this command:
[root@localhost ~]# firewall-cmd --zone=public --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 veth0008eb7 veth00197d0 veth0023d4b ...(10K of these)... veth036eaae
I have over 10K of these `veth...` interfaces, which is insane.
- I tried to run a script to delete all of these interfaces, but was still not able to reload.
- Each time I reload with firewall-cmd --reload, the server becomes unable to connect.
How do I fix this?
Thanks in advance.
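A hedged cleanup sketch (assuming the stale entries are zone bindings left behind by churned containers; review before trusting it): drop every veth binding whose interface no longer exists, then retry the reload.
```
for i in $(firewall-cmd --zone=public --list-interfaces); do
    case "$i" in
        veth*)
            ip link show "$i" >/dev/null 2>&1 ||
                firewall-cmd --zone=public --remove-interface="$i"
            ;;
    esac
done
firewall-cmd --runtime-to-permanent   # only if the permanent config should mirror the cleanup
```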
Federal Reserve
(955 rep)
Oct 3, 2021, 11:48 AM
2 votes • 1 answer • 1624 views
How to setup veth with 9000 MTU to simulate sending and receiving large UDP multicast packets on the same host?
The sender needs to transmit large data packets to the receiver (which is on the same host with 1500 MTU) and I think this can be simulated using veth with 9000 MTU, from my reading on it. But I'm not able to figure out how exactly to do that - most of the veth tutorials/articles on the internet mention network namespaces and I'm not sure if I would need to create a network namespace to achieve this. Any pointers/suggestions would be helpful, thanks!
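A hedged sketch of one way to set this up (names and addresses are mine): a veth pair with the MTU raised to 9000 on both ends, the peer placed in a throwaway namespace so that sender and receiver on the same host genuinely traverse the link instead of loopback.
```
ip netns add jumbo
ip link add veth-a type veth peer name veth-b
ip link set veth-a mtu 9000
ip link set veth-b mtu 9000
ip link set veth-b netns jumbo
ip addr add 10.9.0.1/24 dev veth-a
ip link set veth-a up
ip netns exec jumbo ip addr add 10.9.0.2/24 dev veth-b
ip netns exec jumbo ip link set veth-b up
# Run the receiver with "ip netns exec jumbo <receiver>" bound to 10.9.0.2;
# for multicast, add a 224.0.0.0/4 route (or set IP_MULTICAST_IF) on each side.
```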
Anand
(23 rep)
Jun 23, 2021, 06:05 AM
• Last activity: Jun 23, 2021, 06:46 AM
5 votes • 1 answer • 1145 views
veth interfaces performance problem
On a fast AWS machine (`m5.2xlarge`), I am creating around 600 veth interfaces, each one having a little server (with `socat`) running on a port.
I then start sending around 7kb/second of data per server. When sending to about 500 servers everything goes well, but when I send to around 600 servers, timeouts begin to occur. The connection to a server can take more than 3 seconds to be established, as I have tested.
It's not a lot of processing (for such a server) and it's not a lot of data.
*Is the Linux veth implementation slow?*
I have created a [git repo to reproduce the problem](https://github.com/pallix/veth_network_namespaces_perf) . Any help would be highly appreciated.
Pierre Allix
(51 rep)
Jan 22, 2020, 01:56 PM
• Last activity: Jun 14, 2021, 03:38 PM
1 vote • 1 answer • 7539 views
How to set up a bridge interface, add eth0 to it, and have internet connection
I am trying to set up `br0` with `eth0` and `veth1` on a headless server where I am logged in via `ssh`.
I am doing this as a preparation to run a systemd service in a special namespace. This namespace will have the peer of the virtual device as its endpoint: `veth2`.
This should make it possible to set up static routes for just this process. In my case it will then route packets through a VPN while all the other traffic goes to the standard gateway.
To figure out how this works, I wrote a small script that executes the following so fast that the `ssh` connection to the server does not break. I can then `traceroute` the `veth2` successfully. The server has just one `eth` device and no WiFi, which is why I have to do it this way.
My problem is that after executing the script the server does not have internet access any more. I am probably missing a lot here. Can anyone help?
My script:
pi@testpi:~ $ cat add_bridge_and_veth1.sh
brctl addbr br0;
ip addr del 192.168.100.222/24 dev eth0;
ip addr add 192.168.100.222/24 dev br0;
brctl addif br0 eth0;
ip link set dev br0 up;
ip link add name veth1 type veth peer name veth2;
brctl addif br0 veth1;
brctl show;
ip netns add nsben1;
ip link set veth2 netns nsben1;
ip netns exec nsben1 ip addr add 192.168.55.101/24 dev veth2;
ip netns exec nsben1 ip link set lo up;
ip netns exec nsben1 ip link set veth2 up;
No internet after this in the default namespace:
pi@testpi:~ $ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 192.168.100.222 (192.168.100.222) 3085.668 ms !H 3085.488 ms !H 3085.393 ms !H
pi@testpi:~ $ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 192.168.100.222 icmp_seq=1 Destination Host Unreachable
EDIT: My default setup is very simple. `eth0` gets a fixed IP in 192.168.100.0/24 from the router according to the MAC of the device: 192.168.100.222.
pi@testpi:~ $ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether b8:27:eb:98:70:4b brd ff:ff:ff:ff:ff:ff
inet 192.168.100.222/24 brd 192.168.100.255 scope global dynamic noprefixroute eth0
valid_lft 83282sec preferred_lft 72482sec
inet6 fe80::247e:fd3c:36d7:68f5/64 scope link
valid_lft forever preferred_lft forever
3: br0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b8:27:eb:98:70:4b brd ff:ff:ff:ff:ff:ff
inet 192.168.100.222/24 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::ba27:ebff:fe98:704b/64 scope link
valid_lft forever preferred_lft forever
5: veth1@if4: mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether e2:bc:58:01:67:92 brd ff:ff:ff:ff:ff:ff link-netns nsben1
inet 169.254.205.121/16 brd 169.254.255.255 scope global noprefixroute veth1
valid_lft forever preferred_lft forever
inet6 fe80::db71:b4e9:c60f:5865/64 scope link
valid_lft forever preferred_lft forever
No network in `nsben1`, but this is not my main concern yet. I first want to have everything working in the default namespace.
root@testpi:~# ip netns exec nsben1 ping 8.8.8.8
connect: Network is unreachable
Here is the output of `ip route` in the default and `nsben1` namespaces. I think `Network is unreachable` from `nsben1` results from the internet being unreachable from the default namespace. It does not necessarily mean that something is wrong with `nsben1`, but even so, that is not the main problem at the moment.
root@testpi:~# ip route
192.168.55.0/24 dev veth2 proto kernel scope link src 192.168.55.101
root@testpi:~# ip route get 8.8.8.8
RTNETLINK answers: Network is unreachable
root@testpi:~# ip netns exec nsben1 ip route
192.168.55.0/24 dev veth2 proto kernel scope link src 192.168.55.101
root@testpi:~# ip netns exec nsben1 ip route get 8.8.8.8
RTNETLINK answers: Network is unreachable
For the sake of completeness, `ip a` in `nsben1`:
root@testpi:~# ip netns exec nsben1 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: veth2@if5: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:31:7e:0f:89:9d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.55.101/24 scope global veth2
valid_lft forever preferred_lft forever
inet6 fe80::9031:7eff:fe0f:899d/64 scope link
valid_lft forever preferred_lft forever
------
**I tried @berndbausch's approach** of just executing the first five commands
brctl addbr br0;
ip addr del 192.168.100.222/24 dev eth0;
ip addr add 192.168.100.222/24 dev br0;
brctl addif br0 eth0;
ip link set dev br0 up;
in a script. When I do this I get the following output, where `br0` and `eth0` still have the same IP, which probably is wrong:
pi@testpi:~ $ sudo ./add_bridge.sh
pi@testpi:~ $ ip route
192.168.100.0/24 dev br0 proto kernel scope link src 192.168.100.222
pi@testpi:~ $ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether b8:27:eb:98:70:4b brd ff:ff:ff:ff:ff:ff
inet 192.168.100.222/24 brd 192.168.100.255 scope global dynamic noprefixroute eth0
valid_lft 86389sec preferred_lft 75589sec
inet6 fe80::247e:fd3c:36d7:68f5/64 scope link
valid_lft forever preferred_lft forever
3: br0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b8:27:eb:98:70:4b brd ff:ff:ff:ff:ff:ff
inet 192.168.100.222/24 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::ba27:ebff:fe98:704b/64 scope link
valid_lft forever preferred_lft forever
I then tried to execute the script adding `ip link set dev eth0 down;` and `up` like this:
ip link set dev eth0 down;
brctl addif br0 eth0;
ip link set dev eth up;
I lose the connection via `ssh`, which is understandable. Maybe it is normal that `eth0` has the same IP as the `br0` it is connected to. If not, why is the IP not removed despite me using `ip addr del 192.168.100.222/24 dev eth0;`?
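A hedged reading of the output above (my interpretation): deleting the address from `eth0` also removed the default route, and nothing re-created it on `br0`, so the box has no route to the internet; meanwhile the DHCP client re-added 192.168.100.222 on `eth0`, which is why both interfaces show it. Assuming 192.168.100.1 is the router (the question doesn't say), the missing pieces would look roughly like:
```
ip addr flush dev eth0                           # eth0 is now only a bridge port
ip route add default via 192.168.100.1 dev br0   # assumed gateway address
# Also make the DHCP client (dhcpcd/dhclient) manage br0 instead of eth0, or
# give br0 a static configuration, so this survives the next lease renewal.
```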
bomben
(549 rep)
Feb 10, 2021, 08:37 AM
• Last activity: Feb 11, 2021, 11:56 AM
1 vote • 1 answer • 1130 views
cant connect tap interface with eth
I wrote a simple C program which can send and receive Ethernet frames using `/dev/net/tun`, and I connected the tap interface with my Ethernet NIC using both a virtual bridge and a veth pair.
I expected to see some traffic from my tap while capturing on the Ethernet card.
The problem is that I see my packets coming from tap0 and I see them arriving at the bridge, but I can't see anything arrive while capturing on the Ethernet card.
To be honest, I have no idea what I am doing, so I tried configuring the bridge with netplan config files, `brctl`, `ip` and `ifconfig`.
I tried adding IP addresses of the same subnet to br0, tap0 and enp8s0, and I tried to just give an IP address to the bridge, always with the same result; same thing while using veth.
Currently my setup is:
#create tap0
sudo ip tuntap add tap0 mode tap
sudo ip link set tap0 promisc on
#create Br0
sudo ip link add br0 type bridge
sudo ip link set enp8s0 promisc on
#set to up
sudo ip link set br0 up #To add an interface br0s state must be up
sudo ip link set enp8s0 up #To add an interface its state must be up
sudo ip link set dev tap0 up #To add an interface its state must be up
#Adding the interface into the bridge is done by setting its master to bridge_name:
sudo ip link set enp8s0 master br0
sudo ip link set tap0 master br0
sudo ip addr add dev br0 192.168.4.10/24
I am writing this structure to the tap file descriptor:
struct eth_hdr
{
    unsigned char h_dest[6];    /* destination MAC */
    unsigned char h_source[6];  /* source MAC */
    uint16_t ethertype;         /* big-endian EtherType, e.g. htons(0x0800) */
    unsigned char payload[];
} __attribute__((packed));
Wireshark shows a valid Ethernet frame, but I am not sure what source MAC would be appropriate.
What can I do to get my Ethernet frames from tap0 to enp8s0?
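Some hedged checks to start with (my additions): make sure `h_dest`/`h_source` really are 6-byte fields and the EtherType is big-endian, use a locally administered source MAC such as `02:00:00:00:00:01`, and then watch whether the bridge learns and forwards it.
```
bridge link show          # tap0 and enp8s0 should both be in "state forwarding"
bridge fdb show br br0    # MAC addresses the bridge has learned, per port
tcpdump -e -i enp8s0      # -e prints the link-level headers of whatever gets out
# Broadcast and unknown-unicast frames should be flooded to enp8s0 even before
# any learning happens; if nothing at all appears there, check that enp8s0 is
# up and enslaved (master br0) and that no ebtables/nftables bridge rules drop it.
```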
KingKoopa
(11 rep)
Jan 1, 2021, 05:14 PM
• Last activity: Jan 4, 2021, 02:23 PM