Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
36
views
How to troubleshoot random latency spikes in WiFi traffic?
I am building an embedded device that runs Linux (Armbian) and receives a steady stream of UDP data. It has both Ethernet and WiFi connectivity. Ethernet works fine; over WiFi, however, the data stream occasionally appears to pause suddenly for up to a second, and no data is received during this period. The packets are not lost; they seem to be queued somewhere in the network driver stack, and when the pause ends the entire backlog is dumped on the application at once.
This would be fine if I had a reasonable buffer to work with, but my stream is very time-sensitive: I cannot afford more than about 50 ms of jitter, so these pauses lead to big buffer underruns in my application.
The delay happens on the receiver, below userspace, since tshark running there shows a gap in packets when it happens, while no such gap is present in Wireshark on the sending PC. It is hard to tell whether the spikes correlate with activity, because load on the device is very evenly distributed; nothing else of note runs there besides my application. It is not correlated with time either; it seems random.
Where can I begin to troubleshoot this?
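A minimal sketch for quantifying the stalls from userspace, assuming a hypothetical stream port of 5000: timestamp every datagram and log inter-arrival gaps that exceed the 50 ms budget.

import socket, time

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 5000))  # hypothetical port of the UDP stream
last = time.monotonic()
while True:
    s.recvfrom(4096)
    now = time.monotonic()
    gap_ms = (now - last) * 1000
    if gap_ms > 50:  # log anything beyond the jitter budget
        print(f"{time.strftime('%H:%M:%S')} gap of {gap_ms:.0f} ms")
    last = now

If the logged gaps turn out to be periodic, WiFi background scanning or power-save on the receiver are common culprits worth ruling out first.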
GrixM
(101 rep)
Jul 18, 2025, 08:32 PM
• Last activity: Jul 20, 2025, 09:09 PM
2
votes
1
answers
146
views
Trying netconsole on a single machine, but it is not working
Some basic info of my machine:
# uname -a
Linux iZ2zeirtviyt9b8s96ery2Z 6.8.0-40-generic #40-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul 5 10:34:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:16:3e:3d:fb:0d brd ff:ff:ff:ff:ff:ff
altname enp0s5
altname ens5
inet 172.25.37.57/18 metric 100 brd 172.25.63.255 scope global dynamic eth0
valid_lft 1887925919sec preferred_lft 1887925919sec
inet6 fe80::216:3eff:fe3d:fb0d/64 scope link
valid_lft forever preferred_lft forever
In session 1, I run `modprobe netconsole netconsole=@/,@127.0.0.1/`, and dmesg shows:
[4235827.031716] netpoll: netconsole: local port 6665
[4235827.031724] netpoll: netconsole: local IPv4 address 0.0.0.0
[4235827.031726] netpoll: netconsole: interface 'eth0'
[4235827.031727] netpoll: netconsole: remote port 6666
[4235827.031729] netpoll: netconsole: remote IPv4 address 127.0.0.1
[4235827.031730] netpoll: netconsole: remote ethernet address ff:ff:ff:ff:ff:ff
[4235827.031734] netpoll: netconsole: local IP 172.25.37.57
[4235827.031794] printk: legacy console [netcon0] enabled
[4235827.031797] netconsole: network logging started
In session 2, I run
# nc -u -l -v 127.0.0.1 6666
Bound on localhost 6666
So I think the setup is done.
Then I send some logs to dmesg:
# dmesg -n 8 && echo "NETCONSOLE SUCCESS" | sudo tee /dev/kmsg
NETCONSOLE SUCCESS
# dmesg | tail
[4235827.031729] netpoll: netconsole: remote IPv4 address 127.0.0.1
[4235827.031730] netpoll: netconsole: remote ethernet address ff:ff:ff:ff:ff:ff
[4235827.031734] netpoll: netconsole: local IP 172.25.37.57
[4235827.031794] printk: legacy console [netcon0] enabled
[4235827.031797] netconsole: network logging started
[4235853.405638] NETCONSOLE SUCCESS
[4235864.247827] NETCONSOLE SUCCESS
[4235901.736691] NETCONSOLE SUCCESS
[4235902.538482] NETCONSOLE SUCCESS
But in session 2, I cannot see any received logs, although I expected them to appear there.
Could anyone help me debug this further?
Note: why use one machine? I just want to learn how netconsole works, as a demo.
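A relevant detail, offered as a pointer rather than a confirmed diagnosis: netconsole is built on netpoll, which hands frames directly to the driver of the interface it bound to (the dmesg above shows `interface 'eth0'`), so the packets never traverse the loopback device and a listener on 127.0.0.1 cannot see them. A sketch of the receiving side to run on a second machine instead:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 6666))  # netconsole's default remote port
while True:
    data, peer = s.recvfrom(4096)
    print(peer[0], data.decode(errors="replace"), end="")  # kernel lines arrive newline-terminated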
linrl3
(51 rep)
Jun 9, 2025, 05:47 AM
• Last activity: Jun 9, 2025, 08:01 AM
9
votes
3
answers
7559
views
Cannot see packets arrive on application socket that were seen by Wireshark
Using Ubuntu 14
I have a Linux machine where there are two interfaces:
eth1: 172.16.20.1
ppp0: 192.168.0.2
ppp0 is connected to a device which has a PPP interface (192.168.0.1) and a WAN interface (172.16.20.2). I can verify that this device can reach 172.16.20.1
The problem I am having is if I send a packet using Python on the same machine:
# client.py
import socket

cl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cl.sendto(b"Hello", ("172.16.20.1", 5005))  # datagram addressed to the eth1 address

# server.py
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("", 5005))  # listen on all interfaces
while True:
    data, addr = srv.recvfrom(2048)
    print("Message: ", data)
the script works fine, but I cannot see the packet in Wireshark coming out of eth1 (I can only see it when capturing on the lo interface). I assume the OS has detected that the packet is for one of its local interfaces and delivers it locally instead of sending it out through the socket on 192.168.0.2.
When I add the following rules to prevent this from happening:
sudo ip route del table local 172.16.20.1 dev eth1
sudo ip route add table local 172.16.20.1 dev ppp0
sudo ip route flush cache
What happens is:
- I can see the packets on Wireshark now arriving at eth1, the source address is the address of the WAN (172.16.20.2)
- I cannot see any output from server.py after restarting the program.
**Ignoring the ppp0 interface and using two ethx interfaces:**
If I run the program on two separate machines (client and server) without applying the rules, I can see the packets arriving at eth1 in Wireshark, and the output from server.py. If I run the program on two separate machines AND apply the rules above for the ppp0 connection (I have not removed it), I no longer see any output from server.py but can still see the packets arriving in Wireshark. My knowledge of the TCP/IP stack is not good, but it looks like the link layer is no longer forwarding up to the application layer?
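A small sketch of how to observe the local-delivery decision directly: connect() on a UDP socket performs route selection without sending anything, so the chosen source address reveals whether the destination matched the kernel's `local` routing table.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("172.16.20.1", 5005))  # route lookup only, no packet is sent
print(s.getsockname())  # a local source here means delivery via lo, not eth1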
nnja
(199 rep)
Jan 17, 2018, 04:09 AM
• Last activity: May 19, 2025, 01:08 PM
0
votes
1
answers
2353
views
Netcat listener: Line break between syslog messages
I am playing with the `netcat` command. On this Linux system, I have set up a netcat listener on UDP 514 so that I can see syslog messages from remote systems.
$ sudo nc -v -ulp 514
listening on [::]:514 ...
connect to 192.168.20.252:514 from (null) ([::ffff:192.168.20.5]:58904)
60: *Mar 5 19:57:06.735: %SYS-5-CONFIG_I: Configured from console by console61: *Mar 5 19:57:32.651: %SYS-5-CONFIG_I: Configured from console by console62: *Mar 5 20:10:10.127: %SYS-5-CONFIG_I: Configured from console by console
The logs are coming through, but there is no line break between events, and I need one. The terminal output is enough; there is no need to store the logs. Given that the messages are not uniform, how can I achieve this?
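One observation that may help, as a sketch rather than a definitive fix: each syslog event arrives as its own UDP datagram, and nc simply writes the payloads back to back. Reading datagram by datagram restores the boundaries:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 514))  # ports below 1024 need root
while True:
    data, peer = s.recvfrom(8192)
    print(data.decode(errors="replace").rstrip())  # one line per event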
Bruce Malaudzi
(1655 rep)
Mar 5, 2021, 08:14 PM
• Last activity: May 7, 2025, 11:03 AM
1
votes
0
answers
105
views
UDP multicast fails with "network unreachable" after static IP change on Linux 4.1
We have a requirement for UDP multicasting in our project, using the Linux 4.1 kernel with a static IP address.
Basic UDP multicasting using the `sendto` function works fine with the device's static IP 10.13.204.100; the issue comes when I change the device's IP address to 10.13.204.101, or to any other IP in the same range. Multicasting then starts failing with the error:
sendto: network unreachable
The UDP socket is initialized by the following function:

int udp_init()
{
    char multicastTTL = 10;

    // Create UDP socket:
    memset(&socket_desc, 0, sizeof(socket_desc));
    socket_desc = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (socket_desc < 0) {
        printf("socket creation failed ==> %d\nsocket_desc ==> %d\n", udp_socket_fd, socket_desc);
        return -1;
    }

    /* Set the TTL (time to live/hop count) for the send */
    if (setsockopt(socket_desc, IPPROTO_IP, IP_MULTICAST_TTL,
                   &multicastTTL, sizeof(multicastTTL)) < 0) {
        printf("setsockopt failed at %s:%s:%d\n", __FILE__, __FUNCTION__, __LINE__);
        return 1;
    }
    return 0;
}
Once the IP has been changed, the UDP socket is closed with `close(socket_desc)`. The `udp_init` function shown above is then called again to re-initialize the socket, and `sendto` is used to transmit the data, but this fails with the error:
sendto: network unreachable
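A hedged guess at a workaround, assuming the failure comes from the routing table momentarily lacking a route that covers the multicast group after the address change: pinning the egress interface with IP_MULTICAST_IF bypasses the route lookup. Sketched here in Python (the C equivalent is a setsockopt(IPPROTO_IP, IP_MULTICAST_IF, ...) with a struct in_addr); the group and port are hypothetical:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# Re-apply after every address change: pin multicast egress to the new address
s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("10.13.204.101"))
s.sendto(b"payload", ("239.1.1.1", 5000))  # hypothetical group and port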
Thanks in advance
Rajesh
(23 rep)
Jan 4, 2023, 10:11 AM
• Last activity: Apr 24, 2025, 12:17 PM
1
votes
2
answers
300
views
Traceroute from VM guest does not go past Host PC to internet
I have created an Ubuntu virtual machine on my PC, and I was trying out the traceroute command, as I need it for a college project. However, when I ran `traceroute www.google.com`, all I got was:
1 _gateway (10.0.2.2) 0.494 ms 0.472 ms 0.461 ms
2 * * *
3 * * *
and so on, nothing but "* * *". It worked when I added the "-I" argument, but I would not be allowed to use that for my project. Is there any way to fix this, so that I can do traceroute using UDP, not ICMP?
Might also be worth mentioning that when I ran traceroute on my Windows PC, the one the VM is running on, I didn't encounter any issues like this.
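For reference, UDP traceroute is simple enough to sketch by hand, which can also help pin down where the probes die; a minimal version, assuming it runs as root (a raw socket is needed to read the ICMP replies) and capping at 15 hops:

import socket

DEST = socket.gethostbyname("www.google.com")
for ttl in range(1, 16):
    rx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    rx.settimeout(2.0)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    tx.sendto(b"", (DEST, 33434 + ttl))  # classic traceroute port range
    try:
        _, addr = rx.recvfrom(512)  # ICMP time-exceeded or port-unreachable
        print(ttl, addr[0])
        if addr[0] == DEST:
            break
    except socket.timeout:
        print(ttl, "*")
    finally:
        tx.close()
        rx.close()

The `* * *` pattern with UDP but not with `-I` is consistent with a NAT or firewall along the path passing ICMP echo while swallowing the UDP probes (the VM's NAT mode is a plausible suspect), though that is an inference, not a confirmed diagnosis.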
Boni Daniel
(13 rep)
Jun 1, 2023, 06:13 PM
• Last activity: Jan 18, 2025, 02:59 PM
0
votes
0
answers
85
views
Send UDP data from one NIC and receive on another NIC on same machine
I have multiple NICs on my Ubuntu machine.
$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:caff:fe8e:4751 prefixlen 64 scopeid 0x20
ether 02:42:ca:8e:47:51 txqueuelen 0 (Ethernet)
RX packets 765284 bytes 41062277 (41.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1184197 bytes 5763037228 (5.7 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno1np0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.100.105 netmask 255.255.255.0 broadcast 10.10.100.255
ether 3c:ec:ef:74:3b:16 txqueuelen 1000 (Ethernet)
RX packets 31185342 bytes 31349645842 (31.3 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7674137 bytes 1643490885 (1.6 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno2np1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 3c:ec:ef:74:3b:17 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp175s0f0np0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.0.10 netmask 255.255.0.0 broadcast 172.16.255.255
ether 1c:34:da:7e:65:de txqueuelen 1000 (Ethernet)
RX packets 214905696 bytes 268542904796 (268.5 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75363 bytes 11878888 (11.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp175s0f1np1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.0.11 netmask 255.255.0.0 broadcast 172.16.255.255
ether 1c:34:da:7e:65:df txqueuelen 1000 (Ethernet)
RX packets 36352647 bytes 40206354758 (40.2 GB)
RX errors 0 dropped 283164 overruns 0 frame 0
TX packets 998 bytes 114081 (114.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 3819602 bytes 764575236 (764.5 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3819602 bytes 764575236 (764.5 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I can send and receive datagrams via UDP sockets, using code like the UDP client/server examples at https://en.wikipedia.org/wiki/Berkeley_sockets .
My server is running on enp175s0f1np1 (172.16.0.11).
When I run `watch -n 1 ifconfig`, I only see RX increment on enp175s0f1np1; TX does not increment on any NIC. Not sure why, since I am receiving data just fine. I think it is using the same NIC, enp175s0f1np1, for both TX and RX.
How can I make it do TX from the other NIC, enp175s0f0np0 (172.16.0.10)?
`SO_BINDTODEVICE` seems to be for server sockets, as shown here: https://unix.stackexchange.com/a/648721/583986 . Will it work for a client socket as well?
Also any idea why the TX won't increment when I am receiving data?
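A sketch of SO_BINDTODEVICE on a client (sender) socket, which does apply to outbound traffic too; it needs root or CAP_NET_RAW, and the peer address and port here are assumptions for illustration:

import socket

# Value from asm-generic/socket.h; older Pythons may not export the constant
SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Force transmission out of enp175s0f0np0 regardless of the routing table
s.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, b"enp175s0f0np0\0")
s.bind(("172.16.0.10", 0))  # also pin that NIC's source address
s.sendto(b"probe", ("172.16.0.11", 9999))  # hypothetical peer and port

One caveat worth noting: Linux normally routes traffic between two addresses that are both local to the machine over the internal loopback path, so making packets actually leave one physical NIC and re-enter another on the same host generally requires network namespaces (or similar tricks) even once the sender is pinned to a device.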
rosewater
(109 rep)
Oct 18, 2024, 05:18 PM
• Last activity: Dec 10, 2024, 12:07 AM
0
votes
1
answers
42
views
Linux Off-path ICMP Fragmentation Needed Injection Attack to quic-go library
I was looking at [CVE-2024-53259](https://nvd.nist.gov/vuln/detail/CVE-2024-53259), where an attacker can inject an ICMP Fragmentation Needed message into a host that has a QUIC connection using the quic-go library. The cause is quic-go setting the `IP_PMTUDISC_DO` socket option when probing its path MTU.
What I don't really get is this: while the CVE page basically says the kernel receives the attacker's ICMP packet and updates the path MTU, the Linux kernel seems to handle each received ICMP error separately by destination and source IP, building a 'flow key' (I'm looking at the [ipv4_sk_update_pmtu function](https://github.com/torvalds/linux/blob/feffde684ac29a3b7aec82d2df850fbdbdee55e4/net/ipv4/route.c#L1102)):
void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
{
    const struct iphdr *iph = (const struct iphdr *)skb->data;
    struct flowi4 fl4;
    struct rtable *rt;
    struct dst_entry *odst = NULL;
    bool new = false;
    struct net *net = sock_net(sk);

    bh_lock_sock(sk);

    if (!ip_sk_accept_pmtu(sk))
        goto out;

    odst = sk_dst_get(sk);

    if (sock_owned_by_user(sk) || !odst) {
        __ipv4_sk_update_pmtu(skb, sk, mtu);
        goto out;
    }

    // this function calls flowi4_init_output inside, which takes src/dst addr as arguments
    __build_flow_key(net, &fl4, sk, iph, 0, 0, 0, 0, 0);

    rt = dst_rtable(odst);
    if (odst->obsolete && !odst->ops->check(odst, 0)) {
        rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
        if (IS_ERR(rt))
            goto out;

        new = true;
    }
    ...
}
If packets are handled based on their src/dst addresses, the attacker's packet should not disturb an existing connection, because it has a different source IP address. Why is this kind of attack possible?
Thanks in advance
kota-yata
(3 rep)
Dec 5, 2024, 06:19 PM
• Last activity: Dec 5, 2024, 08:38 PM
3
votes
2
answers
212
views
Why doesn't netcat print anything when listening in UDP mode when it can't reach the client even when the client can reach the server?
I'm using a fresh minimal Ubuntu server 24.04.1 LTS install.
I run these commands as root to set up networking and do some experiments:
apt install netcat-traditional
ip netns add ns1
ip netns add ns2
ip link add my_veth1 type veth peer name my_veth2
ip link set my_veth1 up netns ns1
ip link set my_veth2 up netns ns2
ip -n ns1 address add 1.2.3.4 dev my_veth1
ip -n ns1 route add 2.3.4.0/24 dev my_veth1
ip netns exec ns2 nc -u -l -p 8080
then I run this from another terminal:
ip netns exec ns1 nc -u 2.3.4.5 8080
So I tried creating the ARP table entry manually...
ip -n ns1 neighbour add 2.3.4.5 dev my_veth1 lladdr $(ip netns exec ns2 cat /sys/class/net/my_veth2/address)
And now the UDP packet is apparently being sent:

$ ip netns exec ns1 tcpdump -l -i my_veth1
00:24:15.164245 IP 1.2.3.4.36170 > 2.3.4.5.8080: UDP, length 39

(tcpdump on ns2's my_veth2 gives the same output)

However, the first terminal, which has the UDP netcat server running, still doesn't output anything. Why?
---
**EDIT 3:** After doing all of the above, I tried assigning an IP address to `my_veth2`:
ip -n ns2 address add 2.3.4.5 dev my_veth2
And now, when I send the UDP packet, I get this error in the terminal that is running netcat in listen mode:

no connection : Network is unreachable
Why? I mean, of course the network is unreachable, but that shouldn't stop the server from receiving and displaying UDP packets. In fact, the error only appears when it receives the UDP packet. So even if it knows it can't answer, it should still be able to receive and display the message, right?
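A sketch that may help separate receiving from replying, under the assumption that nc's behaviour is the confounder (nc -u effectively connects back to the first sender and errors out when no reply route exists). A plain bound socket run inside ns2, for example via `ip netns exec ns2 python3 recv.py`, just prints whatever arrives:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 8080))
while True:
    data, peer = s.recvfrom(2048)
    print(f"{peer[0]}:{peer[1]} -> {data!r}")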
Adrian
(249 rep)
Nov 22, 2024, 11:26 PM
• Last activity: Nov 26, 2024, 11:33 PM
0
votes
0
answers
28
views
How do I generate MS-Teams media traffic to 13.107.64.0/18 port 3478 from a Linux host?
I have a lab setup and want to trace the routing path UDP traffic takes to reach MS-Teams. MS-Teams media falls under the IP ranges 13.107.64.0/18, 52.112.0.0/14 and 52.120.0.0/14, and ports 3478, 3479, 3480 and 3481.
I have a Linux host connected, and I need help simulating MS-Teams (UDP) traffic from that host.
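For path tracing, any UDP datagram matching the destination range and port should exercise the same route; a minimal sketch (13.107.64.2 is an assumed address inside 13.107.64.0/18, and the payload is a dummy, not real STUN):

import socket, time

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(100):
    s.sendto(b"\x00" * 64, ("13.107.64.2", 3478))
    time.sleep(0.02)  # roughly 50 packets per second

For the hop-by-hop path itself, `traceroute -U -p 3478 13.107.64.2` sends UDP probes to the same destination port.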
Sridevi Kiran
(1 rep)
Nov 26, 2024, 12:07 PM
• Last activity: Nov 26, 2024, 12:09 PM
0
votes
0
answers
82
views
I can't see UDP packets in an NFQUEUE queue on Ubuntu, even though the iptables rules are configured correctly
I am trying to capture UDP packets on an Ubuntu server using iptables and an NFQUEUE queue, and then process them with an application. However, although I have configured the iptables rules correctly and the UDP packets appear to arrive at the server (as shown by tcpdump), I do not see them in the NFQUEUE queue.
These packets are created on another computer (the sender) by a Python script; the sender is connected directly to the Ubuntu computer (the receiver) with a wired connection. I have checked that the packets are correct (recalculated checksums and proper structure).
Configuration and Operating system:
**Ubuntu 24.04.**
**Kernel: 6.8.0-47-generic.**
The rules I'm creating on the Ubuntu machine:
> sudo iptables -I INPUT 1 -i enp0s31f6 -p udp --dport 12345 -j LOG --log-prefix "UDP packet detected: "
> sudo iptables -I INPUT 2 -i enp0s31f6 -p udp --dport 12345 -j NFQUEUE --queue-num 0
With tcpdump I have confirmed that UDP packets on port 12345 are reaching the server.
> sudo tcpdump -i enp0s31f6 udp port 12345
tcpdump: listening on enp0s31f6, link-type EN10MB (Ethernet), snapshot length 262144 bytes
13:30:42.736329 IP (tos 0x0, ttl 64, id 2001, offset 0, flags [DF], proto UDP (17), length 40)
10.22.18.54.12345 > 192.168.1.10.12345: UDP, length 12
If instead of the **enp0s31f6** interface I use the **loopback (lo)** interface, together with the command:
> echo 'hello world' | nc -u localhost 12345
It does work correctly.
To check whether it works, I look at how many packets have matched the rule. In the loopback case the counter shows 1 packet, indicating it worked, and it increases by 1 each time I run the command above.
The counter on the **enp0s31f6** rule, however, never increases, and the dmesg log shows absolutely nothing.
> sudo iptables -L -v -n
Chain INPUT (policy ACCEPT 39989 packets, 129M bytes)
pkts bytes target prot opt in out source destination
0 0 NFQUEUE 17 -- enp0s31f6 * 0.0.0.0/0 0.0.0.0/0 udp dpt:12345 NFQUEUE num 0
1 40 NFQUEUE 17 -- lo * 0.0.0.0/0 0.0.0.0/0 udp dpt:12345 NFQUEUE num 0
What could be happening and what am I doing wrong? What do I have to change to start capturing UDP packets on the **enp0s31f6** interface?
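One avenue worth ruling out, offered as an assumption rather than a confirmed cause: tcpdump captures before netfilter, so a packet can show up there yet be dropped before the INPUT chain, for example by reverse-path filtering, which is plausible here given the capture shows a source of 10.22.18.54 arriving on a 192.168.1.x interface. A sender sketch along the lines the question describes, using scapy (which computes correct checksums itself) and a source address on the receiver's subnet to take rp_filter out of the picture:

from scapy.all import IP, UDP, send

# Hypothetical source on the receiver's subnet; destination and port from the question
send(IP(src="192.168.1.20", dst="192.168.1.10") / UDP(sport=40000, dport=12345) / b"hello nfqueue")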
Carlos López Martínez
(101 rep)
Nov 6, 2024, 04:03 PM
0
votes
1
answers
208
views
What is the structure that is stored in UDP socket's send/receive buffers?
I noticed that the receive and send buffers of UDP sockets carry an overhead of approximately 600 bytes per packet. That is, when I write 10 bytes of data to a UDP socket, the actual structure stored in the send/receive buffer has a size of 768 bytes. So the questions are:
1) What is actually stored in the send/receive buffers?
2) Is there a way to get the number of overhead bytes from C code?
3) How many packets of size X bytes can be stored in the receive buffer before it overflows?
Here is a small demo:
int fd = socket(AF_INET, SOCK_DGRAM, 0);
struct sockaddr_in addr = {
    .sin_family = AF_INET,
    .sin_port = htons(8080),
    .sin_addr = { .s_addr = INADDR_ANY },
    .sin_zero = {0},
};
sendto(fd, "0123456789", 10, 0, (struct sockaddr *)&addr, sizeof(addr));
The socket that was bound to port number 8080 then has its receive buffer populated with 768 bytes of data:
> netstat -anu | grep 8080
udp 768 0 0.0.0.0:8080 0.0.0.0:*
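A sketch of the capacity arithmetic, assuming (as the numbers above suggest) that each queued datagram is charged to the buffer at its kernel allocation size (the "truesize", 768 bytes here) rather than its payload size; the overhead varies with the allocation, so 768 is an observation, not a constant:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
truesize = 768  # observed charge per small datagram
print(f"rcvbuf={rcvbuf} bytes, fits about {rcvbuf // truesize} small datagrams")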
Zhalgas Yerzhanov
(101 rep)
Nov 1, 2024, 10:11 AM
• Last activity: Nov 1, 2024, 10:42 AM
0
votes
1
answers
180
views
Why isn't UDP port 443 accepting connections when iptables rules are set?
# Why isn't UDP port 443 accepting connections when iptables rules are set?
## Environment
- Operating System: Linux 6.8.0-47-generic #47-Ubuntu, aarch64
- Cloud VM: Yes (Hetzner)
## Current Setup
I'm trying to set up UDP communication on port 443, but I'm encountering issues despite having configured the firewall rules.
### Steps Taken
1. **Added firewall rules:**
sudo iptables -A INPUT -p udp --dport 443 -j ACCEPT
sudo netfilter-persistent save
2. **Current iptables rules:**
# IPv4 rules
iptables -L -v -n | grep 443
35318 4093K f2b-nginx-limit-req 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443
210K 25M f2b-nginx-php-accessrules 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443
209K 25M f2b-wordpress 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443
209K 25M f2b-nginx-bad-request 6 -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443
302 88281 ACCEPT 17 -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:443
# IPv6 rules
ip6tables -L -v -n | grep 443
61618 9650K f2b-wordpress 6 -- * * ::/0 ::/0 multiport dports 80,443
61600 9649K f2b-nginx-bad-request 6 -- * * ::/0 ::/0 multiport dports 80,443
61600 9649K f2b-nginx-php-accessrules 6 -- * * ::/0 ::/0 multiport dports 80,443
1467 316K ACCEPT 17 -- * * ::/0 ::/0 udp dpt:443
3. **Port scan results:**
sudo nmap -sU -p 443 <server-ip>
Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-10-26 20:14 CEST
Nmap scan report for <server-ip>
Host is up (0.0027s latency).
PORT STATE SERVICE
443/udp open|filtered https
Nmap done: 1 IP address (1 host up) scanned in 0.49 seconds
4. **Attempted netcat test:**
- Server side:
sudo nc -lu 443
- Client side:
echo 'test' | nc -u <server-ip> 443
## Issue
No traffic is getting through to the listening netcat process despite the port showing as "open|filtered" in nmap scan.
## Questions
1. Why might the port show as "open|filtered" rather than definitively "open"?
2. What additional configurations might I need to get UDP traffic flowing through port 443?
3. Are there any diagnostic steps I should take to identify where the traffic is being blocked?
## What I've Already Checked
- Firewall rules are in place and saved for both IPv4 and IPv6
- Port is not blocked according to nmap
- Basic connectivity exists between client and server
- Hetzner cloud firewall is configured to allow port 443 UDP
- fail2ban has multiple rules for ports 80,443 but these appear to be for TCP (nginx-related)
## Additional Information
If anyone needs more details about my setup or additional debugging information, please let me know.
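In case it helps, a less ambiguous test pair than nc, sketched under the assumption that both ends can run Python; if the client prints the echo, UDP/443 is open end to end:

# udptest.py: run "server" on the VM (root, port 443), "client <server-ip>" elsewhere
import socket, sys

if sys.argv[1] == "server":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", 443))
    while True:
        data, peer = s.recvfrom(2048)
        print("got", data, "from", peer)
        s.sendto(data, peer)  # echo back
else:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    s.sendto(b"ping", (sys.argv[2], 443))
    print(s.recvfrom(2048))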
ekadagami
(3 rep)
Oct 28, 2024, 11:04 AM
• Last activity: Oct 28, 2024, 08:14 PM
1
votes
0
answers
100
views
What might cause a UDP packet to match on --ctstate INVALID
This is more of a conceptual question. I've noticed that we sometimes drop UDP packets due to ctstate INVALID, but from my understanding that should never happen. Researching this, I can't really find anything on the topic. Does ctstate INVALID only cover conntrack-related problems, or is it also possible to get into this state through a broken UDP checksum or something similar?
Philippe
(569 rep)
Oct 23, 2024, 01:26 AM
8
votes
2
answers
4293
views
Start a service on a network request (socket activation)
I have a program that under normal activation listens on some port.
I don't want the program running continuously.
Is there a "quick and dirty" way to wrap the application in a shell script or similar that will monitor the appropriate port, and start the service on demand?
The simplest approach would likely lead to the first connection failing, since the wrapper would have to let go of the port and then start up the application. If the client simply connects again a short time later, though, it could all work.
But it would of course be even nicer if this was all completely transparent to the client.
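For reference, systemd socket activation is the transparent variant of this: systemd holds the port, starts the service on the first connection, and hands the listening socket over as fd 3. A sketch of the service side, under assumed unit names:

import os, socket

SD_LISTEN_FDS_START = 3  # first fd passed by systemd
assert os.environ.get("LISTEN_FDS") == "1", "not socket-activated"
srv = socket.socket(fileno=SD_LISTEN_FDS_START)  # adopt the listener
conn, peer = srv.accept()  # assumes a ListenStream= (TCP) socket unit
conn.sendall(b"hello from an on-demand service\n")
conn.close()

Paired with a hypothetical `foo.socket` unit containing `ListenStream=PORT` and a matching `foo.service`, no wrapper script is needed and the client's first connection succeeds rather than failing.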
user50849
(5482 rep)
Oct 27, 2014, 07:04 PM
• Last activity: Oct 8, 2024, 10:01 AM
2
votes
1
answers
266
views
Unicast UDP socket tx buffer fills awaiting ARP
I have an 'understanding' question, but it does come from a real-world case that I've simplified. Consider this sample network (with simplified IPs):
[Sample UDP Flow Diagram]
From left to right:
- Three hosts S1-S3, running three programs R1-R3 respectively; a switch with 3 VLANs and a trunk port connected to eth0 of a 4th host S4. All are Ubuntu 20.04 LTS, but that's probably irrelevant.
- S4 is running P1, which binds three UDP sockets, one on each of three VLAN interfaces on the respective networks of S1, S2 and S3.
- P1 sends unicast _and_ multicast UDP messages to several ports on each of those sockets, at rates of 10-100 Hz. There is a netfilter `OUTPUT` chain gating these flows.
Each of the red areas in the diagram represents something that might be 'down'. In all of these cases except one, the UDP datagrams are immediately discarded:
* Virtual interface `V10` is link-down; process `R2` isn't running; or the `OUTPUT` chain `DROP`s the flow. In each case the packets are either silently discarded by S2, or an error is reported back to P1 and the packet dropped. All other mcast continues to flow.
* In the case of S3, after removing the `OUTPUT` block, the tx buffer of the socket on `v12` rapidly fills up with unicast packets bound for `12.1`, and this blocks any further writes of both unicast and multicast.
In this state, an `strace` on `P1` might look like this (trimmed):
15:38:27 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = 31674
15:38:27 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = 31674
15:38:27 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = 31674
15:38:27 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = -1 EAGAIN (Resource temporarily unavailable)
15:38:30 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = 31674
15:38:30 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = 31674
15:38:30 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = 31674
15:38:30 sendto(9, ... {sa_family=AF_INET, sin_port=htons(2347), sin_addr=inet_addr("12.1")}, 16) = -1 EAGAIN (Resource temporarily unavailable)
And `ss` shows the problem:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
UNCONN 0 148992 0.0.0.0:51047 0.0.0.0:* users:(("task",pid=381888,fd=9))
skmem:(r0,rb212992,t148992,tb131070,f0,w0,o0,bl0,d0)
So we have 4 × ~30 kB writes to the (default 120 kB) socket buffer, with the 4th reporting `EAGAIN` when the buffer is full. The buffer fills repeatedly while the OS is waiting for an ARP response from S3 that will never come.
Two questions off the back of all this:
1. UDP is unreliable by nature. Our application is quite happy for packets to be discarded otherwise, so why is tx queued this way while the kernel is trying to resolve ARP? (Consider the case where S3 might only be occasionally connected, but other hosts on `v12` might still be reachable.)
2. The buffer appears to be emptied (then rapidly filled with new writes) every 3 seconds. One of the results is that any multicast that makes it into that same tx queue comes out in little bursts rather than at the steady send rate. Sure, we can open more send sockets, but where is this value set, and is this behaviour tunable in `sysctl`?
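On the 3-second question, one place to look (an assumption based on the timing, not a verified answer): the per-interface neighbour-resolution knobs, where the defaults of 3 multicast solicitations spaced 1000 ms apart line up with a 3 s cycle, and unres_qlen_bytes caps what can sit waiting for ARP. A sketch that dumps them:

from pathlib import Path

base = Path("/proc/sys/net/ipv4/neigh/v12")  # interface name from the question
if not base.exists():
    base = Path("/proc/sys/net/ipv4/neigh/default")
for knob in ("mcast_solicit", "retrans_time_ms", "unres_qlen", "unres_qlen_bytes"):
    print(knob, "=", (base / knob).read_text().strip())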
SmallClanger
(190 rep)
Oct 7, 2024, 08:37 AM
• Last activity: Oct 7, 2024, 10:50 AM
0
votes
2
answers
357
views
Cannot get the TPROXY UDP to actually receive some data inside
I believe I am dealing with a routing issue on my system (a default Ubuntu 22 installation), but I really can't work out how to approach debugging it.
By following the kernel's Transparent Proxy tutorial, I'm trying to implement a transparent proxy for all outbound UDP traffic with destination port 53 (that is, I'm trying to filter all the DNS requests on my system).
After many tutorials online, here's the setup that I think should work (but doesn't):
# iptables -t mangle -N DIVERT
# iptables -t mangle -A PREROUTING -p udp -m socket -j DIVERT
# iptables -t mangle -A DIVERT -j MARK --set-mark 1
# iptables -t mangle -A DIVERT -j ACCEPT
# ip rule add fwmark 1 lookup 100
# ip route add local 0.0.0.0/0 dev lo table 100
# iptables -t mangle -A PREROUTING -p udp --dport 53 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 5354
The C code for the actual UDP socket that waits for incoming traffic:
void intercept()
{
int fd;
ssize_t bytes_received_len;
char buffer[BUFFER_SIZE];
struct sockaddr_in inbound_addr, forward_server_addr;
socklen_t addr_len = sizeof(inbound_addr);
if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
perror("inbound socket creation failed");
exit(EXIT_FAILURE);
}
int val = 1;
setsockopt(fd, SOL_IP, IP_TRANSPARENT, &val, sizeof(val));
struct sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = inet_addr("127.0.0.1");
addr.sin_port = htons(5354);
bind(fd, (struct sockaddr *)&addr, sizeof(addr));
for (;;)
{
printf("waiting on port 5354\n");
bytes_received_len = recvfrom(fd, buffer, BUFFER_SIZE, 0, (struct sockaddr *)&inbound_addr, &addr_len);
printf("received %zd bytes\n", bytes_received_len);
if (bytes_received_len == 0)
{
continue;
}
}
}
All the error handling is skipped for clarity.
As you can see from the iptables commands, I'm trying to redirect outbound traffic to port 53 into my local proxy at 127.0.0.1:5354. I never actually receive anything useful from `recvfrom`: DNS requests on my system work, but they seem to bypass the proxy completely.
Am I missing anything super obvious?
I did look into all of these:
- The above-mentioned kernel tutorial
- This question: https://stackoverflow.com/questions/42738588/ip-transparent-usage
- Even though it's about Rust, this was still useful: https://serverfault.com/questions/1143600/tproxy-is-not-redirecting-all-traffic-to-a-specified-port
- go-tproxy: https://github.com/KatelynHaworth/go-tproxy
Any pointer would be super helpful. It really does feel like I'm missing something obvious.
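For comparison, a Python sketch of the same receiver that also retrieves each datagram's original destination, which a transparent DNS proxy needs in order to forward queries; the constants are written out because the socket module may not export them, and the setup assumes the iptables/ip-rule configuration above:

import socket, struct

IP_TRANSPARENT = 19      # from linux/in.h
IP_RECVORIGDSTADDR = 20  # from linux/in.h (== IP_ORIGDSTADDR)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1)      # needs root/CAP_NET_ADMIN
s.setsockopt(socket.SOL_IP, IP_RECVORIGDSTADDR, 1)  # ask for the original dst
s.bind(("127.0.0.1", 5354))

while True:
    data, ancdata, flags, peer = s.recvmsg(4096, socket.CMSG_SPACE(16))
    for level, ctype, cdata in ancdata:
        if level == socket.SOL_IP and ctype == IP_RECVORIGDSTADDR:
            port, raw = struct.unpack_from("!2xH4s", cdata)  # sockaddr_in layout
            print(f"{peer} -> original dst {socket.inet_ntoa(raw)}:{port}")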
Andrei Glingeanu
(103 rep)
Aug 26, 2024, 04:24 PM
• Last activity: Oct 3, 2024, 02:23 PM
2
votes
2
answers
489
views
traceroute (UDP) lost packets
I am facing the following issue when running `traceroute` between two nodes in the same subnet. This is done as a test of whether the network connection between these 2 nodes is reliable or not; we were told to use this approach by a known DB vendor's support team.
While running the command `traceroute -s 10.1.3.205 -r 10.1.3.210`, some packets are randomly not received, and no RTT is reported for them:
traceroute -s 10.1.3.205 -r 10.1.3.210
traceroute to 10.1.3.210 (10.1.3.210), 30 hops max, 60 byte packets
1 10.1.3.210 (10.1.3.210) 0.152 ms 0.064 ms *
By contrast, running traceroute in ICMP mode (`traceroute -I -s 10.1.3.205 -r 10.1.3.210`) is reliable, and no packets go missing.
The same issue appears on several Linux systems in our environment, with different patch levels and different versions of traceroute, and regardless of whether the system is a VM or physical.
To simplify, and to make the tcpdump easier to read, I tried the following command:
for i in {1..10}; do traceroute -s 10.1.3.205 -r 10.1.3.210 -n 1 -m 1 -q 1; done
The output is the following:
for i in {1..10}; do traceroute -s 10.1.3.205 -r 10.1.3.210 -n 1 -m 1 -q 1; done
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.203 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.067 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.067 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.071 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.067 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.075 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 *
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.142 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.067 ms
traceroute to 10.1.3.210 (10.1.3.210), 1 hops max, 28 byte packets
1 10.1.3.210 0.054 ms
Every 7th probe gets no response, and this is reproducible. The support team therefore concludes that we have an issue in our network setup causing this packet loss.
Running the same loop with a delay of 1 second, all 10 probes are sent and answered:
for i in {1..10}; do traceroute -s 10.1.3.205 -r 10.1.3.210 -n 1 -m 1 -q 1; sleep 1; done
I am a little in doubt whether this way of testing network reliability is correct, since traceroute is usually used for monitoring the network path over routed connections.
I also ran a network connection test over several hours with SAP's network test tool, and no lost connections were discovered.
So is there anything obvious I have missed, and is using `traceroute` this way a feasible way to test network reliability?
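One thing that might explain the pattern (an assumption to verify, not a confirmed diagnosis): Linux rate-limits the ICMP port-unreachable replies that UDP traceroute relies on, so hammering a Linux target with back-to-back probes periodically yields an unanswered probe even with zero real packet loss, while ICMP echo and a 1-second spacing are unaffected. The relevant knobs on the target can be read like this:

from pathlib import Path

for knob in ("icmp_ratelimit", "icmp_ratemask"):
    print(knob, "=", (Path("/proc/sys/net/ipv4") / knob).read_text().strip())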
MMAX
(256 rep)
Sep 9, 2024, 02:36 PM
• Last activity: Sep 9, 2024, 06:01 PM
0
votes
0
answers
75
views
Forwarding KDEConnect UDP packets between bridged APs with firewalld/firewall-cmd
I was having an issue where KDEConnect peers on my LAN could not see each other intermittently. Because the peers are on APs that are physically connected to, and bridged on, my main router, I tried changing settings involving hairpin, multicast-to-unicast, etc., with no real improvement. Fortunately, this answer by [@A.B](https://unix.stackexchange.com/users/251756/a-b) seems to have solved the issue with bridge-family forwarding/routing:
https://unix.stackexchange.com/questions/745847/nftables-doesnt-see-kde-connect-packets-between-two-machines-on-the-same-interf
Steps 1 and 2 were pretty straightforward for me to implement permanently. However, I would like to convert step 3 of the answer, which uses nftables, into a permanent solution using firewalld/firewall-cmd instead. The rest of my firewall setup is already defined in firewalld, and I would prefer to keep it all in there rather than layer raw nft rules on top. I mostly understand what the nft commands are doing, but not well enough to find their exact counterparts in firewall-cmd. I would have asked this in a comment on that other question, but, alas, I need more rep first.
So, how can I implement these nft commands in firewalld instead?
table bridge filter {
    chain conntrack {
        ct state vmap { invalid : drop, established : accept, related : accept }
    }

    chain kdeconnect {
        udp dport 1714-1764 counter accept
        tcp dport 1714-1764 counter accept
    }

    chain forward {
        type filter hook forward priority filter; policy drop;
        jump conntrack
        ether type ip6 drop # just like OP did: drop any IPv6
        icmp type echo-request counter accept
        jump kdeconnect
        ether type arp accept # mandatory for IPv4 connectivity
        counter
    }
}
Thanks!
D.H
(1 rep)
Sep 6, 2024, 05:47 PM
0
votes
1
answers
136
views
Implement UDP packets filtering in C
I have a question on how to set up the filtering described in the following scenario:
I see that `systemd-resolved` listens on port 53 and sends all the DNS requests on to the real DNS resolvers. I am looking for a way to do this:
1. Catch the response from the DNS resolver that is **about to go into `systemd-resolved` at port 53**
2. Hold this packet, read its content (see the actual IP address in the response), and do something with this IP
3. Release the response packet to go through to port 53
Is this something that can be achieved with `iptables`, `libpcap` or any other available solution?
I know it's possible to install my own listener on port 53, but I really don't want to cut `systemd-resolved` out of the loop; I am looking for a transparent solution.
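The hold/inspect/release pattern described in steps 1-3 maps onto NFQUEUE: an iptables rule diverts matching packets to a queue, a userspace callback examines each one, and nothing proceeds until the callback issues a verdict. A compact illustration using the Python binding for libnetfilter_queue (the same mechanism is available from C via libnetfilter_queue directly); the rule and queue number are assumptions:

# Prerequisite (assumed rule): divert resolver replies arriving from port 53
#   iptables -I INPUT -p udp --sport 53 -j NFQUEUE --queue-num 0
from netfilterqueue import NetfilterQueue  # pip install NetfilterQueue

def inspect(pkt):
    payload = pkt.get_payload()  # raw IP packet, DNS answer inside
    print(len(payload), "byte reply held; inspect it here")
    pkt.accept()                 # release it on to port 53 (or pkt.drop())

nfq = NetfilterQueue()
nfq.bind(0, inspect)
try:
    nfq.run()                    # blocks; each packet waits for a verdict
finally:
    nfq.unbind()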
Andrei Glingeanu
(103 rep)
Aug 22, 2024, 11:59 AM
• Last activity: Aug 23, 2024, 12:31 PM
Showing page 1 of 20 total questions