Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0 votes • 2 answers • 835 views
Is there a way to set the default TTL for multicast packets at system level?
A pcap capture from my CentOS 7 host to a multicast address shows TTL 25. The firewall treated this as a DDoS attack and dropped the packets.
I know that application code can set the multicast TTL with a setsockopt() call using the IP_MULTICAST_TTL option, whose default value is 1. I want to know whether there is a way to set that value at the system level.
The only tunable parameter I found is net.ipv4.ip_default_ttl, but changing it would change the TTL for the entire IP stack and wouldn't be specific to multicast packets.
Any advice on this? Thanks.
------------------
Edit, adding information requested in a comment: the application is not working. According to the network team, their firewall considers this a DDoS attack and drops the packets. To fix it, they suggested reducing the multicast TTL from the current 25 to 10 or lower.
According to the application team, they do not set this in their application and want me to enforce it at the system/OS level. But I can't find any OS-level tunable parameter that can do this.
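For reference, here is a minimal sketch of the per-socket mechanism the question refers to; everything apart from the IP_MULTICAST_TTL call (group, port, payload) is an illustrative placeholder, not something from the original question:

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Per-socket multicast TTL; the kernel default is 1. */
    unsigned char ttl = 10;
    if (setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl)) < 0) {
        perror("setsockopt(IP_MULTICAST_TTL)");
        return 1;
    }

    /* Example destination group and port (placeholders). */
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);
    dst.sin_addr.s_addr = inet_addr("239.1.1.1");

    const char msg[] = "hello";
    sendto(fd, msg, sizeof(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
    return 0;
}
```

A capture of this test program's traffic (e.g. with tcpdump -v) should then show the chosen TTL, which is a quick way to confirm whether a given value originates from the application or from somewhere else.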
user3183426
(11 rep)
Dec 8, 2023, 06:35 AM
• Last activity: Jul 18, 2025, 09:40 AM
1 vote • 0 answers • 105 views
UDP multicast fails with "network unreachable" after static IP change on Linux 4.1
We have a requirement for UDP multicasting in our project, using the Linux 4.1 kernel with a static IP address.
Basic UDP multicasting, using the sendto function to send data, works fine with the device's static IP 10.13.204.100. The issue comes when I change the IP address of the device to 10.13.204.101, or to any other IP in the same range: the UDP multicasting starts failing with the error
sendto: network unreachable
The UDP socket is initialized by the following function:
int udp_init()
{
    char multicastTTL = 10;

    // Create UDP socket:
    memset(&socket_desc, 0, sizeof(socket_desc));
    socket_desc = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (socket_desc < 0) {
        printf("udp_socket_fd==>%d\nsocket_desc==>%d\n", udp_socket_fd, socket_desc);
        return -1;
    }

    /* Set the TTL (time to live/hop count) for the send */
    // if (setsockopt(socket_desc, IPPROTO_IP, IP_MULTICAST_TTL,
    //                &multicastTTL, sizeof(multicastTTL)) < 0) {
    //     printf("setsockopt failed ==> %s:%s:%d\n", __FILE__, __FUNCTION__, __LINE__);
    //     return -1;
    // }

    return 1;
}
Once the IP has been changed, the UDP socket is closed with close(socket_desc).
The udp_init function shown above is then called again to re-initialize the UDP socket, and sendto is used to transmit the data, but this again fails with the error:
sendto: network unreachable
thanks in advance
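Not an answer from the original thread, just a hedged sketch: since "network unreachable" comes from the kernel's route lookup for the outgoing packet, one thing worth trying after the address change is pinning the outgoing interface on the socket with IP_MULTICAST_IF. The 10.13.204.101 address below is taken from the question; the helper name is made up:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper: tell the kernel which local interface (identified by
 * its address) to use for outgoing multicast from this socket, so the send
 * does not rely solely on a matching route for 224.0.0.0/4 being present. */
static int pin_multicast_interface(int fd, const char *local_ip)
{
    struct in_addr ifaddr;
    if (inet_pton(AF_INET, local_ip, &ifaddr) != 1)
        return -1;
    return setsockopt(fd, IPPROTO_IP, IP_MULTICAST_IF, &ifaddr, sizeof(ifaddr));
}

/* Example use after udp_init() recreates the socket:
 *     pin_multicast_interface(socket_desc, "10.13.204.101");
 */
```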
Rajesh
(23 rep)
Jan 4, 2023, 10:11 AM
• Last activity: Apr 24, 2025, 12:17 PM
3 votes • 1 answer • 3965 views
Is Pulseaudio able to receive RTP Multicast from any source?
I've been attempting to get PulseAudio to receive an RTP stream from VLC. I can get it to receive a TCP audio stream with no problem, and the Pi running PulseAudio is receiving the multicast data as expected, but when running with --verbose I just see it sitting suspended. sap_address is configured to the correct network and rcv is uncommented. Is it listening on all ports, since there is not one defined? Does the address need CIDR notation after it?
InfieldFly
(31 rep)
Sep 17, 2017, 08:12 AM
• Last activity: Feb 22, 2025, 04:44 PM
0 votes • 1 answer • 97 views
How can I have multiple TCP clients connected to server ttyS0?
I'm trying to test the following environment:
One server (it's a router; it has BusyBox and a few other commands) with a physical serial port and an open socket:
#tcpsvd -v 0.0.0.0 -p 999 cat /dev/ttyS0
Several clients connecting to the server.
My issues:
1. When I write a new line to ttyS0, the data is randomly sent to only one client at a time; I would like it to go to all clients at the same time.
2. What can I use instead of cat to get bidirectional communication?
3. How can I have the data sent byte by byte rather than line by line? (See the sketch below.)
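On point 3, the line-by-line behaviour typically comes from the serial device being in canonical (line-buffered) mode rather than from the TCP side. A hedged C sketch of putting the port into raw mode, so that read() returns bytes as they arrive (device path and settings are assumptions):

```c
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }
    cfmakeraw(&tio);         /* disable canonical (line) mode, echo, signals */
    tio.c_cc[VMIN]  = 1;     /* return as soon as one byte is available      */
    tio.c_cc[VTIME] = 0;     /* no inter-byte timeout                        */
    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

    char c;
    while (read(fd, &c, 1) == 1)
        write(STDOUT_FILENO, &c, 1);   /* forward byte by byte */
    return 0;
}
```

A program built around a loop like this (reading the serial port and writing to every connected TCP client, and vice versa) is essentially what would be needed to address points 1 and 2 as well.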
Thanks!
Carmine Esposito
(3 rep)
Jul 11, 2024, 05:23 AM
• Last activity: Jul 11, 2024, 07:09 AM
1 vote • 1 answer • 1227 views
Linux won't send multicast packets to interface
I have a server on a VPN configured as a TAP device (as opposed to TUN, since it is joining a wider network). I can send unicast traffic on that network, as well as receive multicast packets (and reply to them).
But the issue comes when I send multicast packets. For instance, if I issue
echo "hello" | nc -u 224.0.0.251 5354 -w 1
while watching
tcpdump -i tap0 -w test.pcap
no packets are sent.
But if I instead run echo "hello" | nc -ub 192.168.1.255,
the packets come through without issue.
I can look in ip maddr show
and see
3: tap0
inet 224.0.0.251
If I use a regular C socket, and send data to 224.0.0.251, no data appears in the packet capture or on the network.
For background: I am using Debian with systemd, and I have enabled multicast on the interface.
[Match]
Name=tap0
[Network]
DNS=127.0.0.67
Domains=local
DHCP=yes
Multicast=yes
MulticastDNS=yes
LLMNR=no
It is visible in ifconfig
tap0: flags=4163 mtu 1500
inet 192.168.1.170 netmask 255.255.255.0 broadcast 192.168.1.255
I also monitored /proc/net/dev, and there are no packets or dropped packets going through tap0 when I try sending packets.
What am I missing? What needs to be different to send multicast packets on my interface?
Charles Lohr
(133 rep)
Jun 4, 2024, 11:11 PM
• Last activity: Jun 5, 2024, 09:16 PM
0 votes • 0 answers • 31 views
Application does not get multicast messages on Ubuntu OS
I have an application running on Ubuntu 16.04.
The application is supposed to receive multicast messages, however it **does not** get them.
In Wireshark I can see the IGMP message with which the application registers for the multicast address, and when I run netstat -ng on the application's computer I also see that the right NIC is registered for the multicast address.
I also see, using tcpdump, that the multicast messages arrive at the computer.
Why does the application not get the multicast messages? Am I missing something?
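For comparison, a minimal receiver sketch showing what the application side generally has to do (bind to the group's UDP port and explicitly join the group); the group address, port, and interface address below are assumptions, not values from the question:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    /* Bind to the group's UDP port (INADDR_ANY or the group address itself). */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    /* Join the group on a specific local interface. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");
    mreq.imr_interface.s_addr = inet_addr("192.168.1.10");
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    char buf[1500];
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    printf("received %zd bytes\n", n);
    return 0;
}
```

If the application already does the equivalent of this and tcpdump still shows the traffic arriving, reverse-path filtering (net.ipv4.conf.*.rp_filter) is another common culprit worth checking.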
Ala
(1 rep)
Mar 26, 2024, 06:45 AM
0 votes • 1 answer • 681 views
configure raspberry pi as multicast server and host
I have a network setup that I want to use to research igmp.
I have two routers and a switch; three Raspberry Pis are connected to the switch with IP addresses in different subnets. I have already configured the dynamic routing protocols on the routers and turned on IGMP. I have issues with the Raspberry Pis and do not know where to begin configuring them for multicast. I am new to Unix and Linux, and I would love any input to point me in the right direction.
uche9260
(1 rep)
Mar 12, 2024, 02:11 AM
• Last activity: Mar 15, 2024, 12:24 PM
1 vote • 1 answer • 1278 views
nftables - multicast packets not matched
I've set up a rule to match multicast packets as follows:
add rule filter_4 new_out_4 meta pkttype multicast goto multicast_out_4
filter_4 is the IPv4 table, new_out_4 is the output chain, and multicast_out_4 is a chain that handles multicast-only traffic.
Here is a more complete picture of the IPv4 table excluding non-relevant portion:
#!/usr/sbin/nft -f
add table filter_4
add chain filter_4 output {
# filter = 0
type filter hook output priority filter; policy drop;
}
add chain filter_4 multicast_out_4 {
comment "Output multicast IPV4 traffic"
}
add chain filter_4 new_out_4 {
comment "New output IPv4 traffic"
}
#
# Stateful filtering
#
# Established IPv4 traffic
add rule filter_4 input ct state established goto established_in_4
add rule filter_4 output ct state established goto established_out_4
# Related IPv4 traffic
add rule filter_4 input ct state related goto related_in_4
add rule filter_4 output ct state related goto related_out_4
# New IPv4 traffic ( PACKET IS MATCHED HERE )
add rule filter_4 input ct state new goto new_in_4
add rule filter_4 output ct state new goto new_out_4
# Invalid IPv4 traffic
add rule filter_4 input ct state invalid log prefix "drop invalid_filter_in_4: " counter name invalid_filter_count_4 drop
add rule filter_4 output ct state invalid log prefix "drop invalid_filter_out_4: " counter name invalid_filter_count_4 drop
# Untracked IPv4 traffic
add rule filter_4 input ct state untracked log prefix "drop untracked_filter_in_4: " counter name untracked_filter_count_4 drop
add rule filter_4 output ct state untracked log prefix "drop untracked_filter_out_4: " counter name untracked_filter_count_4 drop
In the above setup, new output traffic, including multicast, is matched by the rule add rule filter_4 output ct state new goto new_out_4.
Here is the new_out_4 chain with only the relevant multicast rule (the one that doesn't work):
# Multicast IPv4 traffic ( THIS RULE DOES NOT WORK, SEE LOG OUTPUT BELOW)
add rule filter_4 new_out_4 meta pkttype multicast goto multicast_out_4
#
# Default chain action ( MULTICAST PACKET IS DROPPED HERE )
#
add rule filter_4 new_out_4 log prefix "drop new_out_4: " counter name new_out_filter_count_4 drop
Here is what the log says about dropped multicast packet:
> drop new_out_4: IN= OUT=eth0 SRC=192.168.1.100 DST=224.0.0.251 LEN=163 TOS=0x00 PREC=0x00 TTL=255 ID=27018 DF PROTO=UDP SPT=5353 DPT=5353 LEN=143
The dropped packet was sent to destination address 224.0.0.251, which is a multicast address. It was supposed to be matched by the multicast rule in the new_out_4 chain and processed by the multicast_out_4 chain, but it was not.
Instead the packet went unmatched and was dropped by the default drop rule in the new_out_4 chain above (see the comment "Default chain action").
Obviously this means that the multicast rule does not work.
Why doesn't the multicast rule work?
**Expected:**
meta pkttype multicast
matches destination address 224.0.0.251
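For comparison only (not from the original ruleset): meta pkttype classifies the link-layer packet type, and locally generated output traffic may not carry the multicast packet type at the point where the output hook sees it. If the intent is simply to classify by destination address, a rule matching the IPv4 multicast range directly would look like:
add rule filter_4 new_out_4 ip daddr 224.0.0.0/4 goto multicast_out_4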
**EDIT:**
System info:
Kernel: 6.5.0-0.deb12.4-amd64 (I had the same problem with the earlier 6.1 kernel)
nftables: v1.0.6 (Lester Gooch #5)
metablaster
(776 rep)
Mar 8, 2024, 10:35 AM
• Last activity: Mar 9, 2024, 04:32 PM
1 vote • 1 answer • 792 views
systemd-networkd multicast route removed when no carrier
We have a gateway product running Ubuntu 20.04 LTS. It has two network interfaces (WAN/LAN) configured using systemd-networkd. We use multicast messages to communicate with thousands of IoT devices connected to the LAN. It is important that these multicast messages do NOT go out on the WAN, since they could inadvertently change the behavior of IoT devices that may be present on the WAN side.
To ensure multicast messages are routed only to the LAN, the LAN configuration includes the following:
[Route]
Destination=224.0.0.0/4
Type=multicast
The problem is that the multicast route only gets loaded if the LAN interface has a CARRIER (cable plugged in). If there is no carrier, the multicast route is never loaded, so multicast messages go out on the WAN. Most of the time the LAN interface will have a cable plugged in, but this cannot be guaranteed (especially during bring-up).
We've experimented with setting ConfigureWithoutCarrier=true on the LAN interface. This seems to ensure the multicast route is always loaded regardless of whether or not the cable is plugged in. But is there a better way to configure a multicast route to ensure multicast messages never go out on the WAN interface?
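For illustration, a hedged sketch of how the LAN .network unit might combine the two pieces discussed above; the Name= match is a placeholder, and whether this is the best approach is exactly what the question asks:
[Match]
Name=lan0
[Network]
ConfigureWithoutCarrier=true
[Route]
Destination=224.0.0.0/4
Type=multicast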
andy.jackson
(11 rep)
Dec 18, 2020, 09:51 PM
• Last activity: Jan 2, 2024, 09:40 AM
0 votes • 1 answer • 736 views
Is there a tool to generate layer 2 multicast traffic for test
Is there a tool to generate **layer 2 multicast** traffic on Linux (Debian/Raspberry Pi OS/Ubuntu)? I need it for testing purposes, to monitor some netfilter rules. I expect no one on the other side to listen to this traffic or send any response; the traffic just needs to reach the destination. I believe a NIC accepts layer 2 multicast traffic without any additional configuration, so tcpdump and Wireshark will be able to sniff the traffic on the listening host. Or maybe Wireshark puts the NIC into promiscuous mode, which enables it to sniff multicast traffic.
Thanks
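If no ready-made tool fits, a small hedged C sketch using an AF_PACKET socket can generate such frames directly; the interface name, source MAC, destination multicast MAC, and EtherType below are placeholders (it needs root / CAP_NET_RAW):

```c
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char dst[ETH_ALEN] = {0x01, 0x00, 0x5e, 0x01, 0x02, 0x03}; /* multicast MAC (placeholder) */
    unsigned char src[ETH_ALEN] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01}; /* locally administered source */

    /* Build a minimal Ethernet frame: dst MAC, src MAC, EtherType, payload. */
    unsigned char frame[64];
    memset(frame, 0, sizeof(frame));
    memcpy(frame, dst, ETH_ALEN);
    memcpy(frame + ETH_ALEN, src, ETH_ALEN);
    frame[12] = 0x88; frame[13] = 0xb5;            /* 0x88B5: experimental EtherType */
    memcpy(frame + 14, "test payload", 12);

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family  = AF_PACKET;
    sll.sll_ifindex = if_nametoindex("eth0");      /* placeholder interface */
    sll.sll_halen   = ETH_ALEN;
    memcpy(sll.sll_addr, dst, ETH_ALEN);

    if (sendto(fd, frame, sizeof(frame), 0, (struct sockaddr *)&sll, sizeof(sll)) < 0)
        perror("sendto");
    return 0;
}
```

The receiving host should then see the frame with tcpdump -e and a filter such as ether multicast, or a capture filter on the chosen destination MAC.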
Haswell
(141 rep)
Oct 19, 2023, 03:26 PM
• Last activity: Oct 19, 2023, 04:40 PM
0 votes • 2 answers • 235 views
What multicast protocol does the NIC use?
I have CONFIG_IP_MULTICAST=y and am currently learning about IP multicast. I found out that there are two common multicast protocols for IP networks: PIM SS and PIM DS. Querying my Wi-Fi adapter info, I found that multicast is supported:
$ ip link show dev wlp2s0
2: wlp2s0: mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
The question is: which multicast protocol does the device use?
Some Name
(297 rep)
Oct 3, 2023, 09:20 PM
• Last activity: Oct 4, 2023, 01:14 PM
0 votes • 1 answer • 767 views
How to enable a host to reply to multicast ping
I'm experimenting with multicast traffic within my wireless network and tried to ping a pre-defined multicast address:
$ ping 224.0.0.251
The IP address of the pinging machine is 192.168.0.11. So I ran tcpdump on another Linux machine within the same LAN and noticed the following:
$ sudo tcpdump -vv -n -i eth0 icmp
05:33:31.567847 IP (tos 0x0, ttl 1, id 23235, offset 0, flags [none], proto ICMP (1), length 84)
192.168.0.11 > 224.0.0.251: ICMP echo request, id 23235, seq 1, length 64
06:33:32.570106 IP (tos 0x0, ttl 1, id 42255, offset 0, flags [none], proto ICMP (1), length 84)
192.168.0.11 > 224.0.0.251: ICMP echo request, id 42255, seq 2, length 64
As can be seen, the ICMP echo requests are received on that particular member of the multicast group, but no ICMP echo reply is sent back. Why? Is it possible to configure the host to send one?
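One hedged guess, not from the thread: by default Linux ignores ICMP echo requests addressed to broadcast/multicast destinations, and that behaviour is controlled by a sysctl on the would-be responder, e.g.:
$ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
Whether a reply is then actually sent can still depend on the host being a member of the group being pinged.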
Some Name
(297 rep)
Oct 4, 2023, 09:36 AM
• Last activity: Oct 4, 2023, 11:11 AM
25 votes • 5 answers • 244675 views
How can I know if IP Multicast is enabled
I have scripts that run IP multicast tests; however, my scripts are failing on a particular Linux machine.
I know that I can look at CONFIG_IP_MULTICAST in the kernel configuration file to determine whether the kernel was compiled with this. However, it would be easier to flag missing requirements in my script if I could look at /proc or sysctl and get the answer.
Is there a way to find out whether IP multicast was compiled into the kernel without looking at CONFIG_IP_MULTICAST?
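One hedged, script-friendly proxy (an assumption, not something from the question): the IGMP state file under /proc is provided by the IPv4 multicast code, so its presence can stand in for the config option, e.g.:
[ -e /proc/net/igmp ] && echo "IPv4 multicast support present"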
Mike Pennington
(2521 rep)
Dec 1, 2011, 01:02 PM
• Last activity: Sep 19, 2023, 01:03 PM
1 vote • 1 answer • 2903 views
Accessing qemu multicast networks (from Docker containers)
QEMU [allows](https://people.gnome.org/~markmc/qemu-networking.html) connecting different VMs by using a virtual network based on a common multicast address, specified with -netdev socket,mcast=230.0.0.1:1234 on startup.
This way I can easily connect multiple VMs and join new VMs on the fly.
Is it possible to join that network **without** using QEMU? Especially is it possible to connect a docker container to that network?
michas
(21862 rep)
Mar 17, 2018, 05:22 AM
• Last activity: Jul 17, 2023, 03:06 AM
0 votes • 1 answer • 377 views
igmpproxy not routing SSDP between interfaces
I have hostapd running on two wireless devices in isolated and bridged mode: wlp1s0 is behind the bridge wan, and wlp5s0 is behind the bridge iot. The exact configuration for each bridge is the one described here. wan has the subnet 192.168.2.0/24 and iot the subnet 192.168.3.0/24.
I'm trying to set up SSDP forwarding from wan to iot so I can connect to Sonos players on iot using a controller on wan. I'm following this guide. Note that it's written with two different VLANs in mind, but I assume the same should work for two different bridges.
I have thus set up an igmpproxy instance with the configuration
phyint wan upstream ratelimit 0 threshold 1
phyint iot downstream ratelimit 0 threshold 1
For testing purposes I have disabled packet filtering entirely between the two bridges on the firewall.
I would expect this setup to be enough, but the controller on wan cannot see the players on iot. The players do register correctly with the IGMP proxy (192.168.3.29 is one of the players):
igmpproxy: SENT Membership query from 192.168.3.1 to 224.0.0.1
...
igmpproxy: RECV V2 member report from 192.168.3.29 to 239.255.255.250
igmpproxy: Should insert group 239.255.255.250 (from: 192.168.3.29) to route table. Vif Ix : 1
igmpproxy: Updated route entry for 239.255.255.250 on VIF #1
I can check using tcpdump that the controller indeed sends SSDP packets (192.168.2.67 is the controller):
> tcpdump -i wan port 1900
...
16:16:07.600003 IP 192.168.2.67.49628 > 239.255.255.250.ssdp: UDP, length 202
16:16:07.600003 IP 192.168.2.67.49628 > 255.255.255.255.ssdp: UDP, length 202
...
and it seems igmpproxy is receiving these correctly:
igmpproxy: Vif bits : 0x00000002
igmpproxy: Setting TTL for Vif 1 to 1
igmpproxy: Adding MFC: 192.168.2.67 -> 239.255.255.250, InpVIf: 2
...
igmpproxy: Current routing table (Insert Route):
igmpproxy: -----------------------------------------------------
igmpproxy: #0: Src0: 192.168.2.67, Dst: 239.255.255.250, Age:2, St: A, OutVifs: 0x00000002, dHosts
igmpproxy: -----------------------------------------------------
I am not seeing these packets being forwarded with tcpdump, though. I would expect some packets on iot with, as destination, the IPs registered for multicast on 239.255.255.250 (so the Sonos players in particular). Hence I assume this is what causes the discovery to fail.
Why am I not seeing the SSDP packets being forwarded? What should I change for the Sonos controller to discover the players through SSDP?
Quentin
(25 rep)
Jun 4, 2023, 02:26 PM
• Last activity: Jun 17, 2023, 06:56 PM
1 vote • 0 answers • 270 views
Multicast disabled and promiscuity off, NIC still receives multicast packets
```
[root@VM-0-13-centos ~]# ip link set dev eth0 multicast off
[root@VM-0-13-centos ~]#
[root@VM-0-13-centos ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:b1:26:bf brd ff:ff:ff:ff:ff:ff
inet 172.16.0.13/24 brd 172.16.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb1:26bf/64 scope link
valid_lft forever preferred_lft forever
[root@VM-0-13-centos ~]#
[root@VM-0-13-centos ~]#
[root@VM-0-13-centos ~]# ip -d link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
2: eth0: mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:b1:26:bf brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
```
**Promiscuity is off, multicast is off.**
The NIC receives a frame with a multicast MAC (**33:33:00:01:02**), but CentOS still processes this packet instead of dropping it (**not drop**).
What is the reason?
Ccfeiker
(21 rep)
Mar 2, 2023, 03:41 AM
2 votes • 1 answer • 1006 views
ipv6 multicast fails when it should loop back to self
So far I use multicast with IPv4 and it works; all involved computers run Linux. I listen on two machines and send on one of those two (in a separate terminal). In the example below, 'Hello 1' is received on the sending machine (strawberry) and on the remote machine (ero).
ero:~$ sudo ip addr add 224.4.19.42 dev enp4s0 autojoin
ero:~$ netcat -l -k -u -p 9988
strawberry:~ $ sudo ip addr add 224.4.19.42 dev wlan0 autojoin
strawberry:~ $ netcat -l -k -u -p 9988
strawberry:~ $ echo "Hello 1" | netcat -s 192.168.178.109 -w 0 -u 224.4.19.42 9988
With IPv6 it works as long as only remote machines listen; 'Hello 2' in the example below is received by ero. Once the sender (strawberry) has also joined the multicast group, neither the sender (strawberry) nor the remote machine (ero) receives 'Hello 3':
ero:~$ sudo ip addr add ff05:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:4141 dev enp4s0 autojoin
ero:~$ netcat -l -k -u -p 9988
strawberry:~ $ echo "Hello 2" | netcat -w 0 -s 2001:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:76d0 -u ff05:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:4141 9988
strawberry:~ $ sudo ip addr add ff05:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:4141 dev wlan0 autojoin
strawberry:~ $ netcat -l -k -u -p 9988
strawberry:~ $ echo "Hello 3" | netcat -w 0 -s 2001:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:76d0 -u ff05:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:4141 9988
Maybe of interest: when I do not provide a sender address (i.e. no -s option), the IPv4 example shows the same behaviour as IPv6: the message is only received as long as strawberry has not joined the multicast group. So I tried different sending addresses with IPv6: the global address shown in the example (2001:...), a unique local address (ULA; fd00:...) and a link-local address (LLA; fe80:...). None of them helps.
Any hints what I am doing wrong?
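For reference (not from the thread): when the sender and a listener are on the same host, delivery of the sender's own multicast traffic back to local sockets is governed by a per-socket loop option, which netcat does not appear to expose; in a small test program it looks like this (the interface name is a placeholder):

```c
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* 1 = deliver a copy of sent multicast datagrams to listeners on this
     * host (the usual default); 0 = never loop back to the sending host. */
    unsigned int loop = 1;
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_MULTICAST_LOOP, &loop, sizeof(loop)) < 0)
        perror("IPV6_MULTICAST_LOOP");

    /* The outgoing interface for multicast is also chosen per socket. */
    unsigned int ifindex = if_nametoindex("wlan0");   /* placeholder name */
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_MULTICAST_IF, &ifindex, sizeof(ifindex)) < 0)
        perror("IPV6_MULTICAST_IF");
    return 0;
}
```

If a small test program like this behaves differently from the netcat pair, the difference is probably in socket options and interface selection rather than in the kernel's multicast handling.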
Al_
(23 rep)
Feb 11, 2023, 04:08 PM
• Last activity: Feb 19, 2023, 10:49 PM
6 votes • 2 answers • 1537 views
Avahi seems to stop publishing/refreshing services after a while
First of all, I have looked at several Q&As and can confirm that the following points are fulfilled:
- IGMP snooping isn't filtered by switch/router.
- The Bonjour service (mDNSResponder.exe) is granted and allowed on the firewall, as well as __UDP port 5353__ (Windows side).
- The Avahi config is correct (and use of IPv6 is disabled), and the needed modification to nsswitch.conf has been made:
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
- avahi-daemon is running, as well as the Bonjour service.
- On the Windows side, IPv6 is disabled on the active network interface; avahi-daemon is also configured not to use IPv6: use-ipv6=no.
## Issue with Pi's mDNS
After a while, the .local hostname corresponding to the Raspberry Pi isn't resolved anymore when issuing ping pi.local from Windows. Note that on startup it responds perfectly, and a restart of avahi-daemon fixes it temporarily before the issue starts again.
Just after the mDNS resolution fails, I executed avahi-resolve -n pi.local on the Pi: it shows the IPv6 address (fe80::xxaa:yybb:zzde:ee), which is weird because, as mentioned, I have _disabled IPv6_ in the Avahi config. Right after that I re-execute the same command and this time I get the IPv4 address as the answer,
pi.local 192.168.1.7
and ping seems to respond again.
### P.S.
- Running Linux pi 4.4.38-v7+ #938
- Using Bonjour Print Services for Windows v2.0.2: https://support.apple.com/kb/DL999 , and it's running as service.
- With IPv6 disabled, the avahi-resolve -n pi.local command returns 192.168.1.7 instead of the IPv6 address, but the issue remains.
Nothing in /var/log/messages
concerning Avahi.
Any thoughts about the root of the problem ?
Syslog from a while after restarting avahi-daemon and sending the resolve command above:
18:21:47 pi systemd: Stopping Avahi mDNS/DNS-SD Stack...
18:21:47 pi avahi-daemon: Got SIGTERM, quitting.
18:21:47 pi avahi-daemon: Leaving mDNS multicast group on
interface wlan0.IPv4 with address 192.168.1.7.
18:21:47 pi avahi-daemon: avahi-daemon 0.6.31 exiting.
18:21:47 pi systemd: Starting Avahi mDNS/DNS-SD Stack...
18:21:47 pi avahi-daemon: Process 427 died: No such process;
trying to remove PID file. (/var/run/avahi-daemon//pid)
18:21:47 pi avahi-daemon: Found user 'avahi' (UID 105) and group
'avahi' (GID 110).
18:21:47 pi avahi-daemon: Successfully dropped root privileges.
18:21:47 pi avahi-daemon: avahi-daemon 0.6.31 starting up.
18:21:47 pi avahi-daemon: Successfully called chroot().
18:21:47 pi avahi-daemon: Successfully dropped remaining
capabilities.
18:21:47 pi avahi-daemon: Loading service file
/services/multiple.service.
18:21:47 pi avahi-daemon: Loading service file
/services/udisks.service.
18:21:47 pi avahi-daemon: Joining mDNS multicast group on
interface wlan0.IPv4 with address 192.168.1.7.
18:21:47 pi avahi-daemon: New relevant interface wlan0.IPv4 for
mDNS.
18:21:47 pi avahi-daemon: Network interface enumeration
completed.
18:21:47 pi avahi-daemon: Registering new address record for
fe80::f2f:3b5b:ab5b:35c1 on wlan0.*.
18:21:47 pi avahi-daemon: Registering new address record for
192.168.1.7 on wlan0.IPv4.
18:21:47 pi avahi-daemon: Registering HINFO record with values
'ARMV7L'/'LINUX'.
18:21:47 pi systemd: Started Avahi mDNS/DNS-SD Stack.
18:21:48 pi avahi-daemon: Server startup complete. Host name is
pi.local. Local service cookie is 2501181696.
18:21:49 pi avahi-daemon: Service "pi"
(/services/udisks.service) successfully established.
18:21:49 pi avahi-daemon: Service "pi"
(/services/multiple.service) successfully established.
mdns
(61 rep)
Mar 6, 2017, 08:41 PM
• Last activity: Jan 29, 2023, 09:25 PM
0 votes • 1 answer • 3578 views
Forward multicast traffic to bridge interface
I am trying to forward the multicast traffic arriving on interface eth1 to a bridge I created with ip link add br0 type bridge. Elsewhere I have done this with a simple ip route add 226.3.2.1 dev docker0 (on another machine).
I have now tried several things and also played around with multicast routers like pimd. However, I do not manage to redirect or forward the traffic. When I use the ip route add command in my test setup, I still see the multicast traffic in tcpdump but not in my multicast receive script (on eth1).
For context, I want to forward the multicast traffic to a bridge, which I include in a Kubernetes container via multus-cni.
To provide a little more context: I use multus-cni to attach multiple network interfaces to the container. The additional interface inside the container is called eth1 and is bridged with br0 on the host (via the CNI bridge plugin).
One thing that currently works: if I run an iperf client on the eth1 interface on the host and an iperf server on eth1 in the container, the traffic is shown.
The problem is the eth1 -> br0 part on the host: the multicast traffic doesn't get forwarded and therefore is not visible on the br0 interface. I cannot tell whether it would be visible inside the container if the forwarding succeeded.
Currently, if I run a small Python script that binds to the network interfaces and joins the 226.3.2.1 multicast group, it only shows the traffic when bound to eth1 on the host. With br0 it shows nothing.
The TTL of the UDP packets is 3, but I could increase it if necessary.
The interfaces (eth1 is receiving the multicast traffic on 226.3.2.1):
2324: eth0@if2325: mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:80:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.128.2/20 brd 192.168.143.255 scope global eth0
valid_lft forever preferred_lft forever
21: br0: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether be:ec:42:84:8c:35 brd ff:ff:ff:ff:ff:ff
2340: eth1@if2341: mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:90:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.144.5/20 brd 192.168.159.255 scope global eth1
valid_lft forever preferred_lft forever
tcpdump of eth1 226.3.2.1
15:07:52.288989 IP (tos 0x0, ttl 3, id 22735, offset 0, flags [DF], proto UDP (17), length 34)
192.168.144.1.37394 > 226.3.2.1.9000: [bad udp cksum 0x34ce -> 0xd209!] UDP, length 6
15:07:52.435168 IP (tos 0x0, ttl 3, id 22745, offset 0, flags [DF], proto UDP (17), length 34)
192.168.144.1.37394 > 226.3.2.1.9000: [bad udp cksum 0x34ce -> 0xd209!] UDP, length 6
15:07:52.620215 IP (tos 0x0, ttl 3, id 22758, offset 0, flags [DF], proto UDP (17), length 34)
192.168.144.1.37394 > 226.3.2.1.9000: [bad udp cksum 0x34ce -> 0xd209!] UDP, length 6
15:07:52.747806 IP (tos 0x0, ttl 3, id 22783, offset 0, flags [DF], proto UDP (17), length 34)
192.168.144.1.37394 > 226.3.2.1.9000: [bad udp cksum 0x34ce -> 0xd209!] UDP, length 6
ip maddr
21: br0
link 33:33:00:00:00:01
link 01:00:5e:00:00:6a
link 33:33:00:00:00:6a
link 01:00:5e:00:00:01
link 01:00:5e:03:02:01
inet 226.3.2.1
inet 224.0.0.1
inet 224.0.0.106
inet6 ff02::6a
inet6 ff02::1
inet6 ff01::1
2324: eth0
link 33:33:00:00:00:01
link 01:00:5e:00:00:01
inet 224.0.0.1
inet6 ff02::1
inet6 ff01::1
2340: eth1
link 33:33:00:00:00:01
link 01:00:5e:00:00:01
link 01:00:5e:03:02:01
inet 226.3.2.1
inet 224.0.0.1
inet6 ff02::1
inet6 ff01::1
If I run iperf locally with a server bound to br0 and a client bound to eth1, without any ip route, the server on br0 receives the traffic (iperf -c 226.3.2.1%eth1 -u -T 32 -t 3 -i 1 and iperf -s -u -B 226.3.2.1%br0 -i 1).
Also, if I run iperf -c 226.3.2.1%eth1 -u -T 32 -t 3 -i 1 on the host and iperf -s -u -B 226.3.2.1%eth1 -i 1 inside the container, it receives the traffic (the two eth1 are different interfaces here).
The problem is just: how do I forward from eth1 to br0?
joel_muehlena
(1 rep)
Sep 30, 2022, 03:18 PM
• Last activity: Oct 5, 2022, 07:16 AM
1 vote • 0 answers • 2833 views
multicast to all interfaces (how to set up the routes?)
I want a host to send out multicast traffic to all interfaces, but it seems I can only have one route.
I tried ip route add 224/4 dev eth0, and then ip route add 224/4 dev sit1, but the second one was not accepted.
Then I deleted the multicast routes and tried ip route add 224/4 dev eth0, and then ip route append 224/4 dev sit1 instead, but it seems only the first route is actually used, even though ip route show shows both routes.
Is it possible at all, and if so: How?
Also: what is the effect of route append when it seems only the first one is ever used?
U. Windl
(1715 rep)
Sep 29, 2022, 12:55 PM
• Last activity: Sep 30, 2022, 06:21 AM