Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1 vote · 1 answer · 29 views
Discrepancy in nftables counters
Here is an edited nft ruleset that shows what appears to be a problem with the values in the packet counters.
In the INPUT chain, the second rule counter shows more packets than the first rule counter.
As far as I understand it, all of the packets evaluated by the second rule were also evaluated by the first rule, so the values should be identical.
Where do the apparent extra packet counts come from?
Linux Mint 22.1 Cinnamon
Intel® Core™ i5-2400 CPU
Kernel 6.8.0-64-generic
js@js-mint-22:~$ sudo nft list ruleset
# Warning: table ip filter is managed by iptables-nft, do not touch!
table ip filter {
    chain INPUT {
        type filter hook input priority filter; policy drop;
        counter packets 378886 bytes 2143315517 jump LIBVIRT
        counter packets 379132 bytes 2143334159 jump ufw-before-logging-input
        counter packets 379132 bytes 2143334159 jump ufw-before-input
        counter packets 25852 bytes 4400083 jump ufw-after-input
        counter packets 2259 bytes 409305 jump ufw-after-logging-input
        counter packets 2259 bytes 409305 jump ufw-reject-input
        counter packets 2259 bytes 409305 jump ufw-track-input
    }
    chain LIBVIRT {
        iifname "virbr0" udp dport 53 counter packets 0 bytes 0 accept
        iifname "virbr0" tcp dport 53 counter packets 0 bytes 0 accept
        iifname "virbr0" udp dport 67 counter packets 0 bytes 0 accept
        iifname "virbr0" tcp dport 67 counter packets 0 bytes 0 accept
    }
    chain ufw-before-logging-input {
    }
    chain ufw-before-input {
        iifname "lo" counter packets 81100 bytes 6874130 accept
        ct state related,established counter packets 267229 bytes 2131142550 accept
        ct state invalid counter packets 0 bytes 0 jump ufw-logging-deny
        ct state invalid counter packets 0 bytes 0 drop
        ip protocol icmp icmp type destination-unreachable counter packets 0 bytes 0 accept
        ip protocol icmp icmp type time-exceeded counter packets 0 bytes 0 accept
        ip protocol icmp icmp type parameter-problem counter packets 0 bytes 0 accept
        ip protocol icmp icmp type echo-request counter packets 0 bytes 0 accept
        udp sport 67 udp dport 68 counter packets 0 bytes 0 accept
        counter packets 30803 bytes 5317479 jump ufw-not-local
        ip daddr 224.0.0.251 udp dport 5353 counter packets 4951 bytes 917396 accept
        ip daddr 239.255.255.250 udp dport 1900 counter packets 0 bytes 0 accept
        counter packets 25852 bytes 4400083 jump ufw-user-input
    }
    chain ufw-logging-deny {
        ct state invalid limit rate 3/minute burst 10 packets counter packets 0 bytes 0 return
        limit rate 3/minute burst 10 packets counter packets 0 bytes 0 log prefix "[UFW BLOCK] "
    }
    chain ufw-not-local {
        fib daddr type local counter packets 1735 bytes 407616 return
        fib daddr type multicast counter packets 5947 bytes 955901 return
        fib daddr type broadcast counter packets 23121 bytes 3953962 return
        limit rate 3/minute burst 10 packets counter packets 0 bytes 0 jump ufw-logging-deny
        counter packets 0 bytes 0 drop
    }
    chain ufw-user-input {
    }
    chain ufw-after-input {
        udp dport 137 counter packets 496 bytes 39120 jump ufw-skip-to-policy-input
        udp dport 138 counter packets 194 bytes 46241 jump ufw-skip-to-policy-input
        tcp dport 139 counter packets 0 bytes 0 jump ufw-skip-to-policy-input
        tcp dport 445 counter packets 0 bytes 0 jump ufw-skip-to-policy-input
        udp dport 67 counter packets 15 bytes 6590 jump ufw-skip-to-policy-input
        udp dport 68 counter packets 0 bytes 0 jump ufw-skip-to-policy-input
        fib daddr type broadcast counter packets 22888 bytes 3898827 jump ufw-skip-to-policy-input
    }
    chain ufw-skip-to-policy-input {
        counter packets 23593 bytes 3990778 drop
    }
    chain ufw-after-logging-input {
        limit rate 3/minute burst 10 packets counter packets 1808 bytes 266671 log prefix "[UFW BLOCK] "
    }
    chain ufw-reject-input {
    }
    chain ufw-track-input {
    }
}
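One way to test whether this is just a read-time artifact (an assumption on my part, not something stated in the question) is to dump the chain twice in a row: `nft` reads the counters rule by rule while traffic keeps flowing, so packets arriving mid-listing can make a later rule appear "ahead" of an earlier one.

```
# compare the first two rule counters across two consecutive dumps;
# if the first rule's counter catches up to the second's previous value,
# the discrepancy is only a sampling artifact of the non-atomic read
sudo nft list chain ip filter INPUT | head -n 4
sleep 2
sudo nft list chain ip filter INPUT | head -n 4
```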
jsotola
(540 rep)
Jul 25, 2025, 08:28 PM
• Last activity: Jul 25, 2025, 11:31 PM
4 votes · 2 answers · 5377 views
How do I create a named set of interfaces by name in nftables?
Using sets in nftables is really cool. I am currently using a lot of statements like these in my `nftables.conf` rulesets:

```
iifname {clients0, dockernet} oifname wan0 accept \
    comment "Allow clients and Docker containers to reach the internet"
```

In the rule above, `{clients0, dockernet}` is an *anonymous* (inline) set of interfaces. Instead of repeating it in the rules over and over, I'd like to define a set of interfaces at the top of the file, called a *named set* in nftables. The manpage (Debian Buster) shows how to do that for several types of sets: *ipv4_addr*, *ipv6_addr*, *ether_addr*, *inet_proto*, *inet_service* and *mark*. However, it seems it's not available for interfaces by name or a simple primitive type such as strings.
I've tried the approaches below, but they do not work, with the errors given:
1. Omitting the type:

```
table inet filter {
    set myset {
        elements = {
            clients0,
            dockernet,
        }
    }
    [...]
}
```

Result: `Error: set definition does not specify key`.
2. Using the `string` type:

```
table inet filter {
    type string;
    set myset {
        elements = {
            clients0,
            dockernet,
        }
    }
    [...]
}
```

Result: `Error: unqualified key type string specified in set definition`.
Is there really no way of naming the anonymous set I've shown at the top?
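For reference, newer nftables releases added an `ifname` key type that allows exactly this kind of named set (a sketch under my assumptions; the set name is mine, and the feature is not available on older versions such as Debian Buster's):

```
table inet filter {
    set myifaces {
        type ifname
        elements = { "clients0", "dockernet" }
    }
    chain forward {
        type filter hook forward priority 0; policy accept;
        iifname @myifaces oifname "wan0" accept
    }
}
```

On versions without `type ifname`, a file-level variable gives similar reuse: `define myifaces = { clients0, dockernet }`, then `iifname $myifaces` in each rule.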
gertvdijk
(14517 rep)
May 5, 2019, 11:01 AM
• Last activity: Jun 26, 2025, 01:07 PM
3 votes · 1 answer · 4405 views
Set up nftables to only allow connections through a vpn and block all ipv6 traffic
I am trying to set up an nftables firewall on my Arch Linux system that only allows traffic through a VPN (and blocks all IPv6 traffic in order to prevent any IPv6 leaks).
I have been playing around with it for a while now and ended up with a configuration that lets me browse the web, even though, as far as I understand nftables so far, it should not let me do that. The ruleset is pretty short and looks like this:
```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        jump base_checks
        ip saddr VPN_IP_ADRESS udp sport openvpn accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
    chain output {
        type filter hook output priority 0; policy drop;
        ip daddr VPN_IP_ADRESS udp dport openvpn accept
        oifname "tun0" accept
    }
    chain base_checks {
        ct state { related, established} accept
        ct state invalid drop
    }
}
```
I tried to find my way through with trial and error and had many other rules in there, but with just this, I am able to connect to the VPN server first and then browse the web. Once I remove the last rule from the output chain, though, it won't let me browse the web anymore.
I am completely new to this and pretty much clueless overall, trying to learn. Unfortunately, the documentation on nftables is not that extensive, so I am kind of stuck at the moment.
From what I understand so far, this setup should allow a connection to the VPN, but it should not allow any other incoming traffic - yet I can browse the web without problems.
Does anyone know why it works, and how I should proceed with the setup of nftables to get a more complete setup?
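A tighter variant of the output chain might look like the sketch below. This is my suggestion, not the asker's ruleset; it keeps the question's `VPN_IP_ADRESS` placeholder and assumed `tun0` tunnel name, allows loopback, and makes the IPv6 block explicit even though the drop policy already covers it:

```
table inet filter {
    chain output {
        type filter hook output priority 0; policy drop;
        oifname "lo" accept                          # local services need loopback
        meta nfproto ipv6 drop                       # explicit IPv6 leak block
        ip daddr VPN_IP_ADRESS udp dport openvpn accept  # only the tunnel setup...
        oifname "tun0" accept                        # ...and traffic inside it
    }
}
```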
user246093
(41 rep)
Aug 11, 2017, 02:18 PM
• Last activity: Jun 17, 2025, 09:06 PM
2 votes · 1 answer · 60 views
How do I make a virtual "alias" for a remote IP without a proxy process?
I have interfaces `enp101s0f0u2u{1..3}`, on each of which there is a device responding to `192.168.8.1`.
I want a local process to be able to reach all of them simultaneously.
This is one process, so network namespaces are not an option.
I am looking for a solution that doesn't use socat or another proxy that can bind an outgoing interface.
I thought of locally making virtual IPs `192.168.8.1{1..3}` to point to them.
# What I got so far:
* Interface `enp101s0f0u2ux` has IPv4 `192.168.8.2x/32`.
* `ip rule 100x: from all to 192.168.8.1x lookup 20x`
* `ip route default dev enp101s0f0u2ux table 20x scope link src 192.168.8.2x` (this means the interface and src are correct when chosen automatically)
```
chain output {
    type nat hook output priority dstnat; policy accept;
    ip daddr 192.168.8.1x meta mark set 20x counter dnat to 192.168.8.1
}
```

(this means the destination IP is changed to .1; unfortunately I only found a way to do this before the routing decision is made, so we need the next thing)

* `ip rule 110x: from all fwmark 20x lookup 20x` (this means that despite dst being `192.168.8.1`, it goes to the …ux interface)

Now the hard part:
```
chain input {
    type nat hook input priority filter; policy accept;
    ip saddr 192.168.8.1 ip daddr 192.168.8.2x counter snat to 192.168.8.1x
}
```

(this should restore the src of the return packet to .1x, so the socket and application are not astonished)
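Written out concretely for x=1 (my reading of the bullets above; the interface name, table numbers, and rule preferences follow the patterns in the question), the routing part of the setup would be:

```
ip addr add 192.168.8.21/32 dev enp101s0f0u2u1
ip rule add pref 1001 to 192.168.8.11 lookup 201
ip route add default dev enp101s0f0u2u1 table 201 scope link src 192.168.8.21
ip rule add pref 1101 fwmark 201 lookup 201
```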
Unfortunately, at this point if I try to curl, `tcpdump` sees a `192.168.8.21.11111 > 192.168.8.1.80` (SYN) and multiple `192.168.8.1.80 > 192.168.8.21.11111` (SYN-ACK) attempts, but the `input` chain counter is not hit.
However, if I add the seemingly useless

```
chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    ip daddr 192.168.8.1 counter masquerade
}
```
I get 1 packet hitting the input snat rule, and the application gets some data back!
However, all the subsequent packets from 192.168.8.1 in the flow are dropped.
[**Here is a tcpdump and a conntrack**](https://gist.github.com/homeassistant-hacs-m/49216c8f100f75f3701e163954641384)
I'm at the end of my rope; I've been at it for days.
There's no firewall/filter happening (which conntrack would be opening for me); I have an empty nftables ruleset besides the chains I showed here.
I cannot understand why the masquerade makes a difference, and in general what goes on in conntrack. (The entry gets created and destroyed twice, and then an entry starting from outside gets created?)
Of note is that the entries are not symmetrical: they mention both `192.168.8.1` and `192.168.8.12` in each entry for opposite directions.
I especially don't understand how or why, in the absence of masquerade, the returning `192.168.8.1.80 > 192.168.8.21.11111` (SYN-ACK) packets get dropped instead of going to the input chain. Would this happen if the application TCP socket did CONNECT and so only wants replies from .11?
But shouldn't `input` be able to intercept before the socket? And I can't snat in prerouting anyway, so where would this have to be done?
## Update:
Adding

```
type filter hook output priority raw; policy accept;
ip daddr 192.168.8.11 counter notrack
```

makes it stop hitting this counter too:

```
type nat hook output priority dstnat; policy accept;
ip daddr 192.168.8.11 meta mark set 201 counter dnat to 192.168.8.1
```

Does notrack prevent entering the nat chains at all, rather than entering them for every packet instead of just the first? And so prevent doing NAT actions altogether?
Mihail Malostanidis
(121 rep)
Jun 11, 2025, 03:58 PM
• Last activity: Jun 13, 2025, 02:06 PM
0 votes · 2 answers · 9884 views
How to add rule to nftables.conf
In my terminal, I write:

```
sudo nft add table inet f2b-table
systemctl reload nftables.service
```

then:

```
sudo nft list ruleset
```

result in the terminal (ok):

```
table inet f2b-table { }
```

But when I open nftables.conf, the table `inet f2b-table` does not appear.
Why does it display the table in the terminal and not in nftables.conf (are these two different things)?
What should be done to update the nftables.conf?
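For context (my suggestion, not part of the question): `nft add ...` only changes the in-kernel ruleset, while `nftables.conf` is read only when the service starts. A common way to persist the running ruleset is to export it back into the config file (the path may vary by distribution):

```
# dump the live kernel ruleset into the file that nftables.service loads at boot
sudo sh -c 'nft list ruleset > /etc/nftables.conf'
```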
Bricoreur
(1 rep)
Jul 4, 2022, 04:42 PM
• Last activity: Jun 9, 2025, 01:06 PM
1 vote · 2 answers · 49 views
nftables anonymous subchains
Using ferm (the iptables generator) I can make anonymous chains like this:

```
saddr (1.2.3.4 2.3.4.5 3.4.5.6 4.5.6.7 5.6.7.8) @subchain {
    proto tcp dport (http https ssh) ACCEPT;
    proto udp dport domain ACCEPT;
}
```

Is it possible to do something similar with nftables? I tried this, but I'm not able to make it work.

```
ip saddr {1.2.3.4, 2.3.4.5, 3.4.5.6, 4.5.6.7, 5.6.7.8} jump {
    accept;
}
```
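As a fallback that works on any nftables version (my sketch, not from the question; table/chain names are assumptions), a named regular chain gives the same effect as ferm's `@subchain`:

```
table inet filter {
    chain subchain {
        tcp dport { 80, 443, 22 } accept
        udp dport 53 accept
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ip saddr { 1.2.3.4, 2.3.4.5, 3.4.5.6, 4.5.6.7, 5.6.7.8 } jump subchain
    }
}
```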
Cherrytopia
(23 rep)
Jun 3, 2025, 11:50 AM
• Last activity: Jun 4, 2025, 09:54 AM
0 votes · 0 answers · 58 views
nftables query interface address
Is it possible to query an interface's address using `nftables`? For example:

```
ip daddr = ifname_addr "eth0" counter accept
```
Consider a system that has 4 interfaces: `eth0 eth1 eth2 eth3`. It is desirable to isolate `eth0` from `eth3`, but not from `eth1 eth2`. To implement this, traffic coming from `eth0` needs to be rejected in case its destination address belongs to `eth3`. For example:

```
iifname "eth0" ip daddr eth3.ip.4.address counter reject
```
This implies that the address of `eth3` is known in advance, but this might not be the case. Does `nftables` provide any tools to deal with this kind of situation?
**EDIT**
There was confusion due to my uneducated phrasing of the question. Consider a system with 3 interfaces:

```
eth0 172.0.0.1/24
eth1 172.0.1.1/24
eth2 172.0.2.1/24
```

Consider a machine named `X` that is connected to `eth0` together with you. If `X` uses you as a gateway to reach e.g. `172.0.1.10`, then the traffic flows through the forwarding chain in netfilter. This traffic is easily filtered using the `forward` hook in nftables.

On the other hand, if `X` tries to reach `172.0.1.1`, the traffic will be processed by the input chain in netfilter. Due to Linux using the weak host model, the traffic will not even touch the `eth1` interface, i.e. it will arrive on `eth0` and leave through `eth0` despite formally accessing an IP address that is assigned to `eth1`.

Consider that you want to prevent `X` from accessing the address assigned to `eth2`, but do not mind it accessing the address assigned to `eth1`. It can be done using the following rule in the `input` hook:

```
iifname "eth0" ip daddr 172.0.2.1 counter reject
```

I was wondering if the same could be done without knowing the assigned address of `eth2` in advance.
EmErAJID
(26 rep)
Jun 1, 2025, 05:18 PM
• Last activity: Jun 2, 2025, 07:35 AM
1 vote · 1 answer · 2200 views
nftables rules not blocking traffic
I am testing nftables and am attempting to set up a basic routing firewall on a Linux machine with 2 interfaces, ens37 and ens38. Here is the ifconfig output for these two interfaces:
```
ens37: flags=4163  mtu 1500
        inet 192.168.0.3  netmask 255.255.255.0  broadcast 192.168.0.255
        ether 00:0c:29:74:33:e7  txqueuelen 1000  (Ethernet)
        RX packets 20  bytes 2524 (2.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 156  bytes 9952 (9.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens38: flags=4163  mtu 1500
        inet 192.168.0.4  netmask 255.255.255.0  broadcast 192.168.0.255
        ether 00:0c:29:74:33:f1  txqueuelen 1000  (Ethernet)
        RX packets 147  bytes 9340 (9.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1672 (1.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
I want to emulate ens38 being a WAN port, and block all non-LAN-initiated inbound traffic but allow LAN traffic outbound.
I have these rules set up in `/etc/nftables.conf`:
```
#!/usr/sbin/nft -f
flush ruleset

table ip filter {
    # allow all packets sent by the firewall machine itself
    chain output {
        type filter hook output priority 100; policy accept;
    }
    # allow LAN to firewall, disallow WAN to firewall
    chain input {
        type filter hook input priority 0; policy accept;
        iifname "ens37" accept
        iifname "ens38" drop
    }
    # allow packets from LAN to WAN, and WAN to LAN if LAN initiated the connection
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname "ens37" oifname "ens38" accept
        iifname "ens38" oifname "ens37" ct state related,established accept
    }
}
```
To test if the rules are successful, I am setting up a listener with netcat:

```
nc -lp 80 -s 192.168.0.3
```

Then I connect from the other interface using netcat:

```
nc 192.168.0.3 80 -s 192.168.0.4
```
My issue is that these nftables rules are not blocking traffic from the emulated WAN port. The netcat connections work perfectly fine bidirectionally, which is not what I am looking for.
If I run `nft list table filter`, I get the rules I am expecting to see as output.
I am new to nftables. How can I get these rules to apply to these two interfaces correctly? What is wrong with my current approach?
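One thing worth checking (an assumption about the test setup, not something stated in the question) is how the kernel actually routes the netcat traffic: when both addresses are local to the same machine, packets typically travel over loopback and never traverse the input/forward path of the physical interfaces:

```
# if this resolves to "dev lo", the test packets never hit ens37/ens38 at all,
# and the iifname-based rules can never match
ip route get 192.168.0.3 from 192.168.0.4
```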
another_stack_user999
(43 rep)
Oct 30, 2019, 03:24 PM
• Last activity: May 17, 2025, 04:04 PM
0 votes · 2 answers · 233 views
Firewall to allow only web browsing and no other network access
I am working on Debian Stable and it is working very well.
I came across apf-firewall, which simplifies iptables. I want my firewall to only allow web browsing (including forms) and block all other network access. How is this possible with apf-firewall?
Or could I do it with FireHOL software? It seems to have simple configuration commands:

```
version 6
interface4 eth0 home
    server dns accept
    server ftp accept
    server samba accept
    server squid accept
    server dhcp accept
    server http accept
    server ssh accept
    server icmp accept
interface4 ppp+ internet
    server smtp accept
    server http accept
    server ftp accept
```
Which lines should I keep if I want only web browsing to be permitted?
Edit: Will the following 2 rules using nftables be sufficient for my needs?

```
nft add rule ip filter input tcp dport 80 ct state new,established accept
nft add rule ip filter input tcp dport 443 ct state new,established accept
```
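For comparison (a sketch under my assumptions, not a confirmed answer): rules matching `input tcp dport 80/443` describe incoming connections *to* a local web server, whereas browsing *from* the machine is outgoing traffic, which would more naturally be filtered in an output hook. The table and chain names below are hypothetical:

```
# assumed table name "fw"; base chains with drop policies, then allow
# outbound web + DNS and let conntrack admit the replies
nft add table inet fw
nft 'add chain inet fw input  { type filter hook input  priority 0; policy drop; }'
nft 'add chain inet fw output { type filter hook output priority 0; policy drop; }'
nft add rule inet fw output oifname "lo" accept
nft add rule inet fw input  iifname "lo" accept
nft 'add rule inet fw output tcp dport { 80, 443 } accept'
nft add rule inet fw output udp dport 53 accept
nft add rule inet fw input ct state established,related accept
```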
rnso
(323 rep)
Sep 30, 2024, 01:27 PM
• Last activity: Apr 26, 2025, 10:30 PM
3 votes · 1 answer · 9708 views
Ubuntu 22.04 iptables command not working
I'm totally new to netfilter. I am currently running an application which uses three interfaces, eth0/eth1/eth2. My application runs on two servers, and they can communicate with each other via their own interfaces (eth0/eth1/eth2).
In Ubuntu 18.04 (kernel version 4.*), I just used iptables commands to break the communication between them.
In 22.04 (kernel version 6.2.*), I use the same iptables commands to break the communication between the two servers, but things are not working as expected (my app code remains unchanged). My application has a mechanism to report whether the neighbor server is reachable or not; in 22.04 with the iptables rules applied, it still reports the other server as reachable (not the case in 18.04).
I can see that a lot has changed between the two kernel versions regarding how network traffic can be filtered (more tools in the recent one).
I removed ufw just to avoid conflicts with nftables. One observation: when I applied the rules, for a brief moment my app reported the neighbor server as unreachable, and then it suddenly changed back to reachable. Something is overriding the rules; I am unsure what.
Now I am seeking help here to see what I have missed...
```
-A INPUT -s x.x.x.x/32 -d y.y.y.y/32 -i eth2 -j DROP
-A INPUT -s x.x.x.y/32 -d y.y.y.x/32 -i eth1 -j DROP
-A INPUT -s x.x.y.y/32 -d y.y.x.x/32 -i eth0 -j DROP
-A OUTPUT -s y.y.y.y/32 -d x.x.x.x/32 -o eth2 -j DROP
-A OUTPUT -s y.y.y.x/32 -d x.x.x.y/32 -o eth1 -j DROP
-A OUTPUT -s y.y.x.x/32 -d x.x.y.y/32 -o eth0 -j DROP
```
Note: all my rules are prepended to the chain to make sure they take precedence over anything else.

```
Chain INPUT (policy DROP)
target     prot opt source               destination
DROP       all  --  xxxx                 yyyy
DROP       all  --  zzzz                 AAAA
DROP       all  --  BBBB                 CCCC
```
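One quick check worth running (my suggestion, not part of the question) is whether the `iptables` binary on 22.04 is the nft-backed one, and whether rules ended up in a different backend than the one being inspected:

```
iptables --version        # shows e.g. "(nf_tables)" or "(legacy)" after the version
iptables-legacy -L -n     # rules loaded via the legacy backend, if any
nft list ruleset          # everything the kernel actually enforces, both backends
```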
RaGa__M
(179 rep)
Nov 29, 2023, 12:59 PM
• Last activity: Apr 8, 2025, 09:09 PM
1 vote · 1 answer · 418 views
Determining the performance Impact of firewalld rule count
I was working on tweaking the performance of fail2ban and I read that a too-long ban can result in a build-up of rules that will negatively impact performance, which made me wonder, "Is there any particular idea of a number of rules that is 'too high' for nftables?"
I currently use firewalld with nftables as the backend, and maybe 10-20 rules.
However, a few servers are specifically intended for audiences of certain countries and should not be accessed outside them. If I pull down a country-IP database (e.g. MaxMind) and then generate a list of rules for all CIDRs for countries outside the allowed list, I end up with nearly 17,000 rules.
On one hand, that's a lot of rules (IMHO), but on the other hand, there's nothing but spam and hack attempts coming from outside the designated countries (even legitimate users traveling abroad need to VPN into the US before they can access anyway).
Is that kind of volume going to negatively impact nftables? I assume that the impact is relative to the amount of volume that has to be checked, but I haven't found a good way to see or measure the impact of the rules, and I don't want to start loading up thousands of rules without knowing the possible ramifications ahead of time.
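For what it's worth, the usual way to keep per-packet cost flat with large address lists is a single set with `flags interval` rather than thousands of linear rules (a sketch with placeholder CIDRs, not actual MaxMind data; table and set names are mine):

```
table inet geoblock {
    set blocked4 {
        type ipv4_addr
        flags interval
        elements = { 192.0.2.0/24, 198.51.100.0/24 }  # placeholder CIDRs
    }
    chain input {
        type filter hook input priority -10; policy accept;
        ip saddr @blocked4 drop
    }
}
```

One rule plus a set lookup replaces the ~17,000 individual rules; a set lookup does not scale linearly with the number of elements the way a rule chain does.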
jhilgeman
(113 rep)
Jul 19, 2024, 10:37 PM
• Last activity: Mar 21, 2025, 03:47 PM
1 vote · 0 answers · 26 views
What does the phrase "consider native interface" refer to when the nftables wiki says that xt_bpf match is unsupported
In [this](https://wiki.nftables.org/wiki-nftables/index.php/Supported_features_compared_to_xtables) list of unsupported xtables features, xt_bpf is listed as one of the unsupported features. The comment says to "consider native interface". But what interface is being referred to here? This is the only mention of bpf in the entire nftables wiki.
Philippe
(569 rep)
Mar 21, 2025, 01:19 PM
1 vote · 1 answer · 326 views
Why is nftables giving me trouble adding a jump to another chain?
I'm putting together nft rules for chains and I've gotten to this point where I'm stuck. My ruleset looks like this:
```
table inet whitelist_table {
    chain whitelist_chain {
        ip saddr 127.0.0.1 accept
        ct state established,related accept
        # more ip addresses for the whitelist
        drop
    }
    chain enabled_state_chain {
        type filter hook input priority filter - 10; policy accept;
        jump whitelist_chain
        # I dynamically manage this so that if the whitelist is enabled,
        # it goes to the white_list chain, otherwise it accepts
    }
    chain allowed_port_chain {
        type filter hook input priority -20; policy accept;
        # This is where I'm trying to add my rule for ports, but it won't let me jump/goto.
    }
}
```
When I use the nft command

```
add rule inet whitelist_table allowed_port_chain tcp dport {22, 80, 443} goto enabled_state_chain
```

I get the following error:

```
Error: Could not process rule: Operation not supported
add rule inet whitelist_table allowed_port_chain tcp dport {22, 80, 443} goto enabled_state_chain
                                                                         ^^^^^^^^^^^^^^^^^^^
```
What am I not understanding?
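For comparison (my sketch, assuming the intent is just to share rule logic between chains): jump/goto targets must be *regular* chains, i.e. chains without a `type ... hook ...` line, so a base chain like `enabled_state_chain` cannot be a jump target. The chain names below are hypothetical:

```
table inet example {
    chain port_rules {    # regular chain: no hook line, so it CAN be a jump target
        tcp dport { 22, 80, 443 } accept
    }
    chain input_hook {    # base chain: has a hook, so it CANNOT be jumped to
        type filter hook input priority filter; policy accept;
        jump port_rules
    }
}
```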
Isaac
(113 rep)
Oct 29, 2024, 06:20 PM
• Last activity: Mar 14, 2025, 10:18 PM
0 votes · 2 answers · 2631 views
nftables dnat in input chain
I'm trying to redirect all traffic arriving on the firewall on ports 80 and 443 to `10.133.8.11`.
I gather from [this](https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks) that NATting in an input chain should be possible. But it seems that dnat is not. What kind of NATting is possible?
The reason I am trying to do this using the input chain is because, if I put it in the prerouting chain, all traffic is dnatted, not only traffic destined for the firewall.
This is my attempt so far:
The table `inet nat` already exists.

```
# nft 'add chain inet nat input { type nat hook input priority -100; }'
# nft 'add rule inet nat input tcp dport { 80, 443 } dnat ip to 10.133.8.11'
Error: Could not process rule: Operation not supported
add rule inet nat input tcp dport { 80, 443 } dnat ip to 10.133.8.11
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
```
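If the goal is only to dnat traffic addressed to the firewall itself (my workaround sketch, not a confirmed answer), a `fib daddr type local` match in prerouting can stand in for the unavailable input-hook dnat:

```
# dnat is only supported in the prerouting/output hooks; 'fib daddr type local'
# restricts the rule to packets whose destination is one of the firewall's own
# addresses, so forwarded traffic is left alone
nft 'add chain inet nat prerouting { type nat hook prerouting priority -100; }'
nft 'add rule inet nat prerouting fib daddr type local tcp dport { 80, 443 } dnat ip to 10.133.8.11'
```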
Philippe
(569 rep)
Jan 9, 2023, 05:16 PM
• Last activity: Mar 13, 2025, 09:33 AM
0 votes · 1 answer · 152 views
Captive Portal w/ nginx, hostapd, nftables, dnsmasq
I'm trying to make a captive portal with nginx, hostapd, nftables, dnsmasq and python-flask.
I have two main problems:

1. I'm not getting a popup on Android, but I am on iPhone/macOS.
2. I'm not sure how to redirect the user after the connection. I have an nftables command, but I need an IP address for this. Since nginx is forwarding from port 80 to 8080 (the python app), I don't know how to get it.
Here's the nginx.conf:

```
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;

        if ($request_method !~ ^(GET|HEAD|POST)$) { return 444; }

        # Handle iOS
        if ($http_user_agent ~* (CaptiveNetworkSupport) ) {
            return 302 http://go.portal;
        }

        # Handle Android captive portal detection
        location = /generate_204 {
            return 302 http://go.portal;
        }
        location = /gen_204 {
            return 302 http://go.portal;
        }

        # Default redirect for any unexpected requests to trigger captive portal
        # sign in screen on device.
        location / {
            return 302 http://go.portal;
        }
    }

    server {
        listen 80;
        listen [::]:80;
        server_name go.portal;

        # Only allow GET, HEAD, POST
        if ($request_method !~ ^(GET|HEAD|POST)$) { return 444; }

        root /var/www;
        index index.html;

        location /api/ {
            proxy_pass http://127.0.0.1:8080/api/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location / {
            try_files $uri $uri/ =404;
        }

        # Redirect these errors to the home page.
        error_page 401 403 404 =200 /index.html;
    }
}
```
dnsmasq.conf:

```
listen-address=192.168.2.1
no-hosts
# log-queries
log-facility=/var/log/dnsmasq.log
dhcp-range=192.168.2.2,192.168.2.254,72h
dhcp-option=option:router,192.168.2.1
dhcp-authoritative
dhcp-option=114,http://go.portal/index.html
# Resolve captive portal check domains to a "fake" external IP
address=/connectivitycheck.gstatic.com/10.45.12.1
address=/connectivitycheck.android.com/10.45.12.1
address=/clients3.google.com/10.45.12.1
address=/clients.l.google.com/10.45.12.1
address=/play.googleapis.com/10.45.12.1
# Resolve everything to the portal's IP address.
address=/#/192.168.2.1
```
Here's the bash script that starts everything:

```
INET_NIC=$(cat /run/inet_nic 2>/dev/null) || { echo "Connect to WiFi first"; exit 1; }
AP_NIC=$(cat /run/ap_nic 2>/dev/null) || { echo "Create AP first"; exit 1; }

echo 1 > /proc/sys/net/ipv4/ip_forward

nft flush ruleset

# Set up the filter table (Mode 1)
nft add table ip filter
nft add chain ip filter input '{ type filter hook input priority 0; policy accept; }'
nft add chain ip filter forward '{ type filter hook forward priority 0; policy accept; }'
nft add chain ip filter output '{ type filter hook output priority 0; policy accept; }'

# Set up the NAT table and chain for masquerading (Mode 2)
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100; }'

kill -9 $(pidof dnsmasq) 2>/dev/null
dnsmasq -C /etc/dnsmasq.conf -d 2>&1 > $LOG_F &

kill -9 $(pidof nginx) 2>/dev/null
mkdir /var/log/nginx 2>/dev/null
nginx &

kill -9 $(pidof evil_portal) 2>/dev/null
ip link set lo up
/usr/bin/evil_portal &
```
And here's the command I would issue when the user accepts the terms:

```
nft add rule ip nat postrouting oifname wlan1 ip saddr 192.168.2.217 masquerade
```

I won't share the python/html stuff because that's all working fine. Basically I'm getting the user's button push, and my python function is called. But python tells me the IP is 127.0.0.1, because nginx forwards the traffic from port 80 to 8080.
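Since the nginx config above already sets `X-Real-IP` on the proxied `/api/` requests, the Flask side can recover the client address from that header instead of the socket peer. A minimal sketch (the helper name is mine, not from the question):

```python
def client_ip(headers, remote_addr):
    """Return the real client address behind the nginx proxy.

    headers: a dict-like of request headers (e.g. flask.request.headers);
    falls back to the socket peer address when the header is absent.
    """
    return headers.get("X-Real-IP", remote_addr)

# In a Flask view this would be used as:
#   ip = client_ip(request.headers, request.remote_addr)
```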
Thanks :)
user3666672
(11 rep)
Mar 5, 2025, 07:45 PM
• Last activity: Mar 6, 2025, 01:02 AM
2 votes · 0 answers · 70 views
NFT Tables: modify DUP packet
I have a server **H** with two NICs, with IP addresses `192.168.105.10` and `192.168.104.10`. An application running on **H** receives data from server **C** on UDP port `1703`. Server **C**'s IP address is `192.168.105.14`.
I want to duplicate the incoming UDP packets and send them to server **D**, where another application listens on `192.168.104.11`, also on port `1703`.
**H** runs Debian 11 (kernel 5.10).
So far I have the following NFT table setup on **H**:
#!/sbin/nft -f
table ip route_C_packets
delete table ip route_C_packets
table ip route_C_packets {
	chain C_in {
		type filter hook prerouting priority 0; policy accept;
		ip saddr 192.168.105.14 udp dport 1703 ip daddr set 192.168.104.11 dup to 192.168.104.11 ip daddr set 192.168.105.10
	}
}
This works, however it seems a bit ugly. From my understanding:

- `ip saddr 192.168.105.14 udp dport 1703`: filters only the UDP packets from **C** on the port I am interested in
- `ip daddr set 192.168.104.11`: overrides the destination address (so that the application running on **D** can actually receive them)
- `dup to 192.168.104.11`: duplicates the packet and sends it to **D**, but does not modify `daddr` by itself
- `ip daddr set 192.168.105.10`: restores the original destination address for the *non-duplicated* packet so that the application running on **H** can actually receive it

This trick of changing `daddr` and then restoring it seems wrong to me. Is there any syntax to set `daddr` *on the duplicated packet* rather than on the original one?
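One way to observe what actually happens to each copy (a sketch, assuming rule tracing is available on this kernel) is to enable `nftrace` for the matched packets and watch the per-packet events:

```shell
# Enable tracing for the packets of interest (addresses from above),
# inserted before the rewrite rule in chain C_in:
nft insert rule ip route_C_packets C_in ip saddr 192.168.105.14 udp dport 1703 meta nftrace set 1

# Then, in another terminal, watch how daddr evolves rule by rule:
nft monitor trace
```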
**EDIT**: Everything here has netmask `255.255.255.0`. The `192.168.105.x` and `192.168.104.x` ranges are effectively two segregated networks.
sbabbi
(121 rep)
Jul 20, 2024, 11:00 PM
• Last activity: Mar 4, 2025, 09:20 PM
0
votes
1
answers
82
views
Misdocumentation in nftables?
As someone who hasn't hammered in all the parts of the OSI layers, I got quite frustrated with the documentation of bridge filtering in nftables: https://wiki.nftables.org/wiki-nftables/index.php/Bridge_filtering Have I misunderstood or are IP address, ports, ICMP matching and the like not possible...
As someone who hasn't hammered in all the parts of the OSI layers, I got quite frustrated with the documentation of bridge filtering in nftables: https://wiki.nftables.org/wiki-nftables/index.php/Bridge_filtering
Have I misunderstood, or is matching on IP addresses, ports, ICMP and the like not possible in bridge tables in nftables? Still, `nft` does not complain when the rules are checked, and the documentation page I linked even gives IP ports as an example of bridge filtering. Is the documentation faulty, or have I misunderstood something?
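For what it's worth, the bridge family does accept higher-layer matches for IPv4 traffic crossing the bridge; `nft` generates the required `ether type` dependency automatically. A minimal sketch (addresses and ports here are made up for illustration):

```
table bridge filter {
	chain forward {
		type filter hook forward priority 0; policy accept;
		ip saddr 192.0.2.10 tcp dport 80 counter
		icmp type echo-request counter
	}
}
```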
Caesar
(25 rep)
Feb 11, 2025, 10:42 PM
• Last activity: Feb 15, 2025, 10:51 PM
1
votes
1
answers
239
views
nftables returns "Error: No such file or directory" when trying to list or modify a table that clearly exists
I have two tables in nftables: ``` $ sudo nft list tables table inet filter table ip nat ``` The `nat` table can be listed just fine: ``` $ sudo nft list table nat table ip nat { chain prerouting { type nat hook prerouting priority filter; policy accept; } chain postrouting { type nat hook postrouti...
I have two tables in nftables:
$ sudo nft list tables
table inet filter
table ip nat
The `nat` table can be listed just fine:
$ sudo nft list table nat
table ip nat {
	chain prerouting {
		type nat hook prerouting priority filter; policy accept;
	}
	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
	}
}
But not the `filter` table:
$ sudo nft list table filter
Error: No such file or directory
list table filter
^^^^^^
This means I cannot add new rules at the command line:
$ sudo nft add rule filter forward oif ip saddr accept
Error: No such file or directory; did you mean table ‘filter’ in family inet?
add rule filter forward oif ip saddr accept
^^^^^^
This is my first time trying to use the `nft` command line; up until now I have just had a static configuration in `/etc/nftables.conf`, and that has been sufficient. What's going on here?
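The hint in the second error message points at the likely cause: the `filter` table lives in the `inet` family, while `nft` defaults to the `ip` family when none is given. A sketch of family-qualified commands (the rule body below is only a placeholder, not taken from the real configuration):

```shell
nft list table inet filter
nft add rule inet filter forward iifname "eth0" accept
```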
Mark Raymond
(654 rep)
Feb 10, 2025, 08:43 PM
• Last activity: Feb 10, 2025, 09:17 PM
0
votes
0
answers
62
views
NFTables tables, hooks and rules ordering
I'm new to nftables but have used iptables for quite a while now. While playing with nftables, I was thinking: "Hey, this is cool, I could have like a management table, where all the mngt stuff goes in, and is accepted and then I could have like tenants tables, that can do whatever". So I tried to d...
I'm new to nftables but have used iptables for quite a while now. While playing with nftables, I was thinking: "Hey, this is cool, I could have like a management table, where all the mngt stuff goes in, and is accepted and then I could have like tenants tables, that can do whatever". So I tried to do that and failed. The nftables tables are:
table inet management {
	chain input {
		type filter hook input priority -1000; policy drop;
		iifname "lo" counter packets 16 bytes 996 accept
		ct state established,related counter packets 22982 bytes 1739629 accept
		tcp dport 22 counter packets 0 bytes 0 accept
		log prefix "Dropped by management: "
	}
}
table ip tenant1 {
	set web {
		type inet_service
		elements = { 80, 443 }
	}
	chain input {
		type filter hook input priority 1000; policy drop;
		tcp dport @web accept
		log prefix "Dropped by tenant1: "
	}
}
What I would expect to happen is to allow traffic on port 22 to reach this machine (via the management table, input chain, tcp dport 22 rule) and to also allow the traffic on port 80 and 443 (via the tenant1 table). What actually happens is that traffic on port 22 is sent to the next table to be processed (tenant1) while everything else is dropped....?
Now, from what I understand from reading the documentation, and from what I can observe, it seems that an accepted packet is not necessarily accepted if there are other tables/chains with the same hook (input in our case) that will ultimately drop the packet.
If I'm right about my above assumption, the question is, how would one use tables in nftables? Because I expected this to work something along the lines: we got two tables, if the packet is accepted in one of them, then it's accepted for good. If the packet's dropped in one of them, then it's dropped for good. I thought that the traffic is being evaluated in EACH of the tables, and the first table to decide what to do with that packet takes precedence.
Any help would be greatly appreciated, as I'm a bit lost on how I could use the tables in nftables.
P.S. I saw that an accept action could be set to mark the packets, and subsequent tables would check for that mark, but this seems to be a hack, more than a solution.
To be honest, after playing with this new and improved firewall, at least for my needs (simple NAT/filtering rules), this is way worse. I mean, in iptables, an accept is ... well, an accept, not a maybe. What the hell am I missing??? And don't get me started on the priority problem, where a smaller number means a higher priority :))
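For what it's worth, the mark-based pattern mentioned above can be sketched like this (illustration only, reusing the chains from the question; the first rule would replace the plain `tcp dport 22 accept`):

```shell
# Accept SSH in management and mark the packet; the mark stays with
# the packet while it traverses the other base chains on this hook.
nft add rule inet management input tcp dport 22 meta mark set 0x1 accept
# In tenant1, let packets the management table already vetted through.
nft insert rule ip tenant1 input meta mark 0x1 accept
```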
Silviu Bajenaru Marcu
(1 rep)
Feb 5, 2025, 10:08 AM
• Last activity: Feb 5, 2025, 10:27 AM
0
votes
1
answers
76
views
Why does no packet traverse the nat chain in the output or postrouting hook with this ruleset?
I have a machine with the network interface `enp0s3` which is assigned the IPv4 address `192.168.20.254`. Furthermore, on another machine there is a DNS server listening on the IPv4 address `192.168.20.10`. The network between the two machines works without issues, the machines can reach each other....
I have a machine with the network interface `enp0s3`, which is assigned the IPv4 address `192.168.20.254`. Furthermore, on another machine there is a DNS server listening on the IPv4 address `192.168.20.10`. The network between the two machines works without issues; the machines can reach each other. On both machines, IPv6 is completely disabled via the kernel parameter `ipv6.disable=1`.
On the first machine, I have the following nftables ruleset:
root@charon /etc/network # nft list ruleset
table ip t_IP {
	chain output-route {
		type route hook output priority mangle; policy accept;
		ip daddr 192.168.20.10 meta nftrace set 1
	}

	chain output-filter {
		type filter hook output priority filter; policy accept;
		ip daddr 192.168.20.10 meta nftrace set 1
	}

	chain output-nat {
		type nat hook output priority 100; policy accept;
		ip daddr 192.168.20.10 meta nftrace set 1
	}

	chain postrouting-filter {
		type filter hook postrouting priority filter; policy accept;
		ip daddr 192.168.20.10 meta nftrace set 1
	}

	chain postrouting-nat {
		type nat hook postrouting priority srcnat; policy accept;
		ip daddr 192.168.20.10 meta nftrace set 1
	}
}
The purpose of this ruleset is to accept all packets no matter what, and to be able to trace all packets that originate at the machine through all chains that are part of the packet output path.
I have put `meta nftrace set 1` in every chain because I don't know which chain the packets traverse first. To avoid too much noise, tracing is enabled only for DNS packets (which here can easily be identified either by their destination IP address or their destination port; the above ruleset does the former).
`nft monitor trace` shows the following output when I enter `host blahblah` in a second terminal on the machine (I have inserted a blank line to add a bit of structure):
trace id b00605cc ip t_IP output-route packet: oif "enp0s3" ip saddr 192.168.20.254 ip daddr 192.168.20.10 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 8386 ip length 67 udp sport 37348 udp dport 53 udp length 47 @th,64,96 0x69ea01000001000000000000
trace id b00605cc ip t_IP output-route rule ip daddr 192.168.20.10 meta nftrace set 1 (verdict continue)
trace id b00605cc ip t_IP output-route verdict continue
trace id b00605cc ip t_IP output-route policy accept
trace id b00605cc ip t_IP output-filter packet: oif "enp0s3" ip saddr 192.168.20.254 ip daddr 192.168.20.10 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 8386 ip length 67 udp sport 37348 udp dport 53 udp length 47 @th,64,96 0x69ea01000001000000000000
trace id b00605cc ip t_IP output-filter rule ip daddr 192.168.20.10 meta nftrace set 1 (verdict continue)
trace id b00605cc ip t_IP output-filter verdict continue
trace id b00605cc ip t_IP output-filter policy accept
trace id b00605cc ip t_IP postrouting-filter packet: oif "enp0s3" ip saddr 192.168.20.254 ip daddr 192.168.20.10 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 8386 ip length 67 udp sport 37348 udp dport 53 udp length 47 @th,64,96 0x69ea01000001000000000000
trace id b00605cc ip t_IP postrouting-filter rule ip daddr 192.168.20.10 meta nftrace set 1 (verdict continue)
trace id b00605cc ip t_IP postrouting-filter verdict continue
trace id b00605cc ip t_IP postrouting-filter policy accept
trace id 4e72ea91 ip t_IP output-route packet: oif "enp0s3" ip saddr 192.168.20.254 ip daddr 192.168.20.10 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 42220 ip length 50 udp sport 44559 udp dport 53 udp length 30 @th,64,96 0x395901000001000000000000
trace id 4e72ea91 ip t_IP output-route rule ip daddr 192.168.20.10 meta nftrace set 1 (verdict continue)
trace id 4e72ea91 ip t_IP output-route verdict continue
trace id 4e72ea91 ip t_IP output-route policy accept
trace id 4e72ea91 ip t_IP output-filter packet: oif "enp0s3" ip saddr 192.168.20.254 ip daddr 192.168.20.10 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 42220 ip length 50 udp sport 44559 udp dport 53 udp length 30 @th,64,96 0x395901000001000000000000
trace id 4e72ea91 ip t_IP output-filter rule ip daddr 192.168.20.10 meta nftrace set 1 (verdict continue)
trace id 4e72ea91 ip t_IP output-filter verdict continue
trace id 4e72ea91 ip t_IP output-filter policy accept
trace id 4e72ea91 ip t_IP postrouting-filter packet: oif "enp0s3" ip saddr 192.168.20.254 ip daddr 192.168.20.10 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 42220 ip length 50 udp sport 44559 udp dport 53 udp length 30 @th,64,96 0x395901000001000000000000
trace id 4e72ea91 ip t_IP postrouting-filter rule ip daddr 192.168.20.10 meta nftrace set 1 (verdict continue)
trace id 4e72ea91 ip t_IP postrouting-filter verdict continue
trace id 4e72ea91 ip t_IP postrouting-filter policy accept
The trace shows how two different packets flow through the chains. Both packets are typical DNS client packets that originate at the local machine (`192.168.20.254`, random non-privileged source port) and go to the DNS server (`192.168.20.10`, destination port `53`).
Each packet first reaches the output hook. At that hook, it first traverses the `route` chain, then the `filter` chain, according to the priorities these chains are registered at.
But no packet ever goes through the `nat` chain at the `output` hook. **Why does this not happen?**
Likewise, each packet, after having left the `output` hook, reaches the `postrouting` hook and traverses the `filter` chain that is registered there. But no packet ever traverses the `nat` chain that is also registered at the `postrouting` hook. **Again, why is this not the case?**
The documentation (in section "Verdict Statement") says that an `accept` verdict for a packet stops the evaluation of further rules in the current chain, but that the packet nonetheless traverses later chains registered at the same hook and all chains at later hooks, until it meets another verdict which is *really* final, e.g. a `drop` verdict. This means that an `accept` verdict is final only for the chain scope, not for the whole hook scope or the global scope. In contrast, a `drop` verdict is final and immediately stops *any* further evaluation.
Since every chain in the ruleset above has an `accept` policy and no explicit verdicts, at least the first packet of every connection should go through the `nat` chains at the `output` and `postrouting` hooks. But that is not the case.
What am I missing?
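One thing that may be worth ruling out (an assumption, not something visible in the ruleset): `nat`-type chains are only consulted for the first packet of a connection, i.e. packets conntrack classifies as `NEW`; packets of an already-tracked flow bypass them entirely. If earlier DNS queries already created conntrack entries, flushing them before repeating the test would show whether that is the cause (assuming `conntrack-tools` is installed):

```shell
# Drop all existing tracked connections so the next query starts a
# fresh flow, then watch the trace again while re-running the lookup.
conntrack -F
nft monitor trace
```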
Binarus
(3871 rep)
Feb 3, 2025, 08:48 PM
• Last activity: Feb 4, 2025, 03:38 PM