
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
2148 views
Firewalld: Error: INVALID_ZONE
I got an error I cannot solve while setting up a default zone in firewalld. I added the interface with firewall-cmd --zone=public --change-interface=ens3 and then saw the default public zone active. So then I ran firewall-cmd --reload and got: Error: COMMAND_FAILED: '/usr/sbin/ip6tables-restore -w -n' failed: ip6tables-restore v1.8.2 (nf_tables): line 4: RULE_REPLACE failed (No such file or directory): rule in chain INPUT. So ip6tables-restore is trying to do something upon restart of firewalld. Yet when I run iptables -L I get "bash: iptables: command not found". Now firewall-cmd --list-all returns Error: INVALID_ZONE, but the zone showed moments ago...
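A minimal sketch of how the zone binding could be re-checked and re-applied in a situation like this (assuming the interface really is ens3; these are standard firewall-cmd options, not taken from the question):

firewall-cmd --get-active-zones                        # which zones currently have interfaces or sources
firewall-cmd --get-zone-of-interface=ens3              # where ens3 ended up after the failed reload
firewall-cmd --permanent --zone=public --change-interface=ens3
firewall-cmd --reload
firewall-cmd --zone=public --list-all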
mister mcdoogle (505 rep)
Sep 5, 2021, 01:44 AM • Last activity: Jul 25, 2025, 03:01 PM
0 votes
1 answer
2258 views
firewalld vs CSF on a Centos 7 VPS
I just got a new GoDaddy dedicated VPS and I am trying to secure it. It's CentOS 7.7 and the WHM does not come with any installed firewall that I can tell. On an older VPS, ConfigServer Security & Firewall was pre-installed (but not active) on CentOS 6.8 WHM. I tried getting it running once, ran into trouble and backed out, never to touch it again. But now I need to get a firewall operational. Are both firewalld and CSF just front ends to iptables, or are they completely different? Which is easier to use, which has a better GUI interface, and is the protection the same? Is there even a GUI for firewalld? If so, how do I access it, because I can't seem to find anything. It appears I have both firewalld and iptables installed (is iptables always installed?):

~]$ systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

For iptables I get this:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             multiport dports smtp,urd,submission owner GID match mailman
ACCEPT     tcp  --  anywhere             anywhere             multiport dports smtp,urd,submission owner GID match mail
ACCEPT     tcp  --  anywhere             localhost            multiport dports smtp,urd,submission owner UID match cpanel
ACCEPT     tcp  --  anywhere             anywhere             multiport dports smtp,urd,submission owner UID match root

# systemctl status iptables
Unit iptables.service could not be found.

So... is iptables actually running or not? Please help me make sense of all this.
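A quick, hedged sketch of how to see which firewall stack is actually in charge on a host like this (nothing here is WHM-specific; the commands only read state):

systemctl is-active firewalld      # firewalld service state
systemctl is-active iptables       # the iptables.service unit may simply not exist
iptables -L -n                     # rules currently loaded in the kernel, regardless of which tool put them there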
rolinger (175 rep)
Apr 29, 2020, 08:50 PM • Last activity: Jun 27, 2025, 09:01 AM
2 votes
1 answer
3068 views
Centos 7 port forwarding with firewalld not working
I can't seem to make firewalld-based port forwarding work under Centos 7. I am forwarding 192.168.0.148:905 to 192.168.56.102:22. When I try to ssh to 192.168.0.148 -p 905 I get "Connection refused". Here are some relevant settings:

[root@GraceDev3 log]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: br0
  sources:
  services: ssh dhcpv6-client https
  ports: 3389/tcp 905/tcp 908/tcp
  protocols:
  masquerade: yes
  forward-ports: port=905:proto=tcp:toport=22:toaddr=192.168.56.102
	port=908:proto=tcp:toport=22:toaddr=192.168.56.105
  source-ports:
  icmp-blocks:
  rich rules:

Port forwarding:

[root@GraceDev3 log]# cat /proc/sys/net/ipv4/ip_forward
1

tcpdump on 192.168.0.148 port 22 shows the ssh request arriving. The firewalld log does not show any packets being dropped. What am I missing? I note that others have had the same problem, but I haven't found any solutions posted.
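As a debugging sketch (assuming the iptables backend that CentOS 7's firewalld uses), it can help to confirm that the forward-port actually reached the kernel and is being hit:

iptables -t nat -L PREROUTING -n -v | grep 905     # the DNAT rule and its packet counters
firewall-cmd --zone=public --list-forward-ports
cat /proc/sys/net/ipv4/ip_forward                  # must be 1 when forwarding to another host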
user810702 (21 rep)
Jun 28, 2019, 07:51 PM • Last activity: Jun 22, 2025, 07:05 AM
0 votes
1 answer
2260 views
FirewallD not working properly on Fedora 25
I don't know what happened with FirewallD in recent updates, but it's all messed up. First I had issues with my active rules on Fedora 24, where I supposedly have the samba-server services enabled but I couldn't connect; the solution was to manually add the 145 and 339 ports. But things get worse on Fedora 25, where I just can't even set a default zone. I can execute the firewall-cmd --set-default-zone FedoraServer command properly; however, upon issuing firewall-cmd --reload I get an error about a bad argument COMMIT. On top of that, if I just do systemctl restart firewalld I lose all the changes I made, e.g. if I now run firewall-cmd --get-default-zone I get an empty string. What's even worse is that runtime changes don't even come into effect: if I run firewall-cmd --add-port 22/tcp I still can't connect, because (surprise!) none of my interfaces is bound to a zone (not even the default), and I can't set a default zone because, well, I can't even reload the service to apply changes. Has anyone run into these issues? How can I go about this? Right now both my production servers are running without a firewall and this is driving me mad.

Edit: These are two "strange" things in the log of systemctl status firewalld when the service is stopped (systemctl stop firewalld):

> ERROR: Failed to flush eb firewall: '/usr/sbin/ebtables-restore --noflush' failed: Bad argument : 'COMMIT'.
>
> ...
>
> ERROR: Failed to set policy of eb firewall: '/usr/sbin/ebtables-restore --noflush' failed: Bad argument : 'COMMIT'.
arielnmz (559 rep)
Dec 11, 2016, 06:55 AM • Last activity: Jun 19, 2025, 03:04 AM
4 votes
1 answer
3473 views
firewalld port still open after removing port and services
I have created a script to install and configure firewalld on CentOS 7. Most of the rules have worked correctly, but the SSH port still shows as open when running an nmap scan. I know this is not a big deal and that changing ports is just security by obscurity, but I would like to know why.

firewall-cmd --zone=dmz --add-masquerade --permanent
firewall-cmd --zone=dmz --add-interface=eth0
firewall-cmd --zone=internal --add-port=${MONGO}/tcp --permanent
firewall-cmd --zone=internal --add-port=${CHAT}/tcp --permanent
firewall-cmd --zone=internal --add-port=${NFS_CLIENT}/tcp --permanent
firewall-cmd --zone=internal --add-port=${NODE_EX}/tcp --permanent
firewall-cmd --zone=dmz --add-forward-port=port=${22}:proto=tcp:toport=${22123} --permanent
firewall-cmd --zone=dmz --add-port=${RSSH}/tcp --permanent --permanent

Starting Nmap 7.40 ( https://nmap.org ) at 2017-10-04 17:33 BST
Nmap scan report for
Host is up (0.45s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
8083/tcp open   us-srv
8086/tcp closed d-s-n

sudo firewall-cmd --list-all
dmz (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 eth1
  sources:
  services:
  ports: 22123/tcp 8086/tcp 8083/tcp
  protocols:
  masquerade: yes
  forward-ports: port=22:proto=tcp:toport=22123:toaddr=
  source-ports:
  icmp-blocks:
  rich rules:

All ideas welcome. Thanks
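If the remaining forward-port entry is what keeps port 22 answering, a hedged sketch of removing it while keeping runtime and permanent configuration in sync (values taken from the --list-all output above):

firewall-cmd --permanent --zone=dmz --remove-forward-port=port=22:proto=tcp:toport=22123
firewall-cmd --reload
firewall-cmd --zone=dmz --list-all     # verify before re-running the nmap scan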
CJW101 (41 rep)
Oct 4, 2017, 04:47 PM • Last activity: May 13, 2025, 09:01 AM
2 votes
1 answer
3165 views
How check a port forwarded from localhost to localhost on.. localhost?
I'm learning about iptables, firewalling, routing and so on. I'm on Linux, CentOS 7, and I've set up local port forwarding to localhost with: firewall-cmd --add-forward-port=port=2023:proto=tcp:toport=22 It is working as expected when trying from another machine. Locally, it is not visible. I've tried with netstat, ss, nmap, lsof and nc. Nothing: all of them "see" everything except 2023, even when it is currently forwarding an ssh session. After much reading here on Stack Exchange I found a way to make it visible locally (from https://unix.stackexchange.com/questions/113521/iptables-redirect-local-request-with-nat), but that is not actually a solution; it just made me understand why it is not visible locally. I really would like to know whether there is a way to check it locally, or is the only option a remote connection? Thank you :) Edit: The setup of the test machine is easy; just execute the firewall-cmd line I wrote in this question. No other rules added. Then test it with ssh (or nmap) from outside: it works. Check it from localhost itself: both ssh and nmap give connection refused. Edit2: *Sorry, I wrote the firewall-cmd line incorrectly with a :toaddr=127.0.0.1 at the end; fixed.*
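One point worth illustrating: firewalld forward-ports are implemented in the nat PREROUTING hook, which locally generated traffic never traverses, so local tools cannot "see" the forwarded port. A sketch for at least inspecting the rule that remote clients hit (iptables backend assumed on CentOS 7):

iptables -t nat -L PREROUTING -n -v     # the REDIRECT/DNAT rule for 2023 and its hit counters
ss -tlnp | grep :22                     # locally, only the real listener (sshd on 22) shows up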
nnsense (389 rep)
Jun 22, 2015, 09:26 PM • Last activity: May 12, 2025, 08:02 PM
1 vote
1 answer
5109 views
Ports not really open after firewalld command
OS: CentOS 7. This is a question that is bordering on two issues. I have a docker machine running where I recently installed the Plex container from linuxserver/plex. The current problem is that I cannot access the site to configure Plex at https://localhost:32400/web. In my attempts to determine why this is occurring, I noticed that port 32400 appeared to be closed even though it should have been opened when the container was created; I am using the host network. I attempted to see if I could access the site using curl:

curl -i http://localhost:32400
curl -i http://10.0.1.200:32400

I then verified open ports with nmap:

#nmap 10.0.1.200
Starting Nmap 6.40 ( http://nmap.org ) at 2019-01-18 12:52 CST
Nmap scan report for 10.0.1.200
Host is up (0.00049s latency).
Not shown: 999 closed ports
PORT   STATE SERVICE
22/tcp open  ssh
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds

So clearly port 32400 is not open, so I went to firewall-cmd:

#sudo firewall-cmd --get-active-zones
public
  interfaces: eno1
#sudo firewall-cmd --zone=public --add-port=32400/tcp --permanent
success
#sudo firewall-cmd --reload
success

I also checked to see if it was open:

#sudo firewall-cmd --zone=public --list-ports
32400/tcp

However, nmap still shows it closed. Any idea why firewalld would show an open port on the docker host machine when it is actually closed? I'm not even sure this will get the site working for Plex.

----------

Verification of what Kramer had suggested, that it was possible my interface was not set up:

# ip addr
3: eno1:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.0.1.200/24 brd 10.0.1.255 scope global noprefixroute dynamic eno1

#firewall-cmd --zone=public --list-interfaces
eno1
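Before blaming the firewall, it can help to confirm that anything is listening at all; a short sketch (host networking assumed, as stated above):

ss -tlnp | grep 32400                          # is any process bound to 32400?
docker ps --format '{{.Names}} {{.Ports}}'     # with host networking, no port mappings are shown here
curl -v http://127.0.0.1:32400/web             # a loopback test is normally unaffected by zone rules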
JMeterX (111 rep)
Jan 18, 2019, 06:58 PM • Last activity: May 10, 2025, 02:06 PM
22 votes
5 answers
78384 views
NFS servers and firewalld
I haven't found a slam-dunk document on this, so let's start one. On a CentOS 7.1 host, I have gone through the linuxconfig HOW-TO, including the firewall-cmd entries, and I have an exportable filesystem.

[root@ ~]# firewall-cmd --list-all
internal (default, active)
  interfaces: enp5s0
  sources: 192.168.10.0/24
  services: dhcpv6-client ipp-client mdns ssh
  ports: 2049/tcp
  masquerade: no
  forward-ports:
  rich rules:

[root@ ~]# showmount -e localhost
Export list for localhost:
/export/home/ *.localdomain

However, if I showmount from the client, I still have a problem.

[root@ ~]# showmount -e .localdomain
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

Now, how am I sure that this is a firewall problem? Easy. Turn off the firewall. Server side:

[root@ ~]# systemctl stop firewalld

And client side:

[root@ ~]# showmount -e .localdomain
Export list for .localdomain:
/export/home/ *.localdomain

Restart firewalld. Server side:

[root@ ~]# systemctl start firewalld

And client side:

[root@ ~]# showmount -e .localdomain
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

So, let's go to town, by adapting the iptables commands from a RHEL 6 NFS server HOW-TO...

[root@ ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp
success
[root@ ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp \
> --permanent
success
[root@ ~]# firewall-cmd --list-all
internal (default, active)
  interfaces: enp5s0
  sources: 192.168.0.0/24
  services: dhcpv6-client ipp-client mdns ssh
  ports: 32803/tcp 662/udp 662/tcp 111/udp 875/udp 32769/udp 875/tcp 892/udp 2049/tcp 892/tcp 111/tcp
  masquerade: no
  forward-ports:
  rich rules:

This time, I get a slightly different error message from the client:

[root@ ~]# showmount -e .localdomain
rpc mount export: RPC: Unable to receive; errno = No route to host

So, I know I'm on the right track. Having said that, why can't I find a definitive tutorial on this anywhere? I can't have been the first person to have to figure this out! What firewall-cmd entries am I missing? Oh, one other note. My /etc/sysconfig/nfs files on the CentOS 6 client and the CentOS 7 server are unmodified, so far. I would prefer to not have to change (and maintain!) them, if at all possible.
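On CentOS 7, the NFS-related helpers are already defined as firewalld services, so a hedged alternative to opening a list of numbered ports (default zone assumed to be the internal one shown above; NFSv3 side ports may still need pinning in /etc/sysconfig/nfs) would be:

firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
firewall-cmd --reload
firewall-cmd --list-services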
dafydd (1466 rep)
Nov 18, 2015, 04:38 AM • Last activity: Apr 29, 2025, 05:02 AM
3 votes
1 answer
229 views
Firewalld ignoring rich-rule against port forwarding
I have an issue setting up my firewalld to work cleanly together with docker and fail2ban. First, what I want to achieve is the following traffic routing setup:
[PUBLIC] -> 
  [FIREWALLD] -> (
    [143/tcp FORWARD PORT] -----> [DOCKER/143/tcp]
    [ 22/tcp]              -----> [openssh locally running]
  )
**fail2ban**
I set up fail2ban to watch my docker container, check for auth errors and set up a ban using firewall-cmd. That works so far: as soon as I mis-authenticate 3 times, it sends a command to firewalld.

**Port forwarding**
I have also set up the port forward for docker. I am setting it up explicitly, because I do not want docker destroying my networking. Maybe this is something I will not need in the future, but it is configured via the StrictForwardPorts=yes configuration: https://firewalld.org/2024/11/strict-forward-ports

**Goal**
Whenever a fail2ban trigger happens, the IP should **not** have access to port 143 (the forwarded one) and (maybe) not the other ones either. But at first, I'd like to ban port-wise.

**Problem**
The problem currently is that if a reject rich rule is created, it blocks port 22 for that IP, but not port 143.

**Attempts**
I also tried putting the IP into the drop zone, giving it priority -10. Same result: port 22 is dropped, but 143 still works. What am I doing wrong? Here's my zone configuration from the last try:
docker (active)
  target: ACCEPT
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: br-0aa8d4b5dde7 docker0
  sources: 
  services: 
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule priority="-999" family="ipv4" source address="192.168.178.44" reject

drop (active)
  target: DROP
  ingress-priority: -10
  egress-priority: -10
  icmp-block-inversion: no
  interfaces: 
  sources: 192.168.178.44
  services: 
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

public (default, active)
  target: default
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client ssh
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
        port=143:proto=tcp:toport=143:toaddr=172.18.0.2
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule priority="-999" family="ipv4" source address="192.168.178.44" reject
As seen, address 192.168.178.44 should be fully blocked from the public zone. But it isn't. Additionally I added the IP to the drop zone. It seems the drop zone priority is working, as my SSH connection is dropped instead of rejected, but port 143 is still accessible.

**Update 1: Some debug info**
$ sudo firewall-cmd --get-policies
allow-host-ipv6 docker-forwarding
**Update 2: --info-policy=docker-forwarding**
docker-forwarding (active)
  priority: -1
  target: ACCEPT
  ingress-zones: ANY
  egress-zones: docker
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
**Update 3:** Another idea that came to my mind was to create another policy with priority -10, containing the rich rule:
sudo firewall-cmd --permanent --new-policy ban-pre-routing
sudo firewall-cmd --permanent --policy ban-pre-routing --add-ingress-zone ANY
sudo firewall-cmd --permanent --policy ban-pre-routing --add-egress-zone HOST
sudo firewall-cmd --permanent --policy ban-pre-routing --set-priority -10
sudo firewall-cmd --permanent --policy ban-pre-routing --add-rich-rule="rule family=ipv4 source address=192.168.178.44 port port=143 protocol=tcp reject"
Still no effect. My *.44 host can still connect to the machine. If I leave out the port port=143 protocol=tcp part, it does block the machine from ssh, while it is still able to access port 143.

**Update 4:**
Using the policy from Update 3 configured with egress zone docker does not make a difference. My configs look like this now:
$ sudo firewall-cmd --list-all-policies
allow-host-ipv6 (active)
  priority: -15000
  target: CONTINUE
  ingress-zones: ANY
  egress-zones: HOST
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule family="ipv6" icmp-type name="neighbour-advertisement" accept
        rule family="ipv6" icmp-type name="neighbour-solicitation" accept
        rule family="ipv6" icmp-type name="redirect" accept
        rule family="ipv6" icmp-type name="router-advertisement" accept

ban-pre-routing (active)
  priority: -10
  target: CONTINUE
  ingress-zones: ANY
  egress-zones: docker
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
        rule family="ipv4" source address="192.168.178.44" port port="143" protocol="tcp" reject

docker-forwarding (active)
  priority: -1
  target: ACCEPT
  ingress-zones: ANY
  egress-zones: docker
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
And for zones:
$ sudo firewall-cmd --list-all --zone=public
public (default, active)
  target: default
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: dhcpv6-client ssh
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
        port=143:proto=tcp:toport=143:toaddr=172.18.0.2
  source-ports: 
  icmp-blocks: 
  rich rules: 

$ sudo firewall-cmd --list-all --zone=drop
drop
  target: DROP
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: 
  sources: 
  services: 
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

$ sudo firewall-cmd --list-all --zone=docker
docker (active)
  target: ACCEPT
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: br-c5f172e4effe docker0
  sources: 
  services: 
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
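For setups like the ones above it can help to look at what firewalld actually programmed into the kernel, rather than at its own configuration view; a small, hedged debugging sketch (nftables backend assumed):

nft list ruleset | grep -B2 -A2 192.168.178.44     # where the reject/drop for that source landed
nft list ruleset | grep -B2 -A2 'dport 143'        # where the DNAT for the forward-port landed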
Marco Klein (151 rep)
Mar 31, 2025, 07:35 PM • Last activity: Apr 3, 2025, 09:59 AM
2 votes
1 answer
179 views
libvirt kvm virtual routed network: cannot ping gateway itself or beyond
I'm having trouble with libvirt kvm's routed networks, where a VM inside a routed virtual network can ping every VM in my home subnet except the default gateway... or any gateway for that matter. I believe my situation is not a duplicate of https://unix.stackexchange.com/questions/611580/libvirt-kvm-virtualhost-cannot-ping-router-address as I am able to ping machines in the host's subnet (unlike in the aforementioned post, where the author says no other hosts in the 192.168.2.0/24 network can be pinged).

### The environment

- my home subnet is 192.168.1.0/24
- The default gateway is your standard consumer router, located at 192.168.1.1. Its static route table is as follows:
  - dst: 192.168.100.0 - gw: 192.168.1.100 - genmask: 255.255.255.0
  - dst: 172.16.0.0 - gw: 192.168.1.4 - genmask: 255.240.0.0
- My machine (**server-01**, running EndeavourOS) has the IP address 192.168.1.100
- I am running libvirt kvm, and I have a virtual *routed* network for my VMs. As such server-01 is a router for the 192.168.100.0/24 subnet.
- On that subnet there is a Debian 12 VM **vm-alpha** (192.168.100.200)
- A test subnet 172.20.200.0/24 exists (default gw 172.20.200.1). It can be reached from the home subnet via 192.168.1.4 (OPNsense)
- There's a lonely Debian 12 VM **vm-beta** whose IP is 172.20.200.2

### The problem

I am unable to ping the default gw from **vm-alpha**. Hence vm-alpha has no "Internet connection". Below is a table outlining the pings I have attempted, with empty cells representing those that I have not tried.

| from => / to ↓ | **vm-alpha** (192.168.100.200) | server-01 home net IP (192.168.1.100) | vm-beta (172.20.200.2) |
| --- | --- | --- | --- |
| **vm-alpha** (192.168.100.200) | | YES | YES |
| server-01 gw (192.168.100.1) | YES | YES | YES |
| server-01 home net IP (192.168.1.100) | YES | | YES |
| OPNsense home net (192.168.1.4) | YES | YES | YES |
| default gw home net (192.168.1.1) | no response | YES | YES |
| OPNsense other side (172.20.200.1) | no response | YES | YES |
| vm-beta (172.20.200.2) | no response | YES | |
| Cloudflare (1.1.1.1) | no response | YES | YES |

- vm-alpha (192.168.100.200) to home net gateway (192.168.1.1): **no response** - packets are simply lost; there are no "destination host unreachable" or "network is unreachable" errors that I could see with wireshark running on server-01, listening on interface virbr1.

#### mtr

mtr from 192.168.100.200 to 172.20.200.2
My traceroute  [v0.95]
vm-alpha (192.168.100.200) -> 172.20.200.2025-03-30T00:00:24+0100
Keys:  Help   Display mode   Restart statistics   Order of fields   qui
t                             Packets               Pings
 Host                       Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 192.168.100.1            0.0%     7    0.2   0.2   0.1   0.3   0.1
 2. (waiting for reply)
^ when that mtr was done, the routing table of the host (server-01) was as follows:
Kernel IP routing table  
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface  
0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 enp9s0  
192.168.1.0     0.0.0.0         255.255.255.0   U     100    0        0 enp9s0  
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr1
mtr from 172.20.200.2 to 192.168.100.200
My traceroute  [v0.95]
vm-beta (172.20.200.2) -> 12025-03-29T22:59:07+0000
Keys:  Help   Display mode   Restart statistics   Order of field
s   quit               Packets               Pings
 Host                Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 172.20.200.1      0.0%    16    0.5   0.6   0.3   1.1   0.2
 2. 192.168.1.1       0.0%    15    1.3   1.1   0.9   1.4   0.1
 3. 192.168.1.100     0.0%    15    1.1   1.2   0.7   1.9   0.3
 4. 192.168.100.200   0.0%    15    1.3   1.3   0.8   1.7   0.2
route -n for 172.20.200.2:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.20.200.1    0.0.0.0         UG    0      0        0 eth0
172.20.200.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
### What I've tried

- disabling firewalld, which was running - the problem remains and the results from the ping table above still stand.
- changing the routes on server-01 - I thought maybe the 192.168.1.1 gw was the problem, so I removed it completely. I manually set the route for the 172.20.200.0/24 subnet. With the routing table of server-01 now being:
Kernel IP routing table  
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface  
172.16.0.0      192.168.1.4     255.240.0.0     UG    0      0        0 enp9s0  
192.168.1.0     0.0.0.0         255.255.255.0   U     100    0        0 enp9s0  
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr1
server-01 is able to ping 172.20.200.2. However, vm-alpha still can't. mtr shows the new hop:
My traceroute  [v0.95]
vm-alpha (192.168.100.200) -> 172.20.200.2025-03-30T00:15:08+0100
Keys:  Help   Display mode   Restart statistics   Order of fields   qui
t                             Packets               Pings
 Host                       Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 192.168.100.1            0.0%     4    0.1   0.2   0.1   0.2   0.1
 2. 192.168.1.4              0.0%     4    0.7   0.7   0.5   0.9   0.1
 3. (waiting for reply)
### Other info

iptables -L:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
FWD-vm-alpha  all  --  anywhere             192.168.100.200     
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FWD-vm-alpha (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             192.168.100.200      tcp dpts:tcpmux:65535
ACCEPT     udp  --  anywhere             192.168.100.200      udp dpts:tcpmux:65535
cat /proc/sys/net/ipv4/ip_forward returns 1.

ip -c a on server-01:
(...)
2: enp9s0:  mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:ff:e0:3d:ab:c5 brd ff:ff:ff:ff:ff:ff
    altname enx10ffe03dabc5
    inet 192.168.1.100/24 brd 192.168.1.255 scope global dynamic noprefixroute enp9s0
       valid_lft 83966sec preferred_lft 83966sec
    inet6 fe80::6f46:313c:4159:fde4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
(...)
11: virbr1:  mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 52:54:00:0d:2c:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
       valid_lft forever preferred_lft forever
firewalld runs on the system. Some config info:
sudo firewall-cmd --get-active-zones 

libvirt
  interfaces: virbr0
libvirt-routed
  interfaces: virbr1
public (default)
  interfaces: enp9s0
sudo firewall-cmd --list-all --zone=libvirt-routed

libvirt-routed (active)
  target: default
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: virbr1
  sources: 
  services: 
  ports: 
  protocols: 
  forward: no
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
Virtual network configuration:
routed-network-100
  a4b18fac-87e8-4f6c-aebd-04ca31b8c7f7
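A hedged sketch of one thing often needed for routed libvirt networks: an explicit firewalld policy permitting forwarding from the VM zone out through the LAN-facing zone (zone names taken from the output above and the fix for the home router's own filtering, if any, is separate; verify before applying):

firewall-cmd --permanent --new-policy routed-out
firewall-cmd --permanent --policy routed-out --add-ingress-zone libvirt-routed
firewall-cmd --permanent --policy routed-out --add-egress-zone public
firewall-cmd --permanent --policy routed-out --set-target ACCEPT
firewall-cmd --reload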
Fulmen3161 (21 rep)
Mar 29, 2025, 11:41 PM • Last activity: Mar 30, 2025, 01:48 AM
1 vote
1 answer
418 views
Determining the performance Impact of firewalld rule count
I was working on tweaking the performance of fail2ban and I read that a too-long ban can result in a build-up of rules that will negatively impact performance, which made me wonder, "Is there any particular idea of a number of rules that is 'too high' for nftables?" I currently use firewalld with nftables as the backend, and maybe 10-20 rules. However, a few servers are specifically intended for audiences of certain countries and should not be accessed outside them. If I pull down a country-IP database (e.g. MaxMind) and then generate a list of rules for all CIDRs for countries outside the allowed list, I end up with nearly 17,000 rules. On one hand, that's a lot of rules (IMHO), but on the other hand, there's nothing but spam and hack attempts coming from outside the designated countries (even legitimate users traveling abroad need to VPN into the US before they can access anyway). Is that kind of volume going to negatively impact nftables? I assume that the impact is relative to the amount of volume that has to be checked, but I haven't found a good way to see or measure the impact of the rules, and I don't want to start loading up thousands of rules without knowing the possible ramifications ahead of time.
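A hedged sketch of the usual way to keep the rule count low no matter how many networks are blocked: put the CIDRs into a single ipset-backed set and reference it from one rule (the set name and file path below are made up for illustration):

firewall-cmd --permanent --new-ipset=geoblock --type=hash:net
firewall-cmd --permanent --ipset=geoblock --add-entries-from-file=/path/to/blocked-cidrs.txt
firewall-cmd --permanent --add-rich-rule='rule source ipset="geoblock" drop'
firewall-cmd --reload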
jhilgeman (113 rep)
Jul 19, 2024, 10:37 PM • Last activity: Mar 21, 2025, 03:47 PM
6 votes
2 answers
18582 views
How to use POSTROUTING / SNAT with firewalld?
I am trying to set up SNAT with firewalld on my CentOS 7 router as described [here](https://unix.stackexchange.com/questions/389756/how-to-use-snat-with-firewalld-vs-masq), with additions from [Karl Rupp's explanation](https://www.karlrupp.net/de/computer/nat_tutorial), but I end up like [Eric](https://www.reddit.com/r/homelab/comments/auks1f/how_to_masquerade_source_nat_to_an_ip_alias_with/). I also read some other documentation, but I am not able to get it working so that my client IP is translated into another source IP.

Both

firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -p tcp -o enp1s0 -d 192.168.15.105 -j SNAT --to-source 192.168.25.121

and

firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -p tcp -s 192.168.15.105/32 -j SNAT --to-source 192.168.25.121

give a "success". I do a firewall-cmd --reload afterwards. But if I try to examine the table with iptables -t nat -nvL POSTROUTING, the rule is not listed. Yet if I apply one of the above rules again, firewalld warns me with e.g. Warning: ALREADY_ENABLED: rule '['-p', 'tcp', '-o', 'enp1s0', '-d', '192.168.15.105', '-j', 'SNAT', '--to-source', '192.168.25.121']' already is in 'ipv4:nat:POSTROUTING' - but no SNAT functionality for the source IP 192.168.15.105 to be masqueraded as 192.168.45.121 is working. Maybe someone can explain what I am doing wrong?

-----------------------------------------------------------

After hours of struggling, I am still stuck on DNAT/SNAT. I now use only iptables with:

1.) iptables -t nat -A PREROUTING -p tcp --dport 1433 -i enp1s0 -d 192.168.25.121 -j DNAT --to-destination 192.168.15.105

and

2.) iptables -t nat -A POSTROUTING -p tcp --sport 1433 -o enp1s0 -s 192.168.15.105/32 -j SNAT --to-source 192.168.25.121

so iptables -t nat -nvL PREROUTING shows:
PREROUTING (policy ACCEPT 7 packets, 590 bytes)
pkts bytes target     prot opt in     out     source               destination         
129 12089 PREROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
129 12089 PREROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
129 12089 PREROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
0     0 DNAT       tcp  --  enp1s0 *       0.0.0.0/0            192.168.25.121       tcp dpt:1433 to:192.168.15.105
and iptables -t nat -nvL POSTROUTING shows:
POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   97  7442 POSTROUTING_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   97  7442 POSTROUTING_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   97  7442 POSTROUTING_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 SNAT       tcp  --  *      enp1s0  192.168.15.105       0.0.0.0/0            tcp spt:1433 to:192.168.25.121
All done right; here are some more good explanations:

- https://wiki.ubuntuusers.de/iptables2/
- https://netfilter.org/documentation/HOWTO/NAT-HOWTO-6.html
- https://serverfault.com/questions/667731/centos-7-firewalld-remove-direct-rule

But iptraf-ng still lists: [screenshot omitted]

Isn't PREROUTING (resp. POSTROUTING) done before (resp. after) ip_forwarding from the internal to the external interface?
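For what it's worth, direct rules live outside the zone structure, so they only show up through the direct interface; a quick sketch for confirming they were actually stored and loaded:

firewall-cmd --direct --get-all-rules                 # runtime view
firewall-cmd --permanent --direct --get-all-rules     # permanent view (applied on --reload)
iptables -t nat -nvL POSTROUTING --line-numbers       # what the kernel ended up with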
Jochen Gebsattel (163 rep)
Sep 9, 2019, 02:34 PM • Last activity: Jan 18, 2025, 04:46 PM
0 votes
0 answers
78 views
How to masquerade from an interface to another on selected destination addresses?
I have a WireGuard VPN running to access my local network from outside. I used to use nft, but on this server I use firewalld. Here is my nft command to allow masquerading:

PostUp = nft add rule inet POSTROUTING_%i postrouting ip daddr 192.168.1.1/24 masquerade

How can I do that with firewalld? The main interface and the VPN interface are both in the public zone.
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp2s0
  sources: 
  services: cockpit dhcpv6-client nfs samba ssh vnc-server
  ports: 80/tcp 443/tcp
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
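A hedged sketch of expressing the same destination-scoped masquerading as a firewalld rich rule on the public zone (the subnet below mirrors the nft rule above; adjust to taste):

firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 destination address=192.168.1.0/24 masquerade'
firewall-cmd --reload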
無名前 (729 rep)
Jan 10, 2025, 09:29 AM
0 votes
0 answers
439 views
why is firewalld not processing rich rules
Using this configuration:
$ sudo firewall-cmd --list-all --zone=myzone
myzone (active)
  target: default
  icmp-block-inversion: no
  interfaces:
  sources: 192.168.0.10/32
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
    rule family="ipv4" destination address="0.0.0.0/0" reject
Access to IP address 192.168.0.20 on all ports is being allowed, where it appears it should be rejected according to the rich rule. After changing the zone target to reject, all traffic is now rejected, even with a corresponding rich rule which should be allowing the traffic.
$ sudo firewall-cmd --list-all --zone=myzone
myzone (active)
  target: %%REJECT%%
  icmp-block-inversion: no
  interfaces:
  sources: 192.168.0.10/32
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
    rule family="ipv4" destination address="0.0.0.0/0" allow
Note that I have tried to add 'priority' values above and below 0, but these have not had any effect. Why are my rich-rules being ignored?
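One debugging aid worth sketching here: have firewalld log whatever it rejects or drops, then watch the kernel log while reproducing the connection (this only adds logging and changes no rules):

firewall-cmd --set-log-denied=all
journalctl -k -f | grep -i -e reject -e drop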
StampyCode (101 rep)
Dec 27, 2024, 02:36 PM
0 votes
0 answers
238 views
Why doesn't my forward port work with firewall-cmd?
1. config forward port
firewall-cmd --permanent --add-masquerade
firewall-cmd --permanent --add-forward-port=port=81:proto=tcp:toaddr=127.0.0.1:toport=80
firewall-cmd --reload
Now this is my firewall-cmd configuration: [screenshot omitted]

2. Open a server with: nc -lk 80

3. nc -v 127.0.0.1 81 does not work, but nc -v 127.0.0.1 80 does.
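A likely-relevant detail, sketched rather than asserted: forward-ports are applied in the nat PREROUTING hook, and connections that originate on the same host (such as nc -v 127.0.0.1 81) never pass through PREROUTING, so they cannot match the forward-port. Testing from a second machine at least exercises the rule (note that forwarding to 127.0.0.1 has its own restriction: the kernel normally refuses to route external traffic to loopback unless net.ipv4.conf.all.route_localnet is enabled):

nc -v <server-ip> 81     # <server-ip> is a placeholder for the host's LAN address; run this from another machine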
Yunbin Liu (101 rep)
Dec 12, 2024, 12:44 AM
0 votes
0 answers
45 views
Firewalld allowing direct connections but not connections from load balancer after update
I've just recently upgraded a system from RHEL 8.5 to RHEL 8.10, which also upgraded firewalld from 0.9.3 to 0.9.11. In firewalld we have a postfix zone for port 25 connections, and the only allowed devices in there are 2 direct connections and 2 connections from an F5 load balancer. The direct connections are working, but connections from the load balancer aren't. I've had to turn firewalld off for now so that things still function, but I obviously want to resolve this so we can use it again. The error received on the load balancer is "unreachable - admin prohibited filter".
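A hedged sketch of the kind of check and change that usually matters here: confirm that the load balancer's source addresses are still attached to the intended zone after the upgrade (the addresses below are placeholders from the documentation range, not the real F5 IPs):

firewall-cmd --zone=postfix --list-sources
firewall-cmd --permanent --zone=postfix --add-source=192.0.2.10
firewall-cmd --permanent --zone=postfix --add-source=192.0.2.11
firewall-cmd --reload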
dazedandconfused (161 rep)
Dec 6, 2024, 09:20 AM
15 votes
1 answer
27927 views
How do I get a list of the ports which belong to preconfigured firewall-cmd services?
I want to open the following ports in my CentOS 7 firewall:

UDP 137 (NetBIOS Name Service)
UDP 138 (NetBIOS Datagram Service)
TCP 139 (NetBIOS Session Service)
TCP 445 (SMB)

I can guess that the samba service includes TCP 445, but I don't know if the other ports have a preconfigured service name. I can list supported services with:

$ firewall-cmd --get-services

But this doesn't tell me which ports are configured for the services. Is there a way to list what ports belong to these services so that I can grep for the one that I need?
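For the record, the port list behind each predefined service can be queried directly or read from the service definitions on disk; a short sketch:

firewall-cmd --info-service=samba                  # prints the ports/protocols that service opens
grep -l '445' /usr/lib/firewalld/services/*.xml    # find which service definitions mention a given port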
Zhro (2821 rep)
Dec 5, 2018, 09:37 AM • Last activity: Nov 22, 2024, 06:06 AM
4 votes
2 answers
5492 views
Confused about the message "No route to host" when blocked by firewalld
Debugging a software problem, I detected a state where the attempt to make a TCP connection resulted in a "No route to host" error message. This was especially confusing as ping had no such problem:

~~~lang-sh
# netcat -v 172.20.2.24 5565
netcat: connect to 172.20.2.24 port 5565 (tcp) failed: No route to host
# ping 172.20.2.24
PING 172.20.2.24 (172.20.2.24) 56(84) bytes of data.
64 bytes from 172.20.2.24: icmp_seq=1 ttl=63 time=0.466 ms
64 bytes from 172.20.2.24: icmp_seq=2 ttl=63 time=0.470 ms
^C
--- 172.20.2.24 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1020ms
rtt min/avg/max/mdev = 0.466/0.468/0.470/0.002 ms
~~~

In the end it turned out that firewalld on the target host blocked the port (i.e. did not allow it). However, from my memory I thought that blocked traffic would either

- vanish in a "black hole", eventually causing connection timeouts, or
- trigger an error like "connection refused" or "connection reset"

but not "no route to host". So I wonder: was there a recent change, or is it simply a bug somewhere (an incorrect error message being created)? The system in question is SLES 15 SP6 on x86_64, running Linux kernel 6.4.0-150600.23.25-default and firewalld-2.0.1-150600.3.2.1.noarch. Also, between initiator and target there is one gateway host (just in case that might change the error code).

Further Thoughts
================

From the answers so far, the source of the error code seems to be ICMP (Internet Control Message Protocol) as defined in RFC 792. For "type=3" (Destination Unreachable) there are *four* different "codes" related to this issue:

1. 0: "net unreachable"
2. 1: "host unreachable"
3. 2: "protocol unreachable"
4. 3: "port unreachable"

So I'd suggest that "code=3" would be the correct one, but "No route to host" seems to suggest that "code=1" is being used. See also "Destination unreachable" in "Internet Control Message Protocol" (Wikipedia).
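A short illustration of why the message reads that way: firewalld's REJECT rules answer with an ICMP destination-unreachable of the administratively-prohibited variety (host-prohibited on the iptables backend, admin-prohibited on nftables), and Linux maps both of those ICMP codes to EHOSTUNREACH, which strerror() renders as "No route to host". It can be observed on the wire with something like:

tcpdump -ni any 'icmp and host 172.20.2.24'     # the reject shows up as an ICMP "admin prohibited" unreachable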
U. Windl (1715 rep)
Nov 21, 2024, 09:32 AM • Last activity: Nov 21, 2024, 09:25 PM
7 votes
3 answers
1523 views
Which takes precedence: /etc/hosts.allow or firewalld?
On a RHEL 7 server, /etc/hosts.allow has a number of IP addresses with full access. In the firewall (confirmed with firewall-cmd) there are no specific sources defined, and the default zone allows certain ports and services. Which takes precedence? Or, for a specific example, if an IP address listed in /etc/hosts.allow tries to connect to the server using a port/service not allowed by the firewall rules, could it connect?
Jon Pennycook (73 rep)
Jul 20, 2022, 08:53 AM • Last activity: Nov 21, 2024, 04:55 PM
-1 votes
1 answer
958 views
Bash Script fails to brace-expand list of services from Firewalld
I am attempting to clear out all existing services configured in firewalld via a bash script.

# produces {cockpit,dhcpv6-client,ssh} as an example
local EXISTING_SERVICES="{$(firewall-cmd --permanent --list-service | sed -e 's/ /,/g')}"

# firewall-cmd --permanent --remove-service={cockpit,dhcpv6-client,ssh}
firewall-cmd --permanent --remove-service="${EXISTING_SERVICES}"

When this is run, firewall-cmd returns:

Warning: NOT_ENABLED: {cockpit,dhcpv6-client,ssh}
success

The problem seems to be that firewall-cmd interprets the list of services to disable as a single service name instead of a list. When I run the command manually from the shell, the exact same (copy/pasted) command works as expected. Example script to replicate:

EXISTING_SERVICES="{$(firewall-cmd --permanent --list-service | sed -e 's/ /,/g')}"
echo "firewall-cmd --permanent --remove-service=${EXISTING_SERVICES}"
firewall-cmd --permanent --remove-service="${EXISTING_SERVICES}"

Results: [screenshot omitted]

What is the difference between running this via a script and via direct shell commands?

----

Update: I tried running the script with set -x as suggested by @fra-san, which produced differing results when run from the script and from the shell: [screenshots omitted]

It seems the shell (and/or firewalld) behaves differently when run interactively and expands the list of services into 3 separate --remove-service= flags. This is very unexpected behavior.
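The underlying behaviour can be illustrated briefly: brace expansion happens before parameter expansion, so a "{a,b,c}" string stored in a variable is passed to firewall-cmd as one literal argument and is never expanded. A sketch that avoids relying on brace expansion at all:

# read the service names into an array and remove them one at a time
mapfile -t services < <(firewall-cmd --permanent --list-services | tr ' ' '\n')
for svc in "${services[@]}"; do
    firewall-cmd --permanent --remove-service="$svc"
done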
SnakeDoc (490 rep)
May 12, 2021, 06:49 PM • Last activity: Nov 21, 2024, 11:53 AM
Showing page 1 of 20 total questions