
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
2238 views
resolvectl ignores new VPN network adapter
I have a strange problem when I connect to a company VPN with the FortiClient application. At first I did not know what was wrong; after spending some time on it, I figured out that DNS is not working as it should. Unfortunately, I have no idea whose fault it is. It may be FortiClient, systemd-resolved, or something else. I am using Ubuntu 22.04, which is not an official release yet, but I doubt it will get any better before the official release in a week or two.

This is the output from resolvectl before the VPN is established:

username@hostname:~$ resolvectl
Global
  Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (enp2s0)
  Current Scopes: none
  Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 3 (wlp1s0)
  Current Scopes: DNS
  Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  Current DNS Server: 192.168.1.1
  DNS Servers: 192.168.1.1 2a00:ee0:d::13 2a00:ee0:e::13
  DNS Domain: --

After the VPN is established, resolvectl reports an additional link called vpn:

username@hostname:~$ resolvectl
Global
  Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (enp2s0)
  Current Scopes: none
  Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 3 (wlp1s0)
  Current Scopes: DNS
  Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  Current DNS Server: 172.20.1.21
  DNS Servers: 172.20.1.16 172.20.1.21 2a00:ee0:d::13 2a00:ee0:e::13
  DNS Domain: company.com

Link 5 (vpn)
  Current Scopes: none
  Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

As you can see, additional DNS servers are added to Link 3, which should help me resolve internal names when connected to the VPN. The strange thing is that when I run

username@hostname:~$ resolvectl query name.company.com
name.company.com: resolve call failed: 'name.company.com' not found

I do not get anything. If I try with nslookup like this

username@hostname:~$ nslookup
> server 172.20.1.16
Default server: 172.20.1.16
Address: 171.20.1.16#53
> name.company.com
Server:  172.20.1.16
Address: 172.20.1.16#53

Name:    name.company.com
Address: 172.20.38.251

I get the correct answer. Since this was strange, I traced the network traffic to see what nslookup does differently from resolvectl query. It turned out that nslookup uses the VPN-assigned address as the source IP when querying the DNS server, while resolvectl query uses every other address as the source IP except the one assigned by the VPN. Because of that, I guess the DNS server has no route to send the answer back to my computer correctly, or the DNS queries may not even reach the newly added DNS servers. As a result, none of the programs I need can resolve names correctly, and I cannot connect anywhere within the VPN using a domain name.

Does anybody have an idea how to make resolvectl realize there is a newly assigned VPN address and that it should use it as the source IP? Should FortiClient do some additional configuration when establishing the connection? Probably not. I tried restarting systemd-resolved after the VPN is established, but it does not help. Should I restart some other service? Which one?

----------

Update: I have checked how DNS is set up in the network settings, and it is correct. Without the VPN, the network interface wlp1s0 shows:

username@hostname:~$ nmcli device show wlp1s0 | grep DNS
IP4.DNS: 192.168.1.1
IP6.DNS: 2a00:ee0:d::13
IP6.DNS: 2a00:ee0:e::13

After the VPN is connected:

username@hostname:~$ nmcli device show wlp1s0 | grep DNS
IP4.DNS: 172.20.1.16
IP4.DNS: 172.20.1.21

username@hostname:~$ nmcli device show vpn | grep DNS
IP4.DNS: 172.20.1.16
IP4.DNS: 172.20.1.21
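Since the new DNS servers end up attached to wlp1s0 rather than to the vpn link, one workaround worth sketching (not a confirmed fix; it assumes systemd-resolved is managing the links as shown above) is to attach the VPN's servers and a routing domain to the vpn link manually and retry the query:

resolvectl dns vpn 172.20.1.16 172.20.1.21
resolvectl domain vpn '~company.com'
resolvectl query name.company.com

With a routing domain on the vpn link, systemd-resolved should send queries for company.com out through that link, which should also change the source address it uses.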
nobody (1820 rep)
Apr 11, 2022, 01:46 PM • Last activity: Aug 5, 2025, 04:01 AM
0 votes
2 answers
14813 views
How to use DNS-over-TLS with BIND9 forwarders
BIND9 v9.18 improves support for DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). However, while the [docs](https://bind9.readthedocs.io/en/v9_18_11/) explain how to use TLS for the server part, they do not reveal how to enable DNS-over-TLS for query forwarding. Does BIND9 v9.18 support it? How does the config snippet need to be tweaked to use DoT for the forwarders?
options {
        […]
        forwarders {
                // Forward to Cloudflare public DNS resolver
                1.1.1.1;
                1.0.0.1;
        };
        […]
}
Simply adding port 853 and expecting some magic to happen does not seem to be enough.
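For reference, a minimal sketch of what the combination might look like in BIND 9.18: a named tls block referenced from the forwarders list (the block name cloudflare-dot and the hostname are illustrative; compare the working config in the next question below):

tls cloudflare-dot {
        remote-hostname "one.one.one.one";
};

options {
        […]
        forwarders {
                // Forward to Cloudflare public DNS resolver over DoT
                1.1.1.1 port 853 tls cloudflare-dot;
                1.0.0.1 port 853 tls cloudflare-dot;
        };
        […]
}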
Stephan (103 rep)
Feb 13, 2023, 11:46 AM • Last activity: Jul 29, 2025, 12:25 PM
0 votes
0 answers
27 views
ISC Bind9 with DNS over TLS (DOT) fails when strict tls auth is enabled
**Working:** I installed and set up the official Bind9 package to test DNS forward zones based on source IP/subnets, which Unbound doesn't support. I properly set up NAT forwards, changed the listening ports on Bind9, and configured it for DNS over TLS (see below). Everything works properly and DNS requests are forwarded over TLS, until I uncomment the remote-hostname and/or ca-file options. Without them, as per the Bind9 docs, encryption is provided but not TLS authentication. If I enable those options to enforce strict TLS authentication, clients cannot resolve DNS entries and I get the errors below in the logs:
Jul 29 00:50:29	named	92197	query-errors: debug 4: fetch completed for readaloud.googleapis.com.intranet/A in 0.056869: TLS peer certificate verification failed/success [domain:.,referral:0,restart:1,qrysent:0,timeout:0,lame:0,quota:0,neterr:0,badresp:0,adberr:0,findfail:0,valfail:0]
Jul 29 00:50:29	named	92197	query-errors: info: client @0x1414c4b10800 10.0.31.62#9512 (readaloud.googleapis.com.intranet): query failed (TLS peer certificate verification failed) for readaloud.googleapis.com.intranet/IN/A at query.c:7836
I tried different ca-file values, but with no success.

**My working Bind9 config (with remote-hostname commented):**
tls cloudflare-tls {
//    ca-file "/usr/local/share/certs/ca-root-nss.crt";
//    ca-file "/usr/local/etc/ssl/cert.pem";
//    ca-file "/usr/share/certs/trusted/IdenTrust_Commercial_Root_CA_1.pem";
//    remote-hostname "one.one.one.one";
    prefer-server-ciphers yes;
};

options {
    forwarders {
        1.1.1.1 port 853 tls cloudflare-tls;
        1.0.0.1 port 853 tls cloudflare-tls;
        2606:4700:4700::1111 port 853 tls "cloudflare-tls";
        2606:4700:4700::1001 port 853 tls "cloudflare-tls";
    };
};
* **Bind9 Docs:** [https://bind9.readthedocs.io/en/v9.18.14/reference.html#namedconf-statement-prefer-server-ciphers](https://bind9.readthedocs.io/en/v9.18.14/reference.html#namedconf-statement-prefer-server-ciphers)

> Strict TLS provides server authentication via a pre-configured hostname for outgoing connections. This mechanism offers both channel confidentiality and channel authentication (of the server). In order to achieve Strict TLS, one needs to use remote-hostname and, optionally, ca-file options in the tls statements used for establishing outgoing connections (e.g. the ones used to download zone from primaries via TLS). Providing any of the mentioned options will enable server authentication. If remote-hostname is provided but ca-file is missed, then the platform-specific certificate authority certificates are used for authentication. The set roughly corresponds to the one used by WEB-browsers to authenticate HTTPS hosts. On the other hand, if ca-file is provided but remote-hostname is missing, then the remote side's IP address is used instead.

Any idea why enabling TLS authentication fails?
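Going by the documentation quoted above, strict TLS would be expressed roughly as in the sketch below: remote-hostname set to the name in the server's certificate, with ca-file optional (the CA bundle path is just one of the candidates already listed in the question):

tls cloudflare-tls {
    remote-hostname "one.one.one.one";
    ca-file "/usr/local/share/certs/ca-root-nss.crt";
    prefer-server-ciphers yes;
};

If verification still fails with that in place, the usual suspects are a CA bundle that does not contain the issuing root, or a hostname that does not match the certificate's SAN entries.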
user2565854 (1 rep)
Jul 29, 2025, 08:05 AM • Last activity: Jul 29, 2025, 08:29 AM
1 vote
2 answers
6434 views
How to proxy nmap and dns resolution of nmap
How can I use nmap, including its DNS resolution, over a proxy? I tried proxychains, but DNS resolution doesn't work with it; it's a known bug, as I read on some forums. It works well without the dns_proxy feature in the proxychains config, but I need to proxy the DNS resolution requests too.

sudo proxychains nmap -T4 -sV -Pn -A --reason -v scanme.nmap.org

I tried proxychains4 (proxychains-ng), but with nmap it sends all the packets synchronously, so a scan of a single host takes 30 minutes or even longer. So it's not an option, although it does work.

sudo proxychains4 nmap -T4 -sV -Pn -A --reason -v scanme.nmap.org

I also tried nmap's built-in proxy option:

sudo nmap --proxy socks4://127.0.0.1:9050 -T4 -sV -Pn -A --reason -v scanme.nmap.org

But does this send the DNS resolution requests over the Tor proxy at 127.0.0.1:9050, or only the scan itself? It seems it doesn't. What is the solution?
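For context, the proxychains side of DNS proxying is controlled by the proxy_dns directive; a rough sketch of /etc/proxychains.conf for the Tor SOCKS proxy used in the question (whether nmap's raw-socket scans then behave well is a separate matter):

# /etc/proxychains.conf (excerpt)
proxy_dns
[ProxyList]
socks4  127.0.0.1 9050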
Sebastian Rockefeller (123 rep)
Apr 10, 2016, 06:15 PM • Last activity: Jul 26, 2025, 01:08 AM
0 votes
1 answer
2316 views
How to configure a BIND 9 name server as a slave for a zone that exists in multiple views?
I have a Bind9 hidden primary configured with views, and I need a secondary to transfer all the views of the same zone. Example, on the **primary**:

view "dmz-view" {
    match-clients { server-dmz; };
    allow-transfer { transfer-dmz; };
    recursion yes;
    allow-query-cache { server-dmz; };
    zone "example.com" IN {
        type master;
        file "/var/cache/bind/db.dmz.example.com";
        notify yes;
    };
};

view "untrust-view" {
    allow-query { any; };
    allow-transfer { transfer-untrust; };
    recursion no;
    zone "example.com" IN {
        type master;
        file "/var/cache/bind/db.untrust.example.com";
        notify yes;
    };
};

Now, my problem is that if I put the secondary's IP in both ACLs (transfer-dmz and transfer-untrust), it will match the first view and transfer only that one. I've read examples 3 and 4 in https://kb.isc.org/docs/aa-00851 but they don't seem to fit my needs (or am I misunderstanding?). I also read https://flylib.com/books/en/2.684.1/setting_up_a_slave_name_server_for_a_zone_in_multiple_views.html but since it's quite old I suppose it's outdated. Any cookbook or advice?
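For reference, the pattern in the ISC KB article cited above distinguishes the views by TSIG key rather than by source address alone, so a single secondary can pull both copies of the zone. A rough sketch, with key names and secrets as placeholders:

key "dmz-xfer"     { algorithm hmac-sha256; secret "<base64-secret>"; };
key "untrust-xfer" { algorithm hmac-sha256; secret "<base64-secret>"; };

view "dmz-view" {
    match-clients { !key untrust-xfer; key dmz-xfer; server-dmz; };
    ...
};

view "untrust-view" {
    match-clients { !key dmz-xfer; key untrust-xfer; any; };
    ...
};

The secondary then requests the zone twice, each time signing with the key of the view it wants and storing the result in a different zone file.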
Matteo Fabbroni (16 rep)
Feb 23, 2022, 08:21 AM • Last activity: Jul 22, 2025, 08:05 PM
0 votes
1 answer
1972 views
How to Create a PTR Record for AWS Mailserver's Elastic IP Address
I configured an SMTP server (Postfix) on an AWS instance. However, as a defense against spam, most well-managed email servers will reject messages ***sent*** from any host whose IP does not resolve back to the hostname of the sending server. When I sent a test message from the CLI:

mail -s 'TEST Subject' addressOfRecpient@test.com <<< 'Test Message Sent from Postfix Server'

it got rejected by the recipient's mailserver. ***How do I create a PTR record for the Elastic IP assigned to my AWS mailserver?***
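A note on the mechanism rather than a definitive answer: the PTR for an Elastic IP lives on the AWS side, not in your own zone. Historically you asked AWS to set it via the "Request to remove email sending limitations" form (which includes a reverse-DNS field); more recently the EC2 API can attach a reverse DNS name to the EIP directly, roughly like the hedged sketch below (the allocation ID and hostname are placeholders, and the matching forward A record must already exist):

aws ec2 modify-address-attribute \
    --allocation-id eipalloc-0123456789abcdef0 \
    --domain-name mail.example.com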
F1Linux (2744 rep)
Mar 12, 2020, 04:06 PM • Last activity: Jul 19, 2025, 11:06 AM
0 votes
1 answer
4205 views
Setting up a fixed IP wifi hotspot (with no internet) with DHCP and DNS using dnsmasq
I'm having trouble setting up my computer (running Ubuntu 18.04) as a hotspot with a manually fixed IP. I want devices to be able to connect to it via WiFi and to be able to access my website hosted on the computer on port 80. I wanted to set the fixed IP of my computer to 192.168.10.1, so I set up the hotspot like this:
INTERFACE=wlan0 # My wifi card interface
CONNECTION_NAME=testhotspot
MY_IP="192.168.10.1"

sudo nmcli con add type wifi ifname $INTERFACE con-name $CONNECTION_NAME autoconnect yes ssid $CONNECTION_NAME
sudo nmcli con modify $CONNECTION_NAME 802-11-wireless.mode ap ipv4.method manual ipv4.addresses $MY_IP/24 ipv4.gateway $MY_IP
sudo nmcli con modify $CONNECTION_NAME wifi-sec.key-mgmt wpa-psk 
sudo nmcli con modify $CONNECTION_NAME wifi-sec.psk "somepassword"
# do I need to set ipv4.dns?
I then set up dnsmasq (in /etc/dnsmasq) as:
address=/#/127.0.0.1
interface=wlan0
except-interface=lo
listen-address=::1,127.0.0.1,192.168.10.1

# DHCP setup
dhcp-range=192.168.10.100,192.168.10.200,12h # lease out 192.168.10.100-200
dhcp-option=option:router,192.168.10.1
dhcp-option=option:dns-server,192.168.10.1
dhcp-option=option:netmask,255.255.255.0
dhcp-leasefile=/var/lib/misc/dnsmasq.leases
dhcp-authoritative
Start up dnsmasq and the hotspot:
sudo nmcli con up testhotspot
sudo systemctl restart dnsmasq.service
With this setup, I found that when connecting to the wifi hotspot from another computer running Ubuntu (let's call this computer B), I could successfully ping 192.168.1.10 and access my website on 192.168.10.1:80. However, I had issues trying to connect to it using an Android phone, with the connection continuously dropping. I had to change my Android wifi settings to "Static" instead of "DHCP" and specify the DNS as 192.168.10.1 for me to successfully ping 192.168.10.1. So I guessed that I hadn't properly "announced" my DNS/DHCP server to clients?

I tried changing my hotspot settings with nmcli con modify testhotspot ipv4.dns 192.168.10.1. However, this did not solve the issue on my Android device (it stopped dropping the wifi connection, but I still could not ping 192.168.10.1?).

I also noticed that on computer B, while connected both to the wifi of my hotspot server and to an internet-providing router, some public websites (such as this askubuntu site) could not be reached until I turned off the wifi connection to the hotspot server.

What did I do wrong in the setup above?
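Two quick checks that might help narrow this down (generic diagnostics, not a confirmed fix): confirm dnsmasq is really bound to the hotspot address for both DNS and DHCP, and that a query against it answers:

sudo ss -lunp | grep -E ':53|:67'     # dnsmasq should show 192.168.10.1 on both ports
dig @192.168.10.1 example.test        # any name should return 127.0.0.1 given address=/#/127.0.0.1

If the Android client only behaves with static settings, the DHCP side (port 67 on wlan0) is the part to look at first.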
kekpirat (101 rep)
Sep 14, 2022, 02:12 AM • Last activity: Jul 19, 2025, 01:03 AM
1 vote
1 answer
3099 views
nslookup gets SERVFAIL but not in Windows
There's a nameserver 10.92.131.26 on my work VPN, and it appears to get configured on my machine when I connect to our anyconnect VPN server. When I run nslookup server on my Linux workstation, I get a SERVFAIL for it:
;; Got SERVFAIL reply from 10.92.131.26, trying next server
Server:		10.50.177.208
Address:	10.50.177.208#53

** server can't find server: SERVFAIL
But when I open a Windows VM within my workstation and run nslookup, it succeeds with the very same nameserver:
Default Server:  a.company.domain
Address: 10.92.131.26
Why is this?

---

**TMI: Why do I care?** At work, our MFA system applies extra restrictions when I attempt to access certain company websites using my Linux workstation, but I don't experience these restrictions when I boot to Windows, nor when I attempt from a Windows VM within my Linux system. (And I can't satisfy these extra restrictions because I.T. appears not to have planned on anyone actually encountering them legitimately.) I.T. tells me:

> Normally this is due to an issue with the VPN routing to [our] servers... Try it in Google Chrome if it still doesn't work as Firefox sometimes uses its own DNS to resolve addresses so it can cause this error where Chrome will just work.

...And indeed, their assertion seems well founded: in the Windows VM, my connection attempts through Chrome succeed, and my attempts through FF do not. Still, my attempts on my Linux host do not work at all. I wonder if my attempts from Linux will succeed if I can get my Linux machine to use 10.92.131.26 as its nameserver.

---

**Outputs**

**Update:** as requested, here are the outputs of netstat -rn on each machine. They're pretty long, so I'm just linking pastebins: [on Linux](https://pastebin.com/H2GZQhxU), [on Windows](https://pastebin.com/rdhrjWfV)

Here's a tracert 10.92.131.26 from the Windows VM:
Tracing route to 10.92.131.26 over a maximum of 30 hops

  1    29 ms    27 ms    25 ms  192.168.100.1 
  2    35 ms    31 ms    33 ms  173.36.212.117 
  3    35 ms    34 ms    29 ms  50.216.158.108 
  4    41 ms    35 ms    37 ms  10.92.131.26 

Trace complete.
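One way to narrow this down from the Linux side (plain diagnostics, reusing the names and addresses from the question) is to query the two nameservers directly and compare, including over TCP in case UDP responses are being dropped:

dig @10.92.131.26 server
dig @10.92.131.26 server +tcp
dig @10.50.177.208 server

If 10.92.131.26 answers dig but the plain nslookup still fails through it, the difference lies in how the lookup is routed or sourced on the Linux host rather than in the server itself.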
Jellicle (420 rep)
Jan 28, 2022, 05:04 PM • Last activity: Jul 11, 2025, 01:03 AM
3 votes
3 answers
1894 views
Docker dns failure
I launched a Concourse CI worker with Boot2docker on OS X. Docker info:

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:13:28 2016
 OS/Arch:      darwin/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 19:36:04 2016
 OS/Arch:      linux/amd64

When I tried to build a Docker image I had a problem. Build instruction:

- put: docker-registry
  params:
    build: src-develop
    tag: version/version

Build log:

Sending build context to Docker daemon 80.9 kB
Step 1 : FROM python:3.5
Pulling repository docker.io/library/python
Error while pulling image: Get https://index.docker.io/v1/repositories/library/python/images: dial tcp: lookup index.docker.io on 127.0.0.11:53: read udp 127.0.0.1:59668->127.0.0.11:53: read: connection refused

Does anyone have an idea how to solve this problem?
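The failing lookup goes to Docker's embedded resolver at 127.0.0.11 and is refused, so one generic workaround (a sketch, not Concourse-specific, and only applicable on Docker versions that read /etc/docker/daemon.json; older daemons take an equivalent --dns flag) is to give the daemon explicit upstream resolvers:

{
    "dns": ["8.8.8.8", "1.1.1.1"]
}

After editing the file, the Docker daemon needs a restart for the setting to take effect.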
Alexey Kachalov (31 rep)
May 6, 2016, 10:05 PM • Last activity: Jul 10, 2025, 08:37 PM
0 votes
0 answers
12 views
dnsmasq '--read-ethers' and '--address' interaction
I run dnsmasq on a server (specifically OpenWrt) to act as both DHCP and DNS. OpenWrt DHCP configuration /etc/config/dhcp:

option readethers '1'
list address '/my-phone.lan/172.28.79.133'

Which is equivalent to running:

dnsmasq --read-ethers --address='/my-phone.lan/172.28.79.133'

nslookup works and resolves the name to the IP correctly. I set this in /etc/ethers:

00:c7:11:b4:19:1a my-phone.lan

From the dnsmasq manpage:

> **-Z, --read-ethers**
> Read /etc/ethers for information about hosts for the DHCP server. The format of /etc/ethers is a hardware address, followed by either a hostname or dotted-quad IP address. When read by dnsmasq these lines have exactly the same effect as --dhcp-host options containing the same information. /etc/ethers is re-read when dnsmasq receives SIGHUP. IPv6 addresses are NOT read from /etc/ethers.

When my phone connects to the network, it does not receive the DHCP lease 172.28.79.133. But if I don't use dnsmasq --address and instead set it in /etc/hosts:

172.28.79.133 my-phone.lan

it works and my phone does receive the correct DHCP lease. Why is that?
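A plausible reading of the manpage text above (hedged, not verified against the dnsmasq source): --address only shapes DNS answers and does not create a host record the DHCP side can use, so an /etc/ethers line mapping the MAC to a name has nothing to resolve that name against unless the name is also defined in /etc/hosts or similar. Two variants that should both pin the lease:

# /etc/ethers, variant 1: map the MAC straight to the IP
00:c7:11:b4:19:1a 172.28.79.133

# /etc/ethers, variant 2: keep the name, and define it in /etc/hosts
00:c7:11:b4:19:1a my-phone.lan
# /etc/hosts
172.28.79.133 my-phone.lan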
Livy (455 rep)
Jul 10, 2025, 08:56 AM • Last activity: Jul 10, 2025, 09:03 AM
0 votes
0 answers
68 views
LXC Container on Proxmox Can’t Resolve DNS — Outbound UDP Works, But No Replies
I'm trying to configure a reverse proxy on an LXC container in Proxmox, however the container is not able to resolve DNS. The Proxmox node has no issue with DNS, and both the node and the container are able to ping outbound. The container specifically is able to make outbound DNS requests but just receives no response.

As a note, I have some restrictions being in an apartment on my apartment's internet. Unfortunately I do not have access to my primary router configuration and my homelab is behind a secondary bridged router, so I've had to make some workarounds. Since I don't have access to the main apartment router and can't forward ports or run custom DNS there, I needed a local solution to resolve DNS inside my container.

Initially, I tried just pointing the container to public nameservers (like 1.1.1.1 and 8.8.8.8), but DNS responses never made it back — likely because of how my network handles outbound NAT from bridged containers. To work around this, I enabled SNAT on the Proxmox node to ensure that all outgoing traffic from the container gets rewritten with the node's IP. This should've made return traffic more reliable.

I also set up dnsmasq on the Proxmox node as a local DNS forwarder. The idea was that the container would send DNS requests to the node (10.124.16.3), which would forward them to public resolvers and relay the responses back. This avoids having to deal with external DNS servers rejecting packets from unexpected source IPs. I've made sure dnsmasq is working by running ss -lunp | grep 53 and got the following:

udp UNCONN 0 0 10.124.16.3:53 0.0.0.0:* users:(("dnsmasq",pid=xxx,fd=x))

Despite this, the container still fails to resolve DNS — even when dnsmasq is working correctly and requests are visible in tcpdump. 10.124.16.3 is the Proxmox node and 10.124.16.4 is the container.
Here's the node network configuration (/etc/network/interfaces):

auto lo
iface lo inet loopback

iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.124.16.3/22
    gateway 10.124.16.1
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0

   post-up iptables -t nat -A POSTROUTING -s 10.124.16.0/22 -o vmbr0 -j SNAT --to-source 10.124.16.3

   post-down iptables -t nat -D POSTROUTING -s 10.124.16.0/22 -o vmbr0 -j SNAT --to-source 10.124.16.3


Here's the container config (/etc/pve/lxc/.conf):

arch: amd64
cores: 1
memory: 256
swap: 256
hostname: cf-tunnel
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.124.16.1,ip=10.124.16.4/22,type=veth
unprivileged: 1
features: nesting=1

And in the container, /etc/resolv.conf contains:

    nameserver 1.1.1.1
    nameserver 8.8.8.8
When I run tcpdump -ni vmbr0 port 53 on the node and run dig google.com in the container (I've also tried specific DNS servers with @1.1.1.1), here's the output I get in tcpdump:
root@geeksquad:~# tcpdump -ni vmbr0 host 10.124.16.4 and port 53
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on vmbr0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:13:56.887877 IP 10.124.16.4.38419 > 1.1.1.1.53: 19663+ [1au] A? google.com. (51)
23:14:00.280550 IP 10.124.16.4.52162 > 10.124.16.3.53: 19721+ [1au] TXT? protocol-v2.argotunnel.com. (55)
23:14:01.892819 IP 10.124.16.4.39216 > 1.1.1.1.53: 19663+ [1au] A? google.com. (51)
23:14:05.307826 IP 10.124.16.4.44721 > 10.124.16.3.53: 13780+ [1au] SRV? _v2-origintunneld._tcp.argotunnel.com. (66)
23:14:06.898125 IP 10.124.16.4.59178 > 1.1.1.1.53: 19663+ [1au] A? google.com. (51)
23:14:10.308108 IP 10.124.16.4.48477 > 10.124.16.3.53: 45090+ [1au] SRV? _v2-origintunneld._tcp.argotunnel.com. (66)
23:14:25.321538 IP 10.124.16.4.56031 > 10.124.16.3.53: 17689+ [1au] SRV? _v2-origintunneld._tcp.argotunnel.com. (66)
Also, checking journalctl -u dnsmasq I get this:
Jul 07 23:56:31 geeksquad systemd: Started dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server.
Jul 07 23:57:22 geeksquad systemd: Stopping dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server...
Jul 07 23:57:22 geeksquad dnsmasq: exiting on receipt of SIGTERM
Jul 07 23:57:22 geeksquad systemd: dnsmasq.service: Deactivated successfully.
Jul 07 23:57:22 geeksquad systemd: Stopped dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server.
Jul 07 23:57:22 geeksquad systemd: Starting dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server...
Jul 07 23:57:22 geeksquad dnsmasq: started, version 2.90 cachesize 150
Jul 07 23:57:22 geeksquad dnsmasq: DNS service limited to local subnets
Jul 07 23:57:22 geeksquad dnsmasq: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset nftset auth cr>
Jul 07 23:57:22 geeksquad dnsmasq: reading /etc/resolv.conf
Jul 07 23:57:22 geeksquad dnsmasq: using nameserver 1.1.1.1#53
Jul 07 23:57:22 geeksquad dnsmasq: using nameserver 8.8.8.8#53
Jul 07 23:57:22 geeksquad dnsmasq: read /etc/hosts - 11 names
Jul 07 23:57:22 geeksquad systemd: Started dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server.
Jul 07 23:57:33 geeksquad dnsmasq: reading /etc/resolv.conf
Jul 07 23:57:33 geeksquad dnsmasq: ignoring nameserver 10.124.16.3 - local interface
Jul 07 23:59:23 geeksquad dnsmasq: reading /etc/resolv.conf
Jul 07 23:59:23 geeksquad dnsmasq: using nameserver 1.1.1.1#53
Jul 07 23:59:23 geeksquad dnsmasq: using nameserver 8.8.8.8#53
Any help at all would be appreciated. As far as firewall rules go, I do not believe they are the issue. I set my firewall rules within the Proxmox GUI but have tried all variations of temporarily allowing all traffic in and out, and have also disabled the firewalls entirely as a test, with neither changing the outcome.
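Given that dnsmasq on the node is bound to 10.124.16.3 and logs "DNS service limited to local subnets", one sanity check (a sketch, not a confirmed fix) is to point the container at the node's forwarder instead of the public resolvers and query it directly:

# in the container
echo "nameserver 10.124.16.3" > /etc/resolv.conf
dig google.com @10.124.16.3

If that query shows up in tcpdump but still gets no reply, the next places to look are the iptables/nft counters for the SNAT rule and the Proxmox firewall rules on vmbr0 for DNS replies coming back to 10.124.16.4.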
tkennedy741 (1 rep)
Jul 8, 2025, 06:04 PM • Last activity: Jul 9, 2025, 05:43 AM
5 votes
1 answer
2565 views
Why is my ISP DNS still in resolv.conf after a VPN connection and how can this be fixed?
Ubuntu 15.10, and dns=dnsmasq is commented out in /etc/NetworkManager/NetworkManager.conf. Before I connect to a VPN, /etc/resolv.conf contains:

nameserver 2xx.xx.xx.xx <-- ISP DNS 1
nameserver 2xx.xx.xx.xx <-- ISP DNS 2

After a VPN connection, /etc/resolv.conf contains:

nameserver 1xx.xx.xx.xx <-- VPN DNS 1
nameserver 1xx.xx.xx.xx <-- VPN DNS 2
nameserver 2xx.xx.xx.xx <-- ISP DNS 1

The regular wired connection and the VPN have DNS servers set in NetworkManager with "Automatic (addresses only)". The ISP server shouldn't be there at all. What else can I change? (Removing dns=dnsmasq was one change made to stop split DNS.)
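One thing that sometimes fits this picture (a sketch, assuming NetworkManager and a wired connection profile named "Wired connection 1", which is an assumption, not taken from the question) is to make NetworkManager drop the DHCP-supplied DNS servers on the underlying connection, so only the VPN's servers get written out:

nmcli -f ipv4.dns,ipv4.ignore-auto-dns connection show "Wired connection 1"
nmcli connection modify "Wired connection 1" ipv4.ignore-auto-dns yes
nmcli connection up "Wired connection 1"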
user157600 (51 rep)
Feb 21, 2016, 09:09 PM • Last activity: Jul 8, 2025, 07:05 PM
3 votes
1 answer
1887 views
systemd-resolved change dns cname records
I have a PC with Arch Linux (4.7.2-1-ARCH) on it. The PC uses DHCP to get its IP, but uses a different DNS server that I configured via systemd-resolved. When I use the dig command with a domain that has a CNAME record, the associated A record is missing. If I use dig with the server manually configured, the A record is there. Any ideas why systemd-resolved changes the DNS records? If you need any additional info, let me know.

Here is my network (systemd-networkd) configuration:

[Match]
Name=ens18

[Network]
DNS=10.0.0.18
DHCP=ipv4

[DHCPv4]
UseHostname=false
UseDNS=false

resolv.conf:

# This is a static resolv.conf file for connecting local clients to
# systemd-resolved via its DNS stub listener on 127.0.0.53.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53

dig api.pushbullet.com:

; <<>> DiG 9.10.4-P2 <<>> api.pushbullet.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER

; <<>> DiG 9.10.4-P2 <<>> api.pushbullet.com @10.0.0.18
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33081
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 13, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;api.pushbullet.com.  IN  A

;; ANSWER SECTION:
api.pushbullet.com.  184  IN  CNAME  ghs-svc-https-c573.ghs-ssl.googlehosted.com.
ghs-svc-https-c573.ghs-ssl.googlehosted.com.  282  IN  A  72.14.247.65

;; AUTHORITY SECTION:
.  509182  IN  NS  j.root-servers.net.
.  509182  IN  NS  d.root-servers.net.
.  509182  IN  NS  a.root-servers.net.
.  509182  IN  NS  l.root-servers.net.
.  509182  IN  NS  g.root-servers.net.
.  509182  IN  NS  b.root-servers.net.
.  509182  IN  NS  m.root-servers.net.
.  509182  IN  NS  i.root-servers.net.
.  509182  IN  NS  f.root-servers.net.
.  509182  IN  NS  h.root-servers.net.
.  509182  IN  NS  e.root-servers.net.
.  509182  IN  NS  k.root-servers.net.
.  509182  IN  NS  c.root-servers.net.

;; Query time: 0 msec
;; SERVER: 10.0.0.18#53(10.0.0.18)
;; WHEN: Son Sep 11 01:17:59 CEST 2016
;; MSG SIZE  rcvd: 328
Stoffl (131 rep)
Sep 10, 2016, 11:44 PM • Last activity: Jul 6, 2025, 02:01 PM
1 vote
2 answers
1335 views
How can I prevent Tailscale from overwriting /etc/resolv.conf on Linux?
I'm using Tailscale on a Linux system, but I'm running into an issue with Tailscale's "Magic DNS" feature, which is overwriting my /etc/resolv.conf file and preventing server name resolution.

Here's my setup: I have Tailscale installed in a directory where the tailscale and tailscaled binaries are located. I start the daemon using a shell script after each boot, which looks like this:

/data/tailscale/tailscaled -no-logs-no-support -statedir /data/tailscale/state

According to the documentation, Magic DNS can be disabled by running

tailscale up --accept-dns=false

which would prevent /etc/resolv.conf from being modified. Since tailscaled is already starting up and enabling Magic DNS (without my explicitly running tailscale up), I'm not sure how to apply the --accept-dns=false setting before the daemon has a chance to modify /etc/resolv.conf.

Is there a way to configure Tailscale to disable Magic DNS from the start, or should I modify my startup script to include tailscale up right after starting tailscaled? Any advice on the correct sequence or setup to prevent /etc/resolv.conf from being overwritten would be greatly appreciated. Thank you!
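Since the question already points at running tailscale up after the daemon, a sketch of how the startup script might be arranged (paths are the ones from the question; the sleep is a crude wait for the daemon socket, and the exact delay is an assumption):

#!/bin/sh
/data/tailscale/tailscaled -no-logs-no-support -statedir /data/tailscale/state &
sleep 5   # crude: wait for tailscaled to be ready
/data/tailscale/tailscale up --accept-dns=false

The --accept-dns=false preference is kept in the daemon's state, so once it has been applied, subsequent boots should leave /etc/resolv.conf alone.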
Norbert (131 rep)
Oct 31, 2024, 07:25 AM • Last activity: Jul 6, 2025, 03:17 AM
2 votes
4 answers
15874 views
Unable to "yum install" on Oracle Linux 7 machine
I get the following error:

Loaded plugins: langpacks, ulninfo
http://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: yum.oracle.com; Unknown error"
Trying other mirror.

failure: repodata/repomd.xml from ol7_latest: [Errno 256] No more mirrors to try.
http://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: yum.oracle.com; Unknown error"

Not sure what exactly this means. Can someone please give me pointers?
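The message is a plain name-resolution failure for yum.oracle.com, so the first checks are generic resolver diagnostics rather than anything yum-specific:

cat /etc/resolv.conf          # is at least one reachable nameserver configured?
getent hosts yum.oracle.com   # uses the same resolver path as yum/curl
ping -c1 8.8.8.8              # is there IP connectivity at all?

If getent fails but pinging an IP works, fixing the nameserver entries (or the DHCP/NetworkManager source that writes them) is the actual task; yum will then follow.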
kskp (187 rep)
Apr 27, 2017, 06:36 PM • Last activity: Jul 2, 2025, 02:01 AM
4 votes
2 answers
438 views
Ubuntu 25.04 not using DHCP DNS server
A strange issue with split DNS has been annoying me for ages. DHCP DNS points to my AdGuard (primary) and my home router (secondary). Both have DNS rewrites for my local home domain servers pointing to the LAN IP. This works perfectly for all devices except my Ubuntu laptop, which randomly decides to find the external DNS entries for those services; these point to the external interface of my router and fail (I need the external entries for LetsEncrypt). If I statically set my DNS to the right DNS server it does the same thing: flush the cache and an nslookup works the first time you run it, but the second time it's switched back. Digging into it has left me in loops and rabbit holes, so I figured I'd ask if anyone else can help me make sense of it. If I edit /etc/resolv.conf from 127.0.0.53 to my DNS server this is fine... until I'm on another wi-fi network.

nmcli DNS configuration:

servers: 192.168.1.85 192.168.1.1
domains: localdomain
interface: wlp195s0

nslookup ha.test.co.uk
Server:   127.0.0.53
Address:  127.0.0.53#53

Non-authoritative answer:
ha.test.co.uk  canonical name = fake.test.co.uk.
Name:  fake.test.co.uk
Address: 199.199.199.199

nslookup ha.test.co.uk 192.168.1.85
Server:   192.168.1.85
Address:  192.168.1.85#53

Non-authoritative answer:
Name:  ha.test.co.uk
Address: 192.168.1.85

Running resolvectl status:

Link 2 (wlp195s0)
  Current Scopes: DNS
  Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  Current DNS Server: 192.168.1.1
  DNS Servers: 192.168.1.85 192.168.1.1
  DNS Domain: localdomain
  Default Route: yes

So it seems that for whatever reason my machine has decided to use the router's DNS service, which is my backup.

nslookup ha.test.co.uk 192.168.1.1
Server:   192.168.1.1
Address:  192.168.1.1#53

Name:  ha.test.co.uk
Address: 192.168.1.85
ha.test.co.uk  canonical name = fake.test.co.uk.

The router has both the internal and external records (this is by default; I've got internal rewrites on there for the local LAN), and Ubuntu is using the secondary DNS server, then from this using the secondary DNS entries... confusing! I've read around this as much as I can and looked at disabling the loopback resolver, but that broke DNS totally. Anyone any ideas?
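One pattern that can produce exactly this flip-flop is systemd-resolved treating both link DNS servers as equivalent and switching to 192.168.1.1 after a timeout or failure, then staying there. A sketch of pinning the internal domain to the AdGuard server with a routing domain (the domain spelling reuses the question's redacted example):

resolvectl dns wlp195s0 192.168.1.85
resolvectl domain wlp195s0 '~test.co.uk' localdomain
resolvectl flush-caches

This is a hedged suggestion rather than a diagnosis; per-link resolvectl settings are reset when NetworkManager reconfigures the link, so a persistent fix would go into the connection profile or a systemd-resolved drop-in.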
Michael Kennedy (41 rep)
Jun 28, 2025, 05:14 PM • Last activity: Jun 29, 2025, 10:54 AM
0 votes
2 answers
8113 views
how to force renewal of dns cache
I am using a dynamic DNS service which I update via the command line. My problem is that, after updating the IP, Linux still uses the old IP when trying to access the DynDNS address. How do I force Debian to request updated DNS info when I am using ping or Nmap on the dynamic DNS address?
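For context: a stock Debian usually has no system-wide DNS cache unless something like systemd-resolved, nscd, or dnsmasq is installed, so a stale answer typically comes from one of those or from an upstream resolver still serving the old record within its TTL. If a local cache is in play, flushing looks like this (run only the line matching what is installed; the hostname is a placeholder):

resolvectl flush-caches            # systemd-resolved
systemctl restart nscd             # nscd, if present
dig +trace myname.dyndns.example   # bypasses caches and asks the authoritative servers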
p3ppi (1 rep)
Aug 14, 2022, 05:15 PM • Last activity: Jun 21, 2025, 04:06 AM
1 vote
1 answer
4841 views
How to clear DNS cache on Fedora or any other Linux distro
I've just changed the hosting for my domain, and the name has propagated (24 hours have passed); I see the new page (without SSL, because I haven't added it on the new hosting yet) on my Android phone. But when I open the page in Chromium on Fedora I see the old redirect to https. How can I flush/clear my local DNS so I'll see the new page and can do something with the new site? My phone and my laptop use the same WiFi, so it's not a cache in the router. In this question https://unix.stackexchange.com/questions/387292/how-to-flush-the-dns-cache-in-debian the first answer doesn't work and the second is for a server that has Bind; I don't have Bind, it's not a server. My /etc/resolv.conf looks like this:

# Generated by NetworkManager
nameserver 91.239.248.21
nameserver 8.8.4.4
nameserver fe80::1%wlp3s0
jcubic (10310 rep)
Aug 7, 2019, 11:02 AM • Last activity: Jun 12, 2025, 11:07 PM
1 vote
1 answer
59 views
Which IPv6 address should I use for LAN name resolution?
I set up an OPNsense firewall that runs a DHCP server for IPv4 assignment in my LAN. Furthermore, as my ISP provides me with IPv6 too, my LAN clients also configure a SLAAC address with IPv6 prefix delegation. This works fine, too.

Now I am running some servers in my LAN, for example Proxmox. Of course the servers have fixed IP addresses, but I still want to be able to address them by name. In the DNS service of the firewall, I can add manual static entries. For example, for my Proxmox host, my IPv4 static entry looks like this:

pve0 A 192.168.1.10

Now I would like pve0 to also resolve to an IPv6 address, so each client can choose on its own which protocol to use. I can add a static IPv6 entry too, but I am unsure which address to use. Should I use the link-local address, for example

pve0 AAAA fe80::3eec:efff:fea1:1515

or should I use the one with the delegated prefix (some bits of the address redacted)?

pve0 AAAA 2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:6f8d

Which is the clean and correct way to do it?

And one bonus question, since I see that lots of people are struggling with this: how can I achieve name resolution in the LAN also for dynamically allocated IPv6 addresses? For IPv4 it works, as DHCP automatically adds a DNS entry, but obviously for IPv6 SLAAC addresses this is not possible. Still, I noticed that some Windows 10 clients are actually able to resolve each other's names, so in some way it must be possible, but I don't understand how.
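A short note on the trade-off (general IPv6 behaviour, not OPNsense-specific): a link-local address needs a zone/interface index (e.g. %eth0) to be usable, and an AAAA record has no way to carry that, so DNS entries normally use a routable address, i.e. the GUA from the delegated prefix (or a ULA if prefix changes are a concern). As zone-file lines:

; usable from any host on the LAN (routable address)
pve0  AAAA  2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:6f8d

; a link-local AAAA would need a zone index such as %eth0,
; which DNS cannot express, so clients generally cannot use it
; pve0  AAAA  fe80::3eec:efff:fea1:1515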
T. Pluess (626 rep)
Jun 8, 2025, 11:36 AM • Last activity: Jun 8, 2025, 04:01 PM
0 votes
2 answers
2139 views
DNS lookup fails on first time, but second lookup works
I have a lot of Raspberry Pi systems which are located in different networks over which I have no control. Recently I noticed that in one network, *sometimes but not always*, the first DNS lookup fails but the second one is able to resolve the name:

pi@pi:~ $ ping api.twilio.com
ping: unknown host api.twilio.com
pi@pi:~ $ ping api.twilio.com
PING nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com (18.211.224.155) 56(84) bytes of data.
...

api.twilio.com was just an example; I could also reproduce it with other domain names like *google.com*. I was hoping that *nslookup* might give me a better hint:

pi@pi:~ $ ping api.twilio.com
ping: unknown host api.twilio.com
pi@pi:~ $ nslookup api.twilio.com
Server:   127.0.0.1
Address:  127.0.0.1#53

** server can't find api.twilio.com: REFUSED
pi@pi:~ $ ping api.twilio.com
PING nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com (18.208.54.140) 56(84) bytes of data.
^C
--- nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3106ms
pi@pi:~ $ nslookup api.twilio.com
Server:   127.0.0.1
Address:  127.0.0.1#53

Non-authoritative answer:
api.twilio.com  canonical name = virginia.us1.api-lb.twilio.com.
virginia.us1.api-lb.twilio.com  canonical name = nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com.
Name:  nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com
Address: 18.212.47.248
Name:  nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com
Address: 18.211.224.155
Name:  nlb-api-public-c3207ffe0810c880.elb.us-east-1.amazonaws.com
Address: 18.208.54.140

I could reproduce the behavior on various systems, but not always. Sometimes the lookup seems to work for a while and then the behavior shows up again (my guess at this point is that this could be after the DNS lease time?). My question is whether there are better ways to investigate why the first lookup sometimes does not work correctly. Or even better, does someone have a hint about what might cause the problem?
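The REFUSED comes from something listening on 127.0.0.1 on the Pi itself, so useful next steps are to identify that resolver and watch what happens on the first, failing query (generic diagnostics; which daemon sits on port 53 is not stated in the question):

sudo ss -lunp 'sport = :53'               # what is serving 127.0.0.1:53?
sudo tcpdump -ni any port 53              # compare the first and second lookup on the wire
dig @127.0.0.1 api.twilio.com +retry=0    # reproduce a single query without retries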
Stefan Wegener (131 rep)
Apr 25, 2019, 11:16 AM • Last activity: Jun 8, 2025, 02:03 AM