
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

5 votes
1 answer
16004 views
How to use authentication with negotiation (e.g. Kerberos) to an HTTP proxy?
Generally accepted is the use of `HTTP_PROXY`/`HTTPS_PROXY` environment variables to specify the use of a proxy server. Authentication can be included in this URL, e.g. `HTTP_PROXY=http://user:pass@myproxy.mydomain.tld:3128/`. However, I am using Kerberos SSO to authenticate with the proxy. How do I...
Generally accepted is the use of `HTTP_PROXY`/`HTTPS_PROXY` environment variables to specify the use of a proxy server. Authentication can be included in this URL, e.g. `HTTP_PROXY=http://user:pass@myproxy.mydomain.tld:3128/`. However, I am using Kerberos SSO to authenticate with the proxy. How do I configure that?

So, suppose a Squid proxy server configuration as described here: https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos . It describes how Windows clients can use proxy authentication with negotiation, but there's no information how I can configure Linux/Unix clients.

For cURL, the use of `--proxy-negotiate -u :` does the trick, e.g.:

HTTPS_PROXY=http://myproxy.mydomain.tld:3128/ curl --proxy-negotiate -u : https://www.google.com

How do I tell non-cURL applications to use this mechanism? E.g. Debian/Ubuntu APT with `Acquire::http::Proxy "http://myproxy.mydomain.tld:3128/";`?

I found [cntlm](http://manpages.ubuntu.com/manpages/xenial/man1/cntlm.1.html) which acts as another locally running proxy in the middle, facilitating unauthenticated connections from localhost. However, this only works with NTLM, where I need Kerberos. Would Squid be able to connect as a client using Kerberos perhaps?

It seems notoriously hard to find authentication capabilities on the *outgoing* connection of proxy servers. All seem to focus on authentication features on the *listening socket* instead.
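For reference, the curl mechanism described above as a runnable sequence (the realm and proxy host are placeholders; it assumes a Kerberos ticket can already be obtained with kinit):

kinit user@MYDOMAIN.TLD                                # obtain/refresh the ticket first
export HTTPS_PROXY=http://myproxy.mydomain.tld:3128/
curl --proxy-negotiate -u : https://www.google.com     # curl performs SPNEGO against the proxy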
gertvdijk (14517 rep)
Dec 22, 2017, 12:43 PM • Last activity: Jul 26, 2025, 10:09 PM
0 votes
0 answers
539 views
Proxy or VPN behind the router
I have a home network behind a router, and I’m running a proxy server with ports forwarded through the router. Some clients connect to the Internet via my proxy, but there's a small problem — sometimes they cannot send files through email or HTML forms (I’m using Squid as the proxy). I tried to set...
I have a home network behind a router, and I’m running a proxy server with ports forwarded through the router. Some clients connect to the Internet via my proxy, but there's a small problem — sometimes they cannot send files through email or HTML forms (I’m using Squid as the proxy). I tried to set up a PPTP VPN using PoPToP, but when connecting through my router, I encountered error 619. The server is running Fedora 16. What is the best way to provide Internet access to the clients — using a VPN, a different proxy configuration, or something else?
skayred (163 rep)
Dec 15, 2011, 06:54 AM • Last activity: Jun 16, 2025, 05:39 AM
0 votes
1 answer
5764 views
Squid with mac address filter acl
I am setting up Squid proxy with mac address acl. I have recompiled squid 3.5 rpm with `--enable-arp`, acl. But after configuring Squid.conf with mac address acl its unable to block access for unwanted mac address. Is it possible to create iptable rule and allow some mac addresses to permit web acce...
I am setting up a Squid proxy with a MAC address ACL. I have recompiled the squid 3.5 RPM with the --enable-arp acl option, but after configuring squid.conf with the MAC address ACL it is unable to block access for unwanted MAC addresses.

Is it possible to create an iptables rule that allows only some MAC addresses to access the web? If yes, how do I do that?

---

**Edit**: Added the following:

acl mac arp 00:E1:34:CD:C0:22
http_access allow mac
http_access deny all
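For the iptables half of the question, a minimal sketch (it assumes this box is also the LAN gateway forwarding web traffic on eth0, which the question does not state; MAC matching only works for hosts on the same Ethernet segment):

iptables -A FORWARD -i eth0 -p tcp -m multiport --dports 80,443 \
         -m mac --mac-source 00:E1:34:CD:C0:22 -j ACCEPT
iptables -A FORWARD -i eth0 -p tcp -m multiport --dports 80,443 -j DROP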
Aniruddha (9 rep)
Mar 12, 2015, 11:12 AM • Last activity: Jun 9, 2025, 10:03 AM
1 vote
1 answer
2658 views
iptables rules for Squid on CentOS 7
I have two interfaces in my proxy server eth0 and eth1. where eth0 connects to local (private) network wile eth1 connects to internet.My squid version is 3.3.8 and centos 7 is my OS. I have to configure transparent proxy. I know that for it there should be a single change like http_port 8080 interce...
I have two interfaces in my proxy server, eth0 and eth1, where eth0 connects to the local (private) network while eth1 connects to the internet. My squid version is 3.3.8 and CentOS 7 is my OS. I have to configure a transparent proxy. I know that for this there should be a single change like

http_port 8080 intercept

I have done this, but I still cannot access the internet and there is no information in squid's access.log file. But when I enable the proxy on the client, the squid log starts to populate. I think I am missing some iptables rules. What should those rules be so that my client can access the internet via the proxy (transparent mode)?

I have applied two rules:

iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

After applying the two rules, I got the following in tcpdump:

15:56:53.858317 ARP, Request who-has localhost.localdomain tell 192.168.57.100, length 46
15:56:53.858330 ARP, Reply localhost.localdomain is-at 0a:00:27:00:00:01 (oui Unknown), length 28
15:56:53.859825 IP 192.168.57.100.55833 > localhost.localdomain.domain: 17156+ A? www.google.com. (32)
15:56:53.859866 IP localhost.localdomain > 192.168.57.100: ICMP localhost.localdomain udp port domain unreachable, length 68
15:56:53.860006 IP 192.168.57.100.55833 > localhost.localdomain.domain: 56135+ AAAA? www.google.com. (32)
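A hedged sketch of the rules commonly combined for this kind of layout (eth0/eth1 and the 8080 intercept port are taken from the question; the tcpdump above also shows the clients' DNS queries dying on the proxy box, so DNS has to be forwarded or answered locally as well):

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
# non-HTTP traffic (DNS, HTTPS, ...) is not intercepted and must be forwarded normally
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT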
Hafiz Muhammad Shafiq (603 rep)
Apr 26, 2016, 12:39 PM • Last activity: May 13, 2025, 06:02 AM
1 vote
1 answer
2433 views
squid, TLS connection between browser and proxy
I have a `squid` instance (v4.6) on a public address `A.B.C.D` setup with `basic_auth` (`ldap` backend). This works over **unencrypted** port, say `8080`, using `http_port A.B.C.D:8080`. I'm trying to figure out how to secure connections to my `squid` over the insecure Internet (only authenticated u...
I have a squid instance (v4.6) on a public address A.B.C.D set up with basic_auth (ldap backend). This works over an **unencrypted** port, say 8080, using http_port A.B.C.D:8080.

I'm trying to figure out how to secure connections to my squid over the insecure Internet (only authenticated users should be allowed to use the proxy). I'm using PROXY in the current Firefox 75 to test the connection. I tried many things, including:

https_port A.B.C.D:8443 tls-cert=/path/to/cert tls-key=/path/to/key SLL_ports 8443

When I enter this port in the Firefox PROXY settings, nothing happens; no basic_auth prompt is shown, nothing. Logs say:

1587588731.539      0 F.G.H.I NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -

Is it possible to secure basic_auth (using TLS) when using PROXY? Sending unencrypted passwords over the Internet is simply wrong. I started to think about putting nginx with TLS and basic_auth in front of squid, but I do not know yet if this is possible. Could someone help?
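For comparison, a minimal Squid 4 TLS listener of the kind being attempted looks roughly like this (certificate paths are placeholders); the browser must separately be told to speak TLS to the proxy, which Firefox normally does only when the proxy is supplied through a proxy auto-config file as an HTTPS proxy:

# TLS listener for explicit (non-intercept) proxying; paths are placeholders
https_port A.B.C.D:8443 tls-cert=/path/to/fullchain.pem tls-key=/path/to/privkey.pem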
Kamil (1501 rep)
Apr 22, 2020, 09:01 PM • Last activity: Apr 8, 2025, 04:08 AM
6 votes
5 answers
7947 views
How to create an on-demand RPM mirror
I would like to create an RPM repository for Fedora packages on my local network. Due to storage limitations, I want the repository to be empty initially and download packages once they are accessed. ### Background I work a lot with local VMs. Anytime I create a new VM and install Fedora, a lot of p...
I would like to create an RPM repository for Fedora packages on my local network. Due to storage limitations, I want the repository to be empty initially and download packages once they are accessed.

### Background

I work a lot with local VMs. Anytime I create a new VM and install Fedora, a lot of packages are downloaded from the internet, and most of the downloaded packages are the same. To speed up the process I would like the RPMs to be cached on a server located on the same network.

Similar questions have been answered with a combination of createrepo & reposync. I do not like the reposync part, because I don't want to clone the whole repository up front when I need only some of the packages.

### Ideal Solution

I would like the server on my local network to act as an RPM repository for my Fedora installations. It should pass-through the metadata from whatever is configured in /etc/yum.repo.d/*. The server should deliver the requested RPM if it is present in the local cache, or else download it and then deliver it.

A less ambitious approach would be to configure a single RPM repository instead of https://mirrors.fedoraproject.org/ ... and just use an http proxy.

### Update: 02 Nov. 2015

I already have an nginx running on the network, so I played around with a combination of proxy_pass and proxy_cache. It kinda works, but IMHO it has more drawbacks than benefits:

- a separate configuration for every repo configured in /etc/yum.repo.d/*.
- can't use metadata from https://mirrors.fedoraproject.org/ because of alternate mirrors.

I dropped the nginx thing and installed squid, as suggested in comments. squid works great for me. With the store_id_program configuration, I am even able to use the alternate mirrors and still hit the cache, no matter where the RPM came from originally.
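A rough sketch of the Squid side of that final setup (cache sizes are illustrative, the helper path varies by distribution, and the store-ID mapping file is an assumption, not from the question):

cache_dir ufs /var/spool/squid 20000 16 256
maximum_object_size 512 MB
# published RPMs never change, so they can be cached aggressively
refresh_pattern -i \.rpm$ 129600 100% 129600 refresh-ims override-expire
# map every mirror URL of the same package onto a single cache key
store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_rewrite.conf
store_id_children 5 startup=1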
Yevgeniy (161 rep)
Oct 31, 2015, 07:31 PM • Last activity: Feb 19, 2025, 04:04 PM
0 votes
1 answer
110 views
Squid won't listen on port unless INPUT policy is ACCEPT
having some trouble with fresh squid server on a VPS box. I have the box secured with iptables,simple 'iptables -P INPUT DROP', and only my home ip is allowed to connect. So, whatever http_port I set in squid.conf, squid does not listen when default action on INPUT chain is DROP (Currently I am usin...
I'm having some trouble with a fresh squid server on a VPS box. I have the box secured with iptables, simply 'iptables -P INPUT DROP', and only my home IP is allowed to connect. Whatever http_port I set in squid.conf, squid does not listen when the default action on the INPUT chain is DROP (currently I am using port 13631, as you can see in some configs below). Just to be clear, I tried adding 'obvious' rules (accepting a specific dport, a specific local IP address/interface), but they didn't fix the problem. I have an OpenVPN server on the same box and it works with the only rule allowing my home IP to connect (2nd INPUT rule below), so the basic rules seem to be working. So this config works and squid listens on the specific port:
@aatest:~# iptables -L INPUT
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --    anywhere
ACCEPT     all  --  192.168.0.0/24       anywhere

root@aatest:~# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      26548/sshd
tcp        0      0 0.0.0.0:13630           0.0.0.0:*               LISTEN      887/openvpn
tcp        0      0 10.8.0.1:13631          0.0.0.0:*               LISTEN      30715/(squid-1)
tcp6       0      0 :::22                   :::*                    LISTEN      26548/sshd
But this one does not work -
@aatest:~# iptables -L INPUT
Chain INPUT (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --    anywhere
ACCEPT     all  --  192.168.0.0/24       anywhere
root@aatest:~# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      26548/sshd
tcp        0      0 0.0.0.0:13630           0.0.0.0:*               LISTEN      887/openvpn
tcp6       0      0 :::22                   :::*                    LISTEN      26548/sshd
So I need some help figuring out which exact rule I must add so that squid starts listening correctly while iptables still allows only one IP to connect to the server. I tried many types of rules (allowing the local interface, the local address, the dport, etc.), but squid just won't start to listen unless I allow any-any with the default INPUT policy or a rule. I think I need to allow _something_, but if allowing the port or IP doesn't work then I don't know what else to try. Since I made the mistake of choosing the wrong site for the question before, I'll also paste my current iptables-save (I did some experimenting there already, but with this config squid still does not listen when I restart it). Also keep in mind that this box was first made as a simple OpenVPN server; I just wanted to add a proxy server to it.
@aatest:~# iptables-save
# Generated by iptables-save v1.8.2 on Sat Jul 27 08:41:10 2024
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [238615:172640979]
:OUTPUT ACCEPT [886422:1146802441]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s  -j ACCEPT
-A INPUT -s 192.168.0.0/24 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 13631 -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sat Jul 27 08:41:10 2024
# Generated by iptables-save v1.8.2 on Sat Jul 27 08:41:10 2024
*mangle
:PREROUTING ACCEPT [67190817:38175901140]
:INPUT ACCEPT [23057184:4713328157]
:FORWARD ACCEPT [44133612:33462571051]
:OUTPUT ACCEPT [34224044:37027164282]
:POSTROUTING ACCEPT [78357656:70489735333]
COMMIT
# Completed on Sat Jul 27 08:41:10 2024
# Generated by iptables-save v1.8.2 on Sat Jul 27 08:41:10 2024
*nat
:PREROUTING ACCEPT [177306:27232375]
:INPUT ACCEPT [15937:829028]
:OUTPUT ACCEPT [1275:80000]
:POSTROUTING ACCEPT [32:1318]
-A POSTROUTING -o venet0 -j MASQUERADE
COMMIT
# Completed on Sat Jul 27 08:41:10 2024
# Generated by iptables-save v1.8.2 on Sat Jul 27 08:41:10 2024
*raw
:PREROUTING ACCEPT [67190817:38175901140]
:OUTPUT ACCEPT [34224043:37027164098]
COMMIT
# Completed on Sat Jul 27 08:41:10 2024
# Generated by iptables-save v1.8.2 on Sat Jul 27 08:41:10 2024
*security
:INPUT ACCEPT [23045729:4712756627]
:FORWARD ACCEPT [44133612:33462571051]
:OUTPUT ACCEPT [34224043:37027164098]
COMMIT
# Completed on Sat Jul 27 08:41:10 2024
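For reference only, not a confirmed fix: a ruleset in the spirit of what is being attempted, keeping the DROP policy but explicitly opening loopback and the proxy port, might look like this. The home IP stays a placeholder, and the tun0/10.8.0.0/24 names are assumptions based on the netstat output above.

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT                                  # loopback, easy to forget
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -s <home-ip> -j ACCEPT                           # placeholder, as in the rules above
iptables -A INPUT -i tun0 -s 10.8.0.0/24 -p tcp --dport 13631 -j ACCEPT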
Andy Scull (1 rep)
Jul 27, 2024, 08:03 AM • Last activity: Jul 27, 2024, 09:15 AM
3 votes
2 answers
3332 views
How to block keywords in https URL using squid proxy?
I want to block keyword "import" in any URL, including https websites. Example: http://www.abc.com/import/dfdsf https://xyz.com/import/hdovh How to create acl to do it? Thanks
I want to block the keyword "import" in any URL, including https websites. Example:

http://www.abc.com/import/dfdsf
https://xyz.com/import/hdovh

How do I create an ACL to do this? Thanks
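For the plain-HTTP case a keyword ACL is a one-liner; a hedged sketch is below. For HTTPS, Squid normally sees only the CONNECT host:port, so the path part is invisible unless SSL bumping is configured.

acl block_import url_regex -i /import/
http_access deny block_import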
flake (31 rep)
Nov 20, 2015, 03:36 PM • Last activity: Jul 1, 2024, 11:08 PM
0 votes
1 answer
527 views
How to add proxy server settings in wifi router or access point
I have a Squid proxy server running on CentOS 8. I want to set up the proxy on a Wi-Fi router or wireless access point so that I can force wireless clients to go through the proxy server.
I have a Squid proxy server running on CentOS 8. I want to set up the proxy on a Wi-Fi router or wireless access point so that I can force wireless clients to go through the proxy server.
Rizwan Saleem (5 rep)
Oct 27, 2021, 06:27 AM • Last activity: Jun 26, 2024, 05:38 AM
2 votes
1 answer
1322 views
firewalld + squid : how-to setup a proxy
infra-server with IP x.x.x.x (with no internet connectivity) does the following request: $ wget http://google.com --2016-11-04 09:32:55-- http://google.com/ Resolving google.com (google.com)... 172.217.22.110, 2a00:1450:4001:81d::200e Connecting to google.com (google.com)|172.217.22.110|:8888... fai...
infra-server with IP x.x.x.x (with no internet connectivity) does the following request:

$ wget http://google.com
--2016-11-04 09:32:55--  http://google.com/
Resolving google.com (google.com)... 172.217.22.110, 2a00:1450:4001:81d::200e
Connecting to google.com (google.com)|172.217.22.110|:8888... failed: Connection timed out.

proxy-server (squid listening on 8888) has the following interfaces:

eth1: 1.1.1.1 where all incoming requests from infra-server are coming in
eth2: 2.2.2.2 which has internet connectivity with a default route (80,443) because its address is translated in the firewall (gateway)

By doing a tcpdump on proxy-server and eth1 (incoming interface) I correctly see the traffic arriving:

09:49:10.033951 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [S], seq 258250387, win 29200, options [mss 1460,sackOK,TS val 3204336400 ecr 0,nop,wscale 7], length 0
09:49:11.034310 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [S], seq 258250387, win 29200, options [mss 1460,sackOK,TS val 3204337402 ecr 0,nop,wscale 7], length 0
09:49:13.042720 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [S], seq 258250387, win 29200, options [mss 1460,sackOK,TS val 3204339408 ecr 0,nop,wscale 7], length 0
09:49:17.047283 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [S], seq 258250387, win 29200, options [mss 1460,sackOK,TS val 3204343416 ecr 0,nop,wscale 7], length 0
09:49:22.303238 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [R], seq 258250387, win 1400, length 0
09:49:25.060419 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [S], seq 258250387, win 29200, options [mss 1460,sackOK,TS val 3204351424 ecr 0,nop,wscale 7], length 0
09:49:30.321096 IP x.x.x.x.45977 > 1.1.1.1.8888: Flags [R], seq 258250387, win 1400, length 0

By doing a tcpdump on the proxy-server and eth2 (outgoing interface) I do not see any outgoing http traffic.

What I have changed in the configuration of squid is only the following:

acl infra-server src x.x.x.x/32
http_access allow infra-server
http_port 1.1.1.1:8888

System-wise, SELinux is set to permissive:

# getenforce
Permissive

and firewalld is configured as follows:

# firewall-cmd --list-all --zone=internal
internal (active)
  interfaces: eth1
  sources:
  services: dhcpv6-client ipp-client mdns samba-client ssh
  ports: 8888/tcp
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

# firewall-cmd --list-all --zone=external
external (active)
  interfaces: eth2
  sources:
  services: http https ssh
  ports:
  masquerade: yes
  forward-ports:
  icmp-blocks:
  rich rules:

I just need the rule to forward traffic from eth1 to eth2 (I think).
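One note plus a few generic checks (commands are illustrative, not from the question): since Squid terminates the client connection and opens its own outbound one, proxied traffic normally does not take the eth1-to-eth2 FORWARD path at all; these help confirm whether Squid is actually answering on 8888 and leaving via eth2:

ss -ltnp | grep 8888                                  # is squid really bound to 1.1.1.1:8888?
curl -v -x http://1.1.1.1:8888 http://google.com      # test the proxy from the proxy-server itself
tcpdump -ni eth2 'tcp port 80 or tcp port 443'        # watch squid's own outbound traffic on eth2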
nskalis (685 rep)
Nov 5, 2016, 06:02 PM • Last activity: May 18, 2024, 05:28 AM
0 votes
1 answer
184 views
The Internet does not work via squid
I installed squid 6.2. I built it from sources with ssl. the service starts normally. there are 2 network cards. one looks at the local network, the other at the Internet. I used the manual for configuring configuration files from the Internet. After executing the sudo squid -k reconfigure and sudo...
I installed squid 6.2, built from source with SSL. The service starts normally. There are 2 network cards: one faces the local network, the other the Internet. I used a manual from the Internet for configuring the configuration files. After executing the sudo squid -k reconfigure and sudo squid -k commands, the output is as follows:

2024/04/09 22:08:41| Processing Configuration File: /etc/squid/squid.conf (depth 0)
2024/04/09 22:08:41| Processing: acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
2024/04/09 22:08:41| Processing: acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
2024/04/09 22:08:41| Processing: acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
2024/04/09 22:08:41| Processing: acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
2024/04/09 22:08:41| Processing: acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
2024/04/09 22:08:41| Processing: acl localnet src 192.168.0.0/24 # RFC 1918 local private network (LAN)
2024/04/09 22:08:41| Processing: acl localnet src fc00::/7 # RFC 4193 local private network range
2024/04/09 22:08:41| Processing: acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
2024/04/09 22:08:41| Processing: acl SSL_ports port 443
2024/04/09 22:08:41| Processing: acl Safe_ports port 80 # http
2024/04/09 22:08:41| Processing: acl Safe_ports port 21 # ftp
2024/04/09 22:08:41| Processing: acl Safe_ports port 443 # https
2024/04/09 22:08:41| Processing: acl Safe_ports port 70 # gopher
2024/04/09 22:08:41| Processing: acl Safe_ports port 210 # wais
2024/04/09 22:08:41| Processing: acl Safe_ports port 1025-65535 # unregistered ports
2024/04/09 22:08:41| Processing: acl Safe_ports port 280 # http-mgmt
2024/04/09 22:08:41| Processing: acl Safe_ports port 488 # gss-http
2024/04/09 22:08:41| Processing: acl Safe_ports port 591 # filemaker
2024/04/09 22:08:41| Processing: acl Safe_ports port 777 # multiling http
2024/04/09 22:08:41| Processing: http_access deny !Safe_ports
2024/04/09 22:08:41| Processing: http_access deny CONNECT !SSL_ports
2024/04/09 22:08:41| Processing: http_access allow localhost manager
2024/04/09 22:08:41| Processing: http_access deny manager
2024/04/09 22:08:41| Processing: include /etc/squid/conf.d/*.conf
2024/04/09 22:08:41| Processing Configuration File: /etc/squid/conf.d/debian.conf (depth 1)
2024/04/09 22:08:41| Processing: logfile_rotate 0
2024/04/09 22:08:41| Processing: http_access allow localhost
2024/04/09 22:08:41| Processing: http_access allow localnet
2024/04/09 22:08:41| Processing: http_access allow all
2024/04/09 22:08:41| Processing: http_port 3130
2024/04/09 22:08:41| Processing: https_port 192.168.0.110:3129 intercept ssl-bump cert=/etc/squid/squidCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
2024/04/09 22:08:41| Starting Authentication on port 192.168.0.110:3129
2024/04/09 22:08:41| Disabling Authentication on port 192.168.0.110:3129 (interception enabled)
2024/04/09 22:08:41| Processing: http_port 192.168.0.110:3128 intercept
2024/04/09 22:08:41| Starting Authentication on port 192.168.0.110:3128
2024/04/09 22:08:41| Disabling Authentication on port 192.168.0.110:3128 (interception enabled)
2024/04/09 22:08:41| Processing: sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
2024/04/09 22:08:41| Processing: acl step1 at_step SslBump1
2024/04/09 22:08:41| Processing: ssl_bump peek step1
2024/04/09 22:08:41| Processing: ssl_bump bump all
2024/04/09 22:08:41| Processing: ssl_bump splice all
2024/04/09 22:08:41| Processing: coredump_dir /var/spool/squid
2024/04/09 22:08:41| Processing: refresh_pattern ^ftp: 1440 20% 10080
2024/04/09 22:08:41| Processing: refresh_pattern ^gopher: 1440 0% 1440
2024/04/09 22:08:41| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
2024/04/09 22:08:41| Processing: refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
2024/04/09 22:08:41| Processing: refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
2024/04/09 22:08:41| Processing: refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
2024/04/09 22:08:41| Processing: refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
2024/04/09 22:08:41| Processing: refresh_pattern . 0 20% 4320
2024/04/09 22:08:41| Requiring client certificates.
2024/04/09 22:08:41| Loaded signing certificate: /C=RU/ST=Moscow/L=Moscow/O=Internet Widgits Pty Ltd
2024/04/09 22:08:41| Not requiring any client certificates

The command sudo squid -k reconfigure gives:

2024/04/09 22:09:41| Processing Configuration File: /etc/squid/squid.conf (depth 0)
2024/04/09 22:09:41| Processing Configuration File: /etc/squid/conf.d/debian.conf (depth 1)
2024/04/09 22:09:41| Starting Authentication on port 192.168.0.110:3129
2024/04/09 22:09:41| Disabling Authentication on port 192.168.0.110:3129 (interception enabled)
2024/04/09 22:09:41| Starting Authentication on port 192.168.0.110:3128
2024/04/09 22:09:41| Disabling Authentication on port 192.168.0.110:3128 (interception enabled)
2024/04/09 22:09:41| ERROR: cannot change current directory to /var/spool/squid: (2) No such file or directory
2024/04/09 22:09:41| Current Directory is /home/nicolay
2024/04/09 22:09:41| FATAL: failed to open /var/run/squid.pid: (2) No such file or directory
    exception location: File.cc(191) open

All certificates are generated. The paths are spelled out correctly. The Internet over https does not work on the client's machine. ping is passing. iptables has rules for port forwarding in PREROUTING and INPUT from 443 to 3129 and 80 to 3128.

squid.conf:

acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/24         # RFC 1918 local private network (LAN)
acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
http_port 3130
https_port 192.168.0.110:3129 intercept ssl-bump cert=/etc/squid/squidCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 192.168.0.110:3128 intercept
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
ssl_bump splice all

Help please. I'm new to Linux. Squid's logs and cache cannot be read. All the information on the Internet is different, but nothing helped.

Actions such as clearing the cache or creating a pid file manually do not help. When I installed squid 5.7 from the repository, I managed to get the Internet working, but over the http protocol. The config is the same. squid 6.2 had to be installed because squid 5.7 does not want to compile on my machine; I think it has to do with the source code. The machine is on Ubuntu Server 22.04, running in a virtual machine.
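Not a confirmed fix, but the "cannot change current directory to /var/spool/squid" error usually just means that directory was never created; a sketch of the usual steps (the cache_effective_user of a source build may differ from "proxy"):

sudo mkdir -p /var/spool/squid
sudo chown -R proxy:proxy /var/spool/squid   # match the build's cache_effective_user
sudo squid -z                                # create the cache/swap directory structure
sudo squid                                   # start again (or via the service unit, if one exists)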
Николай Мельников (1 rep)
Apr 12, 2024, 08:59 PM • Last activity: Apr 13, 2024, 08:04 PM
0 votes
1 answer
170 views
Is it possible to redirect Squid traffic from Host A through Host B (Squid)?
Can traffic of Host A having squid proxy redirect all its traffic to Host B's squid proxy using ssh tunnel? Couldn't find any solution. A help will be appreciated. Solution form this https://unix.stackexchange.com/a/490641/587328 helps to sent all http and https traffic thought Host B but squid traf...
Can Host A, which runs a Squid proxy, redirect all of its traffic to Host B's Squid proxy using an SSH tunnel? I couldn't find any solution; any help will be appreciated. The solution from https://unix.stackexchange.com/a/490641/587328 helps to send all HTTP and HTTPS traffic through Host B, but the Squid traffic still passes through Host A.
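For illustration only (port numbers are assumptions): one way a local Squid is commonly chained to a remote one over SSH is to forward the remote proxy port and declare it as a parent:

# on Host A: forward Host B's Squid (3128) to a local port
ssh -f -N -L 13128:localhost:3128 user@hostB

# in Host A's squid.conf: send everything via that parent
cache_peer 127.0.0.1 parent 13128 0 no-query default
never_direct allow all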
Vishal Sanghani (101 rep)
Sep 30, 2023, 08:19 PM • Last activity: Feb 8, 2024, 04:09 PM
0 votes
1 answer
753 views
Why squid deny the https request but allow the same site with http request?
I want to allow dev just use github copilot and deny other request. According to github info: https://docs.github.com/en/copilot/troubleshooting-github-copilot/troubleshooting-firewall-settings-for-github-copilot I added the urls to a whitelist,here are the whitelist info: ``` [root@web-ide-squid-ca...
I want to allow devs to use only GitHub Copilot and deny other requests. According to the GitHub info at https://docs.github.com/en/copilot/troubleshooting-github-copilot/troubleshooting-firewall-settings-for-github-copilot I added the URLs to a whitelist. Here is the whitelist:
[root@web-ide-squid-cache squid]# cat whitelist.txt
.baidu.com
.github.com/login/*
.api.github.com/user
.api.github.com/copilot_internal/*
.copilot-telemetry.githubusercontent.com/telemetry
.default.exp-tas.com/
.copilot-proxy.githubusercontent.com/
.origin-tracker.githubusercontent.com
*.githubcopilot.com
Here is the conf file:
[root@web-ide-squid-cache squid]# cat squid.conf
#
# Recommended minimum configuration:
#
debug_options ALL,1 33,2 28,9
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
acl whitelist dstdomain "/etc/squid/whitelist.txt"
http_access allow whitelist
http_access deny all

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 8080
http_port 3128 transparent
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=8MB cert=/etc/squid/ssl_cert/myCA.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
ssl_bump splice all
sslproxy_cert_error allow  all
tls_outgoing_options cipher=ALL

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
curl without https succeeds:
coder@cloudide:~$ curl  -v www.baidu.com
*   Trying 182.61.200.7:80...
* Connected to www.baidu.com (182.61.200.7) port 80 (#0)
> GET / HTTP/1.1
> Host: www.baidu.com
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse

 百度一下,你就知道  

关于百度 About Baidu

©2017 Baidu 使用百度前必读 意见反馈 京ICP证030173号

* Connection #0 to host www.baidu.com left intact
curl to the same site with https fails:
curl  -v https://www.baidu.com 
*   Trying 182.61.200.6:443...
* Connected to www.baidu.com (182.61.200.6) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=www.baidu.com
*  start date: Jan 11 12:21:14 2024 GMT
*  expire date: Jan  9 12:21:14 2029 GMT
*  subjectAltName: host "www.baidu.com" matched cert's "www.baidu.com"
*  issuer: C=CN; ST=Beijing; L=Beijing; O=ES; OU=IT Department; CN=easystack.cn; emailAddress=jesse@easystack.cn
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: www.baidu.com
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse




ERROR: The requested URL could not be retrieved


ERROR

The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL: https://182.61.200.6/*

Access Denied.

Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.

Your cache administrator is webmaster.


* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
Here is the cache log for the failed request:
2024/01/24 19:52:38.494 kid1| 28,4| Eui48.cc(179) lookup: id=0x31f5fe4 query ARP table
2024/01/24 19:52:38.495 kid1| 28,4| Eui48.cc(224) lookup: id=0x31f5fe4 query ARP on each interface (120 found)
2024/01/24 19:52:38.495 kid1| 28,4| Eui48.cc(230) lookup: id=0x31f5fe4 found interface lo
2024/01/24 19:52:38.495 kid1| 28,4| Eui48.cc(230) lookup: id=0x31f5fe4 found interface eth0
2024/01/24 19:52:38.495 kid1| 28,4| Eui48.cc(239) lookup: id=0x31f5fe4 looking up ARP address for 10.0.3.223 on eth0
2024/01/24 19:52:38.495 kid1| 28,4| Eui48.cc(275) lookup: id=0x31f5fe4 got address fa:16:3e:09:f3:23 on eth0
2024/01/24 19:52:38.495 kid1| 28,3| Checklist.cc(70) preCheck: 0x3189708 checking slow rules
2024/01/24 19:52:38.495 kid1| 28,5| Acl.cc(124) matches: checking (ssl_bump rules)
2024/01/24 19:52:38.495 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'ALLOWED/3' is not banned
2024/01/24 19:52:38.495 kid1| 28,5| Acl.cc(124) matches: checking (ssl_bump rule)
2024/01/24 19:52:38.495 kid1| 28,5| Acl.cc(124) matches: checking step1
2024/01/24 19:52:38.495 kid1| 28,3| Acl.cc(151) matches: checked: step1 = 1
2024/01/24 19:52:38.495 kid1| 28,3| Acl.cc(151) matches: checked: (ssl_bump rule) = 1
2024/01/24 19:52:38.495 kid1| 28,3| Acl.cc(151) matches: checked: (ssl_bump rules) = 1
2024/01/24 19:52:38.495 kid1| 28,3| Checklist.cc(63) markFinished: 0x3189708 answer ALLOWED for match
2024/01/24 19:52:38.495 kid1| 28,3| Checklist.cc(163) checkCallback: ACLChecklist::checkCallback: 0x3189708 answer=ALLOWED
2024/01/24 19:52:38.495 kid1| 33,2| client_side.cc(2748) httpsSslBumpAccessCheckDone: sslBump action peekneeded for local=182.61.200.6:443 remote=10.0.3.223:4002 FD 12 flags=33
2024/01/24 19:52:38.495 kid1| 33,2| client_side.cc(3424) fakeAConnectRequest: fake a CONNECT request to force connState to tunnel for ssl-bump
2024/01/24 19:52:38.496 kid1| 28,3| Checklist.cc(70) preCheck: 0x31a4428 checking slow rules
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking http_access
2024/01/24 19:52:38.496 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'DENIED/0' is not banned
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking http_access#1
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking !Safe_ports
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking Safe_ports
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: Safe_ports = 1
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: !Safe_ports = 0
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: http_access#1 = 0
2024/01/24 19:52:38.496 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'DENIED/0' is not banned
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking http_access#2
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking CONNECT
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: CONNECT = 1
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking !SSL_ports
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking SSL_ports
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: SSL_ports = 1
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: !SSL_ports = 0
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: http_access#2 = 0
2024/01/24 19:52:38.496 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'ALLOWED/0' is not banned
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking http_access#3
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking localhost
2024/01/24 19:52:38.496 kid1| 28,9| Ip.cc(96) aclIpAddrNetworkCompare: aclIpAddrNetworkCompare: compare: 10.0.3.223:4002/[ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff] (10.0.3.223:4002)  vs [::1]-[::]/[ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff]
2024/01/24 19:52:38.496 kid1| 28,9| Ip.cc(96) aclIpAddrNetworkCompare: aclIpAddrNetworkCompare: compare: 10.0.3.223:4002/[ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff] (10.0.3.223:4002)  vs 127.0.0.1-[::]/[ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff]
2024/01/24 19:52:38.496 kid1| 28,9| Ip.cc(96) aclIpAddrNetworkCompare: aclIpAddrNetworkCompare: compare: 10.0.3.223:4002/[ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff] (10.0.3.223:4002)  vs 127.0.0.1-[::]/[ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff]
2024/01/24 19:52:38.496 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: '10.0.3.223:4002' NOT found
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: localhost = 0
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: http_access#3 = 0
2024/01/24 19:52:38.496 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'DENIED/0' is not banned
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking http_access#4
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking manager
2024/01/24 19:52:38.496 kid1| 28,3| RegexData.cc(43) match: checking '182.61.200.6:443'
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: manager = 0
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: http_access#4 = 0
2024/01/24 19:52:38.496 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'ALLOWED/0' is not banned
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking http_access#5
2024/01/24 19:52:38.496 kid1| 28,5| Acl.cc(124) matches: checking whitelist
2024/01/24 19:52:38.496 kid1| 28,3| DomainData.cc(110) match: aclMatchDomainList: checking '182.61.200.6'
2024/01/24 19:52:38.496 kid1| 28,3| DomainData.cc(115) match: aclMatchDomainList: '182.61.200.6' NOT found
2024/01/24 19:52:38.496 kid1| 28,3| DestinationDomain.cc(96) match: Can't yet compare 'whitelist' ACL for 182.61.200.6
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: whitelist = -1 async
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: http_access#5 = -1 async
2024/01/24 19:52:38.496 kid1| 28,3| Acl.cc(151) matches: checked: http_access = -1 async
2024/01/24 19:52:38.496 kid1| 28,4| FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x3189708
2024/01/24 19:52:38.496 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x3189708
2024/01/24 19:52:38.500 kid1| 28,5| InnerNode.cc(94) resumeMatchingAt: checking http_access at 4
2024/01/24 19:52:38.500 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'ALLOWED/0' is not banned
2024/01/24 19:52:38.500 kid1| 28,5| InnerNode.cc(94) resumeMatchingAt: checking http_access#5 at 0
2024/01/24 19:52:38.500 kid1| 28,5| Acl.cc(124) matches: checking whitelist
2024/01/24 19:52:38.500 kid1| 28,3| DomainData.cc(110) match: aclMatchDomainList: checking '182.61.200.6'
2024/01/24 19:52:38.500 kid1| 28,3| DomainData.cc(115) match: aclMatchDomainList: '182.61.200.6' NOT found
2024/01/24 19:52:38.500 kid1| 28,3| DomainData.cc(110) match: aclMatchDomainList: checking 'none'
2024/01/24 19:52:38.500 kid1| 28,3| DomainData.cc(115) match: aclMatchDomainList: 'none' NOT found
2024/01/24 19:52:38.500 kid1| 28,3| Acl.cc(151) matches: checked: whitelist = 0
2024/01/24 19:52:38.500 kid1| 28,3| InnerNode.cc(97) resumeMatchingAt: checked: http_access#5 = 0
2024/01/24 19:52:38.500 kid1| 28,5| Checklist.cc(397) bannedAction: Action 'DENIED/0' is not banned
2024/01/24 19:52:38.500 kid1| 28,5| Acl.cc(124) matches: checking http_access#6
2024/01/24 19:52:38.500 kid1| 28,5| Acl.cc(124) matches: checking all
2024/01/24 19:52:38.500 kid1| 28,9| Ip.cc(96) aclIpAddrNetworkCompare: aclIpAddrNetworkCompare: compare: 10.0.3.223:4002/[::] ([::]:4002)  vs [::]-[::]/[::]
2024/01/24 19:52:38.500 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: '10.0.3.223:4002' found
2024/01/24 19:52:38.500 kid1| 28,3| Acl.cc(151) matches: checked: all = 1
2024/01/24 19:52:38.500 kid1| 28,3| Acl.cc(151) matches: checked: http_access#6 = 1
2024/01/24 19:52:38.500 kid1| 28,3| InnerNode.cc(97) resumeMatchingAt: checked: http_access = 1
2024/01/24 19:52:38.500 kid1| 28,3| Checklist.cc(63) markFinished: 0x31a4428 answer DENIED for match
2024/01/24 19:52:38.500 kid1| 28,3| Checklist.cc(163) checkCallback: ACLChecklist::checkCallback: 0x31a4428 answer=DENIED
2024/01/24 19:52:38.500 kid1| 28,5| Gadgets.cc(81) aclIsProxyAuth: aclIsProxyAuth: called for all
2024/01/24 19:52:38.500 kid1| 28,9| Acl.cc(96) FindByName: ACL::FindByName 'all'
2024/01/24 19:52:38.500 kid1| 28,5| Gadgets.cc(86) aclIsProxyAuth: aclIsProxyAuth: returning 0
2024/01/24 19:52:38.500 kid1| 28,8| Gadgets.cc(49) aclGetDenyInfoPage: got called for all
2024/01/24 19:52:38.500 kid1| 28,8| Gadgets.cc(68) aclGetDenyInfoPage: aclGetDenyInfoPage: no match
2024/01/24 19:52:38.500 kid1| 28,4| FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffe2f431e20
2024/01/24 19:52:38.500 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffe2f431e20
2024/01/24 19:52:38.500 kid1| 28,4| FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffe2f431e20
2024/01/24 19:52:38.500 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffe2f431e20
2024/01/24 19:52:38.500 kid1| 28,4| FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x31a4428
2024/01/24 19:52:38.500 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x31a4428
2024/01/24 19:52:38.504 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffe2f431ba0 checking fast ACLs
2024/01/24 19:52:38.504 kid1| 28,5| Acl.cc(124) matches: checking access_log daemon:/var/log/squid/access.log
2024/01/24 19:52:38.504 kid1| 28,5| Acl.cc(124) matches: checking (access_log daemon:/var/log/squid/access.log line)
2024/01/24 19:52:38.504 kid1| 28,3| Acl.cc(151) matches: checked: (access_log daemon:/var/log/squid/access.log line) = 1
2024/01/24 19:52:38.504 kid1| 28,3| Acl.cc(151) matches: checked: access_log daemon:/var/log/squid/access.log = 1
2024/01/24 19:52:38.504 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffe2f431ba0 answer ALLOWED for match
2024/01/24 19:52:38.504 kid1| 28,4| FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffe2f431ba0
2024/01/24 19:52:38.504 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffe2f431ba0
2024/01/24 19:52:38.508 kid1| 33,2| client_side.cc(891) kick: local=182.61.200.6:443 remote=10.0.3.223:4002 flags=33 Connection was closed
2024/01/24 19:52:38.508 kid1| 28,3| Checklist.cc(70) preCheck: 0x7ffe2f431f10 checking fast ACLs
2024/01/24 19:52:38.508 kid1| 28,5| Acl.cc(124) matches: checking access_log daemon:/var/log/squid/access.log
2024/01/24 19:52:38.508 kid1| 28,5| Acl.cc(124) matches: checking (access_log daemon:/var/log/squid/access.log line)
2024/01/24 19:52:38.508 kid1| 28,3| Acl.cc(151) matches: checked: (access_log daemon:/var/log/squid/access.log line) = 1
2024/01/24 19:52:38.508 kid1| 28,3| Acl.cc(151) matches: checked: access_log daemon:/var/log/squid/access.log = 1
2024/01/24 19:52:38.508 kid1| 28,3| Checklist.cc(63) markFinished: 0x7ffe2f431f10 answer ALLOWED for match
2024/01/24 19:52:38.508 kid1| 28,4| FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist destroyed 0x7ffe2f431f10
2024/01/24 19:52:38.508 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist: destroyed 0x7ffe2f431f10
2024/01/24 19:52:38.508 kid1| 33,2| client_side.cc(582) swanSong: local=182.61.200.6:443 remote=10.0.3.223:4002 flags=33
squid version:
[root@web-ide-squid-cache squid]# squid -v
Squid Cache: Version 4.9
张龙飞 (1 rep)
Jan 24, 2024, 12:06 PM • Last activity: Jan 25, 2024, 07:02 AM
4 votes
1 answer
4246 views
How to install FreeBSD in minimum size?
I want to install FreeBSD to run a squid cache server on it. I want to know how I can make this installation as small as possible? I installed the boot-only ISO file on virtual box, but it took around 600 megabytes. By the way it is an old machine so I want it work in this minimum size. Is there...
I want to install FreeBSD to run a squid cache server on it, and I want to know how I can make this installation as small as possible. I installed from the boot-only ISO file on VirtualBox, but it took around 600 megabytes. It is an old machine, so I want it to work at this minimum size. Is there any script to download just the needed files, and which file system is best for the partition holding the squid cache?
Hojat Taheri (5146 rep)
Mar 23, 2014, 07:19 PM • Last activity: Jan 11, 2024, 11:37 PM
4 votes
2 answers
2284 views
How do I configure a transparent proxy where the proxy server is remote?
**What I am trying to achieve** I have a CentOS (6.8) box 1.1.1.1 and a remote squid proxy server 2.2.2.2 I am trying to emulate the results of `curl http://google.com -x 2.2.2.2:3128` with applications that don't have a HTTP proxy option, and don't respect the `http_proxy` variable (such as telegra...
**What I am trying to achieve**

I have a CentOS (6.8) box 1.1.1.1 and a remote squid proxy server 2.2.2.2. I am trying to emulate the results of

curl http://google.com -x 2.2.2.2:3128

with applications that don't have an HTTP proxy option and don't respect the http_proxy variable (such as telegraf).

**What I have tried so far**

I've tried setting up iptables rules to forward traffic to the proxy:

iptables -t nat -A PREROUTING -i eth0 ! -s 2.2.2.2 -p tcp --dport 80 -j DNAT --to 2.2.2.2:3128
iptables -t nat -A POSTROUTING -o eth0 -s 1.1.1.1 -d 2.2.2.2 -j SNAT --to 1.1.1.1
iptables -A FORWARD -s 1.1.1.1 -d 2.2.2.2 -i eth0 -o eth0 -p tcp --dport 3128 -j ACCEPT

I then discovered that DNAT doesn't work properly if one is sending the traffic to a remote box, as return traffic doesn't route correctly. Based on this, I moved the application to a docker container (172.17.0.9) on the CentOS box, and planned to keep the iptables config on the host, amending the iptables config thusly:

iptables -t nat -A PREROUTING -i eth0 ! -s 2.2.2.2 -p tcp --dport 80 -j DNAT --to 2.2.2.2:3128
iptables -t nat -A POSTROUTING -o eth0 -s 172.17.0.0/16 -d 2.2.2.2 -j SNAT --to 1.1.1.1
iptables -A FORWARD -s 172.17.0.0/16 -d 2.2.2.2 -i eth0 -o eth0 -p tcp --dport 3128 -j ACCEPT

I also tried the following ruleset:

iptables -t nat -A PREROUTING -i eth0 -s 172.17.0.0/16 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:3128
iptables -t nat -A POSTROUTING -o eth0 -d 2.2.2.2/32 -j MASQUERADE

The result of both of these rulesets was that http traffic was still attempting to go directly to the destination, rather than through the proxy.

[root@host ~]# tcpdump -nnn host 216.58.201.35
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
15:26:24.536668 IP 1.1.1.1.38566 > 216.58.201.35.80: Flags [S], seq 3885286223, win 14600, options [mss 1460,sackOK,TS val 2161333858 ecr 0,nop,wscale 9], length 0

The default gateway on the docker container is correctly set to the host, and IP forwarding is enabled on the host.

[root@docker /]# ip route
default via 172.17.0.1 dev eth0

[root@host ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Am I missing something obvious here?
Oliver Smith (61 rep)
Oct 4, 2016, 03:08 PM • Last activity: Aug 19, 2023, 06:34 PM
1 vote
0 answers
412 views
Squid proxy server configuration issue with Instagram media content
I am trying to configure a squid proxy on the remote virtual server in order to access blocked content from my smartphone. I have installed Squid and everything works, but for some reason Instagram media content is not loading. Instagram itself works, but only text. Again, other sites and social net...
I am trying to configure a Squid proxy on a remote virtual server in order to access blocked content from my smartphone. I have installed Squid and everything works, but for some reason Instagram media content is not loading. Instagram itself works, but only the text. Other sites and social networks work fine, for instance Facebook or LinkedIn. What can the problem be? Why exactly does Instagram's media content not work properly? My simple config is below. I've tried 3proxy with the same result: IG media content doesn't load. When I connect via an SSH tunnel through the same server, or via OpenVPN installed on the same remote server, and don't use the proxy, everything works fine. Therefore, something is wrong with the Squid configuration. When I try to open instagram.com on a PC via my proxy, the site doesn't open at all; ERR_TOO_MANY_REDIRECTS occurs.
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
acl all src 0.0.0.0/0.0.0.0
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access allow localhost
http_access deny all
http_port 3129
coredump_dir /var/spool/squid
VladF (63 rep)
May 10, 2023, 03:14 PM • Last activity: May 10, 2023, 08:24 PM
1 vote
0 answers
183 views
Run squid proxy service in a systemd-enabled CentOS 7 container
I am unable to start squid service in a systemd enabled centos 7 container. I am able to install the squid package. this is what I got before starting the service : ``` [root@c65bf5111b85 /]# systemctl status squid ● squid.service - Squid caching proxy Loaded: loaded (/usr/lib/systemd/system/squid.s...
I am unable to start the squid service in a systemd-enabled CentOS 7 container. I am able to install the squid package. This is what I get before starting the service:
[root@c65bf5111b85 /]# systemctl status squid
● squid.service - Squid caching proxy
   Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
And then when I run systemctl start squid, this is what I get:
[root@c65bf5111b85 /]# systemctl status squid
● squid.service - Squid caching proxy
   Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: will start 1 kids
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: (squid-1) process 273 started
Apr 28 23:46:23 c65bf5111b85 systemd: Started Squid caching proxy.
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: will start 1 kids
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: (squid-1) process 280 started
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: (squid-1) process 273 exited with status 0
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: will start 1 kids
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: (squid-1) process 290 started
Apr 28 23:46:23 c65bf5111b85 squid: Squid Parent: (squid-1) process 280 exited with status 0
Apr 28 23:46:23 c65bf5111b85 squid: squid: No running copy
[root@c65bf5111b85 /]#
I am expecting the service to start as I was able to install and start httpd, nginx, mariadb services.
Lionel Tsimi Biloa (11 rep)
Apr 30, 2023, 10:44 AM • Last activity: May 2, 2023, 09:42 AM
0 votes
1 answer
2516 views
Squid (proxy) is eating up its own resources (and other issues)
I have several squid issues, but one at a time: **WARNING! Your cache is running out of filedescriptors** This can happen when the proxy are getting a lot of calls, and can be fixed by increasing the limit, but mine isn't even "*open*" yet.. I found out that it's squid somehow constantly connecting...
I have several squid issues, but one at a time:

**WARNING! Your cache is running out of filedescriptors**

This can happen when the proxy is getting a lot of calls, and can be fixed by increasing the limit, but mine isn't even "open" yet. I found out that squid is somehow constantly connecting to itself (from my access.log):

1628674032.019  59108 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:3129 - ORIGINAL_DST/192.168.0.129 -
1628674032.019  59098 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:3129 - ORIGINAL_DST/192.168.0.129 -
1628674032.019  59087 192.168.0.129 NONE/200 0 CONNECT 192.168.0.129:3129 - ORIGINAL_DST/192.168.0.129 -

My configuration was originally created by pfSense, but is used on a stand-alone squid running on Ubuntu 20.04.

# This file is automatically generated by pfSense
# Do not edit manually !
acl all src all
http_access allow all
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/myCA.pem cafile=/usr/local/squid/etc/ssl_cert/myCA.crt capath=/usr/local/squid/etc/rootca/ cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=10MB cert=/usr/local/squid/etc/ssl_cert/myCA.pem cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt capath=/usr/local/squid/etc/rootca/ cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS tls-dh=prime256v1:/usr/local/squid/etc/dhparam.pem options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE
#tcp_outgoing_address 10.10.66.1
icp_port 0
#digest_generation off
dns_v4_first on
#pid_filename /var/run/squid/squid.pid
cache_effective_user proxy
cache_effective_group proxy
error_default_language en
#icon_directory /usr/local/etc/squid/icons
visible_hostname Satan
cache_mgr admin@localhost
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
netdb_filename /var/log/squid/netdb.state
pinger_enable on
pinger_program /usr/lib/squid/pinger
sslcrtd_program /usr/lib/squid/security_file_certgen -s /usr/local/squid/var/logs/ssl_db -M 4MB -b 4096
tls_outgoing_options cafile=/usr/local/squid/etc/rootca/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/squid/etc/rootca/
tls_outgoing_options options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS
sslcrtd_children 5
logfile_rotate 10
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src 192.168.0.0/24
forwarded_for delete
via off
httpd_suppress_version_string on
uri_whitespace strip
acl dynamic urlpath_regex cgi-bin \?
cache deny dynamic
cache_mem 2048 MB
maximum_object_size_in_memory 8192 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
minimum_object_size 0 KB
maximum_object_size 16 MB
cache_dir aufs /cache 10000 16 256
offline_mode off
cache_swap_low 90
cache_swap_high 95
cache allow all
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:    1440  20%  10080
refresh_pattern ^gopher: 1440  0%   1440
refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
refresh_pattern .        0     20%  4320
#Remote proxies
# Setup some default acls
# ACLs all, manager, localhost, and to_localhost are predefined.
acl allsrc src all
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 3128 3129 1025-65535
acl sslports port 443 563
acl purge method PURGE
acl connect method CONNECT
# Define protocols used for redirects
acl HTTP proto HTTP
acl HTTPS proto HTTPS
# SslBump Peek and Splice
# http://wiki.squid-cache.org/Features/SslPeekAndSplice
# http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit
# Match against the current step during ssl_bump evaluation [fast]
# Never matches and should not be used outside the ssl_bump context.
#
# At each SslBump step, Squid evaluates ssl_bump directives to find
# the next bumping action (e.g., peek or splice). Valid SslBump step
# values and the corresponding ssl_bump evaluation moments are:
# SslBump1: After getting TCP-level and HTTP CONNECT info.
# SslBump2: After getting TLS Client Hello info.
# SslBump3: After getting TLS Server Hello info.
# These ACLs exist even when 'SSL/MITM Mode' is set to 'Custom' so that
# they can be used there for custom configuration.
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports
# Always allow localhost connections
http_access allow localhost
request_body_max_size 0 KB
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_initial_bucket_level 100
delay_access 1 allow allsrc
# Reverse Proxy settings
# Custom options before auth
ssl_bump peek step1
ssl_bump bump all
# Setup allowed ACLs
# Allow local network(s) on interface(s)
http_access allow localnet
# Default block all to be sure
http_access deny allsrc

Other bonus questions are:

2. Do I need a **http** configuration (port 3128) when I'm only using https/ssl?
   **Yes**, apparently it's necessary.
3. **acl all src all** (the first command in the configuration) results in the following in syslog. It's only a warning, but how do I fix it?

   ---
   Aug 11 12:28:46 socks squid: WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
   Aug 11 12:28:46 socks squid: WARNING: You should probably remove '::/0' from the ACL named 'all'
   ---

4. If you find anything else that's wrong, please say so, and if possible, explain why (so we can learn).
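Purely as an illustration (the directive names are real, the values and paths are assumptions, not from the question): the descriptor ceiling is usually raised in two places, and the looping CONNECT entries above are the kind of thing seen when interception rules also catch Squid's own outgoing connections, which burns descriptors quickly.

# squid.conf: ask for a higher descriptor limit (the OS/systemd limit must allow it)
max_filedescriptors 16384

# systemd drop-in, e.g. /etc/systemd/system/squid.service.d/limits.conf
# [Service]
# LimitNOFILE=16384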
JoBe (417 rep)
Aug 11, 2021, 10:39 AM • Last activity: Feb 23, 2023, 09:17 AM
1 votes
1 answers
1542 views
Reverse proxy multiple backend web servers
I am trying to have multiple copies of the same web application appear under different paths of a single URL. Each application is a unique instance with its own login. All run with http. In this example John, Jane and Jerry all have their own instance on different servers. I don't know if the apps s...
I am trying to have multiple copies of the same web application appear under different paths of a single URL. Each application is a unique instance with its own login. All run over plain HTTP. In this example John, Jane and Jerry all have their own instance on different servers. I don't know yet whether the apps respect the HTTP Host header, so I would like to proxy the requests and rewrite the links in the HTML. I have tried using tinyproxy, but the website ends up redirecting. I have also tried squid but couldn't get it to work either. A visual representation of what I am trying to do:

```
Request: http://example.com:5002/john_server/
         --->  example.com (listening ports 5000, 5001, 5002)
                 +
                 |
                 +----+ john_server.local  (5000, 5001, 5002)
                 |
                 +----+ jane_server.local  (5000, 5001, 5002)
                 |
                 +----+ jerry_server.local (5000, 5001, 5002)
```

Does anyone know how to configure tinyproxy or squid to do this? Is it even possible? Thanks, Tim
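Not from the question, but as an illustration of how this fan-out is typically expressed in Squid: a minimal reverse-proxy ("accelerator") sketch for two of the backends, reusing the hostnames and port 5002 from the diagram; the ACL names and the exact path prefixes are assumptions:

```
# squid.conf sketch: listen as a reverse proxy ("accelerator") on port 5002
http_port 5002 accel vhost

# One parent per backend instance; originserver marks them as the real web servers
cache_peer john_server.local parent 5002 0 no-query originserver name=john
cache_peer jane_server.local parent 5002 0 no-query originserver name=jane

# Route requests to a backend by URL path prefix
acl john_path urlpath_regex ^/john_server/
acl jane_path urlpath_regex ^/jane_server/
cache_peer_access john allow john_path
cache_peer_access john deny all
cache_peer_access jane allow jane_path
cache_peer_access jane deny all

# Never go direct; only forward requests that matched a known prefix
never_direct allow all
http_access allow john_path
http_access allow jane_path
http_access deny all
```

Note that this only routes requests by path: it neither strips the prefix before passing the request on nor rewrites links inside the returned HTML, which the question also asks for.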
Tim (33 rep)
Jul 29, 2017, 05:55 PM • Last activity: Dec 26, 2022, 11:03 AM
4 votes
3 answers
26504 views
Setting up squid transparent proxy with SSL bumping on Debian 10
Debian 10 with squid working as a transparent proxy. Now want to add SSL. ``` # apt-get install openssl # mkdir -p /etc/squid/cert # cd /etc/squid/cert # openssl req -new -newkey rsa:4096 -sha256 -days 365 -nodes -x509 -keyout myCA.pem -out myCA.pem # openssl x509 -in myCA.pem -outform DER -out myCA...
Debian 10 with squid working as a transparent proxy. Now want to add SSL.
# apt-get install openssl
# mkdir -p /etc/squid/cert
# cd /etc/squid/cert
# openssl req -new -newkey rsa:4096 -sha256 -days 365 -nodes -x509 -keyout myCA.pem -out myCA.pem
# openssl x509 -in myCA.pem -outform DER -out myCA.der
# 

# iptables -t nat -A PREROUTING -i br0 -p tcp --dport 443 -j DNAT --to 192.168.1.51:3129
# iptables -t nat -A PREROUTING -i br0 -p tcp --dport 443 -j REDIRECT --to-port 3129
# iptables-save > /etc/iptables/rules.v4
**Question 1**: What I have read says that next I need to run
/usr/lib/squid/security_file_certgen -c -s /var/cache/squid/ssl_db -M 4MB
However, I cannot find security_file_certgen anywhere on my system. **Question 2**: If I proceed anyway and add this to squid.conf:
https_port 3129 intercept ssl-bump cert=/etc/squid/cert/myCA.pem generate-host-certificates=on
then squid fails to start:
2020/10/07 14:09:27| FATAL: Unknown https_port option 'ssl-bump'.
2020/10/07 14:09:27| FATAL: Bungled /etc/squid/squid.conf line 5: https_port 3129 int
2020/10/07 14:09:27| Squid Cache (Version 4.6): Terminated abnormally.
CPU Usage: 0.017 seconds = 0.017 user + 0.000 sys
Maximum Resident Size: 57792 KB
Page faults with physical i/o: 0
FATAL: Bungled /etc/squid/squid.conf line 5: https_port 3129 intercept ssl-bump cert=
squid.service: Control process exited, code=exited, status=1/FAILURE
squid.service: Failed with result 'exit-code'.
Failed to start Squid Web Proxy Server.
I notice that `squid -v` contains neither `--enable-ssl-crtd` nor `--with-openssl`, but I don't understand what to do about this. **Update** All of the guides on the Internet at the time of writing are obsolete, because [ssl-bump](https://wiki.squid-cache.org/Features/SslBump) has been replaced by [server-first](https://wiki.squid-cache.org/Features/BumpSslServerFirst), and server-first has in turn been replaced by [peek-n-splice](https://wiki.squid-cache.org/Features/SslPeekAndSplice). I was hoping that this, taken from https://serverfault.com/questions/743483/transparent-http-https-domain-filtering-proxy , might work:
https_port 3129 intercept ssl-bump
ssl_bump peek all
ssl_bump splice all
but no:
2020/10/08 09:57:49| FATAL: Unknown https_port option 'ssl-bump'.
2020/10/08 09:57:49| FATAL: Bungled /etc/squid/squid.conf line 6: https_port 3129 int
2020/10/08 09:57:49| Squid Cache (Version 4.6): Terminated abnormally.
CPU Usage: 0.017 seconds = 0.008 user + 0.008 sys
Maximum Resident Size: 57152 KB
Page faults with physical i/o: 0
FATAL: Bungled /etc/squid/squid.conf line 6: https_port 3129 intercept ssl-bump
squid.service: Control process exited, code=exited, status=1/FAILURE
squid.service: Failed with result 'exit-code'.
Failed to start Squid Web Proxy Server.
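As an aside (not from the original post), the packaged binary reports its configure options, so it is easy to confirm whether OpenSSL support is compiled in before editing squid.conf; a minimal check, assuming `squid` is on the PATH:

```sh
# Print Squid's build options one per line and keep the TLS-related ones;
# empty output means the package was built without --with-openssl / --enable-ssl-crtd.
squid -v | tr ' ' '\n' | grep -iE 'openssl|gnutls|ssl-crtd'
```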
**Update: compiling squid with SSL**
# cd ~
# mkdir squid-build
# cd squid-build
# apt-get install openssh-server net-tools
# apt-get install openssl devscripts build-essential fakeroot libdbi-perl libssl-dev  # libssl1.0-dev
# apt-get install dpkg-dev
# apt-get source squid
# apt-get build-dep squid
# cd squid-4.6/
# vi debian/rules
# dpkg-source --commit
In the debian/rules file, add the following flags to DEB_CONFIGURE_EXTRA_FLAGS:

--with-default-user=proxy \
--enable-ssl \
--enable-ssl-crtd \
--with-openssl \
--disable-ipv6
...and build...
# debuild -us -uc
...and install...
# cd ..
# pwd 
/root/squid-build
# mv squid3*.deb squid3.deb.NotIncluded
# dpkg -i *.deb
However, still no ssl_crtd. Has it been renamed to security_file_certgen? (https://bugzilla.redhat.com/show_bug.cgi?id=1397644) **Update: compiled squid** I got squid compiled and running for HTTP, but I don't know what to do for HTTPS, and apparently neither does anyone else. Is it impossible? It seems to be something to do with certificates and squid.conf.
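For completeness, and only as an untested sketch assuming a build configured with --with-openssl and --enable-ssl-crtd, with the helper installed as /usr/lib/squid/security_file_certgen (the current name of the old ssl_crtd): the HTTPS side generally comes down to initialising the certificate database once, pointing sslcrtd_program at the generator, and adding peek-and-splice rules on the intercept port, for example:

```
# Initialise the dynamic certificate database once (run as a shell command, not in squid.conf):
#   /usr/lib/squid/security_file_certgen -c -s /var/cache/squid/ssl_db -M 4MB

# squid.conf sketch for intercepted HTTPS, using the CA created earlier
https_port 3129 intercept ssl-bump cert=/etc/squid/cert/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

# Dynamic certificate generator (security_file_certgen replaces ssl_crtd)
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/cache/squid/ssl_db -M 4MB
sslcrtd_children 5

# Peek at the TLS Client Hello, then splice (tunnel untouched); use "bump" instead to decrypt
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all
```

These directives only parse on an SSL-enabled build, which is why the stock Debian 10 package rejects ssl-bump with the FATAL errors shown above.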
Richard Barraclough (550 rep)
Oct 7, 2020, 01:30 PM • Last activity: Dec 5, 2022, 06:47 PM