
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

-1 votes
1 answer
60 views
How to troubleshoot duckDNS from macOS / Safari
An NCP (nextcloudpi) server is hosted at downwind.duckdns.org. Safari returns: [screenshot]. Chrome returns: [screenshot]. A private Safari browser returns the expected NCP webpage. I am looking for a troubleshooting procedure to diagnose the issue: I am wondering if there is a certificate issue.
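Not from the question itself, but a hedged starting point for such a procedure: compare what the server actually serves with what the browsers have cached. The hostname below is the one from the question.

    # Show the certificate the server presents (SNI included):
    openssl s_client -connect downwind.duckdns.org:443 -servername downwind.duckdns.org </dev/null 2>/dev/null |
        openssl x509 -noout -subject -issuer -dates
    # Fetch the page with verbose TLS detail, bypassing any browser state:
    curl -v -o /dev/null https://downwind.duckdns.org/

If the private window works while the normal one doesn't, cached browser state (HSTS, cookies, an old redirect) is a likelier suspect than the certificate itself.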
gatorback (1522 rep)
Apr 6, 2025, 05:53 AM • Last activity: Jul 10, 2025, 05:39 AM
2 votes
1 answer
2068 views
Proxify a custom program with proxychains
After some research, I didn't find any solutions, so I am posting here. **My goal:** redirect the HTTPS traffic from my custom program to Burp Suite in order to analyse the server responses and debug my program. - I have Debian 4.4.3. - I have two network interfaces, eth0 and tap0; I work on tap0. - I have a PHP program that just sends an HTTPS request to a local server (tap0). My PHP code uses cURL to send the request (curl_init(), curl_setopt(), etc.). For debugging, I thought of sending my traffic via Burp Suite in order to see the HTTPS requests. So: - I launch Burp, listening on all interfaces on port 8080. - I configure /etc/proxychains.conf, and my ProxyList contains: socks4 127.0.0.1 8080, socks5 127.0.0.1 8080, socks4 XX.XX.XX.217 8080, socks5 XX.XX.XX.217 8080. And when I run proxychains (proxychains php myProgramme.php), my program is executed, but proxychains doesn't "proxify" the traffic, so Burp sees nothing... I think that's because I am on my local network? What do you think is the best solution to intercept and see the HTTPS traffic from my PHP program?
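A hedged alternative, not from the thread: Burp is an HTTP proxy, not a SOCKS server, and libcurl (which PHP's cURL extension uses) honors the standard proxy environment variables by default, so the LD_PRELOAD trick of proxychains isn't needed at all:

    # Burp listening on 127.0.0.1:8080 as an HTTP proxy:
    export http_proxy=http://127.0.0.1:8080
    export https_proxy=http://127.0.0.1:8080
    php myProgramme.php

For HTTPS the script must also trust Burp's CA certificate (or, strictly while debugging, set CURLOPT_SSL_VERIFYPEER to false); otherwise cURL rejects Burp's forged server certificate.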
Venux (21 rep)
Jul 17, 2016, 11:46 PM • Last activity: Jul 8, 2025, 04:04 AM
2 votes
1 answer
4891 views
Redirect main domain and subdomains from HTTP to HTTPS
I want to redirect these subdomains and the main domain from HTTP to HTTPS, and the sub-blogs too.

**Home directories**
- WWW (WordPress): /home1/placehq5/public_html/.htaccess
- My Somadome (PHP): /home1/placehq5/public_html/my
- Kiosk site (HTML): /home1/placehq5/public_html/kiosk

The webserver is Apache 2.2.31 (on Linux). Please correct this .htaccess file:

    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://somadome.com/$1 [L,R=301]
    RewriteCond %{SERVER_PORT} 443
    RewriteCond %{HTTP_HOST} ^www[.].+$
    RewriteRule ^(.*)$ https://somadome.com/$1 [L,R=301]

In the picture, the first column is how my URLs are right now, the second column is what I want after the .htaccess rules, and the third column is what I am getting after applying the rules: [screenshot]
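A hedged consolidation of the posted rules, assuming https://somadome.com (without www) is the canonical target: one block that redirects anything that is either on port 80 or on a www host, so the two rule sets cannot fight each other:

    RewriteEngine On
    # Port 80, or any www.* host: send to the canonical HTTPS URL.
    RewriteCond %{SERVER_PORT} ^80$ [OR]
    RewriteCond %{HTTP_HOST} ^www\. [NC]
    RewriteRule ^(.*)$ https://somadome.com/$1 [L,R=301]

The same file (or a copy) has to exist in each DocumentRoot listed above, since .htaccess files apply per directory tree.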
Navdeep.D2 (35 rep)
May 8, 2017, 07:36 PM • Last activity: Jun 30, 2025, 06:05 PM
6 votes
2 answers
4932 views
cURL does not recognize certificate
At our company they enforce a web proxy which breaks SSL connections and replaces the certificate by its own fake certificate. (To be precise, it uses a proxy cert which is signed by the company cert.) In order to download from an https URL I therefore have to make my system trust that fake certificate (or disable certificate checking). I therefore added both the proxy cert and the company cert to both /etc/ssl/certs/ca-bundle.crt and /etc/ssl/certs/ca-certificates.crt. (Both link to the same file.) Now downloading with wget works fine; however, downloading with curl does not work, because curl is not able to verify the certificate:

    * Rebuilt URL to: https://company.net/
    * Hostname was NOT found in DNS cache
    *   Trying 172.18.111.111...
    * Connected to 172.18.111.111 (172.18.111.111) port 3128 (#0)
    * Establish HTTP proxy tunnel to company.net:443
    > CONNECT company.net:443 HTTP/1.1
    > Host: company.net:443
    > User-Agent: curl/7.39.0
    > Proxy-Connection: Keep-Alive
    >
    < HTTP/1.1 200 Connection established
    <
    * Proxy replied OK to CONNECT request
    * successfully set certificate verify locations:
    *   CAfile: /etc/ssl/certs/ca-bundle.crt
        CApath: none
    * SSLv3, TLS handshake, Client hello (1):
    * SSLv3, TLS handshake, Server hello (2):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS alert, Server hello (2):
    * SSL certificate problem: self signed certificate in certificate chain
    * Closing connection 0
    curl: (60) SSL certificate problem: self signed certificate in certificate chain

What might be wrong? How can I debug further?
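A hedged pair of checks: confirm curl is actually reading the bundle that was edited, and verify the proxy's chain against it by hand (s_client's -proxy option needs OpenSSL 1.1.0 or newer):

    # Point curl explicitly at the edited bundle:
    curl --cacert /etc/ssl/certs/ca-bundle.crt https://company.net/
    # Verify the chain served through the proxy against the same bundle:
    openssl s_client -connect company.net:443 -proxy 172.18.111.111:3128 \
        -CAfile /etc/ssl/certs/ca-bundle.crt </dev/null

If --cacert succeeds while the bare command fails, this curl build defaults to a different CA path than wget; curl -v prints the CAfile it really uses.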
michas (21862 rep)
Mar 6, 2015, 12:15 PM • Last activity: Apr 23, 2025, 12:02 PM
0 votes
1 answer
4240 views
How do I make privoxy block filters work for HTTPS websites?
My filters work on HTTP websites. I have put this in my config file:

    enforce-blocks 1
    actionsfile blacklist.action

and blacklist.action contains:

    { +block }
    *facebook.com/tr/*

When I visit facebook.com/tr/, it loads the page like normal, unlike HTTP sites where I have put a filter. I have set both the HTTP and the HTTPS proxy in the proxy settings, so that's not the issue.
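A hedged explanation of the observed behaviour: for HTTPS, the proxy only sees the CONNECT request, i.e. the hostname and port, so a pattern containing a path such as *facebook.com/tr/* can never match. A host-only pattern works without decryption:

    # blacklist.action -- host-level block, effective for CONNECT too:
    { +block }
    .facebook.com

Matching the path of an HTTPS URL requires Privoxy's HTTPS inspection (available since 3.0.29, when built with TLS support), which decrypts the traffic with its own CA.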
DisplayName (12016 rep)
Sep 9, 2016, 10:19 AM • Last activity: Apr 21, 2025, 09:00 PM
1 vote
1 answer
2365 views
Apache / OpenSSL configuration keywords `SSLProtocol` vs. `SSLCipherSuite`
According to the [Apache docs](http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslciphersuite) I can configure the cipher suite with (among others) two different keywords, and examples on the Internet often use both (though not necessarily identical to the example below). What is the difference between SSLProtocol and SSLCipherSuite? Should I use either one, or both?

    SSLProtocol all -SSLv2 -SSLv3
    SSLCipherSuite ALL:!SSLv2:!SSLv3

Or is it better to list individual ciphers for SSLCipherSuite?

    SSLCipherSuite ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:AES128-SHA:RC4-SHA ...

Are the two keywords fundamentally different in what they configure? I have this feeling I am overlooking something essential here. The above configurations are not necessarily good practice; they're just an example to explain my doubt.
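A hedged summary plus example: the two directives configure different layers. SSLProtocol selects which protocol versions may be negotiated at all; SSLCipherSuite selects which cipher suites are offered within whatever version gets negotiated. A token like !SSLv3 in a cipher list only excludes the ciphers OpenSSL tags with that protocol; it does not switch the protocol off, which is why both directives are normally used together:

    # Protocol layer: forbid the broken protocol versions outright.
    SSLProtocol all -SSLv2 -SSLv3
    # Cipher layer: OpenSSL cipher-list syntax, weak suites excluded.
    SSLCipherSuite HIGH:!aNULL:!MD5:!RC4
    SSLHonorCipherOrder on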
jippie (14566 rep)
Mar 23, 2015, 05:39 PM • Last activity: Apr 21, 2025, 02:02 PM
1 vote
2 answers
2997 views
Curl Parallel requests using links source file
I have this script to go through a list of URLs and check the return codes using curl. The links file goes like this:

    https://link1/...
    https://link2/...
    https://link200/...
    (...)

The script:

    INDEX=0
    DIR="$(grep [WorkingDir file] | cut -d \" -f 2)"
    WORKDIR="${DIR}/base"
    ARQLINK="navbase.txt"

    for URL in $(cat $WORKDIR/$ARQLINK); do
        INDEX=$((INDEX + 1))
        HTTP_CODE=$(curl -m 5 -k -o /dev/null --silent --head --write-out '%{http_code}\n' $URL)
        if [ $HTTP_CODE -eq 200 ]; then
            printf "\n%.3d => OK! - $URL" $INDEX
        else
            printf "\n\n%.3d => FAIL! - $URL\n" $INDEX
        fi
    done

It takes a little while to run through every URL, so I was wondering how to speed up those curl requests. Maybe I could use some parallel curl requests, but using xargs inside a for loop while also printing a message doesn't seem the way to go. I was able to use xargs outside the script and it sort of works, although it doesn't show the correct HTTP code:

    cat navbase.txt | xargs -I % -P 10 curl -m 5 -k -o /dev/null --silent --head --write-out '%{http_code}\n' %

I couldn't find a way to insert that into the script. Any tips?
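A hedged way to keep the per-URL report while parallelising: let each curl print its own status line via --write-out, since with xargs -P the jobs share stdout and the shell loop's variables are out of reach:

    # Ten parallel checks; each prints "<code> <url>" on its own line.
    xargs -P 10 -I{} \
        curl -m 5 -k -o /dev/null -s --head -w '%{http_code} {}\n' {} \
        < "$WORKDIR/$ARQLINK"

Recent curl (7.66+) can also parallelise on its own with -Z / --parallel, which avoids xargs entirely.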
markfree (425 rep)
Jan 4, 2022, 02:05 PM • Last activity: Mar 19, 2025, 11:10 AM
3 votes
2 answers
2564 views
Is the Web server on repo.skype.com down?
For at least three days (today is 2024-06-01), repo.skype.com has been unusable even though the host is up:
# cat /etc/apt/sources.list.d/skype-stable.list
deb [arch=amd64] https://repo.skype.com/deb  stable main
# aptitude update
Hit http://security.debian.org/debian-security  oldoldstable/updates InRelease
Hit http://debian.mirror.lrz.de/debian  oldoldstable InRelease                                                                                             
Hit http://debian.mirror.lrz.de/debian  oldoldstable-updates InRelease
Hit http://debian.mirror.lrz.de/debian  oldstable InRelease
Hit http://security.debian.org/debian-security  oldstable-security InRelease
Hit http://security.debian.org/debian-security  stable-security InRelease
Hit http://debian.mirror.lrz.de/debian  oldstable-updates InRelease                                                                                        
Hit http://debian.mirror.lrz.de/debian  stable InRelease                                                                                                   
Hit http://debian.mirror.lrz.de/debian  stable-updates InRelease                                                 
Ign https://repo.vivaldi.com/stable/deb  stable InRelease
Hit https://repo.vivaldi.com/stable/deb  stable Release              
Hit https://updates.signal.org/desktop/apt  xenial InRelease          
Get: 1 https://packages.microsoft.com/ubuntu/22.04/mssql-server-2022  jammy InRelease [3,624 B]
Hit https://packages.microsoft.com/ubuntu/22.04/prod  jammy InRelease
Get: 2 https://packages.microsoft.com/repos/code  stable InRelease [3,590 B]
Ign https://repo.skype.com/deb  stable InRelease                                                                                                           
Ign https://repo.skype.com/deb  stable InRelease
Ign https://repo.skype.com/deb  stable InRelease
Err https://repo.skype.com/deb  stable InRelease
  504  Gateway Time-out [IP: 2a02:26f0:12d:5ac::1263 443]
Fetched 7,214 B in 1min 48s (66 B/s)
W: Failed to fetch https://repo.skype.com/deb/dists/stable/InRelease : 504  Gateway Time-out [IP: 2a02:26f0:12d:5ac::1263 443]
W: Some index files failed to download. They have been ignored, or old ones used instead.
                                         
# ping4 repo.skype.com
PING repo.skype.com (104.108.144.148) 56(84) bytes of data.
64 bytes from a104-108-144-148.deploy.static.akamaitechnologies.com (104.108.144.148): icmp_seq=1 ttl=54 time=22.4 ms
64 bytes from a104-108-144-148.deploy.static.akamaitechnologies.com (104.108.144.148): icmp_seq=2 ttl=54 time=22.6 ms
64 bytes from a104-108-144-148.deploy.static.akamaitechnologies.com (104.108.144.148): icmp_seq=3 ttl=54 time=22.7 ms
64 bytes from a104-108-144-148.deploy.static.akamaitechnologies.com (104.108.144.148): icmp_seq=4 ttl=54 time=22.4 ms
^C
--- repo.skype.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 22.361/22.520/22.684/0.132 ms
# ping6 repo.skype.com
PING repo.skype.com(g2a02-26f0-012d-05aa-0000-0000-0000-1263.deploy.static.akamaitechnologies.com (2a02:26f0:12d:5aa::1263)) 56 data bytes
64 bytes from g2a02-26f0-012d-05aa-0000-0000-0000-1263.deploy.static.akamaitechnologies.com (2a02:26f0:12d:5aa::1263): icmp_seq=1 ttl=59 time=22.0 ms
64 bytes from g2a02-26f0-012d-05aa-0000-0000-0000-1263.deploy.static.akamaitechnologies.com (2a02:26f0:12d:5aa::1263): icmp_seq=2 ttl=59 time=22.4 ms
64 bytes from g2a02-26f0-012d-05aa-0000-0000-0000-1263.deploy.static.akamaitechnologies.com (2a02:26f0:12d:5aa::1263): icmp_seq=3 ttl=59 time=22.6 ms
64 bytes from g2a02-26f0-012d-05aa-0000-0000-0000-1263.deploy.static.akamaitechnologies.com (2a02:26f0:12d:5aa::1263): icmp_seq=4 ttl=59 time=22.4 ms
^C
--- repo.skype.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 22.049/22.364/22.628/0.206 ms
Why is this happening (have Skype or its Debian repository been discontinued?), who is the culprit, and what can be done?
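A hedged check to narrow down who is at fault, since apt was failing on the repository's IPv6 address: fetch the exact file apt wants over each address family, and force IPv4 as a stopgap if only the v6 path times out:

    curl -4 -sI https://repo.skype.com/deb/dists/stable/InRelease | head -n1
    curl -6 -sI https://repo.skype.com/deb/dists/stable/InRelease | head -n1
    # Workaround while the CDN's IPv6 endpoint misbehaves:
    apt-get -o Acquire::ForceIPv4=true update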
AlMa1r (1 rep)
Jun 1, 2024, 01:34 PM • Last activity: Mar 6, 2025, 04:43 PM
0 votes
3 answers
616 views
I just installed Debian. I was trying to install ProtonVPN but I can't pull the .deb file with wget
I just installed Debian. I was trying to install ProtonVPN but I can't pull the .deb file with wget. My system clock is up to date. I also tried adding different servers in the resolv.conf file, but the problem persists. The connection is established but the rest does not work. My system information:
uname -a
    Linux localhost 6.1.0-31-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) x86_64 GNU/Linux

    wget https://repo.protonvpn.com/debian/dists/stable/main/binary-all/protonvpn-stable-release_1.0.6_all.deb     
    --2025-02-13 19:07:36--  https://repo.protonvpn.com/debian/dists/stable/main/binary-all/protonvpn-stable-release_1.0.6_all.deb 
    Resolving repo.protonvpn.com (repo.protonvpn.com)... 104.26.4.35, 172.67.70.114, 104.26.5.35, ...
    Connecting to repo.protonvpn.com (repo.protonvpn.com)|104.26.4.35|:443... connected.
    GnuTLS: Error in the pull function.
    Unable to establish SSL connection.
I tried to reinstall my certificates but it didn't work. Moreover,
add-apt-repository 'deb https://protonvpn.com/download/debian  stable main'
is not working either; it gives the same error:
...
Ign:4 https://protonvpn.com/download/debian  stable InRelease
Err:1 https://protonvpn.com/download/debi  stable InRelease
  Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
Err:4 https://protonvpn.com/download/debian  stable InRelease
  Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
Reading package lists... Done
W: Failed to fetch https://protonvpn.com/download/debi/dists/stable/InRelease   Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
W: Failed to fetch https://protonvpn.com/download/debian/dists/stable/InRelease   Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
W: Some index files failed to download. They have been ignored, or old ones used instead.
and
$ openssl s_client -connect repo.protonvpn.com:443
 
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 324 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
The same happens with the GnuTLS client:
~$ gnutls-cli -p 443 repo.protonvpn.com
 
Processed 140 CA certificate(s).
Resolving 'repo.protonvpn.com:443'...
Connecting to '104.26.5.35:443'...
*** Fatal error: Error in the pull function.
Here is the same error again: "Error in the pull function." Is my ISP blocking ProtonVPN? I can capture a handshake for google.com. And check this out:
curl -L -o windscribe-cli.deb https://windscribe.com/download/linux  
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) OpenSSL/3.0.15: error:0A00010B:SSL routines::wrong version number
any@localhost:~/Downloads$ curl --tlsv1.2 -L -o windscribe-cli.deb https://windscribe.com/download/linux  
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) OpenSSL/3.0.15: error:0A00010B:SSL routines::wrong version number
any@localhost:~/Downloads$
The browser cannot connect either. I don't have any problems installing any other package, for example:
$ sudo apt install tmux
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libb2-1 librsync2
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  tmux
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 455 kB of archives.
After this operation, 1,133 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian  bookworm/main amd64 tmux amd64 3.3a-3 [455 kB]
Fetched 455 kB in 1s (833 kB/s)
Selecting previously unselected package tmux.
(Reading database ... 358542 files and directories currently installed.)
Preparing to unpack .../archives/tmux_3.3a-3_amd64.deb ...
Unpacking tmux (3.3a-3) ...
Setting up tmux (3.3a-3) ...
Processing triggers for man-db (2.11.2-2) ...
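A hedged differential test, assuming OpenSSL 1.1.1 or newer: a connection reset right after the ClientHello (324 bytes written, 0 read) is consistent with middlebox filtering keyed on the TLS server name. Comparing a handshake with and without SNI can confirm that:

    # s_client sends SNI by default; -noservername omits it:
    openssl s_client -connect repo.protonvpn.com:443 -servername repo.protonvpn.com </dev/null
    openssl s_client -connect repo.protonvpn.com:443 -noservername </dev/null

If the second handshake gets further than the first, something on the path is blocking by hostname, which points at the ISP or a national filter rather than at the Debian install.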
emmet (1 rep)
Feb 13, 2025, 04:34 PM • Last activity: Mar 3, 2025, 11:07 AM
0 votes
1 answer
118 views
Tunnel all HTTPS traffic from a server through a remote host to bypass a firewall
I have a remote machine I ssh into, where I'm running code that needs to access a specific HTTPS URL (https://api.trustedservices.intel.com/sgx/certification/v4/qe/identity for example, or any other endpoint with the same API prefix). I can curl this URL and get a response on my local machine, but, presumably, the firewall that the remote machine is behind results in the same curl command timing out. I'm trying to figure out a way to bypass the firewall restriction, and the best way I can think of is to ssh-tunnel all HTTPS traffic (port 443) from the remote machine through my local one and back. Existing posts I've seen here say there are issues with certificates being for the server machine while the requests originate from the local machine. Any tips on the easiest way to do this? Maybe even IP-spoofing the requests that the code on the remote machine makes? I am not sure how to tunnel all traffic and have the certificate issue be okay.
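A hedged sketch of one way to do this without certificate trouble: a reverse dynamic (SOCKS) forward, available since OpenSSH 7.6. The SOCKS proxy resolves names and opens the TCP connections on the unrestricted local side, while TLS still terminates in the program on the remote host, so hostnames and certificates match normally. user@remote below is a placeholder:

    # Run from the local machine; remote port 1080 becomes a SOCKS proxy
    # whose traffic exits via this local machine:
    ssh -R 1080 user@remote

    # Then, on the remote machine:
    curl --socks5-hostname 127.0.0.1:1080 \
        https://api.trustedservices.intel.com/sgx/certification/v4/qe/identity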
Surya Bakshi (1 rep)
Feb 9, 2025, 07:09 PM • Last activity: Feb 9, 2025, 07:29 PM
1 vote
0 answers
132 views
http/https monitoring from terminal with top-like interface
I'm looking for a network monitor with a top-like interface, but I would like to be able to inspect all requests/responses, not just the IP addresses being connected to. I've found several tools that seem to sort of fit what I'm looking for, with iptraf seeming closest in UI and httpry closest in functionality. The problem with httpry (apart from the interface) is that it can only show HTTP requests, not HTTPS, which is intuitive, given what the "s" in HTTPS stands for. But, for example, in my browser I can monitor all requests that are made and their responses, regardless of whether they are HTTPS or not. Presumably this is because the browser owns the key used in the request, so it can decrypt the messages, while outside the browser one can't. But at least in theory, it seems like with root access on my machine I should be able to do this outside the browser too, for obvious reasons. Is the only reason other tools aren't able to do this that they don't know the location of the keys the browser is using, in the case of responses, and that they don't see the request prior to encryption and sending, in the case of requests? I.e., is it a pragmatic issue? My intuition is that I could set up a proxy on my own computer that provides a single key for encrypting requests made to the proxy from within my computer; it could then decrypt each request, make it available for me to read, and re-encrypt and pass it along to the actual target server, then do more or less the same in reverse. Does something like this exist that can be monitored from a TUI and isn't a glaring security vulnerability? If the tool is also able to monitor all types of traffic on the interface and present it in some readable/usable format, that is a plus: something like ssh bob@bobserver | ip 123.456.789.101 | port 22 | proc 12345 | user bob | 2024-01-01 12:24:46. Tools found so far: nethogs, jnettop, chitose, iftop, iptstate, nagios, iptraf, httpry.
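The proxy design described in the question exists: mitmproxy (not in the question's own list, offered here as a hedged suggestion) is a local man-in-the-middle proxy with its own CA and a curses TUI for browsing decrypted requests and responses:

    # Start the TUI proxy on port 8080; the CA lives under ~/.mitmproxy:
    mitmproxy --listen-port 8080
    # In another shell, send a client through it, trusting that CA:
    curl -x http://127.0.0.1:8080 \
        --cacert ~/.mitmproxy/mitmproxy-ca-cert.pem https://example.com/

Clients (or the system trust store) must trust the mitmproxy CA, which is exactly the "isn't a glaring security vulnerability" trade-off: anyone holding that CA key can read the traffic.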
David Anderson (11 rep)
Oct 17, 2024, 07:16 PM • Last activity: Oct 18, 2024, 02:18 PM
0 votes
1 answer
131 views
NGINX x-forwarded-proto not working
I have an ASP.NET app hosted in a Docker container, with an NGINX reverse proxy, hosted on a VPS. When running in production, the x-forwarded-proto header isn't being passed. From what I understand, this should return the x-forwarded-proto header:

    curl -I https://awaken.hanumaninstitute.com

The result is:
HTTP/1.1 200 OK
Server: nginx/1.22.1
Date: Sun, 01 Sep 2024 02:35:33 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
The NGINX server block is this:

    server {
        server_name   awaken.hanumaninstitute.com;
        # listen 8080;
        server_tokens off;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            proxy_pass         http://127.0.0.1:5009/;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection $connection_upgrade;
            proxy_set_header   Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
        }

        listen [::]:443 ssl; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/awaken.hanumaninstitute.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/awaken.hanumaninstitute.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
The ASP.NET app has this:
var app = builder.Build();
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
Server is Debian. What am I missing? Why isn't x-forwarded-proto working?
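Two hedged observations, neither stated in the post: curl -I shows the response headers the server sends back, while X-Forwarded-Proto is a request header NGINX adds towards the backend, so it would never appear in that output even when everything works. And ASP.NET ignores forwarded headers coming from proxies it does not trust; from inside a Docker container the proxy's source address is the bridge gateway, not loopback, so the default trust list rejects it. The usual fix:

    var options = new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    };
    // Default trust covers loopback only; from a container the proxy
    // arrives from the Docker bridge, so widen (or pin) the trust list:
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
    app.UseForwardedHeaders(options);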
Etienne Charland (101 rep)
Sep 1, 2024, 04:54 AM • Last activity: Sep 4, 2024, 02:22 PM
0 votes
1 answer
513 views
How do I tell curl to try other protocols?
I tried downloading an Aeroméxico news item using:

    curl "https://www.aeromexico.com/en-us/am-news/new-Rome-route" -s --trace-ascii -

But it reports:

    == Info: HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)

Which options do I need to supply to curl to have it try all available protocols? HTTP/2 doesn't seem to be working.
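curl does not fall back to another HTTP version when a stream dies mid-transfer; the version is chosen up front. A hedged workaround is simply to request HTTP/1.1 (the option exists since curl 7.33):

    curl --http1.1 -s --trace-ascii - "https://www.aeromexico.com/en-us/am-news/new-Rome-route"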
Janus Troelsen (1515 rep)
Jul 21, 2024, 02:28 PM • Last activity: Jul 21, 2024, 06:39 PM
3 votes
2 answers
3332 views
How to block keywords in https URL using squid proxy?
I want to block the keyword "import" in any URL, including HTTPS websites. Example:

    http://www.abc.com/import/dfdsf
    https://xyz.com/import/hdovh

How do I create an ACL to do this? Thanks
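A hedged sketch of the ACL, with the usual caveat: url_regex can only see the path of plain-HTTP URLs. For HTTPS, Squid sees just the CONNECT host (or the SNI when intercepting), so matching "import" in the path requires decrypting with SSL-Bump:

    # squid.conf
    acl blockedwords url_regex -i import
    http_access deny blockedwords
    # Without ssl_bump this matches HTTP URLs and any CONNECT host
    # that happens to contain "import", but never an HTTPS path.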
flake (31 rep)
Nov 20, 2015, 03:36 PM • Last activity: Jul 1, 2024, 11:08 PM
2 votes
0 answers
588 views
How to SNI filter with nftables v1.0.8?
E.g.: permit classroom.google.com and block www.youtube.com, which share an IP:

    dig www.youtube.com +short | grep "$(dig classroom.google.com +short)"
    142.251.32.78

Related: https://serverfault.com/questions/988309/filter-on-bytes-in-udp-payload-using-nftables#988614 , https://stackoverflow.com/questions/70760516/bpf-verifier-fai...
user1133275 (5723 rep)
Mar 4, 2024, 08:01 PM • Last activity: Jun 25, 2024, 05:28 PM
1 vote
1 answer
100 views
Why is my web server serving HTTPS content on port 80?
Apache webserver on Rocky Linux 9, with SSL certs obtained from LetsEncrypt. This is the config of a specific virtual host "myvhost", but the problem arises for all vhosts on my server.

/etc/httpd/conf.d/myvhost.conf:

    ServerName myvhost.example.org
    DocumentRoot "/var/www/html/myvhost"
    RewriteEngine on
    RewriteCond %{SERVER_NAME} =myvhost.example.org
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]

/etc/httpd/conf.d/myvhost-le-ssl.conf (autogenerated by LetsEncrypt):

    ServerName myvhost.example.org
    DocumentRoot "/var/www/html/myvhost"
    Include /etc/letsencrypt/options-ssl-apache.conf
    Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
    TraceEnable off
    SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem

The command curl -i http://myvhost.example.org returns:

    HTTP/1.1 400 Bad Request
    Date: Wed, 19 Jun 2024 12:39:10 GMT
    Server: Apache
    Content-Length: 362
    Connection: close
    Content-Type: text/html; charset=iso-8859-1

    400 Bad Request
    Bad Request
    Your browser sent a request that this server could not understand.
    Reason: You're speaking plain HTTP to an SSL-enabled server port.
    Instead use the HTTPS scheme to access this URL, please.

Why is it doing that? Amongst other things, HTTP error 400 prevents certbot renew from verifying the domain and renewing the certificate. It is worth noting that the exact same configuration on CentOS Stream 8 did not result in this problem.

EDIT: output of the command for f in $(grep -l -e SSLCertificate -e :80 /etc/httpd/conf.d/*.conf); do printf '\n== %s ==\n' "$f"; grep -hE 'SSLCertificate|VirtualHost|Server(Name|Alias)' "$f" | sed -e 's/#.*//' -e '/^[[:space:]]*$/d'; done | less:

    == /etc/httpd/conf.d/main-le-ssl.conf ==
    ServerName example.org
    SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem

    == /etc/httpd/conf.d/main.conf ==
    ServerName example.org

    == /etc/httpd/conf.d/myvhost-le-ssl.conf ==
    ServerName myvhost.example.org
    SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem

    == /etc/httpd/conf.d/myvhost.conf ==
    ServerName myvhost.example.org

    == /etc/httpd/conf.d/anothervhost-le-ssl.conf ==
    ServerName anothervhost.example.org
    SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem

    == /etc/httpd/conf.d/anothervhost.conf ==
    ServerName anothervhost.example.org

    == /etc/httpd/conf.d/ssl.conf ==
    SSLCertificateFile /etc/letsencrypt/live/example.org-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org-0001/privkey.pem
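A hedged first diagnostic: the grep output shows no VirtualHost or Listen lines at all, so the open question is which server actually answers on port 80. Apache can report its parsed vhost layout directly:

    # Which vhost owns *:80 and *:443 after parsing all config?
    httpd -S
    # And where each port is bound (an SSL-enabled default answering :80
    # would produce exactly this 400):
    grep -rn '^[[:space:]]*Listen' /etc/httpd/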
dr_ (32068 rep)
Jun 19, 2024, 12:51 PM • Last activity: Jun 19, 2024, 04:58 PM
0 votes
1 answer
122 views
Bot crawling getting 301/redirects instead of 404 so it's hiding from fail2ban. How is it getting 301 intead of 404?
I have fail2ban set up and it's working great for most scanning. It triggers off any 4xx in the nginx error log. However, note the following bot scan. Somehow THIS bot is triggering my server to return 301 instead of the 404 all the others get. How could it be doing this? Since it's a 301 and not a 4xx, it walked right past my fail2ban and never got banned. I'd like to detect and prevent this. Any suggestion on how this was done and how to prevent it?

    178.20.44.82 - - [30/May/2024:21:28:48 +0000] "GET / HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0"
    178.20.44.82 - - [30/May/2024:21:28:49 +0000] "GET / HTTP/1.1" 301 178 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0"
    178.20.44.82 - - [30/May/2024:21:28:49 +0000] "GET /.DS_Store HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Safari/605.1.15"
    178.20.44.82 - - [30/May/2024:21:28:49 +0000] "GET /.env HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:49 +0000] "POST /.env HTTP/1.1" 301 178 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0"
    178.20.44.82 - - [30/May/2024:21:28:50 +0000] "GET /.env.prod HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:50 +0000] "POST /.env.prod HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0"
    178.20.44.82 - - [30/May/2024:21:28:50 +0000] "GET /.env.production HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:51 +0000] "POST /.env.production HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:51 +0000] "GET /redmine/.env HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:51 +0000] "POST /redmine/.env HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:52 +0000] "GET /__tests__/test-become/.env HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:52 +0000] "POST /__tests__/test-become/.env HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:52 +0000] "GET / HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0"
    178.20.44.82 - - [30/May/2024:21:28:52 +0000] "POST / HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:53 +0000] "GET /debug/default/view?panel=config HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:53 +0000] "GET /debug/default/view.html HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:53 +0000] "GET /debug/default/view HTTP/1.1" 301 178 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0"
    178.20.44.82 - - [30/May/2024:21:28:54 +0000] "GET /frontend/web/debug/default/view HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:54 +0000] "GET /web/debug/default/view HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:54 +0000] "GET /sapi/debug/default/view HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:54 +0000] "GET /_profiler/phpinfo HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:55 +0000] "GET /app_dev.php/_profiler/phpinfo HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:55 +0000] "GET /phpinfo.php HTTP/1.1" 301 178 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:55 +0000] "GET /owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"
    178.20.44.82 - - [30/May/2024:21:28:56 +0000] "GET /info.php HTTP/1.1" 301 178 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0"
    178.20.44.82 - - [30/May/2024:21:28:56 +0000] "GET / HTTP/1.1" 301 178 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36"

My only 301 redirects are the ones certbot set up:

    server {
        if ($host = www.mydomainname.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        if ($host = mydomainname.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot
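The reason these probes get 301, on a hedged reading of the posted config: the certbot-generated redirect answers every request on the plain-HTTP side before nginx ever looks for the file, so /.env arriving over HTTP gets a 301, not a 404, and the fail2ban 4xx filter never fires. One option is to refuse obvious probe paths before the blanket redirect, so the filter has something to match:

    # Inside the server blocks, before the blanket redirect:
    location ~ /\.(env|git|DS_Store) {
        return 404;
    }

Alternatively, a fail2ban filter can match 301 responses to paths like /.env directly in the access log, since legitimate clients never request them.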
Chris (103 rep)
Jun 1, 2024, 03:55 PM • Last activity: Jun 3, 2024, 09:16 AM
3 votes
3 answers
1155 views
How to get HTTPS response from a Website using OpenBSD base tools?
Using tools like curl or wget it's easy to "get" the response to an HTTP GET request, but neither tool is installed by default on OpenBSD, and when writing a portable shell script it cannot be assumed that they are installed on someone else's machine. I want a "secure" way to get the server response (for example for wikipedia.org) onto my terminal using tools which **are** installed by default. Secure means the response should not travel in plaintext but be encrypted with current standards like HTTP/2 and TLS 1.3/TLS 1.2 (if supported by the server, of course) on the way to my machine.
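OpenBSD's base system can do this on its own: its ftp(1) speaks HTTP and HTTPS (via libtls, validating against /etc/ssl/cert.pem). A minimal sketch; it negotiates TLS 1.2/1.3 as the server allows, though HTTP/2 is not something the base tools speak:

    # -o - writes the fetched body to stdout:
    ftp -o - https://www.wikipedia.org/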
user505038
Feb 21, 2022, 11:11 PM • Last activity: May 5, 2024, 02:23 AM
13 votes
3 answers
39865 views
Download Windows ISO from Microsoft using wget or curl
**Objective: download the official Win10_1909 ISO into Linux directly via the command line**

Source: After selecting edition and language you get 64-bit links as follows, valid for 24 hours. This works fine via any GUI browser on any platform, but we need this on a Linux box.

- The Linux server is of course GUI-less (no intent to install a GUI)
- wget/curl results in a forbidden HTML file download
- Changing the user agent to Firefox in wget/curl, or adding a custom payload such as curl -d '{"t": 77e897e2-a32c-419c-8f18-54770dbb5a15,"e": 1583942596,"h": 595f691df8f7e4088d24b6cc37077d1a}' and requesting the .iso file, returns a forbidden page
- Tried Linux-based download managers like aria and axel; they fail as well (forbidden)

How to accomplish this via command line only?
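A hedged approach that often works with such session-bound links: generate the link in any browser on any machine, then reuse it within its 24-hour window from the Linux box with a full browser User-Agent string. The URL below is a placeholder for the copied link, not a real one:

    # <SESSION-URL>: the 24-hour link copied from the download page.
    wget -O Win10_1909.iso \
        --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0" \
        "<SESSION-URL>"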
Curi0usM3 (143 rep)
Mar 12, 2020, 06:47 AM • Last activity: Apr 19, 2024, 03:32 AM
0 votes
1 answer
180 views
NGINX HTTPS not redirecting properly
I followed [Certbot's instructions](https://certbot.eff.org/instructions?ws=nginx&os=debianbuster) to get an HTTPS certificate for NGINX on my Debian server for a domain, but the HTTPS is not redirecting properly. I got the following in /etc/nginx/conf.d/app.conf from Certbot's automatic generation:
server {
    server_name mnpd.khkm.dev www.mnpd.khkm.dev;
    # listen 8080;
    server_tokens off;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
    	# return 301 https://mnpd.khkm.dev$request_uri ;
    	proxy_pass http://mnpd.khkm.dev ;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mnpd.khkm.dev/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mnpd.khkm.dev/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server {
    if ($host = mnpd.khkm.dev) {
        return 301 https://$host$request_uri ;
    } # managed by Certbot


    listen 80;
    listen [::]:80;
    server_name mnpd.khkm.dev www.mnpd.khkm.dev;
    return 404; # managed by Certbot
}
In Chrome, when I go to [https://mnpd.khkm.dev/](https://mnpd.khkm.dev/), I get:
This page isn’t working.
mnpd.khkm.dev redirected you too many times.
Try deleting your cookies.
ERR_TOO_MANY_REDIRECTS
I found this [Stack Overflow answer](https://stackoverflow.com/a/51715058/8811872), looked at the "Network" tab in the web console, and saw that the page is constantly being redirected to https://mnpd.khkm.dev/. The NGINX configuration should be listening on port 443 for HTTPS, so why isn't it loading, and why is it constantly being redirected? (I expect the default NGINX page to be loaded.)
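A hedged reading of the loop: proxy_pass http://mnpd.khkm.dev sends each request back to the same public hostname on port 80, where the second server block answers with another 301 to HTTPS, and so on forever. Pointing proxy_pass at the actual backend breaks the cycle; the port below follows the commented-out "listen 8080" hint in the config and is an assumption:

    location / {
        # Proxy to the local backend, not back to the public hostname:
        proxy_pass http://127.0.0.1:8080;
    }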
Kevin (139 rep)
Apr 16, 2024, 05:36 PM • Last activity: Apr 16, 2024, 06:27 PM