Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
1
answers
2114
views
Apache server sometimes gets stuck for minutes, with requests backing up and waiting too long to be processed
I've got a production server with **Apache 2.4.38** on **Debian 10**, and sometimes the web server stops functioning properly and doesn't immediately respond to the HTTP requests it receives (requests to all virtual hosts on it become completely unresponsive, no matter what they reverse proxy to). It fixes itself immediately after a restart, or, after staying like this for a while (seconds or even minutes), it suddenly starts sending a LOT of HTTP responses all at once.
CPU and RAM usage seem to be fine, so it's definitely not a resource problem. I don't know what exactly is going on or why it's doing this. I've also changed the mpm_event.conf settings; they are currently set to this:
StartServers 2
ServerLimit 100
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 128
ThreadsPerChild 25
MaxRequestWorkers 400
MaxConnectionsPerChild 5000
There are some errors I've seen in the Apache error log though:
[Tue Mar 22 19:53:38.339703 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 29595 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339777 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 26190 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339825 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 27903 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339889 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 16907 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339933 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 26880 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340000 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 15384 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340041 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 24971 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340091 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 9780 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340130 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 26317 still did not exit, sending a SIGKILL
What settings can I change that would fix this issue?
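For diagnosing a stall like this, one minimal sketch (my own suggestion, not from the original post) is to enable mod_status and snapshot the event MPM scoreboard while the server is stuck; the /server-status location and the Debian paths below are assumed defaults:

```
# Debian layout assumed: enable mod_status and allow local access to the scoreboard
a2enmod status
cat > /etc/apache2/conf-available/local-status.conf <<'EOF'
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>
EOF
a2enconf local-status
systemctl reload apache2

# While the server is stuck, dump the machine-readable scoreboard; many workers in
# "W" (sending reply) or "G" (gracefully finishing) usually points at slow or stuck
# proxied backends rather than at the MPM limits themselves.
curl -s 'http://localhost/server-status?auto'
```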
BitMonster
(35 rep)
Mar 22, 2022, 06:32 PM
• Last activity: Aug 2, 2025, 01:01 AM
4
votes
2
answers
234
views
Is there anything like this web browser in the Debian/Ubuntu repositories?
It's been a while since my web-browsing has really suited me.
What I would really like is:
A JavaScript-enabled web browser with tab-based browsing that can be controlled simultaneously from a console and a GUI.
For example, I'd like to be able to...
1) open a bunch of tabs
2) go to the console and tell it something like 'copy the URL open in each tab and write it, along with the HTML, to a file, for each open tab'
In other words, I want to be able to browse with tabs at my leisure, and then write scripts that iterate over each tab. Does anything like that exist?
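For what it's worth, a hedged sketch of this kind of tab scripting with a stock GUI browser via Chromium's DevTools protocol; the port, the jq dependency and the output file name are my own assumptions, not an existing tool:

```
# start a normal GUI session with the DevTools endpoint exposed
# (newer Chromium versions may also require a non-default --user-data-dir)
chromium --remote-debugging-port=9222 &

# ...open tabs by hand, then from a console list every tab's URL
curl -s http://localhost:9222/json | jq -r '.[] | select(.type=="page") | .url'

# write one URL per line to a file; fetching each tab's HTML would use the
# per-tab webSocketDebuggerUrl from the same JSON listing
curl -s http://localhost:9222/json \
  | jq -r '.[] | select(.type=="page") | .url' > open-tabs.txt
```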
ixtmixilix
(13520 rep)
Jul 19, 2011, 12:43 PM
• Last activity: May 28, 2025, 11:21 PM
1
votes
0
answers
65
views
How to send files over WebSocket with websocat tool?
I brought a wss:// server up using websocat:
websocat -E -t -v --pkcs12-der=q.pkcs12 wss-listen:0.0.0.0:8443 mirror:
And on the client I am running this command to establish a secure WebSocket connection:
websocat wss://:8443
I can send text using this setup and whatever I send gets echoed back. But now I want to send files over the wss:// connection. How do I do that?
This is how I am sending files using ws://, and it's working fine.
Server:
websocat -s -v 0.0.0.0:8765 > received.txt
Client:
websocat -b ws://:8765 < testfile.txt
I tried something like this:
Server:
websocat -E -t -v --pkcs12-der=q.pkcs12 wss-listen:0.0.0.0:8443 writefile:received.txt
Client:
websocat -E -t wss://echo.websocket.org:8443 < testfile.txt
But I am getting this error:
websocat: WebSocketError: I/O failure
websocat: error running
Any help on how to do this over a wss:// connection would be much appreciated. Thanks.
If it's not achievable with websocat, what other tool can I use to bring up a wss:// server and send files to it?
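For reference, here is a hedged sketch of the wss:// equivalent of the working ws:// commands above; the -k/--insecure flag (for a self-signed certificate), the port and the file names are my assumptions, and the client has to point at your own server rather than echo.websocket.org:

```
# Server: TLS WebSocket listener that writes incoming binary messages to a file
websocat -E -b --pkcs12-der=q.pkcs12 wss-listen:0.0.0.0:8443 writefile:received.txt

# Client: stream the file over the secure connection in binary mode
# (-k / --insecure only if the server certificate is self-signed)
websocat -E -b -k wss://your.server.example:8443 < testfile.txt
```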
Aashish Aggarwal
(11 rep)
Apr 23, 2025, 02:36 AM
• Last activity: Apr 23, 2025, 07:26 AM
0
votes
2
answers
42
views
Web page login automation
I have a Linux server whose only purpose is to display a full-screen web page ("kiosk mode") with status information about other servers on the network. I would now like to add (perhaps on a second display) my Wyze cameras, which can all be seen on a nice webpage at the Wyze Live View site. The issue is that that site requires a login, so I don't see a way to open that page automatically at system startup, without manual interaction.
Back in the day, when Windows was my primary environment, there was a tool that acted as the "hands" for a browser and could handle this and other automation tasks. Perhaps a Chromium plugin could do it; browser plugins seem capable of this, but I'm not sure I'm willing to entrust login credentials to a tool like that.
Is there a way to hook into the login process, perhaps with something like snoopy php, still allowing "normal" web page functionality?
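One hedged workaround, assuming the Wyze page keeps you logged in via cookies stored in the browser profile: log in once interactively with a dedicated Chromium profile, then reuse that profile in kiosk mode at startup (the profile path and the URL placeholder below are assumptions):

```
# one-time interactive login, stored in a dedicated profile directory
chromium --user-data-dir=/var/lib/kiosk-profile 'https://WYZE-LIVE-VIEW-URL'

# at boot (e.g. from the kiosk user's autostart), reuse the saved session full screen
chromium --kiosk --user-data-dir=/var/lib/kiosk-profile 'https://WYZE-LIVE-VIEW-URL'
```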
Dennis
(101 rep)
Mar 15, 2025, 06:58 PM
• Last activity: Mar 16, 2025, 02:58 PM
47
votes
3
answers
27159
views
Is there an online/web interface to search and list apt-get packages and see summaries and recommendations?
I'm looking for something like an 'app store' or Google Play-style interface for apt-get packages. What I'd really like to do is select a category, like 'Music' or 'Internet', and see the list of available packages in that category with their summaries.
It'd be even better if the packages had ratings or reviews. Does anything like this exist?
Ehryk
(1892 rep)
Oct 30, 2013, 04:13 AM
• Last activity: Mar 14, 2025, 03:53 PM
0
votes
0
answers
52
views
I'm trying to load my site in the browser but I am getting "connection refused"
I have a VPS with a hosting provider, and on it I installed Java, Tomcat and MySQL for my web app. After installing the packages I deployed my app; if I open the app from the Tomcat manager, the browser loads it normally, but when I type my domain in the browser I get "This site can't be reached. itcmedbr.com refused to connect." I don't have real data to work with at the moment, and that's fine; what I want to do for now is get my login page to load so I can check that my domain works in the browser. I don't have any other error to report, only this one.
I don't have Apache and Nginx installed.
------------------------------------------------------------
- url: itcmedbr.com
- distribution running: CentOS 9
If I go to https://mxtoolbox.com/emailhealth/itcmedbr.com/ it shows me the error message
http itcmedbr.com Unable to connect to the remote server (http://itcmedbr.com)
but I have:
firewall-cmd --zone=public --list-services
cockpit dhcpv6-client http https mysql ssh
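A hedged guess, since the Tomcat manager works but the bare domain refuses connections: Tomcat is presumably listening only on its default port 8080, while nothing listens on port 80. A quick check and one possible fix with firewalld (the 8080 connector port is an assumption on my part):

```
# confirm what is listening: expect 8080 (Tomcat) but nothing on 80/443
ss -tlnp | grep -E ':(80|443|8080)\s'

# forward external port 80 to Tomcat's 8080 (default connector port assumed)
firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8080 --permanent
firewall-cmd --reload
```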
Cezar Apulchro
(1 rep)
Jan 13, 2025, 03:01 PM
• Last activity: Jan 13, 2025, 09:34 PM
68
votes
13
answers
43829
views
How to retrieve a webpage's title using the command-line?
I'm looking for a command-line program that prints the title of a webpage.
For instance:
title-fetcher 'https://www.youtube.com/watch?v=Dd7dQh8u4Hc'
...should give:
Why Are Bad Words Bad?
You give the URL, the program prints out the title.
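For illustration only, a minimal sketch of such a title-fetcher with curl and GNU sed (it assumes the <title> element sits on a single line and does not decode HTML entities):

```
# crude title fetcher: download the page, keep the text between <title> and </title>
title-fetcher() {
    curl -sL "$1" | sed -n 's/.*<title[^>]*>\(.*\)<\/title>.*/\1/Ip' | head -n 1
}

title-fetcher 'https://www.youtube.com/watch?v=Dd7dQh8u4Hc'
```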
Ufoguy
(1278 rep)
Dec 1, 2013, 11:12 AM
• Last activity: Jan 4, 2025, 10:12 PM
0
votes
3
answers
758
views
Regex: remove everything before the first underscore
I have the string grafana-stack_alloy and need everything after the first underscore, i.e. alloy. I need the result as a capture group $1. I tried (?:)?([^_]+)*$, but it failed. Can anyone help me solve this problem?
I am testing this:
rule {
action = "replace"
source_labels = [
"__meta_docker_container_label_com_docker_swarm_service_name",
]
regex = "^(?:;*)?([^;]+).*$" work but wrong
//regex = "[^_]+.$" ----> not work
//regex = "([^_]+)$" ----> not work
replacement = argument.namespace.value + "/$1"
target_label = "job"
}
This is for the Grafana Agent's relabel rules (prometheus.relabel), which use [Google's RE2](https://github.com/google/re2/wiki/Syntax) as the regex engine.
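A hedged sketch of what should work in RE2 (relabel regexes are fully anchored, so the pattern must cover the whole label value): let a negated character class consume everything up to the first underscore and capture the rest. Checked here only with a quick sed approximation, not against a running agent:

```
# RE2-compatible pattern for the relabel rule:
#   regex       = "[^_]*_(.*)"
#   replacement = "$1"           # -> "alloy" for "grafana-stack_alloy"

# quick sanity check of the same idea with POSIX ERE via sed -E
echo 'grafana-stack_alloy' | sed -E 's/^[^_]*_(.*)$/\1/'
# alloy
```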
Thanks in advance
user2931829
(3 rep)
Dec 23, 2024, 10:23 AM
• Last activity: Dec 24, 2024, 09:48 AM
0
votes
1
answers
166
views
Download full web page and save without a deep directory structure? Also, bypass paywall?
So, I want to be able to download a web page in a way similar to what https://archive.is does.
Using wget -p -E -k usually produces a decent result, but that result is somewhat hard to handle. For example, after wget -p -E -k https://news.sky.com/story/yazidi-woman-kidnapped-by-islamic-state-freed-from-gaza-after-decade-in-captivity-13227540 I got a directory named news.sky.com, and the page was available as news.sky.com/story/yazidi-woman-kidnapped-by-islamic-state-freed-from-gaza-after-decade-in-captivity-13227540.html, while the other files the page needs were scattered around that same news.sky.com directory.
I'd prefer to have something similar to how browsers can "save a page" - the page file in the current directory plus a "something_files" subdirectory where the necessities are. I understand I can kinda do that by moving the site directory structure into that files subdirectory and creating a redirect page next to it, but I'd prefer to do it properly if possible.
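One hedged approximation with plain wget options: -nd flattens the directory tree and -P puts everything into one directory (the directory name is my own choice, and unlike a browser "save page" the HTML file itself also ends up inside that directory):

```
# page + assets all land flat in ./sky-story_files, with links rewritten locally
wget -p -E -k -nd -P sky-story_files \
  'https://news.sky.com/story/yazidi-woman-kidnapped-by-islamic-state-freed-from-gaza-after-decade-in-captivity-13227540'
```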
There are also cases of paywalls that archive.is successfully bypasses but wget -p -E -k does not. For example, with https://www.nytimes.com/2014/10/28/magazine/theo-padnos-american-journalist-on-being-kidnapped-tortured-and-released-in-syria.html, archive.is produced a perfect paywall-less copy, while wget -p -E -k produced only the start of the article, hanging on "verifying access". I'd like to be able to do what archive.is does.
Advice on how to change these things would be much appreciated.
Mikhail Ramendik
(538 rep)
Oct 14, 2024, 02:02 PM
• Last activity: Nov 8, 2024, 04:59 PM
5
votes
8
answers
3021
views
WebGUI for Virtualization?
Are there any virtualization solutions that can also be accessed through a web GUI?
I mean, administering it from a terminal is fine, but we need a virtualization/web-GUI solution for "customers",
so that they could, for example, reboot their guest machine, reach it when it's "frozen" (BSOD/kernel panic), or maybe clone it, etc.
There is Xen with xen-shell, but that's not good enough because it's only available from a terminal.
Any solutions?
LanceBaynes
(41465 rep)
Dec 23, 2011, 05:14 PM
• Last activity: Oct 17, 2024, 08:29 AM
75
votes
9
answers
98096
views
how to download a file using just bash and nothing else (no curl, wget, perl, etc.)
I have a minimal headless *nix which **does not have** any command line utilities for downloading files (e.g. no curl, wget, etc). I only have bash.
How can I download a file?
Ideally, I would like a solution that would work across a wide range of *nix.
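The classic bash-only sketch uses the shell's /dev/tcp pseudo-device for a raw HTTP/1.0 GET; it handles plain-HTTP text responses only (no TLS, no redirects), and the host, port and path below are example values:

```
#!/bin/bash
# fetch http://example.com/ using only bash builtins and /dev/tcp
host=example.com port=80 path=/

exec 3<>"/dev/tcp/${host}/${port}"                            # open a TCP connection on fd 3
printf 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' "$path" "$host" >&3
while IFS= read -r line; do printf '%s\n' "$line"; done <&3   # headers + body to stdout
exec 3<&- 3>&-                                                # close both directions
```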
Chris Snow
(4296 rep)
Jul 22, 2013, 07:43 AM
• Last activity: Oct 7, 2024, 04:24 PM
3
votes
4
answers
3834
views
Command line tool to check when a URL was updated?
It would certainly be possible to whip together something in Python to query a URL to see when it was last modified, using the HTTP headers, but I wondered if there is an existing tool that can do that for me? I'd imagine something like:
% checkurl http://unix.stackexchange.com/questions/247445/
Fri Dec 4 16:59:28 EST 2015
or maybe:
% checkurl "+%Y%m%d" http://unix.stackexchange.com/questions/247445/
20151204
as a bell and/or whistle. I don't think that wget or curl have what I need, but I wouldn't be surprised to be proven wrong. Is there anything like this out there?
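For a rough sketch of this, curl's HEAD request already exposes the Last-Modified header, and date -d can reformat it; this only works when the server actually sends Last-Modified (many dynamic pages don't), and the argument order here (URL first, optional format second) is my own choice:

```
checkurl() {
    # print a URL's Last-Modified header, reformatted by date(1)
    local lm
    lm=$(curl -sIL "$1" | awk -F': ' 'tolower($1)=="last-modified"{print $2}' | tr -d '\r' | tail -n 1)
    [ -n "$lm" ] && date -d "$lm" "${2:-+%c}"
}

checkurl http://unix.stackexchange.com/questions/247445/ +%Y%m%d
```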
Scott Deerwester
(411 rep)
Dec 4, 2015, 10:02 PM
• Last activity: Jan 29, 2024, 06:55 PM
35
votes
6
answers
200688
views
How to monitor incoming http requests
How can I monitor incoming HTTP requests to port 80? I have set up web hosting on my local machine using DynDNS and Nginx. **I want to know how many requests are made to my server every day.**
Currently I'm using this command:
netstat -an | grep 80
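Since Nginx already fronts the site, its access log is usually the simplest daily counter; a hedged sketch assuming the stock log path and the default combined log time format, plus a live connection count as a bonus:

```
# number of requests logged today (default log path and time format assumed)
grep -c "$(date +%d/%b/%Y)" /var/log/nginx/access.log

# live: established connections on port 80, refreshed every second
watch -n1 'ss -Htn state established "( sport = :80 )" | wc -l'
```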
user7044
(741 rep)
Aug 26, 2011, 12:26 PM
• Last activity: Dec 27, 2023, 02:21 PM
0
votes
1
answers
364
views
Big Blue Button installation error: "Challenge failed for domain", thereafter "500 Internal Server Error -- nginx"
**Context:**
- I wanted to install Big Blue Button on an Ubuntu virtual machine via SSH;
- I correctly followed the official tutorial up to the "Install" section;
- I entered the following command on the terminal:
wget -qO- https://raw.githubusercontent.com/bigbluebutton/bbb-install/v2.7.x-release/bbb-install.sh | bash -s -- -w -v focal-270 -s -e -g -k
using real data in place of "here the domain name" and "here the email";
- I got **the installation error** detailed below;
- Accessing the real "here the domain name" by HTTP gives me "500 Internal Server Error -- nginx";
- It is not accessible by HTTPS: "ERR_CONNECTION_REFUSED".
**The installation error** in its full glory:
A instalar certbot (0.40.0-1ubuntu0.1) ...
Created symlink /etc/systemd/system/timers.target.wants/certbot.timer → /lib/systemd/system/certbot.timer.
A processar 'triggers' para man-db (2.9.1-1) ...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for
Using the webroot path /var/www/bigbluebutton-default/assets for all unmatched domains.
Waiting for verification...
Challenge failed for domain
http-01 challenge for
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain:
Type: caa
Detail: CAA record for prevents issuance
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
bbb-install: Let's Encrypt SSL request for did not succeed - exiting
**The Let's Encrypt log** in its full glory:
- https://sprunge.us/pQGlat
**Questions:**
1. since the problem appears to have started with the SSL certificate during installation, disabling/removing it could solve the "500 Internal Server Error"?
2. if yes, how to do it without uninstalling and reinstalling Big Blue Button? (I suppose uninstalling and reinstalling would be more traumatic due to leaving residues behind to create brand new errors)
3. if no, how to troubleshoot the "500 Internal Server Error", finding its cause and solution?
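Regarding question 3: the certbot output complains about a CAA DNS record that forbids issuance, so a hedged first step is to inspect the CAA records (the domain names below are placeholders for the one elided in the log); issuance needs either no CAA records at all or one that allows letsencrypt.org:

```
# CAA records on the BBB host name and on its parent zone (placeholder names)
dig +short CAA bbb.your-domain.example
dig +short CAA your-domain.example
# a permissive record would look like:  0 issue "letsencrypt.org"
```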
BsAxUbx5KoQDEpCAqSffwGy554PSah
(203 rep)
Nov 8, 2023, 12:57 PM
• Last activity: Nov 21, 2023, 11:02 AM
1
votes
0
answers
152
views
Is it possible to use webassembly in QNX
I'm trying to figure out whether I can port a WebAssembly-based web application to QNX, but it is hard to find information about which versions of browsers and web-engine libraries ship with which version of QNX and what features they support.
SDP 7.x (which I believe is used with QNX 7 and CAR 3?) seems to ship with a modernized browser based on the Blink engine (instead of the WebKit library used in QNX 6).
[It uses the V8 JavaScript engine](https://blackberry.qnx.com/content/dam/qnx/products/qnxcar/QNX_WebBrowser_ProductBrief_FINAL.pdf)
[which supports WebAssembly](https://v8.dev/)
So the browser itself should support WebAssembly, right?
I guess the WebAssembly code created with Emscripten should be able to be compiled on any platform and transferred to the QNX system, because I believe Emscripten won't run correctly on QNX? Or can I compile Emscripten with q++ on QNX and create the WebAssembly directly on the QNX system? I'm not sure I understood it correctly, but it seems like Emscripten should be able to compile and run on any system that can use LLVM, which QNX does.
Can anyone with a bit more QNX knowledge confirm if this might be possible or not?
mattsson
(113 rep)
Oct 12, 2023, 01:59 PM
1
votes
0
answers
113
views
Simple webpage analytics from CLI
I have a website that I put together using a website builder, as opposed to hosting it on a private server.
In simple terms, I'm wondering if there's a way to use terminal to keep an eye on how many people are visiting the website. Doesn't need to be anything fancy, just a simple count of the visitors to the domain.
Lee
(135 rep)
Jun 22, 2023, 12:46 AM
2
votes
3
answers
530
views
Is using cookies from a web-browser a sane rationale for desktop application development?
I am looking at making an application that would make OpenID authentication with desktop clients easy. The rationale is to steal the cookies from the web-browser, so as to avoid having to hard-code authentication to every possible OpenID provider.
Assuming the user has already logged on to the OpenID provider, the application clones the cookies from the default browser, and requests authentication to the desired service with the appropriate OpenID URL.
To make this application usable, I need to know which web browsers are most commonly used on Linux, ideally with statistical evidence. I assume that Firefox and Chromium are the two most popular at the moment.
*NB: the title of this question was edited in view of the emphasis by respondents on security and standards.*
neydroydrec
(3887 rep)
Jan 12, 2012, 12:54 PM
• Last activity: May 5, 2023, 01:53 AM
0
votes
1
answers
465
views
Website doesn't work under Linux, but works under Windows/Mac
I'm trying to buy a product on https://qinao.de/ with my (Arch) Linux laptop and different browsers (Chrome, Firefox), but I can't add products to the cart (and some images don't load).
It works fine on my mobile phone, and on my friends' laptops with Windows and Mac.
I've never seen anything like this. Can anybody on Linux confirm this, and perhaps decode the error messages in the developer console? Shouldn't whether a website works depend entirely on the browser and not the operating system?
Thank you!
anon
Apr 18, 2023, 04:22 PM
• Last activity: Apr 18, 2023, 05:05 PM
0
votes
0
answers
271
views
Download entire website directory structure, without downloading any file contents
I need to download the entire website tree structure with the directory and file names, but without downloading the files themselves - just the structure.
like this:
www.example.com
|
dir1
| file1.txt
file2.txt
dir2
| file3.txt
| file4.txt
When I use wget --mirror it downloads the files:
wget --mirror http://www.example.com
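A hedged sketch using wget's --spider mode, which crawls recursively without saving any file bodies; the visited URLs are then turned into an empty local tree (the log parsing and the zero-byte placeholder files are my own choices):

```
# crawl without downloading bodies, logging every visited URL
wget --spider -r -nv -o spider.log http://www.example.com

# collect the unique URLs and rebuild them as empty directories/files
grep -o 'http://www\.example\.com[^ ]*' spider.log | sort -u |
while IFS= read -r u; do
    p=${u#http://}                                      # strip the scheme
    case $p in
        */) mkdir -p "$p" ;;                            # directory URL
        *)  mkdir -p "$(dirname "$p")" && : > "$p" ;;   # zero-byte placeholder file
    esac
done
```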
tuytuy20
(115 rep)
Mar 27, 2023, 07:31 AM
• Last activity: Mar 27, 2023, 07:35 AM
5
votes
1
answers
3506
views
Should I install a custom webapp in /opt or /srv?
My understanding is that custom/non-distro software should be installed in /opt. However, in a Django deployment tutorial [[1]] I found a suggestion to install a Django webapp to /srv, which is described as containing site-specific data served by the system.
Should non-distro webapps be installed in /opt or /srv?
lofidevops
(3349 rep)
Mar 27, 2017, 11:52 AM
• Last activity: Mar 9, 2023, 06:06 PM
Showing page 1 of 20 total questions