
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

18 votes
6 answers
27026 views
Is there parallel wget? Something like fping but only for downloading?
I've found only puf (Parallel URL fetcher) but I couldn't get it to read urls from a file; something like puf < urls.txt does not work either. The operating system installed on the server is Ubuntu.
Moonwalker (333 rep)
Apr 7, 2012, 04:18 PM • Last activity: Jul 24, 2025, 12:51 PM
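For reference, a minimal sketch (not from the original post) that fans the URL list out over several wget processes, assuming GNU xargs and one URL per line in urls.txt:

# run up to 4 wget processes at a time, one URL per invocation
xargs -n 1 -P 4 wget -q < urls.txt

aria2c -i urls.txt -j 4 is another common option when aria2 is installed.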
11 votes
2 answers
26800 views
How to run the HTTP request without using CURL
I have an ARM CPU based BusyBox v1.8.1 (Embedded Linux) box with limited binaries. How can I do an HTTP POST or PUT without using curl? I have wget available:

# wget
BusyBox v1.8.1 (2015-04-06 16:22:12 IDT) multi-call binary

Usage: wget [-c|--continue] [-s|--spider] [-q|--quiet] [-O|--output-document file]
        [--header 'header: value'] [-Y|--proxy on/off] [-P DIR] [-U|--user-agent agent] url

Retrieve files via HTTP or FTP

Options:
    -s    Spider mode - only check file existence
    -c    Continue retrieval of aborted transfer
    -q    Quiet
    -P    Set directory prefix to DIR
    -O    Save to filename ('-' for stdout)
    -U    Adjust 'User-Agent' field
    -Y    Use proxy ('on' or 'off')

CPU info...

# cat /proc/cpuinfo
Processor   : ARM926EJ-S rev 1 (v5l)
irom (533 rep)
Oct 5, 2015, 06:15 PM • Last activity: Jul 23, 2025, 08:18 AM
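Since the BusyBox wget above has no POST support, one workaround is to write the raw HTTP request by hand; a minimal sketch, assuming BusyBox provides nc and using placeholder host, path and body:

# hand-built HTTP POST over netcat
BODY='key=value'
{
  printf 'POST /api HTTP/1.1\r\n'
  printf 'Host: example.com\r\n'
  printf 'Content-Type: application/x-www-form-urlencoded\r\n'
  printf 'Content-Length: %s\r\n' "${#BODY}"
  printf 'Connection: close\r\n\r\n'
  printf '%s' "$BODY"
} | nc example.com 80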
0 votes
2 answers
2027 views
how to wget a file on a remote server connected using ssh?
I am connected to my university computer using SSH as follows:
sudo ssh server@A.B.C.D
However, once connected, I'm not able to download any file on the server using wget like this:
server_user@server:~$ wget http://example.com/somefile
If I do that, I get the following error:
--2020-07-28 10:14:46--  https://example.com/somefile 
Resolving example.com (example.com)... failed: Name or service not known.
wget: unable to resolve host address ‘example.com’
Also, I'm not able to install any package using pip on the remote server. How should I do these things? Thanks in advance :) PS: of course the address is not really example.com; I've just used it here for the sake of the example. It is a real HTTP address from which I can download the file locally using wget, but doing it on the server raises the error reported above.
Ruchit (101 rep)
Jul 28, 2020, 04:44 AM • Last activity: Jul 4, 2025, 04:02 AM
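The error above is a DNS failure on the remote machine rather than a wget problem. A hedged sketch of two things to try (the host name, file and A.B.C.D address are placeholders from the question):

# check whether the server can resolve names at all
nslookup example.com

# if DNS on the server is broken, download locally and copy the file over
wget http://example.com/somefile
scp somefile server@A.B.C.D:~/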
0 votes
1 answer
167 views
Is it possible to use wget or curl to download documents from the FCC ECFS site?
I'm trying to use the FCC's Electronic Comment Filing System (ECFS) to bulk download filings in individual proceedings. They have an API that will return every filing in a proceeding. It returns a URL for individual documents in the format: https://www.fcc.gov/ecfs/document/10809709027819/1 However, while this works in the browser, it only downloads a placeholder HTML file saying JavaScript is required when I use wget or curl. I tried examining the page in my browser but couldn't find anything like a source URL for the actual PDF. Is there a way to use wget or curl to get at the actual PDF?
Sam (1 rep)
Aug 13, 2023, 10:01 PM • Last activity: Jul 3, 2025, 05:33 PM
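A small sketch of something worth trying before anything heavier — a browser-like User-Agent plus redirect following — though it may not help if the document really is assembled by JavaScript:

curl -L -o filing.pdf \
  -A 'Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/115.0' \
  https://www.fcc.gov/ecfs/document/10809709027819/1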
2 votes
1 answer
3158 views
using wget to download all audio files (over 100,000 pages on wikia)
I am trying to download all audio files on Wookieepedia, the Star Wars wiki. My first thought is something like this:

wget -r -A -nd .mp3 .ogg http://starwars.wikia.com/wiki/

This should download all .mp3 and .ogg files from the wiki while preventing creation of a directory tree. However, when I run this in a terminal I get:

>bash: http://starwars.wikia.com/wiki/ : No such file or directory

The problem is that I can't use for loops since the URLs are unique to each wiki page. For example:

http://starwars.wikia.com/wiki/Retcon
http://starwars.wikia.com/wiki/C-3PX
http://starwars.wikia.com/wiki/Star_Wars_Legends

Is it possible to download URLs in this structure?

EDIT: This is the message I get back using the answer.

>--2016-02-10 16:21:26-- http://starwars.wikia.com/wiki/
>Resolving starwars.wikia.com (starwars.wikia.com)... 23.235.33.194, 23.235.37.194, 104.156.81.194, ...
>Connecting to starwars.wikia.com (starwars.wikia.com)|23.235.33.194|:80... connected.
>HTTP request sent, awaiting response... 301 Moved Permanently
>Location: http://starwars.wikia.com/wiki/Main_Page [following]
>--2016-02-10 16:21:26-- http://starwars.wikia.com/wiki/Main_Page
>Reusing existing connection to starwars.wikia.com:80.
>HTTP request sent, awaiting response... 200 OK
>Length: 569628 (556K) [text/html]
>Saving to: ‘index.html’
>
>100%[========================>] 569,628 217KB/s in 2.6s
>
>2016-02-10 16:21:29 (217 KB/s) - ‘index.html’ saved [569628/569628]
>
>Removing index.html since it should be rejected.
>
>FINISHED --2016-02-10 16:21:29--
>Total wall clock time: 2.7s
>Downloaded: 1 files, 556K in 2.6s (217 KB/s)

ls gives me nothing; there are no files in the working directory.
user147855
Feb 6, 2016, 05:40 PM • Last activity: Jul 3, 2025, 01:03 PM
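The bash error in the question comes from the malformed options: -A takes a comma-separated pattern list, so the bare .mp3 and .ogg arguments were parsed as extra URLs. A corrected sketch (the depth and wait settings are arbitrary here):

wget -r -l 2 -nd -w 1 -e robots=off \
     -A '*.mp3,*.ogg' \
     http://starwars.wikia.com/wiki/

Note that wget still fetches HTML pages to discover links and deletes them afterwards, which is exactly what the "Removing index.html since it should be rejected" line in the edit shows.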
74 votes
5 answers
76751 views
Resume failed download using Linux command line tool
How do I resume a partially downloaded file using a Linux command-line tool? I downloaded a large file partially, i.e. 400 MB out of 900 MB, due to a power interruption, but when I start downloading again it starts from scratch. How do I start from 400 MB itself?
amolveer (979 rep)
Nov 4, 2014, 10:29 AM • Last activity: Jun 24, 2025, 06:49 PM
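For reference, both standard tools can continue a partial download in place, provided the server supports range requests (the URL is a placeholder):

# wget: keep appending to the existing partial file
wget -c http://example.com/big.iso

# curl: -C - lets curl work out the resume offset itself
curl -C - -O http://example.com/big.iso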
1 vote
1 answer
5629 views
curl, wget do not return anything
I am trying this `curl -I zomato.com | head -n 1` and I am not getting any response:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:05:29 --:--:--     0

Is the site protected by firewalls? Even wget is not working on the site. Other sites like google.com return a 200 response as expected.
Alex Kasina (175 rep)
Nov 26, 2016, 10:44 PM • Last activity: Jun 2, 2025, 03:03 PM
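A diagnostic sketch with no site-specific assumptions, to see where the connection stalls rather than waiting for the default timeout:

# show DNS, TCP and TLS progress, and give up after 10 seconds
curl -v --connect-timeout 10 -I https://zomato.com

# compare basic reachability
ping -c 3 zomato.com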
6 votes
2 answers
1362 views
Deleting a file before it has finished downloading
Today I downloaded a large file using `wget`, but I accidentally deleted it before it had finished downloading. However, `wget` continued to download the file until it had finished, but there was no file saved at the end. I wonder what happened to the rest of the file that wget downloaded after I deleted it?
EmmaV (4359 rep)
May 20, 2025, 02:30 PM • Last activity: May 25, 2025, 08:04 PM
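While wget was still running, the deleted file's data was still reachable through the process's open file descriptor; a sketch of how it could have been recovered before wget exited (the descriptor number varies, 3 is a placeholder):

# deleted-but-open files show up as "(deleted)" here
ls -l /proc/"$(pgrep -x wget)"/fd

# copy the still-open file out, replacing 3 with the right descriptor
cp /proc/"$(pgrep -x wget)"/fd/3 recovered.iso

Once wget exits, that descriptor closes and the kernel frees the blocks, which is why nothing was left on disk.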
0 votes
1 answer
207 views
unable to update wget version
I want to update my wget, which is currently version 1.19, to 1.22 using the command:

curl -O https://ftp.gnu.org/gnu/wget/wget-1.21.tar.gz

but I get the following error:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to ftp.gnu.org:443

It doesn't seem to work at all.
Aviator (127 rep)
Oct 18, 2023, 01:53 PM • Last activity: May 23, 2025, 09:00 PM
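SSL_ERROR_SYSCALL during the handshake usually means the connection was cut at the TCP level (a proxy or firewall is a common culprit) rather than a certificate problem. A diagnostic sketch to separate a network issue from a curl issue:

# verbose output shows exactly where the handshake dies
curl -v -O https://ftp.gnu.org/gnu/wget/wget-1.21.tar.gz

# try the same URL with the existing wget 1.19, which may be linked against a different TLS library
wget https://ftp.gnu.org/gnu/wget/wget-1.21.tar.gz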
0 votes
1 answer
3898 views
How can I download the entire contents of a directory using wget, but excluding files with a particular suffix with the use of a wildcard?
I have a locally-hosted server, and am attempting to download all files to my remote Ubuntu-based machine via wget. I need to download all files from my HTTP server in a single directory, ensuring that everything apart from files with a suffix of "_test" is obtained - so in other words, I need to make sure that any file with that suffix isn't grabbed. I've tried the following command: ~~~ wget -r http://my-server-ip/data -R '*_test' ~~~ The above command results in wget fetching everything from the server - including files with the "_test" suffix. I realise that multiple examples of how to use this command correctly exist, but none appear to suit my use case. I should also note that I'm using the bash shell.
elliott94 (153 rep)
Dec 6, 2019, 10:59 AM • Last activity: May 23, 2025, 08:03 PM
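One hedged adjustment: wget matches a wildcard -R pattern against the whole file name, so if the real names carry an extension after the suffix (say data_test.csv), the pattern '*_test' never matches; widening it covers both cases:

wget -r -np -nH -R '*_test*' http://my-server-ip/data/

wget may still fetch rejected HTML files temporarily, since it needs them to discover further links, and deletes them afterwards.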
30 votes
9 answers
46563 views
command-line tool for a single download of a torrent (like wget or curl)
I'm interested in a single command that would download the contents of a torrent (and perhaps participate as a seed following the download, until I stop it). Usually, there is a torrent-client daemon which should be started separately beforehand, and a client to control (like transmission-remote). But I'm looking for the simplicity of wget or curl: give one command, get the result after a while.
imz -- Ivan Zakharyaschev (15862 rep)
May 5, 2015, 12:01 PM • Last activity: May 18, 2025, 09:24 PM
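For reference, two commonly used one-shot clients (the magnet link and file name are placeholders; package names vary by distribution):

# aria2 handles .torrent files and magnet links directly
aria2c 'magnet:?xt=urn:btih:...'

# transmission-cli downloads a single torrent in the foreground
transmission-cli file.torrent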
3 votes
1 answer
871 views
How do you fetch a large file over http in parallel?
**Question:** Since HTTP supports resuming at an offset, are there any tools (or existing options for commands like wget or curl) that will launch multiple threads to fetch the file in parallel, with multiple requests at different file offsets? This could help with performance if each socket is throttled separately. I could write a program to do this, but I'm wondering if the tooling already exists.

**Background:** Recently I wanted to download a large ISO, **but!** ... somewhere between the server and my internet provider the transfer rate was limited to 100 kilobit! However, I noticed that the first 5 to 10 seconds had great throughput, hundreds of megabits. So I wrote a small bash script to restart the download every few seconds:

while ! timeout 8 wget -c http://example.com/bigfile.iso ; do true; done

(I hope it was not my provider... but maybe it was. Someone please bring back net neutrality!)
KJ7LNW (525 rep)
Feb 3, 2023, 01:28 AM • Last activity: May 9, 2025, 08:03 PM
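For reference, aria2 does exactly this segmented fetching out of the box (the URL is a placeholder):

# open up to 8 connections to the server and split the file into 8 segments
aria2c -x 8 -s 8 http://example.com/bigfile.iso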
1 vote
1 answer
112 views
wget can't seem to download renderable result especially embedded images
I want to spider **plants.usda.gov** before it disappears due to budget cuts. However, every wget combination I try results in an ultimately blank result. I also checked on archive.org and there too, the entries for **plants.usda.gov** are all blank. One random example of many: - https://web.archive.org/web/20250203022537/https://plants.usda.gov/ I watched the network tab in Chrome to look for all possible servers that might be involved in fulfilling requests and added them to the --domains argument. Here is my example wget command:
wget --adjust-extension --continue --convert-links \
  --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:71.0) Gecko/20100101 Firefox/71.0' \
  --mirror --no-clobber --page-requisites --random-wait --recursive --rejected-log=./rejected.txt \
  --span-hosts --timestamping --verbose --wait 1  \
  -D=dap.digitalgov.gov,gis.sc.egov.usda.gov,js.arcgis.com,nrcsgeoservices.sc.egov.usda.gov,plants.sc.egov.usda.gov,plants.usda.gov,plantsservices.sc.egov.usda.gov,server.arcgisonline.com \
  -e robots=off \
  https://plants.usda.gov/plant-profile/ARHI3 
I can see it downloading fonts and JavaScript and such so it can render the page, but the resulting local copy renders as blank. Also, the actual image of the plant (a photo of some kind of wheat) does not appear anywhere in the downloaded files, even though it should be present in the result.
slashdottir (169 rep)
Apr 28, 2025, 05:51 PM • Last activity: May 8, 2025, 09:02 AM
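Since the site appears to build the page client-side, one hedged alternative is to let a headless browser produce the final DOM instead of wget (assuming Chromium is installed; the binary may be called chrome or google-chrome instead):

chromium --headless --dump-dom \
  'https://plants.usda.gov/plant-profile/ARHI3' > ARHI3.html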
1 vote
1 answer
5610 views
Installing node 14 on centos 7, how does it work?
The code for installing Node 14 on CentOS 7 is:

RUN curl -sL https://rpm.nodesource.com/setup_14.x | bash -
RUN yum -y install nodejs

How does it work? The first command downloads the package. Where does it get stored? How does the second command install nodejs from the downloaded package? Thanks. Also, what is the difference between putting bash and bash -?

**UPDATE:** If someone came to this question searching for how to install Node on CentOS 7, here is a code snippet that gets the exact version from the NodeJS website.
RUN wget https://nodejs.org/download/release/v14.17.0/node-v14.17.0-linux-x64.tar.gz  && \
    tar xf node-v14.17.0-linux-x64.tar.gz -C /opt/ && \
    rm node-v14.17.0-linux-x64.tar.gz
ENV PATH=/opt/node-v14.17.0-linux-x64/bin:$PATH
RUN npm config set cache /tmp --global
vijayst (111 rep)
Apr 21, 2022, 05:07 PM • Last activity: Apr 22, 2025, 05:02 PM
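A sketch of how to see what that first command actually does: save the script instead of piping it straight into bash. The NodeSource setup script typically writes a repository definition rather than storing a package, which is why the second command can then install nodejs from that repo:

# download the setup script to disk so it can be inspected before running
curl -sL https://rpm.nodesource.com/setup_14.x -o nodesource_setup.sh
less nodesource_setup.sh
sudo bash nodesource_setup.sh   # typically adds a repo file under /etc/yum.repos.d/
sudo yum -y install nodejs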
4 votes
1 answer
3329 views
How to tell wget to download files with url encoded names?
I'm trying to download an entire website using wget and this is the command I use:

wget --recursive --no-clobber --page-requisites --convert-links --domains example.com --no-parent http://www.example.com/en/

It's working just fine, but there is one problem. There are files (mainly images) whose names contain Chinese characters. After downloading, such a file is saved with a name like this:

> ??%96页主KV3.jpg

And it's addressed in the HTML page like this, therefore issuing a 404 error:

> �%2596页主KV3.jpg

I wonder how I can prevent this inconsistency?
2hamed (441 rep)
Dec 28, 2015, 10:19 AM • Last activity: Apr 18, 2025, 01:08 AM
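One relevant knob, for reference: wget escapes bytes it considers unsafe when building local file names, and --restrict-file-names=nocontrol relaxes that so UTF-8 names are written as-is (a sketch built on the command from the question):

wget --recursive --no-clobber --page-requisites --convert-links \
     --restrict-file-names=nocontrol \
     --domains example.com --no-parent http://www.example.com/en/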
14 votes
5 answers
74301 views
How do I use wget to download all links from my site and save to a text file?
I am trying to download all links from aligajani.com. There are 7 of them, excluding the domain facebook.com, which I want to ignore. I don't want to download from links that start with the facebook.com domain. Also, I want them saved in a .txt file, line by line, so there would be 7 lines. Here's what I've tried so far; this just downloads everything, which I don't want:

wget -r -l 1 http://aligajani.com
Ali Gajani (295 rep)
Feb 26, 2014, 06:35 AM • Last activity: Apr 12, 2025, 02:37 AM
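Since the goal is a list of links rather than downloaded pages, a sketch that fetches the one page and extracts hrefs, dropping facebook.com links (the regular expression is deliberately simple and will miss unusual markup):

wget -qO- http://aligajani.com \
  | grep -o 'href="[^"]*"' \
  | sed 's/^href="//; s/"$//' \
  | grep -v 'facebook\.com' > links.txt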
0 votes
2 answers
111 views
Downloading HTML files of a website with wget just give me a index.html
I am trying to download 1000 HTML pages from a specific site (https://isna.ir/) with wget in a recursive way (it is part of our course assignment), but it just downloads an index.html file. I tried a lot of options that wget provides, but none of them work; I also tried --reject="index.html". The command:

wget --recursive -nd -np --random-wait -U Googlebot -P ./isna_crawl https://isna.ir/
Amirali (5 rep)
Mar 12, 2025, 07:47 AM • Last activity: Mar 13, 2025, 12:34 AM
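A hedged first step is to find out why recursion stops after the first page; a robots.txt disallow is one common cause, and the debug log records the decision for each skipped link (the --debug option requires a wget build with debug support):

wget --recursive --level=2 -e robots=off --adjust-extension \
     --random-wait -U Googlebot -P ./isna_crawl --debug \
     https://isna.ir/ 2> wget-debug.log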
0 votes
1 answer
61 views
Wget download wrong content
I'm trying to download a specific sitemap.xml (https://www.irna.ir/sitemap/all/sitemap.xml). The problem is that when you load that sitemap.xml in a browser, a white page with a header saying "you are redirecting..." appears for a few seconds and then disappears. When I read the downloaded sitemap.xml, it was just an HTML file with the details of the redirect page, not the exact sitemap.xml that I wanted. **Part of the downloaded file (sitemap.xml):**
var _this = this;
**Used command:** wget https://www.irna.ir/sitemap/all/sitemap.xml **Part of what I want (sitemap.xml):**
https://www.irna.ir/sitemap/1403/12/22/sitemap.xml
https://www.irna.ir/sitemap/1403/12/21/sitemap.xml
I want to download the XML contents of the sitemap.xml, not the initial redirect page (both of which are served from the same URL).
Amirali (5 rep)
Mar 12, 2025, 03:17 PM • Last activity: Mar 12, 2025, 04:08 PM
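A diagnostic sketch to see what the server actually sends to a non-browser client (headers only, following any HTTP-level redirects); if the redirect happens in JavaScript, it will not show up here and wget alone cannot follow it:

curl -sIL https://www.irna.ir/sitemap/all/sitemap.xml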
1 vote
2 answers
456 views
How to Wget with Subset Condition + generate CHM/... e-book?
I want to generate a CHM (or other format) e-book by wgetting with a subset condition: download, recursively, the subset of the website that is within the HTML class .container.

Pseudocode

0. wget recursively all links of chapters # TODO returns only index.html

wget --random-wait -r -p -nd -e robots=off -A".html" \
  -U mozilla https://wwwnc.cdc.gov/travel/yellowbook/2018/table-of-contents

1. Contents in the current main page in .container of Fig. 1, and contents in the daughter pages of links.
2. Create a CHM e-book and/or other format.

Fig. 1: Inspection of the CDC Yellow Book .container element.

Output: just index.html
Expected output: e-book in CHM and/or another format

Wget proposals

1. TimS

wget -w5 --random-wait -r -nd -e robots=off -A".html" -U mozilla https://wwwnc.cdc.gov/travel/yellowbook/2018/table-of-contents

Output: same as with the first code.

2. With a rejection list

wget -w5 --random-wait -r -nd -e robots=off -A".html" \
  -U mozilla -R css https://wwwnc.cdc.gov/travel/yellowbook/2018/table-of-contents

Output: same as without rejection lists.

3. Another variant

wget -w5 --random-wait -r -nd -e robots=off -A".html" \
  -U mozilla https://wwwnc.cdc.gov/travel/yellowbook/2018/table-of-contents

Output: similar as before.

The tool www.html2pdf.it gives:

> Cannot get http://wwwnc.cdc.gov/travel/yellowbook/2016/table-of-contents : http status code 404

OS: Debian 8.7
Léo Léopold Hertz 준영 (7138 rep)
Apr 19, 2016, 04:01 PM • Last activity: Mar 7, 2025, 10:36 PM
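Once a mirror of the chapter pages succeeds, one hedged route to an e-book is EPUB via pandoc rather than CHM (assuming pandoc is installed; the directory path depends on what the mirror actually produces, and chapter ordering is left to the shell glob):

wget -w 5 --random-wait -r -l 2 -k -p -e robots=off -U mozilla \
     https://wwwnc.cdc.gov/travel/yellowbook/2018/table-of-contents
cd wwwnc.cdc.gov/travel/yellowbook/2018 && pandoc -o yellowbook.epub *.html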
0 votes
3 answers
616 views
I just installed Debian. I was trying to install ProtonVpn but I can't pull the deb file with wget
I just installed Debian. I was trying to install ProtonVPN but I can't pull the .deb file with wget. My system clock is up to date. I also tried adding different servers in the resolv.conf file, but the problem persists. The connection is established but the rest does not work. My system information:
uname -a
    Linux localhost 6.1.0-31-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) x86_64 GNU/Linux

    wget https://repo.protonvpn.com/debian/dists/stable/main/binary-all/protonvpn-stable-release_1.0.6_all.deb     
    --2025-02-13 19:07:36--  https://repo.protonvpn.com/debian/dists/stable/main/binary-all/protonvpn-stable-release_1.0.6_all.deb 
    Resolving repo.protonvpn.com (repo.protonvpn.com)... 104.26.4.35, 172.67.70.114, 104.26.5.35, ...
    Connecting to repo.protonvpn.com (repo.protonvpn.com)|104.26.4.35|:443... connected.
    GnuTLS: Error in the pull function.
    Unable to establish SSL connection.
I tried to reinstall my certificates but it didn't work. Moreover
add-apt-repository 'deb https://protonvpn.com/download/debian  stable main'
is not working either; here is the same error:
...
Ign:4 https://protonvpn.com/download/debian  stable InRelease
Err:1 https://protonvpn.com/download/debi  stable InRelease
  Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
Err:4 https://protonvpn.com/download/debian  stable InRelease
  Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
Reading package lists... Done
W: Failed to fetch https://protonvpn.com/download/debi/dists/stable/InRelease   Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
W: Failed to fetch https://protonvpn.com/download/debian/dists/stable/InRelease   Could not handshake: Error in the pull function. [IP: 185.159.159.140 443]
W: Some index files failed to download. They have been ignored, or old ones used instead.
and
$ openssl s_client -connect repo.protonvpn.com:443
 
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 324 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
Also, for the GnuTLS client:
~$ gnutls-cli -p 443 repo.protonvpn.com
 
Processed 140 CA certificate(s).
Resolving 'repo.protonvpn.com:443'...
Connecting to '104.26.5.35:443'...
*** Fatal error: Error in the pull function.
Here is the same error: "Error in the pull function." Is my ISP blocking ProtonVPN? I can complete a handshake for google.com without problems. Also check out this:
curl -L -o windscribe-cli.deb https://windscribe.com/download/linux  
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) OpenSSL/3.0.15: error:0A00010B:SSL routines::wrong version number
any@localhost:~/Downloads$ curl --tlsv1.2 -L -o windscribe-cli.deb https://windscribe.com/download/linux  
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (35) OpenSSL/3.0.15: error:0A00010B:SSL routines::wrong version number
any@localhost:~/Downloads$
The browser cannot open the page either; it just fails to connect. I don't have any problems installing any other package, for example:
$ sudo apt install tmux
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libb2-1 librsync2
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  tmux
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 455 kB of archives.
After this operation, 1,133 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian  bookworm/main amd64 tmux amd64 3.3a-3 [455 kB]
Fetched 455 kB in 1s (833 kB/s)
Selecting previously unselected package tmux.
(Reading database ... 358542 files and directories currently installed.)
Preparing to unpack .../archives/tmux_3.3a-3_amd64.deb ...
Unpacking tmux (3.3a-3) ...
Setting up tmux (3.3a-3) ...
Processing triggers for man-db (2.11.2-2) ...
emmet (1 rep)
Feb 13, 2025, 04:34 PM • Last activity: Mar 3, 2025, 11:07 AM
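The pattern above — handshakes failing against several unrelated VPN vendors while ordinary Debian mirrors work — is consistent with something on the path interfering with TLS. A diagnostic sketch using only standard options, to compare protocol versions and to retry over a different network path (the SOCKS proxy on port 1080 is an assumption, e.g. from ssh -D 1080 user@somehost on an unaffected machine):

# does a forced TLS 1.2 handshake with SNI fare any better?
openssl s_client -connect repo.protonvpn.com:443 \
        -servername repo.protonvpn.com -tls1_2 < /dev/null

# retry the download through the alternate route
curl --socks5-hostname 127.0.0.1:1080 -O \
     https://repo.protonvpn.com/debian/dists/stable/main/binary-all/protonvpn-stable-release_1.0.6_all.deb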