Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0 votes · 1 answer · 15 views
InfluxQL query returns a "partial" answer over http. Can I curl the whole thing?
What’s the right way to pull a _complete_ answer to an InfluxQL query over http?
I’m using the [`acct_gather` plugin](https://slurm.schedmd.com/acct_gather.conf.html) for a slurm cluster. It sends resource usage data to an [influxdb v1](https://docs.influxdata.com/influxdb/v1/) database. So if I write
#SBATCH --profile=Task
in an sbatch file, it records things like memory, I/O, and CPU usage to the database.
But if I try to ask for that data as a json file, e.g.,...
jobid=12345
curl -G 'http://:/query' \
--data-urlencode "db=myslurmdatabase" \
--data-urlencode 'q=select "value" from /./ where "job"='"'$jobid'"
...then I get a **partial** response with only one type of measurement ("CPUFrequency"):
{
    "results": [
        {
            "statement_id": 0,
            "series": [
                {
                    "name": "CPUFrequency",
                    "columns": [
                        "time",
                        "value"
                    ],
                    "values": [
                        ...
                    ],
                    "partial": true
                }
            ]
        }
    ]
}
I think this happens for jobs that have run past a certain number of data points.
## What I've found
- In [this thread on github](https://github.com/grafana/grafana/issues/7380) somebody asked:
> So how does it work? Do you get a url with the second chunk or does the http response contain multiple json bodies? Is this compatible with the json decoders in browser?
People replied to the effect that modern browsers can handle it, but I don’t think they answered the question directly.
- [There’s a “chunked” parameter](https://docs.influxdata.com/influxdb/v1/tools/api/#query-http-endpoint) for the `/query` endpoint. The options are either `true` (in which case it chunks based on series or 10,000 data points) or a specific number of points (in which case it chunks based on that number). Chunking happens either way. But it’s not clear to me how to get the _next_ chunk.
- It looks like somebody has written [a third-party program](https://github.com/matthewdowney/influxdb-stream) that can stream the chunked results from a query. But is it possible with curl, or would I have to use something like this?
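For context, my working assumption (inferred from the API docs, so treat it as unverified) is that with `chunked=true` the server streams *all* chunks back in the same HTTP response, one JSON document per line, so curl does receive everything; the output just stops being a single JSON document. A sketch of what that stream looks like and how it splits (the two chunks below are fabricated):

```shell
# Fabricated two-chunk response body, mimicking what chunked=true is
# documented to return: one complete JSON document per chunk, newline
# separated, with "partial": true on all but the last chunk.
response='{"results":[{"series":[{"name":"CPUFrequency","partial":true}]}]}
{"results":[{"series":[{"name":"RSS"}]}]}'

# Each line is independently parseable, so counting lines counts chunks:
chunks=$(printf '%s\n' "$response" | grep -c '^{')
echo "$chunks"
```

In real use that would be the same `curl -G ... --data-urlencode "chunked=true"` call with something like `jq` consuming the document stream, but I haven’t confirmed this against a large job yet.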
wobtax
(1125 rep)
Aug 5, 2025, 08:39 PM
• Last activity: Aug 6, 2025, 03:59 PM
1 vote · 1 answer · 10584 views
curl 7.58 under proxy issue ssl wrong version
I just installed an Arch-based distribution, Antergos, then installed a few packages with `pacman`. Now, after a restart, I am getting SSL errors while trying to clone with git.
fatal: unable to access 'https://xxxx@bitbucket.org/xxx/yyyy.git/': error:1408F10B:SSL routines:ssl3_get_record:wrong version number
curl to any https URL doesn't work either.
curl https://google.com
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
curl looks up to date.
$ curl --version
curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.0g zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0
Release-Date: 2018-01-24
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
$ pacman -Q | egrep 'ssl|curl'
curl 7.58.0-1
openssl 1.1.0.g-1
openssl-1.0 1.0.2.n-1
python-pycurl 7.43.0.1-1
$ ldd $(which curl)
linux-vdso.so.1 (0x00007ffdccee9000)
libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007fe06a5a5000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fe06a387000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fe069fd0000)
libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007fe069dab000)
libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007fe069b8e000)
libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007fe069980000)
libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007fe069716000)
libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00007fe069299000)
libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007fe06904b000)
libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007fe068d63000)
libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007fe068b30000)
libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007fe06892c000)
libz.so.1 => /usr/lib/libz.so.1 (0x00007fe068715000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fe06aa4a000)
libunistring.so.2 => /usr/lib/libunistring.so.2 (0x00007fe068393000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fe06818f000)
libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007fe067f82000)
libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007fe067d7e000)
libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fe067b67000)
I am behind a proxy:
$ proxytunnel -p PROXY_IP:PROXY_PORT -d www.google.com:443 -a 7000
$ openssl s_client -connect localhost:7000
CONNECTED(00000003)
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify return:1
depth=1 C = US, O = Google Inc, CN = Google Internet Authority G2
verify return:1
depth=0 C = US, ST = California, L = Mountain View, O = Google Inc, CN = www.google.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
i:/C=US/O=Google Inc/CN=Google Internet Authority G2
1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2
i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIEdjCCA16gAwIBAgIINC+Y7yLd9OswDQYJKoZIhvcNAQELBQAwSTELMAkGA1UE
BhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl
cm5ldCBBdXRob3JpdHkgRzIwHhcNMTgwMjA3MjExMzI5WhcNMTgwNTAyMjExMTAw
WjBoMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN
TW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEXMBUGA1UEAwwOd3d3
Lmdvb2dsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC7lAOc
gsUECzoiJfpnAtq9qxAeTWBS8KYCd3ESvd7255YXW8FUiGTj9MYSSJ3OlYQvvU1I
NmnIXNU7BnhUBbY1kW4+GXc5RimwiIW5VsWftt1XOVZh5mR08DhYQjdQqI3IhK6r
FTS6/6BvFcjWMT/rVQv59XDaQLqWXSomEzOr1vDRXZSbAPr+YAGKUj+K0TjgZNW1
8xo8Lyp8kDjFxrWaThfwFMosbFw5HnnzpT1WSHfmXmF1mvvk4cJ+U2m3+K2pRki8
nNnWafLPdT408XoXrbWLVeEVSIQQH5z93uoj5lESal05pnOY5yYUJ+vmHdY7jOBh
sT9HaGzl3kD2J+1BAgMBAAGjggFBMIIBPTATBgNVHSUEDDAKBggrBgEFBQcDATAZ
BgNVHREEEjAQgg53d3cuZ29vZ2xlLmNvbTBoBggrBgEFBQcBAQRcMFowKwYIKwYB
BQUHMAKGH2h0dHA6Ly9wa2kuZ29vZ2xlLmNvbS9HSUFHMi5jcnQwKwYIKwYBBQUH
MAGGH2h0dHA6Ly9jbGllbnRzMS5nb29nbGUuY29tL29jc3AwHQYDVR0OBBYEFNGB
jzGWH9WkzeHj88QOo3gBTBs+MAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAUSt0G
Fhu89mi1dvWBtrtiGrpagS8wIQYDVR0gBBowGDAMBgorBgEEAdZ5AgUBMAgGBmeB
DAECAjAwBgNVHR8EKTAnMCWgI6Ahhh9odHRwOi8vcGtpLmdvb2dsZS5jb20vR0lB
RzIuY3JsMA0GCSqGSIb3DQEBCwUAA4IBAQBxOxsCFg7RIa0zVDI0N9rTNaPopqX9
yrIlK1u+C2ohrg5iF5XlTEzTuH43D/J0Lz550D9Cft4s6lWaNKpVDhNivEy2nzK5
ekuQKYtoQlIyfUnD5GnGZyr3m2AcMFnAAhlXVbyiJk0VNLDGCMVBaOuL/yT8X5dQ
j8MrKSvZRaUt2oixE7fKGNv5nhs0wuHu1TEU/8R5UMxbJs8knMZsRcfsvzjXpEHC
guA54xPnLFiU0QTw4GIFi5nDvfR5cF2UAJZNIF4o4sr4DB8+X7DWtBmMNHuR4Cpn
HEdlVzOA7BAGx8yO6AddwJo8AlxviCaPol1xPB8uJCGh/U0/7XhtR93S
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
issuer=/C=US/O=Google Inc/CN=Google Internet Authority G2
---
No client certificate CA names sent
Peer signing digest: SHA256
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 3790 bytes and written 261 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-CHACHA20-POLY1305
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-CHACHA20-POLY1305
Session-ID: BEE4D8162570B4AB0C8121DEC5756B6DC063DB3E7321BB58FD12D566482AD99A
Session-ID-ctx:
Master-Key: B050C78AAC1A0DF5063263DDCD3437CD3A4029E7D5431E236936D2D88AAAD2555A18D92318C9E2E31A550E339D4C26A8
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 100800 (seconds)
TLS session ticket:
0000 - 00 41 04 37 20 26 a1 bc-2b d0 86 8c 6b a5 74 ef .A.7 &..+...k.t.
0010 - 5c 82 0e d3 ec f7 97 0f-a9 9c cb e8 69 a8 0d 67 \...........i..g
0020 - 13 10 87 ec 22 da 60 d3-9b 98 f2 a4 ce 93 95 1c ....".`.........
0030 - 8f fa 71 57 b9 d9 9b 9f-14 9e 37 95 e5 70 e8 70 ..qW......7..p.p
0040 - 4b f5 ff c4 79 b6 f8 9c-32 f2 2a 13 81 1c 5b 9c K...y...2.*...[.
0050 - f3 52 26 df e6 8c db bd-23 c9 24 3e 46 8c 99 9a .R&.....#.$>F...
0060 - 13 53 69 5e 5d 2c c1 0f-e4 6d de df a9 33 af d9 .Si^],...m...3..
0070 - 1f 89 e7 c1 d9 8a d1 05-1a 88 c2 27 e2 0a 56 0f ...........'..V.
0080 - 40 ec 5c ed a3 ca f4 1e-f8 83 85 3b 7e 22 7d f5 @.\........;~"}.
0090 - b4 b7 96 a5 ca 27 4b 40-61 88 9d 58 d3 d6 e9 e7 .....'K@a..X....
00a0 - 1f 72 7c bf 25 24 f6 ab-83 a1 90 ae 97 92 d8 40 .r|.%$.........@
00b0 - 14 3b 5d 07 cd 5a 79 bc-eb 6b ae 66 f1 42 0c 11 .;]..Zy..k.f.B..
00c0 - a5 7e 68 f9 c1 51 6f 3d-7e f9 28 79 2a 32 d5 ea .~h..Qo=~.(y*2..
00d0 - 90 4f ee 2c 84 ac 66 0b-8d dc .O.,..f...
Start Time: 1519286347
Timeout : 7200 (sec)
Verify return code: 0 (ok)
Extended master secret: yes
---
read:errno=0
What is the solution?
**Update**
Checking whether this is really a curl issue: if I turn the proxy off and connect directly, curl over https works. If I set any other proxy server IP and port from https://free-proxy-list.net/ and then try to connect through that proxy with curl, I get the same error. So either this curl version has a bug, or a great many proxy servers are wrongly configured.
**Update**
I think the issue is related to the Deepin DE. I switched from the Deepin Desktop Environment to standard GNOME and curl started working fine. Possibly this is a bug in Deepin's Network Settings, although it sets the environment variables correctly.
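One more data point that might be relevant (this is a guess about the cause, not something I have verified on the Deepin box): curl gained HTTPS-proxy support in 7.52, so if the environment sets `https_proxy` with an `https://` scheme, curl attempts a TLS handshake with the proxy itself, and a plain-HTTP proxy answering that handshake produces exactly this "wrong version number" error. The scheme to check for (proxy address below is a placeholder):

```shell
# curl >= 7.52 honours an https:// scheme in https_proxy by speaking
# TLS *to the proxy*; a plain-HTTP proxy on port 3128 then returns
# bytes that OpenSSL rejects as "wrong version number".
# Placeholder proxy address -- substitute the real one.
export https_proxy="http://proxyuser:proxypass@proxy.example.com:3128"

# The scheme should be http even though the *target* URLs are https:
scheme=${https_proxy%%://*}
echo "$scheme"
```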
Neel Basu
(321 rep)
Feb 21, 2018, 05:06 PM
• Last activity: Aug 5, 2025, 08:02 PM
5 votes · 2 answers · 4397 views
curl not able to write to /tmp directory owned by user
I tried running the script as instructed in https://docs.docker.com/engine/security/rootless/:
$ curl -fsSL https://get.docker.com/rootless | sh
But the script crashed in the following line:
curl -L -o docker.tgz "$STATIC_RELEASE_URL"
With the message:
Warning: Failed to create the file docker.tgz: Permission denied
curl: (23) Failure writing output to destination
I narrowed down the problem to `curl` trying to write to the temporary folder created by `mktemp -d`, but I don't understand why it fails.
Some context:
$ whoami
thiago
$ uname -a
Linux thiago-acer 5.8.0-55-generic #62~20.04.1-Ubuntu SMP Wed Jun 2 08:55:04 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ mktemp -d
/tmp/tmp.U1nPTN5dlS
$ cd /tmp/tmp.U1nPTN5dlS
$ ls -la
total 8
drwx------ 2 thiago thiago 4096 Jun 17 18:20 .
drwxrwxrwt 25 root root 4096 Jun 17 18:20 ..
After running the commands above, I tried:
# this fails with the same message as above
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz -O
# this works just fine
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz -o - > docker-20.10.7.tgz
# this also works
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.7.tgz
The `curl -O` command also works if I try it in some other folder, like my home folder.
Any help is appreciated.
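A hunch worth checking (I can't confirm this is the cause on your machine): on Ubuntu 20.04 a snap-packaged curl is confined and gets its own private /tmp, so curl's own open() of a file under the real /tmp fails even though the directory is yours. That would explain why `-o - > file` works (the unconfined shell opens the file) while `-O` fails, and why the deb-packaged wget is fine. A quick way to check which curl you have:

```shell
# A snap-confined curl sees a private /tmp, so curl's own open() of
# /tmp/tmp.XXXX/docker.tgz fails while a shell redirection succeeds.
curl_path=$(command -v curl || echo "curl-not-installed")
echo "$curl_path"
case "$curl_path" in
  /snap/*) verdict="snap curl: confined, cannot write into the real /tmp" ;;
  *)       verdict="not a snap curl (or not installed): /tmp is not the issue" ;;
esac
echo "$verdict"
```

If it is the snap build, replacing it with the distro package would avoid the confinement.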
Thiago Barcala
(151 rep)
Jun 17, 2021, 04:26 PM
• Last activity: Aug 2, 2025, 03:05 PM
0 votes · 1 answer · 1891 views
Including cURL in makefile
I'm using curl in my code and building through a Makefile, but when running the "make" command I get an error like "curl/curl.h: No such file or directory". Below is my Makefile content.
CXX = /home/directory/Documents/xyz/l4t-gcc/bin/aarch64-buildroot-linux-gnu-cc
path = /home/directory/Documents/xyz/l4t-gcc/bin/
CFLAGS = -Wall
#INCLUDE = -I/usr/local/include -I/usr/include -Iinclude
#LDFLAGS = -L/usr/local/lib -I/usr/lib
LDLIBS = -lcurl
SOURCES = src/sms_wrapper.c src/twilio.c
OUT = bin/sms_wrapper
all: build
build: $(SOURCES)
$(CXX) -o $(OUT) $(CFLAGS) $(SOURCES) $(LDLIBS)
clean:
rm -rf bin/sms_wrapper
I installed curl and added everything to the Makefile that is needed for the curl library. Does anyone have any suggestions or ideas for resolving this?
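Two things seem worth checking here (the sysroot path below is a placeholder, not taken from the question): since the toolchain is a cross-compiler, the host's `/usr/include/curl` would be the wrong architecture anyway, so the headers must come from the toolchain's sysroot. A sketch of locating the right flags:

```shell
# pkg-config reports the right flags for a *native* build -- but only
# if the libcurl development package (headers included) is installed.
# For the cross build, headers must come from the toolchain's sysroot,
# not the host's /usr/include ("<sysroot>" is a placeholder path):
if flags=$(pkg-config --cflags --libs libcurl 2>/dev/null); then
  result="native flags: $flags"
else
  result="no libcurl.pc found; for the cross build add: CFLAGS += -I<sysroot>/usr/include, LDFLAGS += -L<sysroot>/usr/lib"
fi
echo "$result"
```

Separately, the commented-out INCLUDE and LDFLAGS variables are never referenced in the `$(CXX)` build rule, so even uncommented they would change nothing until added to that line.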
nima
(1 rep)
Feb 13, 2023, 10:21 AM
• Last activity: Jul 29, 2025, 02:02 PM
2 votes · 1 answer · 3034 views
how to make cURL send AUTH command for FTPS?
This has been causing me a headache for a long time now.
Situation:
I have a virtual machine that sits behind a proxy that shields it off from the internet. The proxy seems to work fine.
I want to use cURL to transfer (-T) a ZIP file to an FTP server on the internet. The FTP server requires FTPS, i.e. SSL/TLS.
From my Windows machine I can use the same proxy (and the same proxy credentials) within TotalCommander to successfully establish an FTPS session to the FTP server I want to use.
So I think the FTP server as well as the proxy server are configured okay for the purpose.
What I try:
Basically, I try variations of the following command line:
curl -vvv --ssl -T /path/to/my/file.zip ftp://my.ftpserver.com:21 --user USER:PASSWORD -x https://proxyuser:proxypass@local.proxyserver.mycompany.com:3128
I see in the verbose output that cURL can authenticate against the proxy and tries to connect to the FTP server. However, it keeps getting an error from Squid (the proxy):
Squid sent the following FTP command:
USER myftp user
The server responded with:
You must issue the AUTH command to change to an encrypted session before you can attempt to login.
550-This server requires encryption
When I change the destination **to use ftps:// instead of ftp://**, it keeps telling me
curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received.
When I look into the log of the TotalCommander FTP session, I see that it authenticates against the proxy, then connects to the FTP server, and then changes to secure mode for authentication.
220-Welcome to Company-FTP
220 Company-FTP Server ready!
AUTH TLS
234 Changing to secure mode...
(cert stuff)
USER myftpuser
331 Username OK. Need password.
PASS **********
230 Password oK. Connected. logged in
....
What could I possibly be doing wrong here?
I researched this a bit and found a post on the curl mailing list saying that the "--tlsv1" option should send "AUTH TLS" before "AUTH SSL". However, I do not think that happens, because even if I use the --tlsv1 switch, Squid comes back with the same error message saying that it sent "USER myftpuser" and got that 550 error back.
Is there a way to force cURL to send this "AUTH TLS" that TotalCommander seems to send (and then authenticates successfully)?
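My current suspicion (unverified) is that with `-x` and an `ftp://` URL, curl doesn't speak FTP at all: it issues a plain HTTP request and lets Squid do the FTP legwork, which is why `--ssl` never gets a chance to send AUTH TLS and why Squid's own USER command hits the 550. Forcing a CONNECT tunnel should make curl drive the FTP session itself; the variant I would try (hosts and credentials are the placeholders from the question):

```shell
# -p / --proxytunnel: CONNECT through the HTTP proxy instead of letting
# Squid translate FTP; --ssl-reqd: refuse to continue without AUTH TLS.
cmd="curl -v -p --ssl-reqd -T /path/to/my/file.zip \
  ftp://my.ftpserver.com:21/ --user USER:PASSWORD \
  -x https://proxyuser:proxypass@local.proxyserver.mycompany.com:3128"
echo "$cmd"
```

Whether the proxy permits CONNECT to port 21 (and to the data ports) is a separate question for the Squid configuration.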
Thanks a lot
aslmx
(21 rep)
Apr 9, 2018, 08:10 AM
• Last activity: Jul 15, 2025, 09:03 AM
0 votes · 1 answer · 3261 views
HTTP version for CURL command on server and docker image
When I do a cURL call `curl https://example.com` from a docker container, I get the error `curl: (92) HTTP/2 stream 0 was not closed cleanly: HTTP_1_1_REQUIRED (err 13)`. But when I run the same command from the host server (RHEL) where the docker container is running, it works fine.
So I added `--http1.1` to the command in the docker container, and then it works fine. But when I run the same command with `--http1.1` on the host server, I get the error `option --http1.1: is unknown`.
1. How does curl pick the HTTP version when making the call? Is there any setting we can define to use a specific version by default?
2. Why is --http1.1 not working on the server, but working in the docker container?
The curl version on the server is 7.29.0; in the docker container it is 7.64.0.
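The version gap likely explains both observations (my reading of the curl changelogs, worth double-checking): curl defaults to HTTP/2 for `https://` URLs from 7.47.0 onward when built with nghttp2, and the `--http1.1` flag itself only appeared in 7.33.0. So curl 7.29 on the host can only ever speak HTTP/1.1 (hence it "works" and the flag is unknown), while 7.64 in the container negotiates HTTP/2 and trips over the server's HTTP_1_1_REQUIRED. A portable version check:

```shell
# --http1.1 was added in curl 7.33.0; compare the local version's
# major.minor against that. 7.29.0 is hard-coded here to mirror the
# host in the question; substitute: curl --version | awk 'NR==1{print $2}'
ver="7.29.0"
major=${ver%%.*}
rest=${ver#*.}
minor=${rest%%.*}
if [ "$major" -gt 7 ] || { [ "$major" -eq 7 ] && [ "$minor" -ge 33 ]; }; then
  verdict="supports --http1.1"
else
  verdict="predates --http1.1 (HTTP/1.1 is all it speaks anyway)"
fi
echo "$verdict"
```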
noonenine
(27 rep)
May 29, 2023, 07:09 PM
• Last activity: Jul 11, 2025, 07:05 PM
0 votes · 1 answer · 160 views
Speeding up curl in bash scripts
I'm using a Bash script to retrieve the Spotify album thumbnail from whatever I'm listening to at the moment, to show it as an image in `Hyprlock`.
For this particular case, I'm using the `curl` command to retrieve the album cover image and store it in a separate directory.
...
if [ ! -f "stored_file.jpg" ]; then
    curl "$url" -so stored_file.jpg
    echo "stored_file.jpg"
...
The thing is, whenever this condition is met, `curl` downloads the image, but it causes a lag spike affecting all of the other widgets I implemented, which is not ideal.
I wanted to know if there is a way to optimize `curl`, or to use another similar command to download the image from the URL without any performance issues. What I've managed so far is to limit the use of `curl` as much as possible so there isn't constant lag, but it doesn't help that it still lags everything else so frequently.
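One pattern that might help (a sketch, not tested inside Hyprlock itself): run the download in the background and only swap the file in once it's complete, so the script never blocks on the network and the cover simply appears one refresh later. The URL and file names are placeholders:

```shell
url="https://example.com/cover.jpg"   # placeholder
target="stored_file.jpg"

if [ ! -f "$target" ]; then
  # Download to a temp name in the background; rename on success so
  # the widget never sees a half-written file. --max-time bounds the
  # worst-case stall.
  ( curl -s --max-time 5 -o "$target.part" "$url" \
      && mv "$target.part" "$target" ) 2>/dev/null &
fi

msg="widget loop continues without waiting"
echo "$msg"
```

This assumes Hyprlock re-runs the script periodically, so trading a one-cycle-stale cover for zero blocking is acceptable.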
ItsFireStorm
(1 rep)
Dec 13, 2024, 10:11 AM
• Last activity: Jul 11, 2025, 04:27 AM
0 votes · 1 answer · 3210 views
Why does curl -k -I https://host.example.com not return response headers if certificate is not valid?
Does anyone know why curl with the -k (--insecure) option and -I (show headers) still shows the HTML response and not the headers as expected?
Working as expected:
$ curl -I https://validsslcert.example.com
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
...
$ curl -k -I https://validsslcert.example.com
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
...
$ curl -k https://invalidcert.example.com
...
NOT working as expected:
$ curl -k -I https://invalidcert.example.com
Maintenance
...
It doesn't really matter here what I'm doing, but I'm testing which headers get set, to identify different backend ACL logic on haproxy. I would expect curl to let me make an insecure connection (invalid certificate) and still return the headers.
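One thing worth ruling out (untested against this haproxy): `-I` switches the request method to HEAD, which haproxy ACLs or the backend may route differently from GET. Dumping headers from an ordinary GET sidesteps that:

```shell
# -D - dumps response headers to stdout while -o /dev/null discards
# the body, so you get headers from a *GET*, not a HEAD
# (host is the placeholder from the question):
cmd='curl -k -s -D - -o /dev/null https://invalidcert.example.com'
echo "$cmd"
```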
Peter Hubberstey
(36 rep)
Jan 11, 2021, 12:20 PM
• Last activity: Jul 9, 2025, 02:05 PM
0 votes · 1 answer · 167 views
Is it possible to use wget or curl to download documents from the FCC ECFS site?
I'm trying to use the FCC's Electronic Comment Filing System (ECFS) to bulk download filings in individual proceedings. They have an API that will return every filing in a proceeding. It returns a URL for individual documents in the format:
https://www.fcc.gov/ecfs/document/10809709027819/1
However, while this works in the browser, it only downloads a placeholder HTML file saying JavaScript is required when I use wget or curl. I tried examining the page in my browser but couldn't find anything like a source URL for the actual PDF.
Is there a way to use wget or curl to get at the actual PDF?
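Sometimes such gates only inspect request headers rather than truly requiring JavaScript, so it may be worth trying a browser-like User-Agent before reaching for a headless browser (the UA string is arbitrary; the URL is the one from the question):

```shell
cmd='curl -L -A "Mozilla/5.0 (X11; Linux x86_64)" \
  -o filing.pdf "https://www.fcc.gov/ecfs/document/10809709027819/1"'
echo "$cmd"
```

If the site genuinely builds the download link in JavaScript, this won't help and something that executes scripts would be needed.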
Sam
(1 rep)
Aug 13, 2023, 10:01 PM
• Last activity: Jul 3, 2025, 05:33 PM
1 vote · 2 answers · 3425 views
Unable to save Session ID using curl
With the request:
curl -i -u pvserver:XXXXXXX 'http://192.168.2.42/api/login.json'
I have this output
{"salt":"uTxYWQDc9lWwsuHBRfkuTzJYG5M=","session":{"sessionId":2748768190,"roleId":0},"status":{"code":0}}
Now I want to send the following request to the server:
curl -X POST \
'http://192.168.2.42/api/dxs.json' \
-H 'accept: application/json, text/plain, */*' \
-H 'accept-encoding: gzip, deflate' \
-H 'accept-language: en-US,en;q=0.9' \
-H 'authorization: Basic cHZzZXJ2ZXI6VjZUNUJYSDI=' \
-H 'cache-control: no-cache' \
-H 'content-type: text/plain' \
-H 'cookie: language=en_GB' \
-H 'origin: http://192.168.2.42' \
-H 'referer: http://192.168.2.42/' \
-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/71.0.3578.98 Chrome/71.0.3578.98 Safari/537.36' \
-b language=en_GB \
-d '{"dxsEntries":[{"dxsId":33556247,"value":95}]}'
which will work only if I include the received session ID, but I cannot copy and paste the ID as this is an automated process, part of a script, running every 6 seconds and getting the data "value" value from another server.
I have tried the curl -c and -b options, but it appears to me they are not working, even though the browser's development tools show that the session ID does come in as a cookie.
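Since the sessionId also arrives in the JSON body of the login response, one approach is to capture it in the script and splice it into the follow-up request; how exactly the API wants it back (query string, header, or cookie) is something to confirm against the device documentation. A sketch using the response from above:

```shell
# Simulated login response, copied from the output in the question:
login_response='{"salt":"uTxYWQDc9lWwsuHBRfkuTzJYG5M=","session":{"sessionId":2748768190,"roleId":0},"status":{"code":0}}'

# Extract the numeric sessionId without jq (sed is enough here):
session_id=$(printf '%s' "$login_response" \
  | sed -n 's/.*"sessionId":\([0-9]*\).*/\1/p')
echo "$session_id"

# Hypothetical follow-up, if the API accepts it as a query parameter:
# curl "http://192.168.2.42/api/dxs.json?sessionId=$session_id" ...
```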
jspv
(19 rep)
Apr 4, 2019, 12:20 PM
• Last activity: Jun 26, 2025, 07:04 AM
-1 votes · 3 answers · 4203 views
How to get the json data used in each curl statement while using xargs and maps it to its corresponding result?
I have a text file that has the arguments of a curl command. This is how the file looks:
'https://example.com/tl/' -X POST -H 'Content-Type: application/json' --data-raw '{"email":"username2",}'
'https://example.com/tl/' -X POST -H 'Content-Type: application/json' --data-raw '{"email":"username3",}'
'https://example.com/tl/' -X POST -H 'Content-Type: application/json' --data-raw '{"email":"username4",}'
'https://example.com/tl/' -X POST -H 'Content-Type: application/json' --data-raw '{"email":"username5",}'
'https://example.com/tl/' -X POST -H 'Content-Type: application/json' --data-raw '{"email":"username6",}'
This is the command I use:
cat /AbsolutePath/Inputfile.txt | xargs -P 10000 -n 10 curl -s | jq '.message'
I'm using jq to parse the JSON on the command line.
What I want is to pipe or send the output of the above command to another command, so that if `message` contains certain text, I grab the email **value** used in the corresponding curl command and write it to a log file, or create a file named usernameX.txt.
For example, only if username2's and username5's cURL responses have message = 'success' should those two usernames be written to a log file, or the two files username2.txt and username5.txt be created.
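One way to keep the email paired with its response is to stop batching ten arguments per curl call and instead run a small wrapper per input line, which can parse the email out of the line before invoking curl. Extracting the email from a line in the stated format (the per-line wrapper at the end is a sketch, not tested against the real endpoint):

```shell
# One input line in the question's format:
line="'https://example.com/tl/' -X POST -H 'Content-Type: application/json' --data-raw '{\"email\":\"username2\",}'"

# Pull out the email value so it can name a per-result file:
email=$(printf '%s' "$line" | sed -n 's/.*"email":"\([^"]*\)".*/\1/p')
echo "$email"

# Real pipeline sketch (hypothetical wrapper, one line at a time):
# xargs -P 10 -d '\n' -I{} sh -c '... curl ... && touch "$email.txt"' < file
```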
sofs1
(117 rep)
Jan 4, 2022, 11:26 AM
• Last activity: Jun 26, 2025, 01:07 AM
74 votes · 5 answers · 76723 views
Resume failed download using Linux command line tool
How do I resume a partially downloaded file using a Linux commandline tool?
I downloaded a large file partially, i.e. 400 MB out of 900 MB due to power interruption, but when I start downloading again it resumes from scratch.
How do I start from 400 MB itself?
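Both of the usual tools can do this, as long as the server supports HTTP Range requests; they ask for the file from the current byte offset onward instead of from byte zero (URL is a placeholder):

```shell
# -c   : wget continues the partial download in the current directory
# -C - : curl works out the resume offset from the existing file itself
cmd_wget='wget -c https://example.com/big-file.iso'
cmd_curl='curl -C - -O https://example.com/big-file.iso'
printf '%s\n%s\n' "$cmd_wget" "$cmd_curl"
```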
amolveer
(979 rep)
Nov 4, 2014, 10:29 AM
• Last activity: Jun 24, 2025, 06:49 PM
-1 votes · 2 answers · 146 views
PHP-FPM status page returns curl: (56) Recv failure
I have a `PHP-FPM` pool with these configuration blocks:
...
listen = 0.0.0.0:9000
...
pm.status_path = /status
...
I'm getting `curl: (56) Recv failure: Connection reset by peer` while trying `curl 0.0.0.0:9000/status`.
The `PHP-FPM` instance I'm trying to interact with is containerized, and I run the `curl` command from inside the docker container.
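The likely explanation (worth verifying): php-fpm's listener speaks the FastCGI binary protocol, not HTTP, so a plain `curl` request gets the connection reset regardless of the status-page configuration. Querying it needs a FastCGI client, for example `cgi-fcgi` from the fcgi package (address taken from the pool config above):

```shell
# cgi-fcgi wraps the request in FastCGI records; SCRIPT_NAME must match
# pm.status_path for php-fpm to serve the status page.
cmd='SCRIPT_NAME=/status SCRIPT_FILENAME=/status REQUEST_METHOD=GET \
  cgi-fcgi -bind -connect 127.0.0.1:9000'
echo "$cmd"
```

Alternatively, a web server (nginx/Apache) in front can proxy an HTTP location to the FastCGI status path.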
Aleksey
(57 rep)
Jun 10, 2025, 08:51 AM
• Last activity: Jun 11, 2025, 10:25 AM
3 votes · 3 answers · 2132 views
curl progress in dialog
How can I properly display the curl progress in the dialog window?
curl http://mysite.corp/image/root_21.tar.bz2 | tar -C /mnt/dest/ -jxf -
I tried this command, but as you can see it does not display correctly.
curl -f -x '' -L http://mysite.corp/image/root_21.tar.bz2 | tar -C /mnt/dest -xjpf - --exclude='dev/*' | dialog --backtitle "dialog" --stderr --title 'Linux Image' --textbox /tmp/log 30 80
This command almost does what I want, but I want the progress to overwrite itself rather than printing each update on a new line. Basically, I want it to look the same as the original command's output, but inside the dialog.
(curl -f -x '' -L http://mysite.corp/image/root_21.tar.bz2 | tar -C /mnt/dest -xjpf - --exclude='dev/*' ) 2>&1 | dialog --progressbox 20 120
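My understanding is that `--progressbox` appends each new line by design, while `--gauge` is the widget that redraws in place; it wants bare integer percentages on stdin, so curl's `-#` progress output needs a little massaging first. A sketch (untested against the real image server):

```shell
# curl -# emits carriage-return-terminated progress lines ending in a
# percentage; tr/sed turn those into the bare integers --gauge expects.
cmd="curl -f -# -L http://mysite.corp/image/root_21.tar.bz2 -o /tmp/root.tar.bz2 2>&1 \
 | tr '\r' '\n' | sed -n 's/.*[[:space:]]\([0-9]*\)\.[0-9]%.*/\1/p' \
 | dialog --gauge 'Linux Image' 8 60 0"
echo "$cmd"
```

Feeding tar at the same time would mean sending stdout to tar while the stderr progress goes to the gauge, which complicates the plumbing; downloading to a file first keeps it simple.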
Asaf Magen
(547 rep)
Sep 7, 2015, 09:12 AM
• Last activity: Jun 8, 2025, 02:01 PM
1 vote · 1 answer · 4805 views
Posting to socket using curl
I'm struggling to get `curl` and `socat` to play nicely together.
The situation is the following:
1. I post XML to log in to a system.
2. The returned message contains an authentication token.
3. I post subsequent requests with the token.
Caveat: if the connection is broken, the token expires, so I can't use plain `curl`.
I need this to run on Linux. Since I need the connection to persist, I decided to use `socat`.
If I run this to POST the XML:
curl http://$target_ip -d @./xml/login.xml
... I get a proper answer from the system, but the connection is closed, so I can't reuse the token.
However, if I try this (after starting `socat`, of course):
curl --unix-socket /tmp/$target_ip.sock -d @./xml/login.xml
Curl complains that I don't have the URL set.
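That error message is literal, for what it's worth: `--unix-socket` only swaps the transport, and curl still needs a URL to build the request line and Host header. Something like (socket path shortened to a placeholder):

```shell
# The URL's host part is only used for the Host: header -- the actual
# connection still goes over the unix socket.
cmd='curl --unix-socket /tmp/target_ip.sock http://localhost/ -d @./xml/login.xml'
echo "$cmd"
```

Whether the far side keeps one TCP connection (and hence the token) alive across separate curl invocations through socat is a separate question.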
Shiunbird
(63 rep)
Jan 15, 2018, 03:34 PM
• Last activity: Jun 5, 2025, 07:05 AM
0 votes · 1 answer · 2680 views
How to send with curl JSON from another curl command output
I want to get JSON with a curl command; with the command below I get this output:
curl -GET http://localhost:9200/oldindex/_mapping?pretty
{
"gl-events_1" : {
"mappings" : {
"message" : {
"dynamic" : "false",
"dynamic_templates" : [
{
"fields" : {
"path_match" : "fields.*",
"mapping" : {
"doc_values" : true,
"index" : true,
"type" : "keyword"
}
}
}
],
"properties" : {
"alert" : {
"type" : "boolean"
},
"event_definition_id" : {
"type" : "keyword"
},
"event_definition_type" : {
"type" : "keyword"
},
"fields" : {
"type" : "object",
"dynamic" : "true"
},
"id" : {
"type" : "keyword"
},
"key" : {
"type" : "keyword"
},
"key_tuple" : {
"type" : "keyword"
},
"message" : {
"type" : "text",
"norms" : false,
"fields" : {
"keyword" : {
"type" : "keyword"
}
},
"analyzer" : "standard"
},
"origin_context" : {
"type" : "keyword"
},
"priority" : {
"type" : "long"
},
"source" : {
"type" : "keyword"
},
"source_streams" : {
"type" : "keyword"
},
"streams" : {
"type" : "keyword"
},
"timerange_end" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"timerange_start" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"timestamp" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"timestamp_processing" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"triggered_jobs" : {
"type" : "keyword"
}
}
}
}
}
}
Now I want to store this output as a JSON file, so I copied it into a file with a `.json` extension. But when I try to PUT it with curl, I get the error below:
curl -X PUT http://localhost:9200/new_good -H 'Content-Type: application/json' -d sampl.json
{"error":{"root_cause":[{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}],"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"},"status":500}
But when I run the command below, passing the same JSON inline, it works:
curl -X PUT \
http://localhost:9200/new_good \
-H 'Content-Type: application/json' \
-d '{"mappings" : {
"message" : {
"dynamic_templates" : [
{
"internal_fields" : {
"match" : "gl2_*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "keyword"
}
}
},
{
"store_generic" : {
"match_mapping_type" : "string",
"mapping" : {
"type" : "keyword"
}
}
}
],
"properties" : {
"LoggerName" : {
"type" : "keyword"
},
"MessageParam0" : {
"type" : "keyword"
},
"MessageParam1" : {
"type" : "long"
},
"MessageParam2" : {
"type" : "keyword"
},
"MessageParam3" : {
"type" : "keyword"
},
"MessageParam4" : {
"type" : "keyword"
},
"MessageParam5" : {
"type" : "keyword"
},
"MessageParam6" : {
"type" : "keyword"
},
"MessageParam7" : {
"type" : "keyword"
},
"MessageParam8" : {
"type" : "keyword"
},
"Severity" : {
"type" : "keyword"
},
"SourceClassName" : {
"type" : "keyword"
},
"SourceMethodName" : {
"type" : "keyword"
},
"SourceSimpleClassName" : {
"type" : "keyword"
},
"StackTrace" : {
"type" : "keyword"
},
"Thread" : {
"type" : "keyword"
},
"Time" : {
"type" : "keyword"
},
"facility" : {
"type" : "keyword"
},
"full_message" : {
"type" : "text",
"analyzer" : "standard"
},
"gl2_accounted_message_size" : {
"type" : "long"
},
"gl2_message_id" : {
"type" : "keyword"
},
"gl2_processing_timestamp" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"gl2_receive_timestamp" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"gl2_remote_ip" : {
"type" : "keyword"
},
"gl2_remote_port" : {
"type" : "long"
},
"gl2_source_input" : {
"type" : "keyword"
},
"gl2_source_node" : {
"type" : "keyword"
},
"level" : {
"type" : "long"
},
"message" : {
"type" : "text",
"analyzer" : "standard"
},
"source" : {
"type" : "text",
"analyzer" : "analyzer_keyword",
"fielddata" : true
},
"streams" : {
"type" : "keyword"
},
"timestamp" : {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
}
}
}
}
}
}'
What I want is to store the output of a curl GET as valid JSON that I can then reuse in a curl PUT, i.e. something like:
curl get > some.json
curl put -d some.json
I am new to this and have tried several options with jq as well, but that didn't work for me either.
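For what it's worth, a minimal sketch of the save-then-upload round trip (the index name `new_good` is from the question; the sample file content is a made-up stand-in). The crucial detail is the `@` prefix: `-d sampl.json` sends the literal string `sampl.json` as the request body instead of the file's contents.

```shell
# Stand-in for the output of the earlier curl GET (abridged).
cat > mapping.json <<'EOF'
{"old_index":{"mappings":{"properties":{"Severity":{"type":"keyword"}}}}}
EOF

# GET /index returns the mapping wrapped under the index name; strip
# that outer key so the body matches what PUT /index expects.
jq '.[keys[0]]' mapping.json > body.json

# jq . both pretty-prints and validates that the file is well-formed JSON.
jq . body.json

# Upload: the @ prefix makes curl read the request body from the file.
# Without it, curl sends the literal string "body.json".
# curl -X PUT 'http://localhost:9200/new_good' \
#      -H 'Content-Type: application/json' \
#      --data-binary @body.json
```

`--data-binary` is preferred over `-d` for files because it sends the file byte-for-byte, keeping newlines intact.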
Please guide me here.
Regards
SAM
Samurai
(95 rep)
Jun 1, 2022, 06:23 AM
• Last activity: Jun 4, 2025, 12:03 PM
1
votes
1
answers
5629
views
curl, wget do not return anything
I am trying this `curl -I zomato.com | head -n 1` and I am not getting any response. % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- 0:05:29 --:--:-- 0 Is the site protected by firewalls? Even `wget` is not working on the...
I am trying this
curl -I zomato.com | head -n 1
and I am not getting any response.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:05:29 --:--:-- 0
Is the site protected by firewalls?
Even wget
fails on the site. Other sites like google.com
return a 200
response as expected.
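When a site neither answers nor refuses, curl's exit code narrows down the cause; a small diagnostic sketch (the timeout value is arbitrary):

```shell
# Exit codes distinguish failure modes: 6 = could not resolve host,
# 7 = connection refused, 28 = timed out -- the last is typical of a
# firewall that silently drops packets instead of rejecting them.
rc=0
curl -sI --max-time 10 -o /dev/null https://zomato.com/ || rc=$?
case $rc in
  0)  echo "server answered" ;;
  28) echo "timed out - consistent with a packet-dropping firewall" ;;
  *)  echo "failed with curl exit code $rc" ;;
esac
```

Adding `-v` shows exactly which stage (DNS, TCP connect, TLS) the transfer hangs at; some sites also ignore requests without a browser-like `User-Agent`, which `-A 'Mozilla/5.0'` can rule out.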
Alex Kasina
(175 rep)
Nov 26, 2016, 10:44 PM
• Last activity: Jun 2, 2025, 03:03 PM
0
votes
1
answers
207
views
unable to update wget version
I want to update my wget version to 1.22 which is currently 1.19 using the command : curl -O https://ftp.gnu.org/gnu/wget/wget-1.21.tar.gz but getting following error: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-...
I want to update my wget, currently 1.19, to version 1.22 using the command:
curl -O https://ftp.gnu.org/gnu/wget/wget-1.21.tar.gz
but I get the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to ftp.gnu.org:443
Doesn't seem to work at all.
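`SSL_ERROR_SYSCALL` means the connection was cut off in the middle of the TLS handshake, which usually points at a proxy or firewall rather than at curl itself. A couple of hedged checks (the pinned TLS version below is just one guess):

```shell
# -v prints each handshake step, showing whether the connection dies
# before or after the certificate exchange; guard the exit status so
# the script continues either way.
rc=0
curl -v --max-time 15 -o /dev/null https://ftp.gnu.org/gnu/wget/ || rc=$?
echo "curl exited with $rc"

# A middlebox that only passes certain protocol versions can sometimes
# be worked around by pinning one explicitly.
curl --tlsv1.2 -O https://ftp.gnu.org/gnu/wget/wget-1.21.tar.gz || true
```

If a corporate proxy is in play, curl honours the `https_proxy` environment variable; `env | grep -i proxy` shows whether one is set.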
Aviator
(127 rep)
Oct 18, 2023, 01:53 PM
• Last activity: May 23, 2025, 09:00 PM
1
votes
3
answers
4492
views
how to get full URL from shortened URLs like bit.ly?
* I have a shortened URL from some documentation * I would like to open the URL because I believe the URL has good information * but I do not want to click the link until I can see the full URL ### how to get full URL from shortened URLs like bit.ly?
* I have a shortened URL from some documentation
* I would like to open the URL because I believe the URL has good information
* but I do not want to click the link until I can see the full URL
### how to get full URL from shortened URLs like bit.ly?
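One approach, which should work for any bit.ly-style shortener: send a HEAD request, so only the shortener's redirect response is fetched and the target page is never visited (the short URL below is a made-up placeholder):

```shell
# -s silences progress, -I sends a HEAD request; without -L, curl does
# not follow the redirect, and the Location header carries the full
# destination URL. (Placeholder short URL; guarded so a failed lookup
# does not abort the script.)
curl -sI https://bit.ly/example | grep -i '^location:' || true

# Or have curl print the redirect target via a format string, again
# without following it (empty output means no redirect happened).
curl -s -o /dev/null -w '%{redirect_url}\n' https://bit.ly/example || true
```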
Trevor Boyd Smith
(4181 rep)
Feb 20, 2023, 06:41 PM
• Last activity: May 16, 2025, 11:56 PM
5
votes
1
answers
2080
views
TOR hidden service not always accessible through cURL. Takes multiple tries
When I try to access a hidden service on TOR using cURL, for some reason I'm not getting access to the site 100% of the time. Many times it returns `"curl: (7) Can't complete SOCKS5 connection to 0.0.0.0:0. (5)"` Is there something I can do to configure cURL to work better with TOR? Here is the outp...
When I try to access a hidden service on TOR using cURL, for some reason I'm not getting access to the site 100% of the time. Many times it returns
"curl: (7) Can't complete SOCKS5 connection to 0.0.0.0:0. (5)"
Is there something I can do to configure cURL to work better with TOR? Here is the output I'm getting:
root@Dexter:~# curl --proxy socks5h://localhost:9050 http://5ztppjwojkuslibm.onion/
curl: (7) Can't complete SOCKS5 connection to 0.0.0.0:0. (5)
root@Dexter:~# curl --proxy socks5h://localhost:9050 http://5ztppjwojkuslibm.onion/
curl: (7) Can't complete SOCKS5 connection to 0.0.0.0:0. (5)
root@Dexter:~# curl --proxy socks5h://localhost:9050 http://5ztppjwojkuslibm.onion/
curl: (18) transfer closed with 1 bytes remaining to read
This is a test page to see if I can run a hidden tor service!
Looks like it's working!
root@Dexter:~# curl --proxy socks5h://localhost:9050 http://5ztppjwojkuslibm.onion/
curl: (18) transfer closed with 1 bytes remaining to read
This is a test page to see if I can run a hidden tor service!
Looks like it's working!
root@Dexter:~#
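SOCKS5 reply 5 ("connection refused") comes from Tor itself when it cannot complete a circuit to the service at that moment, so retrying is the usual workaround; a sketch, assuming a reasonably recent curl (`--retry-all-errors` arrived in 7.71):

```shell
# --retry alone only re-tries transient HTTP errors; adding
# --retry-all-errors also covers connection-level failures such as the
# SOCKS5 refusal, with --retry-delay seconds between attempts.
curl --proxy socks5h://localhost:9050 \
     --retry 5 --retry-delay 3 --retry-all-errors \
     http://5ztppjwojkuslibm.onion/ || true
```

Note that `socks5h` (rather than `socks5`) is already the right choice here: it makes Tor, not the local resolver, handle the .onion name.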
I like to code
(197 rep)
Oct 6, 2015, 09:25 PM
• Last activity: May 9, 2025, 01:00 PM