Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
2
views
Why can't I increase my screen brightness in Debian 12 basic install on my old Packard Bell Easynote Laptop?
I'm trying to increase the brightness of my screen in order to see what I'm doing (the set brightness is so low that I can't see anything if there is light behind me). Before asking here I investigated and found some possible approaches that didn't work for me:
I tried to increase the backlight of my laptop (since it's too dark) by using brightnessctl (which theoretically worked but didn't actually change anything, since the brightness stayed the same) or xbacklight (which didn't work because, as I understand it, it's for Intel GPUs), so I tried to change it manually in /sys/class/backlight/[device].
I found a device called **radeon_bl0** inside the mentioned path, and some entries inside radeon_bl0 such as: actual_brightness, bl_power (set to 0), brightness, device, max_brightness (255), power, scale, subsystem, type and uevent. Seeing that, I tried echo 200 > /sys/class/backlight/radeon_bl0/brightness. It did change the number inside the file (the cat command showed the new value), but the actual (physical) backlight stayed the same.
I don't know why this happens, but I think it could be related to brightnessctl not working.
Thanks for reading this, I hope someone can help me :)
PD: My GPU is an AMD ATI Mobility Radeon 9550
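One thing worth double-checking, sketched below: the brightness file takes a raw value between 0 and max_brightness rather than a percentage, and after a write, actual_brightness reports what the driver really applied, so comparing the two files shows whether the kernel accepted the value. The 78% target here is just an example:

```shell
# convert a target percentage to the raw value the sysfs file expects
# (max_brightness is 255 on this radeon_bl0 device; 78% is an example)
max=255
pct=78
raw=$(awk -v m="$max" -v p="$pct" 'BEGIN { printf "%d", m * p / 100 }')
echo "$raw"
# then, as root:
#   echo "$raw" > /sys/class/backlight/radeon_bl0/brightness
#   cat /sys/class/backlight/radeon_bl0/actual_brightness
```

If brightness and actual_brightness agree but the panel does not change, the driver accepted the value without the hardware acting on it, which points away from the tools and toward the radeon backlight interface itself.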
bueno467
Aug 5, 2025, 04:11 PM
• Last activity: Aug 6, 2025, 12:20 AM
0
votes
4
answers
1868
views
create variables from CSV with varying number of fields
Looking for some help turning a CSV into variables. I tried using IFS, but it seems you need to define the number of fields. I need something that can handle a varying number of fields.
*I am modifying my original question with the current code I'm using (taken from the answer provided by hschou) which includes updated variable names using type instead of row, section etc.
As you can probably tell from my code, I am pretty green with scripting, so I am looking for help determining whether and how I should add another loop, or take a different approach to parsing the typeC data: although all three types follow the same format, there is only one entry for each of the typeA and typeB data, while there can be between 1 and 15 entries for the typeC data. The goal is only 3 files, one for each of the data types.
Data format:
Container: PL[1-100]
TypeA: [1-20].[1-100].[1-1000].[1-100]-[1-100]
TypeB: [1-20].[1-100].[1-1000].[1-100]-[1-100]
TypeC (1 to 15 entries): [1-20].[1-100].[1-1000].[1-100]-[1-100]
*There is no header in the CSV, but if there were it would look like this (Container, typeA, and typeB data always being in positions 1, 2, 3, and typeC data being all that follow): Container,typeA,typeB,typeC,typeC,typeC,typeC,typeC,..
CSV:
PL3,12.1.4.5-77,13.6.4.5-20,17.3.577.9-29,17.3.779.12-33,17.3.802.12-60,17.3.917.12-45,17.3.956.12-63,17.3.993.12-42
PL4,12.1.4.5-78,13.6.4.5-21,17.3.577.9-30,17.3.779.12-34
PL5,12.1.4.5-79,13.6.4.5-22,17.3.577.9-31,17.3.779.12-35,17.3.802.12-62,17.3.917.12-47
PL6,12.1.4.5-80,13.6.4.5-23,17.3.577.9-32,17.3.779.12-36,17.3.802.12-63,17.3.917.12-48,17.3.956.12-66
PL7,12.1.4.5-81,13.6.4.5-24,17.3.577.9-33,17.3.779.12-37,17.3.802.12-64,17.3.917.12-49,17.3.956.12-67,17.3.993.12-46
PL8,12.1.4.5-82,13.6.4.5-25,17.3.577.9-34
Code:
#!/bin/bash
#Set input file
_input="input.csv"
# Pull variables in from csv
# read file using while loop
while read; do
declare -a COL=( ${REPLY//,/ } )
echo -e "containerID=${COL[0]}\ntypeA=${COL[1]}\ntypeB=${COL[2]}" >/tmp/typelist.txt
idx=1
while [ $idx -lt 10 ]; do
echo "typeC$idx=${COL[$((idx+2))]}" >>/tmp/typelist.txt
let idx=idx+1
#whack off empty variables
sed '/\=$/d' /tmp/typelist.txt > /tmp/typelist2.txt && mv /tmp/typelist2.txt /tmp/typelist.txt
#set variables from temp file
. /tmp/typelist.txt
done
sleep 1
#Parse data in this loop.#
echo -e "\n"
echo "Begin Processing for $container"
#echo $typeA
#echo $typeB
#echo $typeC
#echo -e "\n"
#Strip - from sub data for extra parsing
typeAsub="$(echo "$typeA" | sed 's/\-.*$//')"
typeBsub="$(echo "$typeB" | sed 's/\-.*$//')"
typeCsub1="$(echo "$typeC1" | sed 's/\-.*$//')"
#strip out first two decimils for extra parsing
typeAprefix="$(echo "$typeA" | cut -d "." -f1-2)"
typeBprefix="$(echo "$typeB" | cut -d "." -f1-2)"
typeCprefix1="$(echo "$typeC1" | cut -d "." -f1-2)"
#echo $typeAsub
#echo $typeBsub
#echo $typeCsub1
#echo -e "\n"
#echo $typeAprefix
#echo $typeBprefix
#echo $typeCprefix1
#echo -e "\n"
echo "Getting typeA dataset for $typeA"
#call api script to pull data ; echo out for test
echo "API-gather -option -b "$typeAsub" -g all > "$container"typeA-dataset"
sleep 1
echo "Getting typeB dataset for $typeB"
#call api script to pull data ; echo out for test
echo "API-gather -option -b "$typeBsub" -g all > "$container"typeB-dataset"
sleep 1
echo "Getting typeC dataset for $typeC1"
#call api script to pull data ; echo out for test
echo "API-gather -option -b "$typeCsub" -g all > "$container"typeC-dataset"
sleep 1
echo "Getting additional typeC datasets for $typeC2-15"
#call api script to pull data ; echo out for test
echo "API-gather -option -b "$typeCsub2-15" -g all >> "$container"typeC-dataset"
sleep 1
echo -e "\n"
done < "$_input"
exit 0
Speed isn't a concern, but if I've done anything really stupid up there, feel free to slap me in the right direction. :)
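The varying tail can be handled without fixing the field count by splitting each row in two steps: read the three fixed fields, then split the remainder into an array. A minimal sketch (assuming bash; the sample rows are taken from the data above):

```shell
# sketch: read the three fixed fields, then split the varying typeC tail
# into an array -- no need to know the field count in advance
while IFS=, read -r container typeA typeB rest; do
    IFS=, read -r -a typeC <<< "$rest"      # typeC[0], typeC[1], ...
    echo "$container has ${#typeC[@]} typeC entries"
done <<'EOF'
PL4,12.1.4.5-78,13.6.4.5-21,17.3.577.9-30,17.3.779.12-34
PL8,12.1.4.5-82,13.6.4.5-25,17.3.577.9-34
EOF
```

Iterating over `"${typeC[@]}"` then replaces the fixed `typeC$idx` numbering and the empty-variable cleanup, since the array only ever holds as many entries as the row actually had.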
Jdubyas
(45 rep)
Jul 12, 2017, 05:09 AM
• Last activity: Aug 6, 2025, 12:04 AM
1
votes
1
answers
38
views
`echo 2 > /proc/sys/vm/drop_caches` decreases active pages
Before running
echo 2 > /proc/sys/vm/drop_caches
the active page count was:
Active(anon): 36185016 kB
after the flush, active page count became:
Active(anon): 26430472 kB
every other meminfo metric stayed mostly the same, and the system shows that 10GB of RAM was freed. These 10GB are not attributed to any of the processes.
What could explain such behavior? I am running on kernel 6.8.
Update:
This happens when **madvise(MADV_DONTNEED)** is used to decommit RSS. If I use *MADV_FREE* instead, then surprisingly I do not see **hidden** kernel memory, and total used memory is tightly correlated with the RSS of the main process.
Roman
(111 rep)
Aug 3, 2025, 06:56 PM
• Last activity: Aug 5, 2025, 11:07 PM
2
votes
1
answers
1869
views
mount: overlapping loop device
I'm getting these problems when I try to mount this DOS/MBR filesystem:
#file bag.vhd
bag.vhd: DOS/MBR boot sector; partition 1 : ID=0x83, active, start-CHS (0x0,1,1), end-CHS (0x9,254,63), startsector 63, 160587 sectors
#sudo mount -t auto -o ro,loop,offset=82252288 bag.vhd /mnt/floppy/
mount: /mnt/floppy/: overlapping loop device exists for /home/ffha/Documents/descon/bag.vhd.
I have already created /mnt/floppy.
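As a sanity check (a sketch): the value passed to mount -o offset= is normally the partition's start sector multiplied by the sector size, which for the start sector 63 reported by file above gives a much smaller number than 82252288:

```shell
# byte offset of partition 1 = start sector x sector size
# (start sector 63 from the `file` output; 512-byte sectors assumed)
start_sector=63
sector_size=512
echo $((start_sector * sector_size))   # prints 32256
```

The "overlapping loop device" message typically means a loop device is already attached to the same file with a conflicting range; `losetup -a` lists current attachments.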
fica
(21 rep)
Oct 6, 2019, 01:42 PM
• Last activity: Aug 5, 2025, 11:04 PM
88
votes
8
answers
91080
views
List the files accessed by a program
time is a brilliant command if you want to figure out how much CPU time a given command takes.
I am looking for something similar that can list the files being accessed by a program and its children. Either in real time or as a report afterwards.
Currently I use:
#!/bin/bash
strace -ff -e trace=file "$@" 2>&1 | perl -ne 's/^[^"]+"(([^\\"]|\\[\\"nt])*)".*/$1/ && print'
but it fails if the command to run involves sudo. It is not very intelligent (it would be nice if it could list only files that exist or that had permission problems, or group them into files that are read and files that are written). Also, strace is slow, so a faster alternative would be good.
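On the wish to group files into read vs written: the open flags in the trace output already carry that information, so a post-processing pass can split them. A rough sketch (the two input lines below are simulated strace output, not real captures; real input would come from the `strace -ff -e trace=file` pipeline above):

```shell
# classify strace file-open lines as reads (R) or writes (W) by open flags
classify() {
    awk -F'"' '/O_WRONLY|O_RDWR|O_CREAT/ { print "W " $2; next }
               /open/                    { print "R " $2 }'
}
printf '%s\n' \
    'openat(AT_FDCWD, "/etc/passwd", O_RDONLY|O_CLOEXEC) = 3' \
    'openat(AT_FDCWD, "/tmp/out", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 4' |
classify
```

Piping the result through `sort -u` per class would then give the deduplicated read and write sets.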
Ole Tange
(37348 rep)
Aug 16, 2011, 02:51 PM
• Last activity: Aug 5, 2025, 10:58 PM
0
votes
1
answers
44
views
Linux user changed permissions and ownership of shared folder file not created by him
I am using Linux Mint and my workmates are using Windows. We've got a local, shared server (also Linux) for documentation files, and a weird thing happened yesterday: a Windows user created a file (.odm) and, after I edited it, the ownership of the file changed to me, and all the other users, including the one who created it, had permission only to read it, although initially (before I edited it) everyone could read, write and execute.
I don't know what information I need to give to make context clearer, but I'd like to understand how that happened. I mean, it seems very weird for a different user to be able to change permissions and ownership of a shared server's file.
The server is running samba, and all the clients are using that to access the files.
Bernardo Benini Fantin
(101 rep)
Aug 5, 2025, 11:07 AM
• Last activity: Aug 5, 2025, 10:31 PM
19
votes
3
answers
10223
views
How can I create a swap file?
I know how to create and use a swap partition but can I also use a file instead?
How can I create a swap file on a Linux system?
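For reference, the usual sequence is short; the sketch below only echoes the commands (a dry run), since the real ones need root, and /swapfile is just an example path:

```shell
# echoed dry run of the standard swap-file steps; drop the `echo`s to apply
echo "fallocate -l 1G /swapfile"   # reserve 1 GiB (dd if=/dev/zero also works)
echo "chmod 600 /swapfile"         # swap must not be readable by other users
echo "mkswap /swapfile"            # write the swap signature
echo "swapon /swapfile"            # enable it for the current boot
```

Making it survive a reboot additionally needs an /etc/fstab line of the form `/swapfile none swap sw 0 0`.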
Vlastimil Burián
(30495 rep)
Oct 26, 2015, 09:47 PM
• Last activity: Aug 5, 2025, 10:19 PM
1
votes
1
answers
1871
views
pfsense mount root error after disk clone
I went the lazy route and cloned my SSD that runs my current pfsense (2.1.5) machine to create a backup machine with the same config.
Instead of doing a fresh reinstall and copying the config.
Both machines have the exact same hardware and BIOS settings.
Both SSD's I used, main and clone are the same (size and brand).
I used clonezilla to create the clone.
During the boot of my "backup" machine I got the error:
Trying to mount root from ufs:/dev/ad4s1a
ROOT MOUNT ERROR
Following the ?:
It's so weird that this happened, as it was a 1:1 clone.
Also /dev/ad4s1a exists
Anyone have any ideas how to:
1. Solve my current problem?
2. Avoid this during a clone?
Thanks


gelleby
(51 rep)
May 23, 2016, 11:01 PM
• Last activity: Aug 5, 2025, 10:07 PM
0
votes
0
answers
3
views
podman ps takes a long time (5+ minutes) to detect a killed container & its 'conmon' OCI runtime wrapper, can it be tweaked to be more responsive?
I am running podman version 5.4.0 on Rocky Linux 9.6.
I notice that when a container is killed along with its 'conmon' OCI runtime wrapper, say by issuing a kill -9, the podman ps command does not detect the dead container for a good 5+ minutes.
In the intervening time, the command lists the container as being up, even as other commands like podman stats and podman exec all fail, correctly pointing to the container as being dead in the error message!
$ podman ps -a | grep kafka
7fd65b99d2a0 localhost/****/cp-kafka:*.*.* /etc/confluent/do... 39 hours ago Up 37 hours 9092/tcp kafka
$ podman exec -it 7fd65b99d2a0 bash
Error: OCI runtime error: crun: the container
7fd65b99d2a06252078fc85d3c9832d4c1410e0d185bb9cde08c6641aca31334
is not running
$ podman stats 7fd65b99d2a0
Error: cannot get cgroup path unless container 7fd65b99d2a06252078fc85d3c9832d4c1410e0d185bb9cde08c6641aca31334 is running: container is stopped
I understand the parent runtime monitor is also killed, but I am not sure that justifies reporting an incorrect status in the podman ps command. Is that the expected behavior? Can it be tweaked in some way to be more responsive?
Thanks!
lmk
(101 rep)
Aug 5, 2025, 09:52 PM
91
votes
5
answers
147597
views
How to scroll the screen using the middle click?
On Windows, most programs with large, scrollable text containers (e.g. all browsers, most word processors and IDEs) let you press the middle mouse button and then move the mouse to scroll. This scrolling is smooth and allows you to scroll very quickly using just the mouse.
When I've used Linux on *laptops*, two-finger scrolling performs roughly the same function; it's easy to scroll down a page quickly (much more quickly than one can by scrolling a mouse wheel) but the scrolling remains smooth enough to allow precise positioning.
I am unsure how to achieve the same thing when running Linux on a Desktop with a mouse. As far as I can tell after a whole bunch of Googling, there are neither application-specific settings to swap to Windows-style middle mouse button behaviour, nor any system-wide settings to achieve the same effect.
Just to make this concrete, let's say - if it's relevant - that I'm asking in the context of Firefox, Google Chrome, Gedit and Eclipse on a recent version of either Mint (what I use at home) or Ubuntu (what I use at work). I suspect this is a fairly distro-agnostic and application-agnostic question, though.
As far as I can tell, my options for scrolling are:
* Scroll with the mousewheel - slow!
* Use the PgUp / PgDn keys - jumps a huge distance at a time so can't be used for precise positioning, and is less comfortable than using the mouse
* Drag the scroll bar at the right hand side of the screen up and down like I used to do on old Windows PCs with two-button mice. This is what I do in practice, but it's just plain less comfortable than Windows-style middle-mouse scrolling; on a huge widescreen, it takes me most of a second just to move the cursor over from the middle of the screen to the scrollbar, and most of a second to move it back again, and I have to take my eyes off the content I'm actually scrolling to do this.
None of these satisfy me! This UI issue is the single thing that poisons my enjoyment of Linux on desktops and almost makes me wish I was using a laptop touchpad instead of a mouse. It irritates me enough that I've concluded that either I'm missing some basic Linux UI feature that solves this problem, or I'm just an oversensitive freak and it doesn't even bother anyone else - but I'm not sure which.
So my questions are:
1. Does Windows-style middle mouse button scrolling exist anywhere in the Linux world, or is it really purely a Windows thing? In particular, do any Linux web browsers let you use Windows-style scrolling?
2. Are there any mechanisms for scrolling pages that exist in Linux but not in Windows, especially ones that perform the role I've described?
3. Any other solutions that I'm missing?
Mark Amery
(3220 rep)
Dec 19, 2012, 04:01 PM
• Last activity: Aug 5, 2025, 09:26 PM
1
votes
1
answers
2040
views
Where does udev get the model and vendor strings?
I am creating a udev rule which simply logs USB storage devices. I have a USB flash disk with ID_MODEL_ID==1234 and ID_VENDOR_ID==abcd. udev shows that this is:
ID_MODEL=UDisk
ID_VENDOR=General
But I don't get where it gets this information. According to what I see in the usb.ids of the latest hwdata:
$ cat /usr/share/hwdata/usb.ids | grep abcd
abcd Unknown
$ cat /usr/share/hwdata/usb.ids | grep 1234
1234 IronLogic RFID Adapter [Z-2 USB]
1234 Bluetooth Device
1234 Typhoon Redfun Modem V90 56k
1234 Flash Drive
1234 Cruzer Mini Flash Drive
1234 USB to ATAPI
1234 BACKPACK
1234 Storage Device
1234 Fastrack Xtend FXT001 Modem
1234 Brain Actuated Technologies
1234 PDS6062T Oscilloscope
1234 ATAPI Bridge
1234 Prototype Reader/Writer
My goal is to simply log ID_VENDOR_ID and ID_MODEL_ID instead of the strings ID_VENDOR and ID_MODEL, and to look the strings up later, when I need them, in hwdata's usb.ids file. It seems to me that udev gets these strings from somewhere else, but where?
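One pitfall visible in the grep output above: usb.ids nests product IDs (tab-indented) under their vendor, so a bare grep for 1234 matches every vendor's product 1234. A vendor-scoped lookup is needed instead; a sketch, run here against made-up sample data rather than the real /usr/share/hwdata/usb.ids:

```shell
# look up a product name in usb.ids format, scoped to its vendor
lookup() {  # usage: lookup <vendor_id> <product_id> <usb.ids-file>
    awk -v v="$1" -v p="$2" '
        /^\t/ { if (invendor && $1 == p) { $1 = ""; sub(/^ */, ""); print; exit }
                next }                      # tab-indented lines are products
        { invendor = ($1 == v) }            # column-1 lines switch the vendor
    ' "$3"
}
printf 'abcd  Unknown Vendor\n\t1234  Example Flash Drive\nffff  Other Vendor\n\t1234  Other Device\n' > /tmp/usb.ids.sample
lookup abcd 1234 /tmp/usb.ids.sample   # prints: Example Flash Drive
```

That the abcd/1234 pair has no specific entry in the real file is consistent with the strings coming from somewhere other than usb.ids.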
VP.
(111 rep)
May 25, 2018, 07:31 AM
• Last activity: Aug 5, 2025, 09:04 PM
0
votes
0
answers
12
views
How is an overlayfs different from just mounting another disk/partition over a directory?
I have OpenWRT installed on some of my routers and to add additional storage for settings as well as programs that might be installed on the router and maybe logs, OpenWRT recommends you plug storage into it and use an overlayfs.
I also have an SBC where I just mount an external drive over top of my home directory on boot, to store the home directory externally off of the SD card that the bootloader and OS are installed on, since the storage on the external drive is more reliable than the SD card, despite being slower.
What is the difference between these two strategies? They are both basically single-board computers with Linux, and when the external drive fails to mount, in both cases we're left with a directory full of the content of the original directory, where the drive would have been mounted.
The only thing I can think of that is different is that the settings directory for OpenWRT (/etc) is being mounted on the external drive, whereas this is not the case on the SBC.
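The semantic difference shows up in the mount commands themselves: a plain mount replaces the directory's view entirely while mounted, whereas an overlay keeps the original contents visible as the read-only lower layer and redirects writes to the upper directory. A sketch (echoed as a dry run; every path here is hypothetical):

```shell
# dry run (echoed): a plain mount hides whatever was at /srv/data before
echo mount /dev/sdb1 /srv/data
# dry run (echoed): an overlay keeps /srv/data visible underneath as the
# lower layer and sends all writes to upperdir on the external disk
echo mount -t overlay overlay \
    -o lowerdir=/srv/data,upperdir=/ext/upper,workdir=/ext/work /srv/data
```

This is why OpenWRT uses an overlay for /etc: the baked-in defaults stay readable in the lower layer even when the external storage only carries the changes.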
leeand00
(4927 rep)
Aug 5, 2025, 08:58 PM
0
votes
0
answers
6
views
InfluxQL query returns a "partial" answer over http. Can I curl the whole thing?
What’s the right way to pull a _complete_ answer to an InfluxQL query over http?
I’m using the [acct_gather plugin](https://slurm.schedmd.com/acct_gather.conf.html) for a slurm cluster. It sends resource usage data to an [influxdb v1](https://docs.influxdata.com/influxdb/v1/) database. So if I write
#SBATCH --profile=Task
in an sbatch file, it records things like memory, I/O, and CPU usage to the database.
But if I try to ask for that data as a json file, e.g.,...
jobid=12345
curl -G 'http://:/query?' \
--data-urlencode "db=myslurmdatabase" \
--data-urlencode 'q=select "value" from /./ where "job"='"'"$jobid"'"
...then I get a **partial** response with only one type of measurement ("CPUFrequency"):
{
"results": [
{
"statement_id": 0,
"series": [
{
"name": "CPUFrequency",
"columns": [
"time",
"value"
],
"values": [
...
],
"partial": true
}
]
}
]
}
I think this happens for jobs that have run past a certain number of data points.
## What I've found
- In [this thread on github](https://github.com/grafana/grafana/issues/7380) somebody asked:
> So how does it work? Do you get a url with the second chunk or does the http response contain multiple json bodies? Is this compatible with the json decoders in browser?
People replied to the effect that modern browsers can handle it, but I don’t think they answered the question directly.
- [There’s a “chunked” parameter](https://docs.influxdata.com/influxdb/v1/tools/api/#query-http-endpoint) for the /query
endpoint. The options are either true
(in which case it chunks based on series or 10,000 data points), or a specific number of points (in which case it chunks based on that number). Chunking happens either way. But it’s not clear to me how to get the _next_ chunk.
- It looks like somebody has written [a third-party program](https://github.com/matthewdowney/influxdb-stream) that can stream the chunked results from a query. But is it possible with curl, or would I have to use something like this?
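As I understand the v1 /query endpoint, passing chunked=true makes the server stream every chunk inside the same HTTP response, one complete JSON object per line, so there is no next-chunk URL to fetch; curl just has to consume the stream line by line instead of parsing the body as a single document. A sketch with a simulated two-chunk stream:

```shell
# simulate a chunked influx response: one complete JSON object per line;
# a real request would be roughly (host/port omitted as in the question):
#   curl -G '.../query' --data-urlencode "chunked=true" ...
printf '%s\n' \
    '{"results":[{"series":[{"name":"CPUFrequency","partial":true}]}]}' \
    '{"results":[{"series":[{"name":"RSS"}]}]}' |
while read -r chunk; do
    echo "chunk of ${#chunk} bytes"
done
```

Each line can then be handed to a JSON tool individually, which sidesteps the "partial" truncation without a third-party streaming client.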
wobtax
(1125 rep)
Aug 5, 2025, 08:39 PM
-2
votes
2
answers
166
views
Can I stream Amazon Prime Video movies on a Fedora laptop?
When I try to play Amazon Prime videos on my Fedora laptop, they never start.
Instead, the screen gets stuck on a loading animation (spinning circle) indefinitely.
How can I fix this?
Is there a step-by-step guide or checklist I can follow?
[Edit July 31]
This appears to be hardware-dependent. It works okay on my desktop (Intel i7-4790 with GeForce GT 730) but fails on my laptop (Intel Celeron N4500 with Jasper Lake embedded GPU), despite having followed the checklist on the Fedora website.
Just today, I found the utility.
On both systems, it says in the properties section:
SUBSYSTEM drm
DEVTYPE drm-minor
Lars Poulsen
(347 rep)
Jun 16, 2025, 11:15 PM
• Last activity: Aug 5, 2025, 08:32 PM
0
votes
1
answers
9
views
Location of LXC user-owned containers
I am setting up some LXC containers as a normal user. I have followed all the steps in the manual for user-owned unprivileged containers, and the instances I created are running fine.
I used a template config to create the containers:
### template.conf
lxc.include = default.conf
# Increment by 10000 for every new container created, to generate unique UIDs and GIDs.
lxc.idmap = u 0 240000 8192
lxc.idmap = g 0 240000 8192
lxc-create -n mycontainer -f ~/.config/lxc/template.conf -t download -- -d archlinux -r current -a amd64
Now, I'd like to customize each container, e.g. mounting volumes. But I can't see any configuration files created when the containers were created. For privileged containers that I created earlier while experimenting, I can see config files in /var/lib/lxc/config.
From reading the LXC manual, I got the impression that the config used for creating a container is not also used at runtime. Where does LXC store config files for live containers? Or, if it doesn't, can I create one that is automatically identified and used on start?
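For reference, once the per-container config file is located, a volume mount is a one-line addition; the entry below is hypothetical (both paths are made up, and the target path is relative to the container's rootfs):

```
# hypothetical bind mount in a container's config; create=dir makes the
# target directory inside the rootfs if it is missing
lxc.mount.entry = /home/user/shared srv/shared none bind,create=dir 0 0
```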
user3758232
(101 rep)
Aug 5, 2025, 08:07 PM
• Last activity: Aug 5, 2025, 08:29 PM
1
votes
1
answers
10584
views
curl 7.58 under proxy issue ssl wrong version
I just installed an Arch-based distribution, Antergos. Then I installed a few packages with pacman. Now, after a restart, I am getting SSL errors while trying to clone with git.
fatal: unable to access 'https://xxxx@bitbucket.org/xxx/yyyy.git/': error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Also, curl to any https URL doesn't work.
curl https://google.com
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
curl looks up to date.
$ curl --version
curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.0g zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0
Release-Date: 2018-01-24
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy PSL
$ pacman -Q | egrep 'ssl|curl'
curl 7.58.0-1
openssl 1.1.0.g-1
openssl-1.0 1.0.2.n-1
python-pycurl 7.43.0.1-1
$ ldd $(which curl)
linux-vdso.so.1 (0x00007ffdccee9000)
libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007fe06a5a5000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fe06a387000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fe069fd0000)
libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007fe069dab000)
libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007fe069b8e000)
libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007fe069980000)
libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007fe069716000)
libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00007fe069299000)
libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007fe06904b000)
libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007fe068d63000)
libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007fe068b30000)
libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007fe06892c000)
libz.so.1 => /usr/lib/libz.so.1 (0x00007fe068715000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fe06aa4a000)
libunistring.so.2 => /usr/lib/libunistring.so.2 (0x00007fe068393000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fe06818f000)
libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007fe067f82000)
libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007fe067d7e000)
libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fe067b67000)
I am behind proxy
$ proxytunnel -p PROXY_IP:PROXY_PORT -d www.google.com:443 -a 7000
$ openssl s_client -connect localhost:7000
CONNECTED(00000003)
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify return:1
depth=1 C = US, O = Google Inc, CN = Google Internet Authority G2
verify return:1
depth=0 C = US, ST = California, L = Mountain View, O = Google Inc, CN = www.google.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
i:/C=US/O=Google Inc/CN=Google Internet Authority G2
1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2
i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIEdjCCA16gAwIBAgIINC+Y7yLd9OswDQYJKoZIhvcNAQELBQAwSTELMAkGA1UE
BhMCVVMxEzARBgNVBAoTCkdvb2dsZSBJbmMxJTAjBgNVBAMTHEdvb2dsZSBJbnRl
cm5ldCBBdXRob3JpdHkgRzIwHhcNMTgwMjA3MjExMzI5WhcNMTgwNTAyMjExMTAw
WjBoMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwN
TW91bnRhaW4gVmlldzETMBEGA1UECgwKR29vZ2xlIEluYzEXMBUGA1UEAwwOd3d3
Lmdvb2dsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC7lAOc
gsUECzoiJfpnAtq9qxAeTWBS8KYCd3ESvd7255YXW8FUiGTj9MYSSJ3OlYQvvU1I
NmnIXNU7BnhUBbY1kW4+GXc5RimwiIW5VsWftt1XOVZh5mR08DhYQjdQqI3IhK6r
FTS6/6BvFcjWMT/rVQv59XDaQLqWXSomEzOr1vDRXZSbAPr+YAGKUj+K0TjgZNW1
8xo8Lyp8kDjFxrWaThfwFMosbFw5HnnzpT1WSHfmXmF1mvvk4cJ+U2m3+K2pRki8
nNnWafLPdT408XoXrbWLVeEVSIQQH5z93uoj5lESal05pnOY5yYUJ+vmHdY7jOBh
sT9HaGzl3kD2J+1BAgMBAAGjggFBMIIBPTATBgNVHSUEDDAKBggrBgEFBQcDATAZ
BgNVHREEEjAQgg53d3cuZ29vZ2xlLmNvbTBoBggrBgEFBQcBAQRcMFowKwYIKwYB
BQUHMAKGH2h0dHA6Ly9wa2kuZ29vZ2xlLmNvbS9HSUFHMi5jcnQwKwYIKwYBBQUH
MAGGH2h0dHA6Ly9jbGllbnRzMS5nb29nbGUuY29tL29jc3AwHQYDVR0OBBYEFNGB
jzGWH9WkzeHj88QOo3gBTBs+MAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAUSt0G
Fhu89mi1dvWBtrtiGrpagS8wIQYDVR0gBBowGDAMBgorBgEEAdZ5AgUBMAgGBmeB
DAECAjAwBgNVHR8EKTAnMCWgI6Ahhh9odHRwOi8vcGtpLmdvb2dsZS5jb20vR0lB
RzIuY3JsMA0GCSqGSIb3DQEBCwUAA4IBAQBxOxsCFg7RIa0zVDI0N9rTNaPopqX9
yrIlK1u+C2ohrg5iF5XlTEzTuH43D/J0Lz550D9Cft4s6lWaNKpVDhNivEy2nzK5
ekuQKYtoQlIyfUnD5GnGZyr3m2AcMFnAAhlXVbyiJk0VNLDGCMVBaOuL/yT8X5dQ
j8MrKSvZRaUt2oixE7fKGNv5nhs0wuHu1TEU/8R5UMxbJs8knMZsRcfsvzjXpEHC
guA54xPnLFiU0QTw4GIFi5nDvfR5cF2UAJZNIF4o4sr4DB8+X7DWtBmMNHuR4Cpn
HEdlVzOA7BAGx8yO6AddwJo8AlxviCaPol1xPB8uJCGh/U0/7XhtR93S
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
issuer=/C=US/O=Google Inc/CN=Google Internet Authority G2
---
No client certificate CA names sent
Peer signing digest: SHA256
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 3790 bytes and written 261 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-CHACHA20-POLY1305
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-CHACHA20-POLY1305
Session-ID: BEE4D8162570B4AB0C8121DEC5756B6DC063DB3E7321BB58FD12D566482AD99A
Session-ID-ctx:
Master-Key: B050C78AAC1A0DF5063263DDCD3437CD3A4029E7D5431E236936D2D88AAAD2555A18D92318C9E2E31A550E339D4C26A8
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 100800 (seconds)
TLS session ticket:
0000 - 00 41 04 37 20 26 a1 bc-2b d0 86 8c 6b a5 74 ef .A.7 &..+...k.t.
0010 - 5c 82 0e d3 ec f7 97 0f-a9 9c cb e8 69 a8 0d 67 \...........i..g
0020 - 13 10 87 ec 22 da 60 d3-9b 98 f2 a4 ce 93 95 1c ....".`.........
0030 - 8f fa 71 57 b9 d9 9b 9f-14 9e 37 95 e5 70 e8 70 ..qW......7..p.p
0040 - 4b f5 ff c4 79 b6 f8 9c-32 f2 2a 13 81 1c 5b 9c K...y...2.*...[.
0050 - f3 52 26 df e6 8c db bd-23 c9 24 3e 46 8c 99 9a .R&.....#.$>F...
0060 - 13 53 69 5e 5d 2c c1 0f-e4 6d de df a9 33 af d9 .Si^],...m...3..
0070 - 1f 89 e7 c1 d9 8a d1 05-1a 88 c2 27 e2 0a 56 0f ...........'..V.
0080 - 40 ec 5c ed a3 ca f4 1e-f8 83 85 3b 7e 22 7d f5 @.\........;~"}.
0090 - b4 b7 96 a5 ca 27 4b 40-61 88 9d 58 d3 d6 e9 e7 .....'K@a..X....
00a0 - 1f 72 7c bf 25 24 f6 ab-83 a1 90 ae 97 92 d8 40 .r|.%$.........@
00b0 - 14 3b 5d 07 cd 5a 79 bc-eb 6b ae 66 f1 42 0c 11 .;]..Zy..k.f.B..
00c0 - a5 7e 68 f9 c1 51 6f 3d-7e f9 28 79 2a 32 d5 ea .~h..Qo=~.(y*2..
00d0 - 90 4f ee 2c 84 ac 66 0b-8d dc .O.,..f...
Start Time: 1519286347
Timeout : 7200 (sec)
Verify return code: 0 (ok)
Extended master secret: yes
---
read:errno=0
What is the solution?
**Update**
Confirming that this is a curl issue: if I turn off the proxy and connect directly, curl over https works. If I set any other proxy server IP and port from https://free-proxy-list.net/ and then try to connect curl through that proxy, I get the same error. So either this curl version has a bug, or a great many proxy servers are wrongly configured.
**Update**
I think the issue is related to the Deepin DE. I switched from the Deepin Desktop Environment to standard GNOME and curl started working fine. Possibly this is a bug in Deepin's Network Settings, although it does set the environment variables correctly.
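For anyone debugging a similar setup, a quick way to separate the proxy environment from curl itself is to inspect what the desktop environment exported and then override it per request. A minimal sketch; the proxy address below is a placeholder, not one from the question:

```shell
# Show whatever proxy variables the desktop environment exported;
# curl reads http_proxy / https_proxy / all_proxy (and uppercase variants).
env | grep -i '_proxy' || echo "no proxy variables set"

# Bypass the environment proxy entirely for one request:
#   curl --noproxy '*' -v https://example.com -o /dev/null
# Force a specific proxy regardless of the environment (placeholder address):
#   curl -x http://203.0.113.1:8080 -v https://example.com -o /dev/null
```

If the direct request works but every proxied one fails with the same error, the problem is in the proxy path rather than in the TLS setup of the target server.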
Neel Basu
(321 rep)
Feb 21, 2018, 05:06 PM
• Last activity: Aug 5, 2025, 08:02 PM
0
votes
0
answers
15
views
Unpredictable freezing with Debian 12 on Dell Latitude 7490
I'm having freezing problems with my Debian 12 install on a Dell Latitude 7490 laptop.
The screen will freeze and become unresponsive. Moving the mouse during the freeze will cause the keyboard backlight to turn back on if it's off, but nothing happens on the screen. This happens both mid-use and before login, whether in TTY or GUI, and whether in normal mode or recovery mode. I have LUKS, and I haven't noticed it happening before I enter encryption key, though. The only way around seems to be a force shutdown -> reboot, but the problem will almost always just happen again shortly after next boot. My system is honestly more or less unusable right now because of this.
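As an aside, if the kernel is still alive during these hangs, the Magic SysRq keys can sync disks and reboot more safely than a forced power-off. Enabling them is a one-line sysctl; a sketch, assuming Debian's default restricted mask:

```
# /etc/sysctl.d/90-sysrq.conf -- enable all Magic SysRq functions
kernel.sysrq = 1
# During a hang, hold Alt+SysRq and type slowly: R E I S U B
# (unRaw keyboard, tErminate, kIll, Sync, Unmount, reBoot)
```

Whether REISUB works also tells you something: if it does, the kernel is running and only the display/input path is wedged.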
At first, I didn't see this happening when I was plugged into AC at work or at home, but since some of the changes that I've tried (below), freezing now seems to happen regardless of battery or AC.
I've tried a number of things:
1) Update kernel from 6.1.* -> 6.12.33 via backports.
2) Update firmware drivers similarly.
3) Change BIOS settings (disabled C-States, disabled Intel SpeedShift, disabled SpeedStep, disabled TurboBoost; basically disabled every Intel performance option besides multi-core and hyper-threading). I also have Secure Boot disabled, but that was to solve a different problem.
4) Set intel_idle.max_cstate=0 (also tried setting it to 1) and intel_iommu=off in GRUB_CMDLINE_LINUX_DEFAULT.
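For reference, kernel parameters like these go in /etc/default/grub on Debian and take effect only after regenerating the config; a sketch using the values tried above:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=0 intel_iommu=off"
# Apply with:
#   sudo update-grub
# Verify after the next boot with:
#   cat /proc/cmdline
```

Checking /proc/cmdline after boot confirms the parameters actually reached the running kernel.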
Right now, I'm using bookworm with the 6.12.33 kernel from backports, Wayland, and iwd + Network manager.
Here's a link to a journalctl dump from a hang:
[https://paste.c-net.org/GoldbergLined](https://paste.c-net.org/GoldbergLined)
The problem in [this Mint forum thread](https://forums.linuxmint.com/viewtopic.php?t=432757) seems to be quite similar to mine, but no resolution.
Thanks for your help!
user1507246
(1 rep)
Aug 5, 2025, 07:52 PM
0
votes
2
answers
7766
views
Cannot change ViewportOut in Nvidia X Server Settings
I have 2 monitors on Nvidia Card:
- First: 1440x900
- Second: 1280x1024
The first one works great.
The second one has a resolution of 640x480.
In Nvidia settings, I can choose only 640x480.
If I change ViewportOut, it drops to 640x480.
It would be very cool if I could change the resolution in xorg.conf.
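One common approach (a sketch, not taken from the question) is to add an explicit Modeline to xorg.conf, generated with cvt. The output name DVI-I-1 below is hypothetical; replace it with the name xrandr reports for the second monitor:

```
Section "Monitor"
    Identifier "DVI-I-1"
    # Output of: cvt 1280 1024 60
    Modeline "1280x1024_60.00"  109.00  1280 1368 1496 1712  1024 1027 1034 1063 -hsync +vsync
    Option "PreferredMode" "1280x1024_60.00"
EndSection
```

With the proprietary NVIDIA driver, the mode may also be rejected during EDID validation; relaxing it with Option "ModeValidation" "AllowNonEdidModes" in the relevant Screen/Device section is sometimes needed.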
Alex
(21 rep)
Mar 27, 2015, 02:53 PM
• Last activity: Aug 5, 2025, 07:02 PM
1
votes
2
answers
7542
views
CentOS custom ISO installation - /dev/root does not exist
I am building a custom ISO for CentOS 7 and for now I am just intending for this to be an absolute minimal install (a proof of concept basically).
I am re-creating the ISO via using mkisofs.
The command I entered is:
mkisofs -o custom.iso -b isolinux.bin -c boot.cat -no-emul-boot -V 'CentOS' -boot-load-size 4 -boot-info-table -R -J -v -T isolinux/
This successfully created the ISO and allowed me to mount it in the optical drive of VirtualBox. During installation I get an error in rdsosreport.txt that says:
> localhost dracut-initqueue: Warning: Could not boot.
>
> localhost dracut-initqueue: Warning: /dev/root does not exist
So far I have:
copied .treeinfo and .discinfo into the root directory where I am building the ISO; created an /isolinux subdirectory with all the isolinux data from the latest CentOS-7-x86_64-Minimal-1503-01, as well as the /images and /LiveOS directories. I have also copied the repo .xml file into the root directory.
I have tried a multitude of kickstart files, but the current version I am using is ultra-minimalistic just to get this to work at some point.
install
cdrom
text
keyboard us
lang en_US.UTF-8
rootpw --iscrypted $6$XRIetvtFyLXRFVzZ$jX7xRxsN6M.DIqwJ9DQui9ytaqK3IAzauSqB4zeRNvMKJo6xCJQAk90XIaxh.SBn0IBtyZM7ZlHK8eSk55VnG0
timezone America/New_York --isUtc
clearpart --none --initlabel
%packages
@core
%end
My ks.cfg is located at isolinux/ks/ks.cfg, and when I boot the system I run:
linux inst.ks=cdrom:/dev/cdrom:/ks/ks.cfg
I'm a little lost on where to investigate further as all I am trying to do is load a very simple kickstart file to get Linux to do a one-button install. I don't necessarily need to be told, just to be pointed in the right direction as I've tried quite a few different kickstart configurations and have come up with the same error.
edit:
I got this working by editing the isolinux.cfg file, changing the volume ID to match the -V volume ID set in my mkisofs command, and then selecting that menu entry during installation.
append initrd=initrd.img inst.stage2=hd:LABEL=CentOS quiet inst.ks=cdrom:/dev/cdrom:/ks/ks.cfg
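For completeness, the working boot entry would look roughly like this in isolinux/isolinux.cfg (the label and menu text here are illustrative; the LABEL= value must match the -V value passed to mkisofs):

```
label custom
  menu label Install Custom CentOS 7
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=CentOS quiet inst.ks=cdrom:/dev/cdrom:/ks/ks.cfg
```

The original "/dev/root does not exist" error happens when dracut cannot find the stage2 image, which is exactly what the inst.stage2=hd:LABEL=... hint fixes.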
I have other errors within the iso that I'm investigating now due to the fact my kickstart file is so barren.
Chudbrochil
(31 rep)
Oct 1, 2015, 09:55 PM
• Last activity: Aug 5, 2025, 06:04 PM
0
votes
0
answers
13
views
Unexpected packet loss on 10Gbps NIC even under low traffic (~10Mbps)
I'm experiencing unexpected packet loss on a 10Gbps Intel NIC (ixgbe driver) even when traffic is only around 10Mbps. The setup is a test environment using tcpdump to capture packets on Ubuntu 22.04 with kernel 6.2.0.
I have already replaced the cable, the NIC port (I tried another port), and the NIC itself, but the result is the same.
### Observations:
- Traffic generator sent 46,759 packets.
- Only 37,676 packets were captured by tcpdump.
- ethtool -S shows rx_dropped: 7 and rx_packets: 37,676.
- tcpdump reports: 0 packets dropped by kernel.
- NIC driver: ixgbe, firmware 1.808.0
- rx-usecs is set to 20, RX ring set to 8192.
- RPS enabled: echo ffff > /sys/class/net/ens4f1/queues/rx-0/rps_cpus
- IRQs are spread: all affinities set to 0-31
- NUMA node for the NIC is 0; CPUs in node0 are 0-13,28-41.
- /proc/softirqs shows NET_RX is active on multiple cores.
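A quick arithmetic check on the numbers above shows why the NIC counters alone cannot explain the gap; a small sketch:

```shell
# Numbers from the observations above.
sent=46759        # packets the traffic generator reported sending
captured=37676    # packets tcpdump saw (matches ethtool rx_packets)
nic_dropped=7     # ethtool -S rx_dropped

echo "missing:            $((sent - captured))"               # 9083
echo "unaccounted by NIC: $((sent - captured - nic_dropped))" # 9076
# rx_packets already excludes the missing frames, so the loss occurred
# before the driver counted them (wire, switch, PHY) or the generator's
# own count is off -- not in the kernel capture path.
```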
### Tried so far:
- Reloading the ixgbe driver (modprobe -r ixgbe && modprobe ixgbe)
- Increasing the RX ring buffer (ethtool -G ens4f1 rx 8192)
- Disabling ntuple filtering and re-enabling it (ethtool -K ens4f1 ntuple on)
- Enabling rxhash (ethtool -K ens4f1 rxhash on)
- Testing different rx-usecs values (20, 50, 100)
- Ensured IRQ and RPS distribution
### Questions:
1. Could the NIC drop packets even if rx_dropped is low and no kernel drops are shown?
2. Is there any known ixgbe behavior or firmware bug that could cause packet loss in such low load?
3. How can I confirm whether packet loss is really happening at the NIC or somewhere else in the kernel path?
Any suggestions on further debugging or known limitations would be greatly appreciated.
y. ktr
(1 rep)
Aug 5, 2025, 06:02 PM
Showing page 1 of 20 total questions