Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
2
answers
2343
views
How does dmenu_run work?
My system is Debian 9.4, which uses Linux kernel 4.9.0-8-amd64; echo $SHELL on my system gives /bin/bash, and /bin/sh is a link to /bin/dash.
I was curious why every time I run an application with dmenu_run from dwm there is an additional /bin/bash process that runs as the parent, so I dug further into the script of dmenu_run:
#!/bin/sh
dmenu_path | dmenu "$@" | ${SHELL:-"/bin/sh"} &
I can't understand why my computer uses /bin/bash instead of /bin/sh. I also read the corresponding source code in dwm; it shows that it just simply forks and execs. There is no reason for /bin/bash to run instead of /bin/sh.
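The pipeline's last stage is where the parent process comes from: ${SHELL:-"/bin/sh"} is a default-value parameter expansion, so whatever dmenu prints is piped into $SHELL whenever that variable is set and non-empty. A minimal sketch of the same expansion:
# ${SHELL:-"/bin/sh"} expands to the value of SHELL if set and non-empty,
# otherwise to /bin/sh -- so with SHELL=/bin/bash, bash reads the command
echo 'echo "interpreted by $0"' | ${SHELL:-"/bin/sh"}
# with SHELL=/bin/bash this prints: interpreted by /bin/bash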
JiaHao Xu
(248 rep)
Nov 10, 2018, 08:22 AM
• Last activity: Aug 8, 2025, 07:10 PM
0
votes
1
answers
2216
views
How to install apt-get or yum for cmder?
I have cmder installed because I want to use a Unix-style command line on a Windows machine. I would like to install apt-get so that I can install more things using cmder. I could also use yum.
Currently I receive the following:
λ apt-get
'apt-get' is not recognized as an internal or external command,
operable program or batch file.
λ apt
'apt' is not recognized as an internal or external command,
operable program or batch file.
λ yum
'yum' is not recognized as an internal or external command,
operable program or batch file.
Sam
(33 rep)
Sep 21, 2020, 06:44 PM
• Last activity: Aug 8, 2025, 07:06 PM
3
votes
1
answers
73
views
Is there a way to link or mount a directory such that writing to the target goes to one destination, but reading from it grabs from another?
In my case, there are two NASes that I want some very large files written to, one with a fast connection and small storage, and another with a slower connection but large storage (let’s call them NAS A and NAS B respectively). I would like to write the files to the faster NAS A and then let NAS A transfer the files to NAS B in the background while the client computer can go to sleep. After the files are sent to NAS B, NAS A can delete the local copy to save space. I do not want to keep a backup of the files on NAS A as I’m basically using it as a write cache for NAS B.
However, the program that writes the files from the client needs to be able to read all the files that have been sent to NAS B every time, and so far I haven’t found a way to separately specify a write and read path in the program itself so these two situations have to point to the same path.
Is there thus a way to expose a single path but allow the writes to go to one destination and the reads to pull from another?
For additional context, the client machine is running Windows (for now anyway) and NAS A is a Raspberry Pi. If possible, I’d like the client to just mount a path from NAS A and let NAS A handle the linking.
I have some workaround alternatives in mind but am curious if this is possible to do, or if there are other alternatives I haven’t thought of yet.
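Not an answer, but one possible building block, assuming mergerfs is available on NAS A: a union mount whose create policy pins new files to the first (fast) branch while reads fall through to whichever branch holds the file, plus a background job migrating data to NAS B:
# sketch: /mnt/nasA is the fast local storage, /mnt/nasB the slow NAS mount (paths assumed)
mergerfs -o category.create=ff /mnt/nasA:/mnt/nasB /srv/union
# periodic migration, e.g. from cron; files vanish from A once safely copied to B
rsync -a --remove-source-files /mnt/nasA/ /mnt/nasB/
The client would then mount /srv/union from NAS A and see one consistent path.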
Owlguy
(33 rep)
Aug 8, 2025, 01:36 AM
• Last activity: Aug 8, 2025, 06:51 PM
2
votes
0
answers
15
views
Slurm allocates job requesting entire GPU to same GPU as jobs requesting shards already running on that GPU
I'm working on a "cluster" that currently has only one computenode, with 8x H100 GPUs. Slurm is configured such that each GPU is available either as a whole GPU, or as 20 shards. The (from my understanding relevant part of) `slurm.conf` says: ``` NodeName=computenode01 RealMemory=773630 Boards=1 Soc...
I'm working on a "cluster" that currently has only one computenode, with 8x H100 GPUs.
Slurm is configured such that each GPU is available either as a whole GPU or as 20 shards. The (from my understanding) relevant part of slurm.conf says:
NodeName=computenode01 RealMemory=773630 Boards=1 SocketsPerBoard=2 CoresPerSocket=96 ThreadsPerCore=2 Gres=gpu:8,shard:160 Feature=location=local
PartitionName="defq" Default=YES MinNodes=1 DefaultTime=UNLIMITED MaxTime=UNLIMITED AllowGroups=ALL PriorityJobFactor=1 PriorityTier=1 OverSubscribe=NO PreemptMode=OFF AllowAccounts=ALL AllowQos=ALL Nodes=computenode01
SchedulerType=sched/backfill
GresTypes=gpu,shard
SelectType=select/cons_tres
SelectTypeParameters=CR_Core
AccountingStorageTRES=gres/gpu,gres/shard
and the gres.conf looks like this:
AutoDetect=NVML
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia0
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia1
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia2
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia3
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia4
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia5
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia6
NodeName=computenode01 Name=gpu Count=1 File=/dev/nvidia7
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia0
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia1
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia2
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia3
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia4
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia5
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia6
NodeName=computenode01 Name=shard Count=20 File=/dev/nvidia7
Now if I submit a job using sbatch script.sh, and my resource request inside script.sh is
#SBATCH --gres=gpu:1
this job might be assigned (depending on availability) to the same physical GPU as jobs that were submitted previously and are already running, and which had requested, for example,
#SBATCH --gres=shard:4
even though the [Slurm documentation](https://slurm.schedmd.com/gres.html#Sharding) says that should not happen:
> Note the same GPU can be allocated either as a GPU type of GRES or as a shard type of GRES, but not both. In other words, once a GPU has been allocated as a gres/gpu resource it will not be available as a gres/shard. Likewise, once a GPU has been allocated as a gres/shard resource it will not be available as a gres/gpu.
This happens regardless of whether I use srun inside the script (without any further parameters, e.g. srun python my_job.py) or just dummy commands like echo $CUDA_VISIBLE_DEVICES (which lets me check which device the job is assigned to).
PS: I found this [mailing list post from 2023](https://groups.google.com/g/slurm-users/c/OwvGEAmdrMg) describing the same problem, but there was never any response.
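For anyone reproducing this, a minimal pair of job scripts matching the setup described above (file names hypothetical):
#!/bin/bash
# shard.sh -- occupies a few shards and reports its GPU assignment
#SBATCH --gres=shard:4
srun bash -c 'echo "shard job sees: $CUDA_VISIBLE_DEVICES"; sleep 300'

#!/bin/bash
# gpu.sh -- requests a whole GPU; per the quoted docs it should land elsewhere
#SBATCH --gres=gpu:1
srun bash -c 'echo "whole-GPU job sees: $CUDA_VISIBLE_DEVICES"'
Submitting shard.sh first and then gpu.sh while it is still running should, per the quoted documentation, never report the same device twice.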
Raketenolli
(121 rep)
Aug 8, 2025, 06:11 PM
0
votes
2
answers
38
views
Epsonscan 2 not working with a regular user (non-admin or not in the sudoers list) on Ubuntu 20.04LTS
My father's notebook runs Ubuntu 20.04 LTS. There are two users: me as Admin (with sudo privileges) and my father, who is a Standard (i.e. non-admin, no sudo privileges) user.
He recently bought an Epson printer (ET-2785) and I have just installed the Epson Scan 2 utility. I noted the following issue:
when I am logged in with my user and launch the Epson Scan 2 application through the classic 9-dot menu of the GNOME interface, it opens up and I am able to configure the scanning settings (output folder, dpi, quality of scan...) and start the scan;
when he is logged in with his credentials, Epson Scan 2 does not show up, and I cannot tell whether it is locked, has failed, or something else.
Please consider that I do not start the application with the sudo command in the terminal; I launch it as any other software program, by clicking the application icon.
The printer/scanner is reached over the WiFi network, because I was not able to get USB working (that is another weird issue).
I already looked in the official user manual, but nothing is mentioned about the matter.
Was anyone in the same situation and able to fix the problem? I would really appreciate some support.
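One diagnostic worth adding (a sketch; it assumes the utility's command-line name is epsonscan2): launch it from a terminal in the standard user's session and look at what it prints:
# run as the standard user, not via sudo
epsonscan2
# also check the user's group memberships; lp/scanner are the usual suspects
groups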
Thank you very much.
R99
(41 rep)
Aug 6, 2025, 08:04 PM
• Last activity: Aug 8, 2025, 06:00 PM
2
votes
1
answers
2411
views
How to merge squashfs file during booting of "live" linux distros from GRUB manually?
I want to boot Linux distros from the GRUB 2.0 command line. I've tried to do so for a couple of distros, and at "best" I receive an initramfs prompt, with no GUI like the one that starts when the distro is run the standard way. The resulting file system seems to have the files contained in the initrd file (less than 100 MB), but not those in filesystem.squashfs (which is larger than 1 GB).
The vmlinuz, initrd and filesystem.squashfs files are in the casper folder, and the linux command in the menu entry of the distros' grub.cfg contains boot=casper. I suspect the folder name casper is not necessary for the kernel option to work; casper is related to persistence, as far as I understood from Wikipedia.
Also, as far as I understand the problem, when the boot process tries to do its unionfs thing it cannot find the SquashFS file holding everything except the kernel. How do I let it know its location? If the problem has some other root cause, please tell me so.
ADDED 0: I changed the linux (hd0,msdos2)/casper/vmlinuz command, adding root=UUID=<what the ls command gives for the partition with the distro>, and now on starting I finally get many lines of stdin: Not a typewriter, then (initramfs) Unable to find a medium containing live file system, and again the CLI prompt. Whether stdout showed the same lines without the root option I just don't remember for sure; there are so many lines during boot.
As far as I understand from the GRUB manual and my trial and error, the root variable can point to a device only, not a path inside the device, so I see no way of setting it to point to the squashfs file inside the casper folder.
ADDED 1: I ran grep -rnw 'initrd file loop mounted location' -e 'filesystem.squashfs' as per https://stackoverflow.com/questions/16956810/how-do-i-find-all-files-containing-specific-text-on-linux and got nothing, so I have no idea how the init process finds that squashfs file.
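A sketch of the direction this usually takes (untested here; casper's live-media-path option is the assumption to verify against the casper manpage) is telling casper explicitly where the squashfs lives:
# from the GRUB command line; device and paths as in the question
set root=(hd0,msdos2)
linux /casper/vmlinuz boot=casper live-media-path=/casper
initrd /casper/initrd
boot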
Alex Martian
(1287 rep)
Mar 6, 2019, 03:51 PM
• Last activity: Aug 8, 2025, 05:06 PM
3
votes
4
answers
420
views
Creating n folders with equal number of files from a large folder with all files
I am not super fluent in bash yet. I have a large folder with all my data files, and I need to copy them into subfolders to make batches. I want to specify the number of batches. Right now I count how many files there are and use that to make ranges, but I don't think my script is the best way to do this. Is there a more convenient way?
cd /dump
batches=4
files=$(cat /data/samples.txt | wc -l)
echo "$files $batches"
for ((i=0; i<=batches; i++))
do
mkdir -p merged_batches/batch$i
rm -f merged_batches/batch$i/*
ls merged/*.sorted.labeled.bam* |
head -n $(( $((files/batches)) * $((i+1)) * 2 )) |
tail -n $((2 * files/batches)) |
xargs -I {} cp {} merged_batches/batch$i
done
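A possibly simpler shape for the same task (a sketch; it assumes each .bam file's companion index sorts directly after it, as the ×2 arithmetic in the script above implies):
#!/bin/bash
# distribute the file pairs over N batch directories, two files at a time
batches=4
n=0
for ((i=0; i<batches; i++)); do
    mkdir -p "merged_batches/batch$i"
done
for f in merged/*.sorted.labeled.bam*; do
    cp "$f" "merged_batches/batch$(( (n / 2) % batches ))"
    n=$((n + 1))
done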
rubberduck
(43 rep)
Aug 7, 2025, 03:49 PM
• Last activity: Aug 8, 2025, 04:52 PM
1
votes
1
answers
139
views
How can I protect SELinux labels from being modified?
I'm running Fedora 23. I have SELinux enabled and enforcing. I know that you can change a file's labels with restorecon and chcon (and possibly other programs). This is no doubt an avenue by which a file's security can be bypassed. How can I make it so SELinux labels cannot be changed? [This](https://wiki.gentoo.org/wiki/SELinux/Tutorials/How_SELinux_controls_file_and_directory_accesses) Gentoo documentation page says that SELinux can be used to do that, but it doesn't say how. Fedora's targeted policy provides three particular booleans:
+ secure_mode — "Do not allow transition to sysadm_t, sudo and su effected"
+ secure_mode_insmod — "Do not allow any processes to load kernel modules"
+ secure_mode_policyload — "Do not allow any processes to modify kernel SELinux policy"
Does Fedora policy come with some way to prevent user space processes from modifying SELinux labels?
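For reference, those booleans can be inspected and persisted with the standard tools:
# query the current values
getsebool secure_mode secure_mode_insmod secure_mode_policyload
# set one persistently (-P writes it to the policy store)
sudo setsebool -P secure_mode_policyload on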
Melab
(4328 rep)
Jun 24, 2016, 01:25 AM
• Last activity: Aug 8, 2025, 04:42 PM
0
votes
0
answers
26
views
can the persistent storage on tails-os be configured to use yubikey to unlock?
I use tails-os with persistent storage to keep sensitive data secure but relatively easy to access on an airgapped laptop.
I might be pissing up a rope here, but I wonder if it might be possible to configure the persistent storage with a yubikey for unlocking the encrypted partition?
The problem I see is that although the necessary YubiKey/LUKS packages can be installed after each boot, they are kept on the persistent storage and so are not available until after the persistent storage device has been opened (catch-22?).
Also, some config files in /etc (crypttab and ykluks.cfg, maybe) would need to be modified, and stay modified, in order for the system to know to look for a YubiKey after boot but before taking the user to the Tails desktop.
I guess I'm wondering if perhaps the live disk could be modified in a way that statically installs the necessary packages and modifies the necessary config files without requiring the persistent storage to be available?
---
UPDATE:
OK, so what I have managed so far is to add my YubiKey's FIDO2 credential to a LUKS slot of my USB stick loaded with Tails (Persistent Storage configured), by running this command with the stick plugged into a USB port on my daily driver:
$ sudo systemd-cryptenroll /dev/sdc2 --fido2-device=auto
Verified the fido2 added to a luks slot for the Persistent partition:
$ sudo systemd-cryptenroll /dev/sdc2
SLOT TYPE
0 password
2 fido2
Then I was able to remove the password that was set up for slot 0 of the Persistent storage: $ sudo cryptsetup -q -v luksKillSlot /dev/sdc2 0
Verified the password slot was removed:
$ sudo systemd-cryptenroll /dev/sdc2
SLOT TYPE
2 fido2
Booted my laptop with the tails-os usb, and did not open the Persistent storage at the tails setup screen, but started tails.
Then I ran the command to open the Persistent storage using the fido2 slot entry: $ sudo cryptsetup luksOpen --token-only /dev/sdb2 Persis
Verified the partition was opened by checking Persis was listed in /dev/mapper: $ ls /dev/mapper
Created a mount point in /media/amnesia/Persis to have the partition show up in the regular user's (amnesia) favorite locations menu in the file manager: $ sudo mkdir -p /media/amnesia/Persis ; sudo chown -R amnesia: /media/amnesia/Persis ; sudo mount /dev/mapper/Persis /media/amnesia/Persis
And voilà: although a little clunkier than if things could be set up more statically on the live disk, the Persistence partition was successfully opened and mounted via my YubiKey's FIDO2 credential.
However, I did run into a glitch I cannot yet explain. I run with three YubiKeys, all configured identically and so differing in device ID only. Yet the command to open the partition with --token-only worked only with the specific YubiKey that was plugged in when the sudo systemd-cryptenroll /dev/sdc2 --fido2-device=auto command was run. When trying to use one of my other two YubiKeys, the luksOpen command just fails, informing me that the FIDO2 PIN entered was incorrect (or something similar). However, when I set up a challenge-response with any one of my YubiKeys in a KeePassXC database, for example, any of my YubiKeys afterwards works fine to access and modify that database. I thought the challenge-response feature on the YubiKey is the FIDO2 feature, no?
If so, the only thing I can think is that the sudo systemd-cryptenroll /dev/sdc2 --fido2-device=auto command that adds the YubiKey's FIDO2 credential to a key slot on the LUKS partition must also use the YubiKey's device ID as part of the --token-only authentication process?
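The single-key behaviour is consistent with how FIDO2 differs from YubiKey challenge-response: a FIDO2 credential is generated on, and bound to, one physical token, whereas an HMAC challenge-response secret can be programmed identically onto several keys. If that is the cause, each key needs its own enrollment (and LUKS slot), sketched as:
# repeat once per physical key, with only that key plugged in
$ sudo systemd-cryptenroll /dev/sdc2 --fido2-device=auto   # key 1
$ sudo systemd-cryptenroll /dev/sdc2 --fido2-device=auto   # repeat for keys 2 and 3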
naphelge
(43 rep)
Aug 7, 2025, 02:56 PM
• Last activity: Aug 8, 2025, 04:32 PM
2
votes
0
answers
49
views
Replicating "world" (set of installed packages) on another Alpine Linux system
When using Alpine Linux normally, it's easy to install a set of packages and then just transfer the /etc/apk/world file over to another system (followed by apk fix on the other system) to ensure that both systems have the same set of packages installed.
However, when using apk add -t to add a "virtual package", as in
# apk add -t bash-all bash bash-doc
(which adds a virtual package called bash-all that depends on both the bash and the bash-doc packages), this adds an entry such as
bash-all=20250803.041246
... in the /etc/apk/world file. This entry is "useless" on the other system as it says nothing about what the virtual package depends on.
What's the "proper" way to ensure that my other Alpine Linux system is kept in sync with regard to packages installed, _given that the system is using virtual packages_? ("My other Alpine Linux system", could be an actual other system, or it could be the system I'll get after performing a reinstall of my current Alpine Linux system.)
---
I've played around with copying both /etc/apk/world and /lib/apk/db/installed (which contains the actual packages installed, their dependencies, and other metadata), but without being able to figure out a way of triggering the target system to perform the correct(-ive) package installations/uninstallations.
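One workaround sketch (the exact output format of apk info -R is an assumption to check against your apk version): expand each world entry into its direct dependencies, so the target system can be fed real package names instead of virtual ones:
# for each world entry, strip any version constraint and print what it
# depends on; virtual packages like bash-all then reveal their real members
while IFS= read -r entry; do
    apk info -R "${entry%%[<>=]*}"
done < /etc/apk/world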
Kusalananda
(354459 rep)
Aug 8, 2025, 02:18 PM
• Last activity: Aug 8, 2025, 04:19 PM
0
votes
1
answers
3077
views
Metasploitable virtualbox has wrong keyboard layout
I just installed Metasploitable in VirtualBox for doing some exercises.
Metasploitable is an Ubuntu Hardy-based system with some built-in vulnerabilities that allow you to tamper with it.
I live in Denmark, and thus I use a Danish keyboard layout.
Since Metasploitable is set up for English, my keyboard is now a little messed up when I type inside this new VM.
Usually, when I have been working on Ubuntu or Kali VMs, then it is quite easy to navigate around and change the keyboard language layout.
This is harder now, since I only have a terminal to work with.
I have been searching around for solutions; the closest thing I have found is this:
sudo dpkg-reconfigure keyboard-configuration
where I think I need to put the configuration I want in the keyboard-configuration field. But I don't know how to figure out what the configurations are called for the different languages (trying intuitive names like "dk" fails).
So, how do I change the keyboard layout?
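A session-only workaround that sidesteps dpkg-reconfigure entirely (the keymap name dk is assumed from console-data's naming):
# switch the console keymap for the current session only
sudo loadkeys dk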
NotQuiteSo1337
(23 rep)
Sep 15, 2021, 07:20 PM
• Last activity: Aug 8, 2025, 04:08 PM
2
votes
1
answers
54
views
Running docker in an unprivileged devuan lxc container (on proxmox)
I am configuring a single-node Proxmox server for use as a home server/homelab. I am looking to run many of my generic applications (like Immich, Jellyfin, etc.) in Docker containers, which could all run within a single LXC container. I understand that Proxmox recommends not running Docker in an LXC container and using a VM instead, but VMs take constantly allocated resources to run, which doesn't fit well with a Docker host setup in my opinion.
I am also interested in using devuan linux in an unprivileged lxc container to host the docker applications. I am not here to argue distribution selection either, I would just prefer to use this distribution if possible.
Unfortunately, when I try to install docker in an unprivileged devuan lxc container (using the guide for debian 12, the basis for devuan 5), installation via apt fails due to the docker service not being able to start. The specific error I get is:
/etc/init.d/docker: 69: ulimit: error setting limit (Operation not permitted) invoke-rc.d: initscript docker, action "start" failed.
I assumed this was a limitation of running the Devuan LXC container as unprivileged, and that this was a kernel permissions issue. However, when I created an Ubuntu 24.04 LXC container with the same configuration (unprivileged with nesting), I was able to install docker-ce completely and run the docker run hello-world command successfully. My remaining questions are as follows (one hedged experiment is sketched after the list):
1. Why am I able to install docker engine successfully in an unprivileged ubuntu lxc container, but not a devuan container with an identical configuration?
2. Can I configure my devuan container to allow docker to run without compromising security by escalating privilege?
3. If not, is there a way I can install docker in a 'rootless' mode that would allow it to run in an unprivileged devuan container?
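The experiment mentioned above targets the failing ulimit call (a guess, not a verified fix): pre-set the hard nofile limit from the Proxmox host's container config, so the init script's hard-limit raise becomes a no-op that is permitted even in an unprivileged container:
# /etc/pve/lxc/<ctid>.conf on the Proxmox host (ctid hypothetical);
# 524288 is an assumed value, match whatever /etc/init.d/docker line 69 sets
lxc.prlimit.nofile: 524288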
hdconway
(23 rep)
Aug 7, 2025, 01:55 PM
• Last activity: Aug 8, 2025, 03:46 PM
0
votes
2
answers
163
views
OpenVPN Client not connecting to OpenVPN Server on Netgear
My son has an OpenVPN server set up on his NetGear router in the UK. I am in Italy and have an OpenVPN Client that works fine on Windows but I want to connect also from my Raspberry Pi and Ubuntu laptop. I have followed all the OpenVPN Client installation instructions but neither of the Linux versions will connect. The OpenVPN version I have on the Pi is 2.6.3-1+deb12u2. For now I am going to concentrate on the Raspberry Pi version because the problems in both appear to be similar and perhaps fixing one will indicate how to fix the other. The Pi is a model 5 running Debian GNU/Linux 12 (bookworm). The Client.conf file is as follows:
client
#dev tap
dev tun
proto udp
remote beerisgood.ddns.net 12974
resolv-retry infinite
nobind
persist-key
persist-tun
persist-remote-ip
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key /etc/openvpn/client.key
cipher AES-128-CBC
# data-ciphers-fallback AES-128-CBC
comp-lzo
# route-noexec ## added JKJ 17/2/25
verb 3
log /etc/openvpn/log/jrjvpn.log
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
Starting the OpenVPN Client generates the following log file:
2025-02-19 18:58:07 DEPRECATED OPTION: --cipher set to 'AES-128-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305). OpenVPN ignores --cipher for cipher negotiations.
2025-02-19 18:58:07 Note: '--allow-compression' is not set to 'no', disabling data channel offload.
2025-02-19 18:58:07 OpenVPN 2.6.3 aarch64-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] [DCO]
2025-02-19 18:58:07 library versions: OpenSSL 3.0.15 3 Sep 2024, LZO 2.10
2025-02-19 18:58:07 DCO version: N/A
2025-02-19 18:58:07 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
2025-02-19 18:58:08 TCP/UDP: Preserving recently used remote address: [AF_INET]213.18.141.7:12974
2025-02-19 18:58:08 Socket Buffers: R=[212992->212992] S=[212992->212992]
2025-02-19 18:58:08 UDPv4 link local: (not bound)
2025-02-19 18:58:08 UDPv4 link remote: [AF_INET]213.18.141.7:12974
2025-02-19 18:58:08 TLS: Initial packet from [AF_INET]213.18.141.7:12974, sid=2f04db75 d6626407
2025-02-19 18:58:08 VERIFY OK: depth=1, C=TW, ST=TW, L=Taipei, O=netgear, OU=netgear, CN=netgear CA, name=EasyRSA, emailAddress=mail@netgear
2025-02-19 18:58:08 VERIFY OK: depth=0, C=TW, ST=TW, L=Taipei, O=netgear, OU=netgear, CN=server, name=EasyRSA, emailAddress=mail@netgear
2025-02-19 18:58:08 Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, peer certificate: 1024 bit RSA, signature: RSA-SHA256
2025-02-19 18:58:08 [server] Peer Connection Initiated with [AF_INET]213.18.141.7:12974
2025-02-19 18:58:08 TLS: move_session: dest=TM_ACTIVE src=TM_INITIAL reinit_src=1
2025-02-19 18:58:08 TLS: tls_multi_process: initial untrusted session promoted to trusted
2025-02-19 18:58:09 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
2025-02-19 18:58:09 PUSH: Received control message: 'PUSH_REPLY,ping 10,ping-restart 120,route-delay 10,route-gateway 192.168.1.1,redirect-gateway def1,peer-id 0,cipher AES-256-GCM'
2025-02-19 18:58:09 OPTIONS IMPORT: route options modified
2025-02-19 18:58:09 OPTIONS IMPORT: route-related options modified
2025-02-19 18:58:09 net_route_v4_best_gw query: dst 0.0.0.0
2025-02-19 18:58:09 net_route_v4_best_gw result: via 192.168.178.1 dev wlan0
2025-02-19 18:58:09 ROUTE_GATEWAY 192.168.178.1/255.255.255.0 IFACE=wlan0 HWADDR=2c:cf:67:5e:8e:04
2025-02-19 18:58:09 TUN/TAP device tun0 opened
2025-02-19 18:58:09 Data Channel: cipher 'AES-256-GCM', peer-id: 0, compression: 'lzo'
2025-02-19 18:58:09 Timers: ping 10, ping-restart 120
2025-02-19 18:58:19 net_route_v4_add: 213.18.141.7/32 via 192.168.178.1 dev [NULL] table 0 metric -1
2025-02-19 18:58:19 net_route_v4_add: 0.0.0.0/1 via 192.168.1.1 dev [NULL] table 0 metric -1
2025-02-19 18:58:19 sitnl_send: rtnl: generic error (-101): Network is unreachable
2025-02-19 18:58:19 ERROR: Linux route add command failed
2025-02-19 18:58:19 net_route_v4_add: 128.0.0.0/1 via 192.168.1.1 dev [NULL] table 0 metric -1
2025-02-19 18:58:19 sitnl_send: rtnl: generic error (-101): Network is unreachable
2025-02-19 18:58:19 ERROR: Linux route add command failed
2025-02-19 18:58:19 Initialization Sequence Completed
Which, as you can see, fails at the route add commands. The Windows script that works uses dev tap, but I changed it to dev tun in the hope that it would resolve the error; it made no difference. Running ip a before and after starting the client shows the following as the only difference after starting OpenVPN:
17: tun0: mtu 1500 qdisc noop state DOWN group default qlen 500
link/none
The up and down script calls to update-resolv-conf were suggested additions from the installation instructions, but they make no difference, so I commented them out for the above log file generation. As far as I can understand from the scripts, they take environment variables that can be passed from the server and use them to alter the routing tables, but nothing is being passed by the server, so it makes no difference having them or not.
One thread that I read seemed to think that resolvconf was needed, so I installed that too, but it made no difference. Running it before and after starting OpenVPN gives the following result:
$ sudo resolvconf -l
# resolv.conf from NetworkManager
search fritz.box
nameserver 192.168.178.1
nameserver fd00::de39:6fff:feec:40a6
Another thing I tried was to add route-noexec to the conf file (commented out above). This suppressed the errors but did not resolve the problem of the VPN not working, so I decided it was better to see the errors. Comparing the log files with and without that option showed them to be identical up to the line 2025-02-19 18:58:09 Timers: ping 10, ping-restart 120 in the log above, but then the remainder is empty up to Initialization Sequence Completed.
The question is, how can I get this working?
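For what it's worth, the log shows the server pushing route-gateway 192.168.1.1 (the router's own LAN) while the client sits on 192.168.178.0/24, which is exactly why both route adds report "Network is unreachable". A hedged client-side experiment, not a confirmed fix, is to refuse those pushed options:
# client.conf additions (sketch): drop the unreachable pushed gateway and
# the redirect that depends on it, then test reaching the remote LAN manually
pull-filter ignore "route-gateway"
pull-filter ignore "redirect-gateway"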
Jan Jachnik
(1 rep)
Feb 19, 2025, 07:25 PM
• Last activity: Aug 8, 2025, 03:28 PM
4
votes
2
answers
1131
views
Any way to fix chromium hang that eats up all mouse clicks?
So I have been running into this weird issue where Chromium hangs and pretty much locks out mouse clicks. The mouse can still move, and I can navigate all non-Chromium windows using my keyboard (alt+tab, etc.), but I can't click to focus any window with the mouse.
If I kill chromium process, things go back to normal.
I have experienced this issue on both Mate Linux Mint 19.3 (kernel 5.4) and on KDE OpenSuse 15.2 (kernel 5.13), and on different computers using different mice (both usb and wireless).
The only things the hangs I have seen have in common are:
1) All of them are on X11 (so no wayland)
2) Computers have AMD gpus, up to 6-7 years apart
3) Most of the time, it tends to happen when the mouse hovers over a tab and the tooltip shows up (but not always, just most of the time)
I have no way to replicate it though, it just happens every once in a while.
Anyone ever run into this issue and know how to fix it? (Please don't say "use Firefox"; I use it, but I need both.)
Thanks
Edit: I ran into the issue again, and it seems I don't need to kill all of Chromium, just its GPU process. Anyone have any idea?
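For the edit above, a targeted way to kill only that process (assuming Chromium's usual per-process --type flag):
# the GPU helper is the only chromium process started with --type=gpu-process;
# -- stops pkill from parsing the pattern as an option
pkill -f -- '--type=gpu-process'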
user16551018
(41 rep)
Jul 29, 2021, 05:16 AM
• Last activity: Aug 8, 2025, 03:26 PM
2
votes
1
answers
2118
views
ASUS motherboard fan control under Linux
I built a machine using an [ASUS TUF GAMING B650M-PLUS](https://www.asus.com/br/motherboards-components/motherboards/tuf-gaming/tuf-gaming-b650m-plus/) motherboard and a Ryzen 7 8700G CPU. Linux installed with no problems, but I am missing fan control and monitoring.
Under Windows, I use ASUS software to do things like setting fan curves and monitoring fan speeds.
What I have tried:
* pwmconfig doesn't find any fans.
* sensors-detect fails to find anything but spd5118 (RAM temperature, I believe), k10temp (CPU temperature), amdgpu, and NVMe temperature.
* [CoolerControl](https://gitlab.com/coolercontrol/coolercontrol) doesn't find any fans, either.
I keep finding material about fan control on ASUS laptops, which isn't my case. What tools can I use under Linux to manage CPU fans? Right now I need to reboot and enter Setup, which is very impractical.
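One experiment that comes up for desktop ASUS boards (an assumption to verify for this B650M, at your own risk): the Super I/O sensor chip is often a Nuvoton handled by the nct6775 driver, which ACPI resource claims can block:
# load the Super I/O sensor driver directly and re-check
sudo modprobe nct6775
sensors
# if modprobe reports a resource conflict, retest after booting with the
# kernel parameter acpi_enforce_resources=lax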
Renan
(17338 rep)
Jan 31, 2025, 10:58 PM
• Last activity: Aug 8, 2025, 02:18 PM
1
votes
4
answers
5247
views
How to get quick result for top 10 largest directories
I have a directory (mount point) of size 9 TB, and I would like to get each directory's size, especially the ones which have consumed the most space. For this I am using the command below and pushing the result into a txt file from a bash script.
du -hsx * | sort -rh | head -10
This is taking more time than I expected; even after several hours I am often unable to get the result.
I am trying this over a network, using a mobgar VPN connection.
Is there anything that can be improved here?
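For what it's worth, a variant that sorts raw byte counts and humanizes only the final ten (GNU du and coreutils assumed); the bigger win, though, is usually running the command on the file server itself (e.g. over ssh) instead of across the VPN-mounted share:
# byte-accurate sort, then pretty-print just the ten survivors
du -sxB1 -- */ | sort -rn | head -n 10 | numfmt --field=1 --to=iec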
Sachin
(27 rep)
Feb 1, 2021, 05:08 PM
• Last activity: Aug 8, 2025, 02:04 PM
7
votes
2
answers
518
views
apt seems to be ignoring Signed-By
I'm trying to install AviSynth+ from yuuki-deb.x86.men.
$ cat /etc/apt/sources.list.d/yuuki-deb.sources
Types: deb
URIs: http://yuuki-deb.x86.men/
Suites: bullseye
Components: main
Signed-By: /usr/share/keyrings/yuuki-deb.gpg
Enabled: yes
$ ls -l /usr/share/keyrings/yuuki-deb.gpg
-rw-r--r-- 1 root root 433 Sep 7 20:23 /usr/share/keyrings/yuuki-deb.gpg
$ gpg --show-keys /usr/share/keyrings/yuuki-deb.gpg
pub ed25519 2020-03-03 [SCA]
A9BBA31152359AE080A1DF851F331533ABCDEEA3
uid AviSynth+ Yuuki Debian Repository
# apt update
-*- snip -*-
Err:4 http://yuuki-deb.x86.men bullseye InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1F331533ABCDEEA3
-*- snip -*-
It seems to be completely ignoring the Signed-By directive. How can I fix this?
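Two diagnostics that usually narrow this down (a sketch, not a known fix): check whether another source entry for the same host exists without Signed-By, and watch apt's signature verification in verbose mode:
# any older .list entry for the same host would be merged with this one
grep -r yuuki /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
# verbose verification shows which keyrings apt actually consulted
sudo apt-get update -o Debug::Acquire::gpgv=true 2>&1 | less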
wizzwizz4
(712 rep)
Aug 2, 2025, 10:19 AM
• Last activity: Aug 8, 2025, 01:47 PM
0
votes
1
answers
4115
views
Error installing oracle 19c database pre-install on oracle linux server 7.6
I'm trying to install Oracle 19c on Oracle Linux 7.6, and I'm required to install the Oracle 19c preinstall package. Given that this is a fresh install, I don't understand why I'm getting these errors:
yum -y localinstall http://yum1.stanford.edu/mrepo/ol8-x86 64/RPMS.appstream/oracle-database-preinstall-19c-1.0-2.el8.x86 64.rpm
Loaded plugins: langpacks, ulninfo
Repository ol7_latest is listed more than once in the configuration
Repository ol7_u0_base is listed more than once in the configuration
Repository ol7_u1_base is listed more than once in the configuration
Repository ol7_u2_base is listed more than once in the configuration
Repository ol7_u3_base is listed more than once in the configuration
Repository ol7_u4_base is listed more than once in the configuration
Repository ol7_u5_base is listed more than once in the configuration
Repository ol7_u6_base is listed more than once in the configuration
Repository ol7_security_validation is listed more than once in the configuration
Repository ol7_optional_latest is listed more than once in the configuration
Repository ol7_addons is listed more than once in the configuration
Repository ol7_MODRHCK is listed more than once in the configuration
Repository ol7_latest_archive is listed more than once in the configuration
Repository ol7_optional_archive is listed more than once in the configuration
Repository ol7_UEKR5 is listed more than once in the configuration
Repository ol7_UEKR4 is listed more than once in the configuration
Repository ol7_UEKR3 is listed more than once in the configuration
Repository ol7_UEKR3_OFED20 is listed more than once in the configuration
Repository ol7_UEKR5_RDMA is listed more than once in the configuration
Repository ol7_UEKR4_OFED is listed more than once in the configuration
Repository ol7_UEKR4_archive is listed more than once in the configuration
Repository ol7_UEKR5_archive is listed more than once in the configuration
Repository ol7_kvm_utils is listed more than once in the configuration
Skipping: http://yum1.stanford.edu/mrepo/ol8-x86 , filename does not end in .rpm.
Skipping: 64/RPMS.appstream/oracle-database-preinstall-19c-1.0-2.el8.x86, filename does not end in .rpm.
Cannot open: 64.rpm. Skipping.
Nothing to do
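Reading the Skipping/Cannot open lines: yum received three separate arguments, i.e. the URL contains literal spaces where x86_64 should appear. The intended command is presumably (an assumption):
yum -y localinstall http://yum1.stanford.edu/mrepo/ol8-x86_64/RPMS.appstream/oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
(Whether an el8 preinstall RPM is appropriate on Oracle Linux 7.6 is a separate question.)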
Mohamed Douzi
(1 rep)
Aug 9, 2022, 02:47 PM
• Last activity: Aug 8, 2025, 01:03 PM
0
votes
1
answers
52
views
Boot QEMU from SPDK vhost-user-blk-pci
I'm trying to boot a QEMU VM from a vhost-user-blk-pci device, which appears to be generally possible (https://github.com/spdk/spdk/issues/1728). In my case, vhost gets the image via SPDK's NVMe-oF driver. However, QEMU does not find a bootable device. What I am doing:
1. Start vhost
bin/vhost -S /var/tmp -s 1024 -m 0x3 -A 0000:82:00.1
2. Connect to NVMe-oF server and create blk controller
./rpc.py bdev_nvme_attach_controller -t tcp -a 10.0.0.4 -s 4420 -f ipv4 -n nqn.2024-10.placeholder:bd --name placeholder
./rpc.py vhost_create_blk_controller --cpumask 0x1 vhost.0 placeholdern1
3. Attempt to launch QEMU with blk controller as boot device (does not find anything bootable)
taskset -c 2,3 qemu-system-x86_64 \
-enable-kvm \
-m 1G \
-smp 8 \
-nographic \
-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0 \
-chardev socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.0,reconnect=1 \
-device vhost-user-blk-pci,chardev=spdk_vhost_blk0,bootindex=1,num-queues=2
Things I've checked:
* I can attach an NVMe-oF disk to the VM just fine using the same sequence of commands (while giving QEMU another bootable drive); it's just booting from it that won't work
* the image on the NVMe-oF server boots just fine if I provide it locally (via the host-kernel NVMe-oF driver that I can't use in production) and declare it in the QEMU options as a drive
* QEMU does not appear to have an NVMe-oF driver itself that I could use instead (it does have an NVMe driver)
QEMU version 7.2.15 (Debian 1:7.2+dfsg-7+deb12u12)
SPDK version SPDK v25.01-pre git sha1 8d960f1d8
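One variable worth isolating (a guess, not a confirmed fix): whether the firmware can boot from the device at all. SeaBIOS and OVMF have different virtio-blk boot paths, so retrying the same command under UEFI separates a firmware limitation from an SPDK/vhost problem:
# added to the QEMU command line above; firmware paths are Debian's ovmf
# package (an assumption to verify)
cp /usr/share/OVMF/OVMF_VARS.fd /tmp/OVMF_VARS.fd
qemu-system-x86_64 ... \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/tmp/OVMF_VARS.fd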
Slow
(11 rep)
Aug 4, 2025, 10:39 AM
• Last activity: Aug 8, 2025, 12:40 PM
21
votes
5
answers
53748
views
What is a best practice to represent a boolean value in a shell script?
I know that there are boolean values in bash, but I don't ever see them used anywhere.
I want to write a wrapper for some often looked up information on my machine, for example, is this particular USB drive inserted/mounted.
What would be the best practice to achieve that?
* A string?
drive_xyz_available=true
* A number (0 for true, ≠0 for false)?
drive_xyz_available=0 # evaluates to true
* A function?
drive_xyz_available() {
if available_magic; then
return 0
else
return 1
fi
}
I mostly wonder what would be expected by other people who would want to use the wrapper. Would they expect a boolean value, a command-like variable, or a function to call?
From a security standpoint I would think the second option is the safest, but I would love to hear your experiences.
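For the first option, a minimal sketch of the idiom that makes the string variant attractive: the stored value is itself a command (true or false), so it can be executed directly:
#!/bin/bash
drive_xyz_available=true    # or false

# the variable expands to the command `true` or `false`,
# whose exit status drives the conditional
if "$drive_xyz_available"; then
    echo "drive is available"
fi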
Minix
(6065 rep)
Feb 19, 2015, 09:57 AM
• Last activity: Aug 8, 2025, 12:39 PM
Showing page 4 of 20 total questions