Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4
votes
1
answers
2270
views
qmlscene can't find Qt installation even though qtchooser lists versions
I'm running this in a folder with `main.qml`:
$ qmlscene main.qml
qmlscene: could not find a Qt installation of ''
Then I checked
$ qtchooser -list-versions
4
5
qt4-x86_64-linux-gnu
qt4
qt5-x86_64-linux-gnu
qt5
... and tried
$ sudo qmlscene -qt=qt5-x86_64_linux-gnu main.qml
qmlscene: could not find a Qt installation of 'qt5-x86_64_linux-gnu'
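One detail worth noticing before anything else: the name passed to `-qt=` above (`qt5-x86_64_linux-gnu`, with an underscore before `linux`) does not match the listed `qt5-x86_64-linux-gnu` (hyphen). A minimal sketch that checks a candidate name against the `qtchooser -list-versions` output quoted in the question:

```shell
#!/bin/sh
# The names reported by `qtchooser -list-versions` in the question.
versions='4
5
qt4-x86_64-linux-gnu
qt4
qt5-x86_64-linux-gnu
qt5'

candidate='qt5-x86_64_linux-gnu'   # the name used in the failing command

if printf '%s\n' "$versions" | grep -qx "$candidate"; then
    echo "known version: $candidate"
else
    echo "unknown version: $candidate (note the underscore before 'linux')"
fi
```

Assuming the listing is accurate, the shorter alias should also work without spelling out the full triplet, e.g. `qmlscene -qt=qt5 main.qml` or `QT_SELECT=qt5 qmlscene main.qml`; both forms are documented qtchooser mechanisms, but verify against `man qtchooser` on your system.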
user304200
(41 rep)
Aug 7, 2018, 02:26 PM
• Last activity: Aug 7, 2025, 10:06 AM
-1
votes
0
answers
47
views
How can I fix clash's parse config error: incorrect UUID length 4 in string "uuid"?
I have just installed clash from my Linux distribution's repository; both my Linux distribution and clash are outdated (which may or may not be the cause of the following problem):
$ clash -v
Clash 1.16.0 linux amd64 with go1.20.8 unknown time
I got the following config error when running clash:
$ clash
FATA Parse config error: proxy 3: uuid: incorrect UUID length 4 in string "uuid"
I have made `~/.config/clash/config.yaml` (whose content is copied and pasted at the end of this post) the same as the one in https://doreamon-design.github.io/clash/configuration/configuration-reference.html , except for the following `proxy-providers` section (see https://doreamon-design.github.io/clash/configuration/outbound.html for detailed information):
proxy-providers:
provider1:
type: http
url: "https://node.freev2raynode.com/uploads/2025/08/1-20250807.yaml"
interval: 3600
path: ./provider1.yaml
health-check:
enable: true
interval: 600
# lazy: true
url: http://www.gstatic.com/generate_204
test:
type: file
path: /test.yaml
health-check:
enable: true
interval: 36000
url: http://www.gstatic.com/generate_204
Is the uuid error in `~/.config/clash/config.yaml` or in https://node.freev2raynode.com/uploads/2025/08/1-20250807.yaml ? How can I fix it?
In config.yaml, `uuid` appears in
> uuid: uuid
as in
# vmess
# cipher support auto/aes-128-gcm/chacha20-poly1305/none
- name: "vmess"
type: vmess
server: server
port: 443
uuid: uuid
alterId: 32
cipher: auto
If there's a problem there, what is it and how do I fix it?
Thanks!
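For what it's worth, the error message itself is consistent with the placeholder left in config.yaml: the string `uuid` is 4 characters long, while a canonical UUID is 36 (8-4-4-4-12 hex groups). A minimal sketch of that length/format check (an illustration, not Clash's actual code):

```shell
#!/bin/sh
# Accept only a canonical 36-character UUID (8-4-4-4-12 hex digits).
is_uuid() {
    printf '%s' "$1" | grep -Eqi '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
}

for candidate in 'uuid' '90030631-4027-4810-8ce9-3e9095390f2d'; do
    if is_uuid "$candidate"; then
        echo "ok:  $candidate"
    else
        echo "bad: $candidate (length ${#candidate})"
    fi
done
```

Counting from zero, proxy 3 in the config's `proxies` list is the `vmess` entry carrying `uuid: uuid`, which suggests the error is in `~/.config/clash/config.yaml` itself rather than in the downloaded provider file, whose uuids above are all well-formed - an inference from the quoted files, not a confirmed diagnosis.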
****************************
In https://node.freev2raynode.com/uploads/2025/08/1-20250807.yaml , `uuid` appears in the following part:
proxies:
- {name: HK香港(mibei77.com 米贝节点分享), server: free-relay.themars.top, port: 37906, type: vmess, uuid: 90030631-4027-4810-8ce9-3e9095390f2d, alterId: 0, cipher: auto, tls: false, skip-cert-verify: true, network: ws, ws-path: /cctv1.m3u8, ws-headers: {Host: www.cctv.com}, udp: true}
- {name: US美国(mibei77.com 米贝节点分享), server: 45.67.215.95, port: 443, type: trojan, password: tg-fq521free, skip-cert-verify: true, udp: true}
- {name: US美国(mibei77.com 米贝节点分享) 2, server: dDdDdDdddDDDDyUUUIO.4444926.XyZ, port: 80, type: vmess, uuid: dc50eb1d-244d-4711-b168-a101a5e6fb1b, alterId: 0, cipher: auto, tls: false, skip-cert-verify: true, network: ws, ws-path: /awmqq79B17rfnpXiNaWb, ws-headers: {Host: dddddddddddddyuuuio.4444926.xyz}, udp: true}
- {name: US美国(mibei77.com 米贝节点分享) 3, server: switcher-nick-croquet.freesocks.work, port: 443, type: ss, cipher: chacha20-ietf-poly1305, password: 9tqhMdIrTkgQ46PvhyAtMH, udp: true}
- {name: US美国(mibei77.com 米贝节点分享) 4, server: 172.67.214.21, port: 443, type: trojan, password: 7248e825-887c-48b9-83bc-c26bc6392bf8, skip-cert-verify: true, udp: true}
- {name: HK香港(mibei77.com 米贝节点分享) 2, server: v29.heduian.link, port: 30829, type: vmess, uuid: cbb3f877-d1fb-344c-87a9-d153bffd5484, alterId: 2, cipher: auto, tls: false, skip-cert-verify: true, network: ws, ws-path: /oooo, ws-headers: {Host: ocbc.com}, udp: true}
- {name: JP日本(mibei77.com 米贝节点分享), server: arxfw2b78fi2q9hzylhn.freesocks.work, port: 443, type: ss, cipher: chacha20-ietf-poly1305, password: Nk9asglDzHzjktVzTkvhaA, udp: true}
- {name: US美国(mibei77.com 米贝节点分享) 5, server: rrrrrrrrrt.11890604.xyz, port: 443, type: vmess, uuid: f898ffcb-6417-4373-9640-0b66091e8206, alterId: 0, cipher: auto, tls: true, skip-cert-verify: true, network: ws, ws-path: /GnJ3bBxV91uFkYtuzXyJ5XNeH1R1, ws-headers: {Host: rrrrrrrrrt.11890604.xyz}, udp: true}
- {name: US美国(mibei77.com 米贝节点分享) 6, server: 141.11.203.26, port: 8880, type: vmess, uuid: 1fcb582e-7ffb-3708-8a0f-96c2a070e40d, alterId: 0, cipher: auto, tls: false, skip-cert-verify: true, network: ws, ws-path: "/dabai&Telegram🇨🇳@WangCai2/?ed=2560", ws-headers: {Host: TG.WangCai2.s2.cn-db.top}, udp: true}
********************************
`~/.config/clash/config.yaml`:
# Port of HTTP(S) proxy server on the local end
port: 7890
# Port of SOCKS5 proxy server on the local end
socks-port: 7891
# Transparent proxy server port for Linux and macOS (Redirect TCP and TProxy UDP)
# redir-port: 7892
# Transparent proxy server port for Linux (TProxy TCP and TProxy UDP)
# tproxy-port: 7893
# HTTP(S) and SOCKS4(A)/SOCKS5 server on the same port
# mixed-port: 7890
# authentication of local SOCKS5/HTTP(S) server
# authentication:
# - "user1:pass1"
# - "user2:pass2"
# Set to true to allow connections to the local-end server from
# other LAN IP addresses
# allow-lan: false
# This is only applicable when allow-lan is true
# '*': bind all IP addresses
# 192.168.122.11: bind a single IPv4 address
# "[aaaa::a8aa:ff:fe09:57d8]": bind a single IPv6 address
# bind-address: '*'
# Clash router working mode
# rule: rule-based packet routing
# global: all packets will be forwarded to a single endpoint
# direct: directly forward the packets to the Internet
mode: rule
# Clash by default prints logs to STDOUT
# info / warning / error / debug / silent
# log-level: info
# When set to false, resolver won't translate hostnames to IPv6 addresses
# ipv6: false
# RESTful web API listening address
external-controller: 127.0.0.1:9090
# A relative path to the configuration directory or an absolute path to a
# directory in which you put some static web resource. Clash core will then
# serve it at http://{{external-controller}}/ui
# external-ui: folder
# Secret for the RESTful API (optional)
# Authenticate by specifying HTTP header Authorization: Bearer ${secret}
# ALWAYS set a secret if RESTful API is listening on 0.0.0.0
# secret: ""
# Outbound interface name
# interface-name: en0
# fwmark on Linux only
# routing-mark: 6666
# Static hosts for DNS server and connection establishment (like /etc/hosts)
#
# Wildcard hostnames are supported (e.g. *.clash.dev, *.foo.*.example.com)
# Non-wildcard domain names have a higher priority than wildcard domain names
# e.g. foo.example.com > *.example.com > .example.com
# P.S. +.foo.com equals to .foo.com and foo.com
# hosts:
# '*.clash.dev': 127.0.0.1
# '.dev': 127.0.0.1
# 'alpha.clash.dev': '::1'
# profile:
# Store the select results in $HOME/.config/clash/.cache
# set false if you don't want this behavior
# when two different configurations have groups with the same name, the selected values are shared
# store-selected: true
# persistence fakeip
# store-fake-ip: false
# DNS server settings
# This section is optional. When not present, the DNS server will be disabled.
dns:
enable: false
listen: 0.0.0.0:53
# ipv6: false # when false, responses to AAAA questions will be empty
# These nameservers are used to resolve the DNS nameserver hostnames below.
# Specify IP addresses only
default-nameserver:
- 114.114.114.114
- 8.8.8.8
# enhanced-mode: fake-ip
fake-ip-range: 198.18.0.1/16 # Fake IP addresses pool CIDR
# use-hosts: true # lookup hosts and return IP record
# search-domains: [local] # search domains for A/AAAA record
# Hostnames in this list will not be resolved with fake IPs
# i.e. questions to these domain names will always be answered with their
# real IP addresses
# fake-ip-filter:
# - '*.lan'
# - localhost.ptlogin2.qq.com
# Supports UDP, TCP, DoT, DoH. You can specify the port to connect to.
# All DNS questions are sent directly to the nameserver, without proxies
# involved. Clash answers the DNS question with the first result gathered.
nameserver:
- 114.114.114.114 # default value
- 8.8.8.8 # default value
- tls://dns.rubyfish.cn:853 # DNS over TLS
- https://1.1.1.1/dns-query # DNS over HTTPS
- dhcp://en0 # dns from dhcp
# - '8.8.8.8#en0'
# When fallback is present, the DNS server will send concurrent requests
# to the servers in this section along with servers in nameserver.
# The answers from fallback servers are used when the GEOIP country
# is not CN.
# fallback:
# - tcp://1.1.1.1
# - 'tcp://1.1.1.1#en0'
# If IP addresses resolved with servers in nameserver are in the specified
# subnets below, they are considered invalid and results from fallback
# servers are used instead.
#
# IP addresses resolved with servers in nameserver are used when
# fallback-filter.geoip is true and when the GEOIP of the IP address is CN.
#
# If fallback-filter.geoip is false, results from nameserver
# are always used unless they match fallback-filter.ipcidr.
#
# This is a countermeasure against DNS pollution attacks.
# fallback-filter:
# geoip: true
# geoip-code: CN
# ipcidr:
# - 240.0.0.0/4
# domain:
# - '+.google.com'
# - '+.facebook.com'
# - '+.youtube.com'
# Lookup domains via specific nameservers
# nameserver-policy:
# 'www.baidu.com': '114.114.114.114'
# '+.internal.crop.com': '10.0.0.1'
proxies:
# Shadowsocks
# The supported ciphers (encryption methods):
# aes-128-gcm aes-192-gcm aes-256-gcm
# aes-128-cfb aes-192-cfb aes-256-cfb
# aes-128-ctr aes-192-ctr aes-256-ctr
# rc4-md5 chacha20-ietf xchacha20
# chacha20-ietf-poly1305 xchacha20-ietf-poly1305
- name: "ss1"
type: ss
server: server
port: 443
cipher: chacha20-ietf-poly1305
password: "password"
# udp: true
- name: "ss2"
type: ss
server: server
port: 443
cipher: chacha20-ietf-poly1305
password: "password"
plugin: obfs
plugin-opts:
mode: tls # or http
# host: bing.com
- name: "ss3"
type: ss
server: server
port: 443
cipher: chacha20-ietf-poly1305
password: "password"
plugin: v2ray-plugin
plugin-opts:
mode: websocket # no QUIC now
# tls: true # wss
# skip-cert-verify: true
# host: bing.com
# path: "/"
# mux: true
# headers:
# custom: value
# vmess
# cipher support auto/aes-128-gcm/chacha20-poly1305/none
- name: "vmess"
type: vmess
server: server
port: 443
uuid: uuid
alterId: 32
cipher: auto
# udp: true
# tls: true
# skip-cert-verify: true
# servername: example.com # priority over wss host
# network: ws
# ws-opts:
# path: /path
# headers:
# Host: v2ray.com
# max-early-data: 2048
# early-data-header-name: Sec-WebSocket-Protocol
- name: "vmess-h2"
type: vmess
server: server
port: 443
uuid: uuid
alterId: 32
cipher: auto
network: h2
tls: true
h2-opts:
host:
- http.example.com
- http-alt.example.com
path: /
- name: "vmess-http"
type: vmess
server: server
port: 443
uuid: uuid
alterId: 32
cipher: auto
# udp: true
# network: http
# http-opts:
# # method: "GET"
# # path:
# # - '/'
# # - '/video'
# # headers:
# # Connection:
# # - keep-alive
- name: vmess-grpc
server: server
port: 443
type: vmess
uuid: uuid
alterId: 32
cipher: auto
network: grpc
tls: true
servername: example.com
# skip-cert-verify: true
grpc-opts:
grpc-service-name: "example"
# socks5
- name: "socks"
type: socks5
server: server
port: 443
# username: username
# password: password
# tls: true
# skip-cert-verify: true
# udp: true
# http
- name: "http"
type: http
server: server
port: 443
# username: username
# password: password
# tls: true # https
# skip-cert-verify: true
# sni: custom.com
# Snell
# Beware that there's currently no UDP support yet
- name: "snell"
type: snell
server: server
port: 44046
psk: yourpsk
# version: 2
# obfs-opts:
# mode: http # or tls
# host: bing.com
# Trojan
- name: "trojan"
type: trojan
server: server
port: 443
password: yourpsk
# udp: true
# sni: example.com # aka server name
# alpn:
# - h2
# - http/1.1
# skip-cert-verify: true
- name: trojan-grpc
server: server
port: 443
type: trojan
password: "example"
network: grpc
sni: example.com
# skip-cert-verify: true
udp: true
grpc-opts:
grpc-service-name: "example"
- name: trojan-ws
server: server
port: 443
type: trojan
password: "example"
network: ws
sni: example.com
# skip-cert-verify: true
udp: true
# ws-opts:
# path: /path
# headers:
# Host: example.com
# ShadowsocksR
# The supported ciphers (encryption methods): all stream ciphers in ss
# The supported obfses:
# plain http_simple http_post
# random_head tls1.2_ticket_auth tls1.2_ticket_fastauth
# The supported protocols:
# origin auth_sha1_v4 auth_aes128_md5
# auth_aes128_sha1 auth_chain_a auth_chain_b
- name: "ssr"
type: ssr
server: server
port: 443
cipher: chacha20-ietf
password: "password"
obfs: tls1.2_ticket_auth
protocol: auth_sha1_v4
# obfs-param: domain.tld
# protocol-param: "#"
# udp: true
proxy-groups:
# relay chains the proxies. proxies shall not contain a relay. No UDP support.
# Traffic: clash <-> http <-> vmess <-> ss1 <-> ss2 <-> Internet
- name: "relay"
type: relay
proxies:
- http
- vmess
- ss1
- ss2
# url-test selects which proxy will be used by benchmarking speed to a URL.
- name: "auto"
type: url-test
proxies:
- ss1
- ss2
- vmess1
# tolerance: 150
# lazy: true
url: 'http://www.gstatic.com/generate_204'
interval: 300
# fallback selects an available policy by priority. The availability is tested by accessing an URL, just like an auto url-test group.
- name: "fallback-auto"
type: fallback
proxies:
- ss1
- ss2
- vmess1
url: 'http://www.gstatic.com/generate_204'
interval: 300
# load-balance: requests to the same eTLD+1 will be dialed to the same proxy.
- name: "load-balance"
type: load-balance
proxies:
- ss1
- ss2
- vmess1
url: 'http://www.gstatic.com/generate_204'
interval: 300
# strategy: consistent-hashing # or round-robin
# select is used for selecting a proxy or proxy group
# you can use the RESTful API to switch proxies; recommended for use in a GUI.
- name: Proxy
type: select
# disable-udp: true
# filter: 'someregex'
proxies:
- ss1
- ss2
- vmess1
- auto
# direct to another interface name or fwmark; also supported on proxy
- name: en1
type: select
interface-name: en1
routing-mark: 6667
proxies:
- DIRECT
- name: UseProvider
type: select
use:
- provider1
proxies:
- Proxy
- DIRECT
proxy-providers:
provider1:
type: http
url: "https://node.freev2raynode.com/uploads/2025/08/1-20250807.yaml"
interval: 3600
path: ./provider1.yaml
health-check:
enable: true
interval: 600
# lazy: true
url: http://www.gstatic.com/generate_204
test:
type: file
path: /test.yaml
health-check:
enable: true
interval: 36000
url: http://www.gstatic.com/generate_204
tunnels:
# one line config
- tcp/udp,127.0.0.1:6553,114.114.114.114:53,proxy
- tcp,127.0.0.1:6666,rds.mysql.com:3306,vpn
# full yaml config
- network: [tcp, udp]
address: 127.0.0.1:7777
target: target.com
proxy: proxy
rules:
- DOMAIN-SUFFIX,google.com,auto
- DOMAIN-KEYWORD,google,auto
- DOMAIN,google.com,auto
- DOMAIN-SUFFIX,ad.com,REJECT
- SRC-IP-CIDR,192.168.1.201/32,DIRECT
# optional param "no-resolve" for IP rules (GEOIP, IP-CIDR, IP-CIDR6)
- IP-CIDR,127.0.0.0/8,DIRECT
- GEOIP,CN,DIRECT
- DST-PORT,80,DIRECT
- SRC-PORT,7777,DIRECT
- RULE-SET,apple,REJECT # Premium only
- MATCH,auto
Tim
(106420 rep)
Aug 6, 2025, 08:15 PM
• Last activity: Aug 7, 2025, 09:30 AM
1
votes
1
answers
1877
views
mkdir says folder exists even though it doesn't show with ls -a
Arch Linux.
I have a CIFS mount from my NAS that was mounted with full permissions:
# line from my fstab
//IP_ADDRESS/path/to/dir /path/to/local/dir cifs uid=my_user,gid=my_group,dir_mode=0777,file_mode=0777,credentials=path/to/my/creds 0 0
and am trying to create a directory; however, this fails:
mkdir path/to/local/dir/subdir
mkdir: cannot create directory '/path/to/local/dir/subdir': File exists
However as far as I can tell it *doesn't* exist:
ls -la path/to/local/dir
drwxrwxrwx me me 0 date .
drwxrwxrwx me me 0 date ..
And looking at the files on my NAS, that subdirectory does not exist on the remote. I'm stumped. And when I search for this, it just turns up a bunch of results where people didn't understand what dotfiles are.
What gives?
# Edit
Since I got some pushback on the particulars (fair enough), here are the **exact** commands and output. The `/mnt/nas/SteamLibrary` folder is the local mount point for the folder on the NAS.
[I] ⋊> ~ ls -la /mnt/nas/SteamLibrary/steamapps/downloading
17:19:56ls: cannot access '/mnt/nas/SteamLibrary/steamapps/downloading': No such file or directory
[I] ⋊> ~ mkdir /mnt/nas/SteamLibrary/steamapps/downloading
17:20:33mkdir: cannot create directory ‘/mnt/nas/SteamLibrary/steamapps/downloading’: File exists
[I] ⋊> ~ ls -la /mnt/nas/SteamLibrary/steamapps/
17:20:41total 108K
drwxrwxrwx 2 jsmith jsmith 0 Apr 18 21:30 .
drwxrwxrwx 2 jsmith jsmith 0 Apr 19 13:49 ..
drwxrwxrwx 2 jsmith jsmith 0 Apr 17 16:05 common
drwxrwxrwx 2 jsmith jsmith 0 Apr 17 16:04 compatdata
drwxrwxrwx 2 jsmith jsmith 0 Apr 17 16:03 shadercache
drwxrwxrwx 2 jsmith jsmith 0 Apr 17 16:05 temp
drwxrwxrwx 2 jsmith jsmith 0 Aug 10 2020 workshop
-rwxrwxrwx 1 jsmith jsmith 1.2K Feb 27 16:40 appmanifest_102500.acf
-rwxrwxrwx 1 jsmith jsmith 686 Feb 27 16:40 appmanifest_107300.acf
-rwxrwxrwx 1 jsmith jsmith 694 Feb 27 16:40 appmanifest_107310.acf
-rwxrwxrwx 1 jsmith jsmith 498 Apr 19 13:50 appmanifest_1391110.acf
-rwxrwxrwx 1 jsmith jsmith 483 Apr 19 13:50 appmanifest_1493710.acf
-rwxrwxrwx 1 jsmith jsmith 745 Feb 27 16:40 appmanifest_207320.acf
-rwxrwxrwx 1 jsmith jsmith 863 Feb 27 16:41 appmanifest_219780.acf
-rwxrwxrwx 1 jsmith jsmith 691 Apr 17 16:03 appmanifest_22320.acf
-rwxrwxrwx 1 jsmith jsmith 837 Apr 19 13:50 appmanifest_22330.acf
-rwxrwxrwx 1 jsmith jsmith 516 Feb 27 16:59 appmanifest_256460.acf
-rwxrwxrwx 1 jsmith jsmith 1.2K Feb 27 16:59 appmanifest_292030.acf
-rwxrwxrwx 1 jsmith jsmith 825 Feb 27 16:40 appmanifest_312540.acf
-rwxrwxrwx 1 jsmith jsmith 1.1K Apr 17 16:03 appmanifest_340170.acf
-rwxrwxrwx 1 jsmith jsmith 1.1K Feb 27 16:40 appmanifest_351970.acf
-rwxrwxrwx 1 jsmith jsmith 894 Feb 27 16:40 appmanifest_367500.acf
-rwxrwxrwx 1 jsmith jsmith 773 Feb 27 16:40 appmanifest_372360.acf
-rwxrwxrwx 1 jsmith jsmith 1.5K Feb 27 16:37 appmanifest_379720.acf
-rwxrwxrwx 1 jsmith jsmith 599 Apr 17 16:20 appmanifest_391540.acf
-rwxrwxrwx 1 jsmith jsmith 665 Apr 19 13:49 appmanifest_406110.acf
-rwxrwxrwx 1 jsmith jsmith 685 Feb 27 16:59 appmanifest_418340.acf
-rwxrwxrwx 1 jsmith jsmith 794 Feb 27 16:40 appmanifest_429660.acf
-rwxrwxrwx 1 jsmith jsmith 985 Feb 27 16:40 appmanifest_489830.acf
-rwxrwxrwx 1 jsmith jsmith 612 Feb 27 16:59 appmanifest_506510.acf
-rwxrwxrwx 1 jsmith jsmith 667 Feb 27 16:41 appmanifest_522530.acf
-rwxrwxrwx 1 jsmith jsmith 708 Feb 27 16:41 appmanifest_525240.acf
-rwxrwxrwx 1 jsmith jsmith 891 Apr 17 16:03 appmanifest_538680.acf
-rwxrwxrwx 1 jsmith jsmith 1.1K Feb 27 16:40 appmanifest_72850.acf
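Note that `mkdir` returns `File exists` for any existing entry, including ones the client's listing can't show - a server-side hidden entry, a stale cached dentry, or a name differing only in case on a case-insensitive share (all assumptions worth ruling out here). A sketch that scans a listing for case-insensitive matches of the target name, fed the directory names from the output above:

```shell
#!/bin/sh
# Directory names visible in the steamapps listing from the question.
entries='common
compatdata
shadercache
temp
workshop'

target='downloading'

# A case-variant entry would be visible to `ls` yet still collide in mkdir.
match=$(printf '%s\n' "$entries" | grep -ix "$target" || true)
if [ -n "$match" ]; then
    echo "case-variant entry present: $match"
else
    echo "no case-variant of '$target' in the visible listing"
fi
```

Since nothing matches even case-insensitively, the colliding entry is invisible to the client; unmounting and remounting (to drop cached dentries), or listing the directory directly on the NAS with hidden/system files shown, would be the next things to check - suggestions, not a confirmed diagnosis.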
Jared Smith
(125 rep)
Apr 19, 2021, 06:06 PM
• Last activity: Aug 7, 2025, 09:04 AM
0
votes
1
answers
40
views
Why does script or watch output look weird in saved text files
I use the `script` command to save the whole session, but the output in the text file is garbled; when I open it with `cat` in the terminal, everything looks fine.
How can I save it in a text file that is readable?
This also happens with the `watch` command. How do I fix it?
The following commands (individually) result in `a.txt` containing weird text:
watch date | tee a.txt
script a.txt ...
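`script` and `watch` write raw terminal control sequences (colors, cursor movement, screen clears) into the file; a terminal interprets them, which is why `cat` looks fine while an editor shows escape bytes. A sketch that strips ANSI escape sequences after the fact, assuming GNU sed (the `\x1b` escape is a GNU extension):

```shell
#!/bin/sh
# Simulate the kind of output `watch` saves: screen-clear + color bytes.
printf '\033[2J\033[H\033[1mTue Aug  7\033[0m\n' > a.txt

# Strip CSI escape sequences (ESC [ ... final byte) to leave plain text.
sed 's/\x1b\[[0-9;]*[A-Za-z]//g' a.txt > clean.txt
cat clean.txt
rm -f a.txt clean.txt
```

For `script` typescripts, `col -b` is a common alternative for cleaning up overstrikes; and for the `watch date | tee a.txt` case specifically, `tee` captures `watch`'s full screen redraws, so capturing the underlying command directly (e.g. `date >> a.txt` in a loop) avoids the problem entirely.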
Sorenyt Mikelyt
(1 rep)
Aug 7, 2025, 12:31 AM
• Last activity: Aug 7, 2025, 08:33 AM
6
votes
1
answers
3729
views
Making ChrootDirectory directory writable by SFTP user
If a user logs into a machine via SFTP, one can make use of the `ChrootDirectory` keyword to give the illusion that the user is in a root directory. But that directory is only writable by the `root` user. I would love for this user to have such write capabilities, and it doesn't appear that OpenSSH offers this, unless I missed something?
I am aware that the SFTP user can be given write access to any file/directory inside that `ChrootDirectory`, but it's not good enough. I want the user to also be able to create/delete files directly under that "root" directory, without the workaround of creating a subdirectory that the user has write access to.
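For context, sshd enforces that every component of the `ChrootDirectory` path is owned by root and not group- or world-writable, which is why the directory itself can't simply be handed to the user. The standard layout (which, as noted, is exactly the workaround the question finds insufficient) looks like this - a hedged sketch with a hypothetical user and paths, not a way around the ownership check:

```
# /etc/ssh/sshd_config
Match User sftpuser
    ChrootDirectory /srv/chroot/sftpuser   # must be root-owned, mode 0755
    ForceCommand internal-sftp

# then, as root, create a writable area inside the chroot:
#   mkdir /srv/chroot/sftpuser/upload
#   chown sftpuser: /srv/chroot/sftpuser/upload
```

Making the chroot root itself writable would require relaxing that ownership check, which stock OpenSSH does not offer as a configuration option.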
tshepang
(67482 rep)
Jan 10, 2013, 08:16 AM
• Last activity: Aug 7, 2025, 08:04 AM
7
votes
1
answers
458
views
How to stop kate exec'ing itself
After a recent update, kate seems to exec a copy of itself, presumably with some parameter which prevents an endless recursion, and then exits. Presumably this was intended to help users who run kate with a command in a command window and don't want the command window blocked until the kate window is closed.
However this breaks things like `git commit` and `sudoedit`, since these wait for kate to exit and then check to see if the file was changed. Now that the original kate process exits immediately, the file hasn't changed. How can I stop kate from exec'ing itself?
The only way that I can think of is to move the real `/usr/bin/kate` to something like `/usr/bin/realkate`, and create a tiny shell script at `/usr/bin/kate` which does something like `/usr/bin/realkate -b $*`, but this will get overwritten on the next upgrade.
I tried `alias kate='kate -b'` in my .bashrc, but this only works if I run kate from the command line, not when `git commit` or `sudoedit` runs it.
Possibly this could be fixed in katerc, but there doesn't seem to be any documentation for that.
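If the wrapper-script route is taken anyway, one detail in the proposed script is worth fixing: unquoted `$*` word-splits filenames containing spaces, while `"$@"` passes each argument through intact. A minimal demonstration of the difference, with a stub standing in for the real kate binary:

```shell
#!/bin/sh
# Stub that just reports how many arguments it received.
stub=$(mktemp)
cat > "$stub" <<'EOF'
#!/bin/sh
echo "args: $#"
EOF
chmod +x "$stub"

set -- "my file.txt" "other.txt"   # two filenames, one containing a space

"$stub" $*      # word-splits the first name: reports 3 arguments
"$stub" "$@"    # preserves arguments as given: reports 2 arguments
rm -f "$stub"
```

A less fragile alternative to shadowing `/usr/bin/kate` (which the package manager will overwrite) is a wrapper in `/usr/local/bin`, which normally precedes `/usr/bin` in `$PATH`, or pointing the callers at the blocking form directly: `git config --global core.editor 'kate -b'` and `SUDO_EDITOR='kate -b'` for `sudoedit`. These are suggestions based on standard git/sudo behavior, not on kate's documentation.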
Richard Parkins
(191 rep)
Aug 6, 2025, 08:57 AM
• Last activity: Aug 7, 2025, 07:18 AM
0
votes
2
answers
31
views
apt update on Debian appears to be using old sources, how to fix?
I've got a fairly old host that I recently updated to Bookworm (Debian 12). In doing so, I updated the sources list in /etc/apt/sources.list to the following:
deb http://deb.debian.org/debian bookworm main
deb-src http://deb.debian.org/debian bookworm main
deb http://deb.debian.org/debian-security/ bookworm-security main
deb-src http://deb.debian.org/debian-security/ bookworm-security main
deb http://deb.debian.org/debian bookworm-updates main
deb-src http://deb.debian.org/debian bookworm-updates main
This worked in getting the distro version upgraded.
I'm now trying to install openvpn as root, using the command `apt update && apt install openvpn` (following the tutorial here: https://std.rocks/vpn_openvpn_bookworm.html ).
The installation fails when I try to enable automatic startup of the service using this command (as root):
sed -i 's/#AUTOSTART="all"/AUTOSTART="all"/' /etc/default/openvpn ; systemctl daemon-reload
I get the error: sed: can't read /etc/default/openvpn: No such file or directory
In examining the output from the apt install openvpn command, I see multiple references to old distros. My assumption is that there is some other place where sources need to be updated, but I'm not sure about that.
Is my assumption correct that there are lingering old sources that I need to update? Or have I misread the output and need to look elsewhere?
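Lingering references usually live in `/etc/apt/sources.list.d/` rather than in the main file. A sketch that hunts for pre-bookworm codenames across both locations - run against a throwaway directory here so it is self-contained; on the real host, point the `grep` at `/etc/apt/sources.list` and `/etc/apt/sources.list.d/`:

```shell
#!/bin/sh
# Demo directory standing in for /etc/apt on the affected host.
aptdir=$(mktemp -d)
mkdir -p "$aptdir/sources.list.d"
echo 'deb http://deb.debian.org/debian bookworm main' > "$aptdir/sources.list"
echo 'deb http://deb.debian.org/debian buster-backports main' \
    > "$aptdir/sources.list.d/backports.list"

# Any hit here is a source apt is still reading from an old release.
grep -rn -E 'stretch|buster|bullseye' \
    "$aptdir/sources.list" "$aptdir/sources.list.d"/*.list

rm -rf "$aptdir"
```

`apt-cache policy` also lists every repository actually in use, which is a quicker confirmation than rereading the install output. As for the missing `/etc/default/openvpn`: on systemd-based Debian, the template units (`openvpn-server@<config>.service`) are the usual autostart mechanism, so the tutorial's `sed` step may not apply - whether the package should have shipped that file on bookworm can be checked with `dpkg -L openvpn`.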
Gojira
(153 rep)
Aug 7, 2025, 01:03 AM
• Last activity: Aug 7, 2025, 07:09 AM
1
votes
0
answers
24
views
Ubuntu keeps remounting /dev/shm with different mounting options periodically
I have an Ubuntu 24.04.2 system where `/dev/shm` gets remounted (I assume) every now and then (roughly every 10 seconds), but I have no idea why. There's no mention of that mount point in `/etc/fstab`, and even if I added an entry there it would still be remounted with options other than the ones I expect.
Here's the output from `/proc/self/mountinfo` (see the final column):
$ while true; do column -t -N mountID,parentID,"major:minor",rootMount,mountPoint,mountOpts,optionalFields,optFieldSeparator,fsType,mountSource,superOpts /proc/self/mountinfo | grep -E 'mountID|shm'; sleep 1; done
mountID parentID major:minor rootMount mountPoint mountOpts optionalFields optFieldSeparator fsType mountSource superOpts
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
32 26 0:27 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,size=4011076k,nr_inodes=1002769,inode64
I've tried checking dmesg -wH
, using strace
(though I do not remember the parameters anymore) on pid 1 (not sure if this is sane or not - ChatGPT suggestion), checking journalctl -k -f
and even finding all files on the system and executing grep -IHn '/dev/shm'
without finding anything useful. There were a couple of mentions of apparmor
, although I disabled that service and it still got remounted.
I compared this to another system running the same version of Ubuntu (although with a slightly different version of the kernel - 6.8.0-63
vs 6.8.0-71
) and the issue does not happen there.
How do I troubleshoot this?
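One avenue not yet tried above is asking the kernel which process is issuing the mount calls. A hedged sketch using the audit subsystem (assumes auditd is installed; the key name `shmwatch` is arbitrary):

```shell
# Log every mount/umount syscall together with the calling process,
# wait for /dev/shm to be remounted, then read the audit trail.
auditctl -a always,exit -F arch=b64 -S mount,umount2 -k shmwatch
ausearch -k shmwatch -i
# Alternatively, watch mount-table changes live (blocks until an event):
findmnt --poll --target /dev/shm
```

`ausearch -i` resolves the pid/comm fields, which should name whatever keeps remounting the filesystem.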
mrP
(81 rep)
Aug 6, 2025, 07:29 PM
• Last activity: Aug 7, 2025, 07:09 AM
3
votes
2
answers
2320
views
arecord can't find the right device?
I've installed the respeaker pi hat module to my Rpi0W using sudo apt-get update sudo apt-get upgrade git clone https://github.com/respeaker/seeed-voicecard.git cd seeed-voicecard sudo ./install.sh reboot but cannot test whether it works - The tutorial states I can pipe the recording and play it as...
I've installed the respeaker pi hat module to my Rpi0W using
sudo apt-get update
sudo apt-get upgrade
git clone https://github.com/respeaker/seeed-voicecard.git
cd seeed-voicecard
sudo ./install.sh
reboot
but cannot test whether it works - The tutorial states I can pipe the recording and play it as such
arecord -f cd -Dhw:1 | aplay -Dhw:1
but this is not working. I guess my hardware is listed differently, but I can't figure out how to make the same call with my hardware list:
pi@raspberrypi:~ $ aplay -l && arecord -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
Subdevices: 7/7
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 IEC958/HDMI [bcm2835 IEC958/HDMI]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: ALSA [bcm2835 ALSA], device 2: bcm2835 IEC958/HDMI1 [bcm2835 IEC958/HDMI1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: seeed2micvoicec [seeed-2mic-voicecard], device 0: bcm2835-i2s-wm8960-hifi wm8960-hifi-0 []
Subdevices: 1/1
Subdevice #0: subdevice #0
**** List of CAPTURE Hardware Devices ****
card 1: seeed2micvoicec [seeed-2mic-voicecard], device 0: bcm2835-i2s-wm8960-hifi wm8960-hifi-0 []
Subdevices: 0/1
Subdevice #0: subdevice #0
tutorial: http://wiki.seeedstudio.com/ReSpeaker_2_Mics_Pi_HAT/
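Per the aplay -l/arecord -l output above, the seeed card is card 1, device 0 on both sides, so the device index itself looks right; a hedged sketch of alternatives to try (note that Subdevices: 0/1 on the capture side means something may already be holding the device):

```shell
# Record from the seeed card (card 1, device 0) and play back on it;
# plughw: adds automatic format/rate conversion that raw hw: lacks.
arecord -f cd -D plughw:1,0 | aplay -D plughw:1,0
# Or address the card by name instead of index (indices can change):
arecord -f cd -D plughw:seeed2micvoicec | aplay -D plughw:seeed2micvoicec
# See which process currently holds the capture device, if any:
fuser -v /dev/snd/pcmC1D0c
```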
neq
(113 rep)
Sep 11, 2019, 04:03 AM
• Last activity: Aug 7, 2025, 07:04 AM
5
votes
1
answers
2871
views
Debian 8 apt-get upgrade fails with Failed to fetch … Connection failed
Debian 8 "Jessie". CLI-only home-use server. `apt-get upgrade` always fails with `E: Failed to fetch ... Connection failed ...` errors. `apt-get update` appears to always succeed. Problem has been occurring for days. Upgrades were previously succeeding without issue. Recently re-sized the `/home` an...
Debian 8 "Jessie". CLI-only home-use server.
apt-get upgrade
always fails with E: Failed to fetch ... Connection failed ...
errors.
apt-get update
appears to always succeed.
Problem has been occurring for days. Upgrades were previously succeeding without issue. Recently re-sized the /home
and /var
drive partitions but cannot think of any other major changes.
Have attempted to include several mirrors in /etc/apt/sources.list but none succeed. My sources.list
file is:
deb http://mirror.it.ubc.ca/debian jessie main
deb http://mirror.its.dal.ca/debian jessie main
deb http://debian.mirror.iweb.ca/debian jessie main
deb http://debian.mirror.rafal.ca/debian jessie main
deb http://ftp3.nrc.ca/debian jessie main
deb-src http://mirror.it.ubc.ca/debian jessie main
deb-src http://mirror.its.dal.ca/debian jessie main
deb-src http://debian.mirror.iweb.ca/debian jessie main
deb-src http://debian.mirror.rafal.ca/debian jessie main
deb-src http://ftp3.nrc.ca/debian jessie main
deb http://mirror.it.ubc.ca/debian jessie-updates main
deb http://mirror.its.dal.ca/debian jessie-updates main
deb http://debian.mirror.iweb.ca/debian jessie-updates main
deb http://debian.mirror.rafal.ca/debian jessie-updates main
deb http://ftp3.nrc.ca/debian jessie-updates main
deb-src http://mirror.it.ubc.ca/debian jessie-updates main
deb-src http://mirror.its.dal.ca/debian jessie-updates main
deb-src http://debian.mirror.iweb.ca/debian jessie-updates main
deb-src http://debian.mirror.rafal.ca/debian jessie-updates main
deb-src http://ftp3.nrc.ca/debian jessie-updates main
Example of errors encountered:
Err http://mirror.its.dal.ca/debian/ jessie/main base-files amd64 8+deb8u2
Connection failed [IP: 192.75.96.254 80]
E: Failed to fetch http://ftp3.nrc.ca/debian/pool/main/b/base-files/base-files_8+deb8u2_amd64.deb Connection failed [IP: 132.246.2.23 80]
No proxy is defined in /etc/apt/apt.conf
or /etc/apt/apt.conf.d
Attempting to download the failed URLs on a different desktop computer through a web browser also fails with connection errors.
Perhaps related, attempting to download netinstall images with a desktop web browser from cdimage.debian.org always fails with a The connection was reset
error.
My Arch Linux desktop computer can upgrade packages fine.
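Since the same URLs also fail from a desktop browser and cdimage.debian.org resets connections, this points at the network path rather than apt itself. A hedged diagnostic sketch — the awk line just extracts the mirror hosts from sources.list, and the ping tests whether full-size packets survive the path (a classic MTU symptom):

```shell
# Unique mirror hosts referenced in sources.list
awk '$1 ~ /^deb(-src)?$/ { split($2, p, "/"); print p[3] }' /etc/apt/sources.list | sort -u
# Probe one of them (mirror.its.dal.ca is taken from the error message above):
curl -sv -o /dev/null http://mirror.its.dal.ca/debian/
ping -M do -s 1472 -c 3 mirror.its.dal.ca   # fails if the path MTU is below 1500
```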
Todd
(51 rep)
Sep 12, 2015, 11:25 PM
• Last activity: Aug 7, 2025, 06:06 AM
1
votes
0
answers
23
views
GRUB not booting any more after system update: How to set the root variable the correct way?
Today I ran an upgrade on one of my debian 11 VMs to get the latest security updates. Obviously, the GRUB system has been broken by the update. The VM uses EFI, so there is a sort of "chainloading". First `EFI\debian\grub.cfg` is executed, which is a minimal configuration file that directs GRUB to t...
Today I ran an upgrade on one of my debian 11 VMs to get the latest security updates. Obviously, the GRUB system has been broken by the update. The VM uses EFI, so there is a sort of "chainloading". First
EFI\debian\grub.cfg
is executed, which is a minimal configuration file that directs GRUB to the main configuration file /boot/grub/grub.cfg
.
The main configuration file lets GRUB search for the wrong UUID when trying to determine the root
environment variable. This prevents the system from booting.
These are my file system UUIDs:
root@morn ~ # blkid
/dev/vda2: UUID="4963-B5C0" BLOCK_SIZE="4096" TYPE="vfat" PARTUUID="40d4aada-c48d-446d-87e0-8a3ca2514eaf"
/dev/vda1: UUID="7c91164d-298d-4ef8-9823-df48a13e5325" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ea35937f-4329-4c09-a674-70b551e654d9"
As we can see, /dev/vda2
is the EFI partition, while /dev/vda1
is the partition the system should boot from.
This is the respective snippet from my /boot/grub/grub.cfg
:
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-4963-B5C0' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod fat
search --no-floppy --fs-uuid --set=root 4963-B5C0
echo 'Loading Linux 6.1.0-37-amd64 ...'
linux /boot/vmlinuz-6.1.0-37-amd64 root=UUID=7c91164d-298d-4ef8-9823-df48a13e5325 ro ipv6.disable=1 quiet
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-6.1.0-37-amd64
}
I have generated that grub.cfg
the recommended way via update-grub
from within the VM while it was running. Obviously, update-grub
uses the wrong UUID when generating the search ...
line. It uses the UUID of the EFI file system instead of the UUID of the linux root file system. Consequently, GRUB cannot boot because there is no kernel and no appropriate directory structure on the EFI partition.
As mentioned above, this happens since I have installed updates for Debian 11 in that VM. I'd like to emphasize that this was not a version upgrade. The VM was on Debian 11 since quite a while, and GRUB and its associated tools always worked flawlessly until I had applied the missing updates.
Of course, I now could simply edit /boot/grub/grub.cfg
every time I have run update-grub
, replacing the search ...
line by something like set root=(hd0,gpt1)
. But this approach is not recommended, and it is error prone. On the other hand, changing the custom scripts in /etc/grub.d
doesn't seem reasonable as well.
Does anybody know another way to tell update-grub
the correct UUID for that search ...
stanza?
If memory serves me correctly, we can disable the UUID search using "official" methods, but that seems a bad idea, and I really would like to learn how to tell GRUB the correct UUID and make it behave as before the upgrade.
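For reference, the "official" switch alluded to here is presumably GRUB_DISABLE_UUID in /etc/default/grub (documented for the GRUB 2.06 generation that Debian 11 ships); it makes grub-mkconfig emit device names instead of the search --fs-uuid stanza. A workaround sketch, not a fix for the underlying misdetection:

```
# /etc/default/grub (sketch)
GRUB_DISABLE_UUID=true   # drop 'search --fs-uuid' from generated entries
```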
Another solution would be to not care about the root
variable at all and simply prepend (hd0,gpt1)
before the linux
and the initrd
path. But I don't know a reasonable (safe for upgrades) way to do that either.
**EDIT / UPDATE 2025-08-07 #1**
In the meantime, I have researched further and have found something that is very suspicious and nearly surely causes the problem. Please consider the following snippet from the terminal in the VM in question:
root@morn /etc/grub.d # blkid
/dev/vda2: UUID="4963-B5C0" BLOCK_SIZE="4096" TYPE="vfat" PARTUUID="40d4aada-c48d-446d-87e0-8a3ca2514eaf"
/dev/vda1: UUID="7c91164d-298d-4ef8-9823-df48a13e5325" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ea35937f-4329-4c09-a674-70b551e654d9"
root@morn /etc/grub.d # fdisk -l /dev/vda
Disk /dev/vda: 112 GiB, 120259084288 bytes, 29360128 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 8C7D0CC5-3D22-490E-A1CC-92DA49B5D125
Device Start End Sectors Size Type
/dev/vda1 131072 29360122 29229051 111.5G Linux filesystem
/dev/vda2 16384 131071 114688 448M EFI System
Partition table entries are not in disk order.
root@morn /etc/grub.d # mount |grep /dev/vda
/dev/vda1 on / type ext4 (rw,relatime,quota,usrquota,grpquota,errors=remount-ro)
So far, we once again see without any doubt that the root file system ('/') is mounted on vda1
and that it is an ext4
file system with UUID 7c91164d-298d-4ef8-9823-df48a13e5325
. Furthermore, the two file systems on vda
definitely have different UUIDs.
And now GRUB seems to have a massive bug:
root@morn /etc/grub.d # grub-probe -d /dev/vda1; grub-probe -d /dev/vda2
fat
fat
root@morn /etc/grub.d # grub-probe -t fs_uuid -d /dev/vda1; grub-probe -t fs_uuid -d /dev/vda2
4963-B5C0
4963-B5C0
root@morn /etc/grub.d # grub-probe /; grub-probe -t fs_uuid /
fat
4963-B5C0
So GRUB obviously has come to the conclusion that both file systems (on vda1
and vda2
, respectively) are FAT
, that they both have the same UUID (4963-B5C0
), and that /
is mounted on a FAT
file system with that UUID.
Of course, this is complete nonsense and clearly contradicts the output of blkid
, mount
and fdisk
.
Any ideas?
Binarus
(3891 rep)
Aug 6, 2025, 06:26 PM
• Last activity: Aug 7, 2025, 05:58 AM
0
votes
0
answers
23
views
Is it possible to change screen resolution without a desktop environment or a window manager
I looked up how I can use commands to resize the display, but all of them seem to require a window manager or a desktop environment, for example, `xrandr`, but it says `No display found`. I'm assuming that just means that I didn't install Xorg (but I could be wrong though.) I am using the Arch Linu...
I looked up how I can use commands to resize the display, but all of them seem to require a window manager or a desktop environment, for example,
xrandr
, but it says No display found
. I'm assuming that just means that I didn't install Xorg (but I could be wrong).
I am using the Arch Linux installation ISO. Since I don't have a desktop environment or a window manager, it seems that I can't resize the display.
Is it possible that even without a desktop environment or window manager, I can change the screen resolution, let's say from something like 1280x800 to 1366x720? If yes, how? Does the installation ISO already let me do that?
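Without X or Wayland there is no display for xrandr to talk to; on a pure console, the resolution is the kernel's framebuffer mode, which is normally chosen at boot. A hedged sketch: at the ISO's boot menu you can edit the kernel command line (Tab under syslinux, 'e' under systemd-boot/GRUB, depending on how you booted) and append a video= parameter — the bare-mode form is widely accepted, and the per-connector form is the documented one:

```
# appended to the kernel command line at the boot menu
video=1366x768
# documented per-connector form (connector name is an example):
video=eDP-1:1366x768
```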
Sul4ur
(9 rep)
Aug 7, 2025, 05:33 AM
3
votes
2
answers
4231
views
Cannot access HDD files through Linux Mint Live USB
I have valuable info on my Ubuntu partition, but it crashed, and I tried to get to it through Live USB with Mint 14, but it says it's read only. Can I make it writable too? So I can put it on my flash drive?
I have valuable info on my Ubuntu partition, but it crashed, and I tried to get to it through a Live USB with Mint 14, but it says it's read only. Can I make it writable too, so I can copy the files onto my flash drive?
Brandon laizure
(31 rep)
Mar 22, 2013, 01:42 AM
• Last activity: Aug 7, 2025, 05:01 AM
5
votes
6
answers
303
views
Check if multiple files exist on a remote server
I am writing a script that locates a special type of file on my system and i want to check if those files are also present on a remote machine. So to test a single file I use: ssh -T user@host [[ -f /path/to/data/1/2/3/data.type ]] && echo "File exists" || echo "File does not exist"; But since I hav...
I am writing a script that locates a special type of file on my system, and I want to check if those files are also present on a remote machine.
So to test a single file I use:
ssh -T user@host [[ -f /path/to/data/1/2/3/data.type ]] && echo "File exists" || echo "File does not exist";
But I have to check blocks of 10 to 15 files, which I would like to check in one go, since I do not want to open a new SSH connection for every file.
My idea was to do something like:
results=$(ssh "user@host" '
for file in "${@}"; do
if [ -e "$file" ]; then
echo "$file: exists"
else
echo "$file: does not exist"
fi
done
' "${files_list[@]}")
Here files_list contains multiple file paths. But this does not work. As a result, I would like to get the "echo" string for every file that was in files_list.
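The snippet fails because "${@}" sits inside the single-quoted remote script, where it expands to the remote shell's (empty) positional parameters; the local array is never handed over. A hedged sketch that forwards the list explicitly via bash -s (assumes bash on the remote side; caveat: ssh re-joins its arguments with spaces, so this breaks on paths containing whitespace):

```shell
files_list=(/etc/hostname /no/such/file)

# 'bash -s' reads the script from stdin; the arguments after '--'
# become $1, $2, ... in that script on the remote side.
results=$(ssh user@host bash -s -- "${files_list[@]}" <<'EOF'
for file in "$@"; do
  if [ -e "$file" ]; then
    echo "$file: exists"
  else
    echo "$file: does not exist"
  fi
done
EOF
)
printf '%s\n' "$results"
```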
Nunkuat
(153 rep)
Aug 6, 2025, 12:31 PM
• Last activity: Aug 7, 2025, 04:26 AM
0
votes
1
answers
1869
views
Adding swap partition with LVM
I have a simple question. I have a virtual machine already installed with RHEL7 and I have to add the swap partition to the system. I haven't enough free space on the PV so I have to add a new disk. What is the best way to add this swap space? Thank you
I have a simple question.
I have a virtual machine already installed with RHEL7, and I have to add a swap partition to the system. I don't have enough free space on the PV, so I have to add a new disk. What is the best way to add this swap space?
Thank you
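Assuming the new disk shows up as /dev/sdb and the existing volume group is named rhel (both names are guesses; check with vgs and lsblk first), the usual LVM route is a sketch like this, run as root:

```shell
pvcreate /dev/sdb                 # initialise the new disk as a physical volume
vgextend rhel /dev/sdb            # add it to the existing volume group
lvcreate -L 4G -n swap rhel       # carve out a 4 GiB logical volume
mkswap /dev/rhel/swap             # format it as swap
swapon /dev/rhel/swap             # enable it now
echo '/dev/rhel/swap none swap defaults 0 0' >> /etc/fstab   # and at boot
```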
intore
(399 rep)
Feb 20, 2020, 03:32 PM
• Last activity: Aug 7, 2025, 04:03 AM
-4
votes
5
answers
106
views
Perl or sed script to remove a specific content of a file from another bigger file
I have a file **(FILE1)** with some repeated sections as below (example): LINE 1 ABCD LINE 2 EFGA LINE 3 HCJK REMOVE LINE11 REMOVE LINE12 REMOVE LINE13 LINE 4 ABCDH LINE 5 EFGAG LINE 6 HCJKD REMOVE LINE11 REMOVE LINE12 REMOVE LINE13 LINE 7 ABCDH LINE 8 EFGAG LINE 9 HCJKD I have several such files. I...
I have a file **(FILE1)** with some repeated sections as below (example):
LINE 1 ABCD
LINE 2 EFGA
LINE 3 HCJK
REMOVE LINE11
REMOVE LINE12
REMOVE LINE13
LINE 4 ABCDH
LINE 5 EFGAG
LINE 6 HCJKD
REMOVE LINE11
REMOVE LINE12
REMOVE LINE13
LINE 7 ABCDH
LINE 8 EFGAG
LINE 9 HCJKD
I have several such files. In a pattern file (**PATTERN**) I have these removable lines stored.
REMOVE LINE11
REMOVE LINE12
REMOVE LINE13
I want to write sed, awk (bash) or Perl code to remove all the sections of **FILE1** that match the content of the file **PATTERN**. Another requirement is to remove all occurrences but leave the first one.
Pratap
(45 rep)
Aug 6, 2025, 06:03 AM
• Last activity: Aug 7, 2025, 03:11 AM
6
votes
3
answers
15025
views
How do I disable network interfaces from loading at boot without Network Manager?
The machine I'm using has 4 network interfaces, only one of which is currently being used (the other 3 are not plugged in at all). Currently when the machine is restarted it is delayed by a few minutes trying to bring up interfaces 2-4. This is a pretty significant annoyance because this machine is...
The machine I'm using has 4 network interfaces, only one of which is currently being used (the other 3 are not plugged in at all). Currently when the machine is restarted it is delayed by a few minutes trying to bring up interfaces 2-4. This is a pretty significant annoyance because this machine is restarted very often.
Running dmesg I see that:
IPv6 ADDRCONF(NETDEV_UP): eth1: link is not ready
IPv6 ADDRCONF(NETDEV_UP): eth2: link is not ready
IPv6 ADDRCONF(NETDEV_UP): eth3: link is not ready
...
Seeing as these interfaces are not being used and are wasting a lot of time, I'd like to simply stop them from trying to start up, but I would be open to other options that would reduce the time spent on these interfaces.
I've checked
/etc/sysctl.conf
and IPv6 is disabled, so I wouldn't expect it to try IPv6.
In the network-scripts directory I created scripts for interface 2-4 that just contain their interface name and ONBOOT=no
.
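For reference, a minimal ifcfg fragment of the kind described here (RHEL/CentOS-style network-scripts; eth1 as the example interface):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=no
IPV6INIT=no
```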
Also I've looked in /sys/class/net/ethX/device/power/control
for each interface, all of them contain "on" so I tried:
echo off > /sys/class/net/ethX/device/power/control
But I get write error: Invalid argument
, whereas echoing on
works fine. I haven't been able to find a reference for changing this file but I feel as though turning the interfaces off entirely would be a little extreme.
I do not have network manager installed and would prefer to keep it that way if at all possible (prefer configuration over throwing more packages at the problem).
----------
I have since moved on from this issue, however for the sake of others that might experience this I'll mention that the fact dmesg
is reporting these long waits suggests that the kernel is what's trying to enable these interfaces. So possibly kernel parameters would be an avenue to pursue, or it might just be a kernel bug. Configuration in Linux itself is unlikely to solve the problem; either the grub
config or changes to the kernel itself might do it.
--------------
**UPDATE**
The self-answer I posted I actually did get a chance to test, but it didn't work out. However, posting what I tried.
Noticing that the messages from dmesg
suggests the kernel is performing the action, I looked through the kernel parameters documentation: https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
I found there is an option for configuring IPv6 on interfaces: https://www.kernel.org/doc/Documentation/networking/ipv6.txt .
------------------
The page outlines the following kernel options:
disable=
0 loads the IPv6 module (default), 1 does not load the IPv6 module
autoconf=
0 disables IPv6 auto-configuration on all interfaces, 1 enables auto-configuration on all interfaces (default)
disable_ipv6=
0 enables IPv6 on all interfaces (default), 1 disables IPv6 on all interfaces
However, none of these options addressed the issue.
Centimane
(4520 rep)
Apr 22, 2015, 12:37 PM
• Last activity: Aug 7, 2025, 03:04 AM
2
votes
1
answers
4620
views
Error while trying to compile headers in Ubuntu Jammy Jellyfish
I am currently trying to compile the headers in Jammy Jellyfish, and I am running into the following error: ``` /usr/src/linux-headers-5.15.0-25-generic$ sudo make SYNC include/config/auto.conf.cmd make[1]: *** No rule to make target 'arch/x86/entry/syscalls/syscall_32.tbl', needed by 'arch/x86/incl...
I am currently trying to compile the headers in Jammy Jellyfish, and I am running into the following error:
/usr/src/linux-headers-5.15.0-25-generic$ sudo make
SYNC include/config/auto.conf.cmd
make: *** No rule to make target 'arch/x86/entry/syscalls/syscall_32.tbl', needed by 'arch/x86/include/generated/uapi/asm/unistd_32.h'. Stop.
make: *** [arch/x86/Makefile:213: archheaders] Error 2
Before running the make command, I copied the old .config file /boot/config-5.15.0-25-generic into the .config file in /usr/src/linux-headers-5.15.0-25-generic/
I've seen a few posts where this error has been posted, but I have not seen any answers that fix the issue. If there is more information needed, please let me know.
Thanks
Dinger149
(21 rep)
Jul 11, 2022, 08:09 PM
• Last activity: Aug 7, 2025, 02:04 AM
1
votes
1
answers
67
views
Stuck on boot loading screen when booting from external SSD
I have a Dell laptop and I'm trying to run Fedora KDE Desktop 42 from an external SSD. I've freshly installed Fedora in the external SSD using another USB flash drive containing the live image. When I boot up Fedora from the external SSD, it tells me to enter passphrase for the disk. After I enter t...
I have a Dell laptop and I'm trying to run Fedora KDE Desktop 42 from an external SSD. I've freshly installed Fedora in the external SSD using another USB flash drive containing the live image. When I boot up Fedora from the external SSD, it tells me to enter passphrase for the disk. After I enter the correct passphrase/password, it takes me to a loading screen and stays there forever.
If I press ESC on that screen, it displays a completely blank screen. When I press ESC again, it takes me back to the loading screen.
For the installation options I made the following choices:
* I selected Storage Configuration as Automatic. I also deleted everything in the drive by selecting Delete all/Reclaim space. Then I selected Encrypt my data option.
* I enabled root account. I also created a user with admin privilege.
I have tried to troubleshoot it and made the following adjustments as suggested by others in the Dell UEFI settings:
* I have enabled all the thunderbolt related options (since I'm using thunderbolt 4 port to connect to the external SSD). Namely, *Enabled Thunderbolt Technology Support*, *Enable Thunderbolt Boot Support*, *Enable Thunderbolt (and PCle behind TBT) pre-boot modules*.
* Selected Storage -> *AHCI* (default was *RAID On*, it didn't work when I tried with *RAID On* either)
* Selected Pre-boot Behaviour -> *Thorough* (default was *Fastboot*)
* Tried turning *Secure Boot* on and off, which made no difference (I understand that Fedora 42 is supposed to work with secure boot, but it was worth a try)
#### Hardware specs
- Dell Latitude 5530
- CPU: 12th Gen Intel(R) Core(TM) i5-1245U (1.60 GHz)
- RAM: 16.0 GB (15.7 GB usable)
- Internal Graphics Card: Intel(R) Iris(R) Xe Graphics
- Internal 500 GB SSD (Contains Windows 11)
- [This](https://www.amazon.co.uk/dp/B0B9C3ZVHR) Samsung 990 pro 1TB external SSD (to run Fedora)
- [This](https://www.amazon.co.uk/dp/B0C8CTW8M6) ACASIS SSD enclosure, which uses thunderbolt 4 cable
Any kind of help to resolve this would be appreciated.
---
Edit:
Just reinstalled Fedora without using the Encrypt Drive option, now I can see the following output/log when I press ESC on the loading screen. (Please note that it scrolls really fast at times so I couldn't take a proper sequential set of photos)



Hungry Kettle
(21 rep)
Aug 3, 2025, 12:16 PM
• Last activity: Aug 7, 2025, 01:31 AM
2
votes
2
answers
6540
views
Grep /var/log/maillog for email to a certain user, based only on his linux username
I have a learning environment, based on Linux CentOS, with Postfix and SquirrelMail running, but my assignment is more in general. I need to find in the maillog e-mails **received by** a certain user within a certain time frame, based **only** on his Linux username. I see my maillog, but I am not ex...
I have a learning environment, based on Linux CentOS, with Postfix and SquirrelMail running, but my assignment is more in general.
I need to find in the maillog e-mails **received by** a certain user within a certain time frame, based **only** on his Linux username.
I see my maillog, but I am not experienced in reading maillog and I have two concerns:
1. Whether or not these patterns that I see in the log are something reliable, i.e. whether a log for incoming e-mail will always have
to=
in it.
Jan 2 20:31:17 tmcent01 postfix/local: B58C4330038: to=, orig_to=, relay=local, delay=9.7, delays=9.6/0.03/0/0.02, dsn=2.0.0, status=sent (delivered to mailbox)
2. How does a Linux username correspond to the e-mail name of the user? It is not always a match (username@domain), is it? We could have an alias for it; how can I take this into consideration when composing the regex for the grep
?
My first two attempts were a strike-out.
sudo grep "to=
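On both concerns: Postfix's local delivery lines do reliably carry to=<...> (plus orig_to=<...> when an alias was rewritten), and local aliases are resolved via /etc/aliases, so the alias names have to be collected before grepping. A hedged sketch (jdoe is a hypothetical username; the time frame is left as a simple syslog month/day/hour prefix — note syslog pads single-digit days with a double space):

```shell
user=jdoe                                 # hypothetical username
# Collect alias names in /etc/aliases that deliver to this user
aliases=$(awk -F'[: ,]+' -v u="$user" \
    '!/^#/ { for (i = 2; i <= NF; i++) if ($i == u) print $1 }' /etc/aliases)
# Build an alternation like 'jdoe|webmaster' for grep -E
names=$(printf '%s\n' "$user" $aliases | paste -sd'|' -)
# Deliveries to the user (or one of his aliases) on Jan 2 between 20:00-20:59
grep -E "^Jan  2 20:.*to=<($names)@" /var/log/maillog
```

Virtual alias maps (virtual_alias_maps) would need the same treatment against their own table; this sketch covers /etc/aliases only.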
pmihova
(21 rep)
Jul 29, 2015, 07:19 AM
• Last activity: Aug 7, 2025, 01:07 AM
Showing page 1 of 20 total questions