Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
0
answers
27
views
How to solve TLS problems when installing an OpenVPN server on AlmaLinux 8
I want to set up OpenVPN version 2.4 or 2.6 on AlmaLinux 8 on a VPS and connect using the OpenVPN v2.4 GUI application.
I tried several setup scripts; all of them installed properly, but communication failed afterwards.
https://idroot.us/install-openvpn-server-almalinux-8/
https://leomoon.com/downloads/scripts/openvpn-installer-for-linux/
https://www.ionos.com/help/server-cloud-infrastructure/vpn/install-and-configure-openvpn/install-and-configure-openvpn-almalinux-8-and-9-and-rocky-linux-8-and-9/#c267989
I noticed that the TLS handshake breaks with an error:
TLS: Initial packet from [AF_INET]74.208.111.231:1194, sid=1cfea13f ba1c9731
I disabled the firewall to simplify testing.
Here are the related config and log files.
Any advice?
Server.cfg file
-
port 1194
proto tcp
dev tun
user nobody
group nobody
persist-key
persist-tun
keepalive 10 120
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
push "redirect-gateway def1 bypass-dhcp"
dh none
ecdh-curve prime256v1
tls-crypt tls-crypt.key
crl-verify crl.pem
ca ca.crt
cert server_D99XAUoi9FzAwlUr.crt
key server_D99XAUoi9FzAwlUr.key
auth SHA256
cipher AES-128-GCM
ncp-ciphers AES-128-GCM
tls-server
tls-version-min 1.2
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256
client-config-dir /etc/openvpn/ccd
status /var/log/openvpn/status.log
verb 3
Client OVPN file
-
client
proto tcp-client
remote 74.208.111.231 1194
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
verify-x509-name server_D99XAUoi9FzAwlUr name
auth SHA256
auth-nocache
cipher AES-128-GCM
tls-client
tls-version-min 1.2
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256
ignore-unknown-option block-outside-dns
setenv opt block-outside-dns # Prevent Windows 10 DNS leak
verb 3
-----BEGIN CERTIFICATE-----
MIIB1zCCAX2gAwIBAgIURKfw6FcSJ4xcLb3gUWx/THu02KEwCgYIKoZIzj0EAwIw
...
G0T9jlALYAcCIQC+R1s/2x0BRLAg5HzZih8exkfiKbFbt9by31VSKzCY7g==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB1zCCAX6gAwIBAgIQDutVPwLyl5UwKB0LJVUGHTAKBggqhkjOPQQDAjAeMRww
...
nAYorn0Lv1FhAiAXcCdEzm4SqieMfT3Hj2TBrrufpruhoKaOoN2OLBX9hw==
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQg57wmtsCOWL0GaZ5N
...
XOyWk/p2uZuUtP6cogjwdCCsaYeEF8iYqL0MyWF+PhC+Qoc8YKX9T8Le
-----END PRIVATE KEY-----
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
db3d6c752e41143cc06f8c83e48a742e
....
c2468e2a3e4c03d6a19efeef980c6c72
-----END OpenVPN Static key V1-----
Client Log
-
Sun Jul 27 22:34:15 2025 OpenVPN 2.4.12 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [LZ4] [PKCS11] [AEAD] built on Mar 17 2022
Sun Jul 27 22:34:15 2025 Windows version 6.2 (Windows 8 or greater) 64bit
Sun Jul 27 22:34:15 2025 library versions: OpenSSL 1.1.1n 15 Mar 2022, LZO 2.10
Enter Management Password:
Sun Jul 27 22:34:15 2025 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340
Sun Jul 27 22:34:15 2025 Need hold release from management interface, waiting...
Sun Jul 27 22:34:15 2025 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340
Sun Jul 27 22:34:15 2025 MANAGEMENT: CMD 'state on'
Sun Jul 27 22:34:15 2025 MANAGEMENT: CMD 'log all on'
Sun Jul 27 22:34:15 2025 MANAGEMENT: CMD 'echo all on'
Sun Jul 27 22:34:15 2025 MANAGEMENT: CMD 'bytecount 5'
Sun Jul 27 22:34:15 2025 MANAGEMENT: CMD 'hold off'
Sun Jul 27 22:34:15 2025 MANAGEMENT: CMD 'hold release'
Sun Jul 27 22:34:15 2025 Outgoing Control Channel Encryption: Cipher 'AES-256-CTR' initialized with 256 bit key
Sun Jul 27 22:34:15 2025 Outgoing Control Channel Encryption: Using 256 bit message hash 'SHA256' for HMAC authentication
Sun Jul 27 22:34:15 2025 Incoming Control Channel Encryption: Cipher 'AES-256-CTR' initialized with 256 bit key
Sun Jul 27 22:34:15 2025 Incoming Control Channel Encryption: Using 256 bit message hash 'SHA256' for HMAC authentication
Sun Jul 27 22:34:15 2025 TCP/UDP: Preserving recently used remote address: [AF_INET]74.208.111.231:1194
Sun Jul 27 22:34:15 2025 Socket Buffers: R=[65536->65536] S=[65536->65536]
Sun Jul 27 22:34:15 2025 Attempting to establish TCP connection with [AF_INET]74.208.111.231:1194 [nonblock]
Sun Jul 27 22:34:15 2025 MANAGEMENT: >STATE:1753643055,TCP_CONNECT,,,,,,
Sun Jul 27 22:34:16 2025 TCP connection established with [AF_INET]74.208.111.231:1194
Sun Jul 27 22:34:16 2025 TCP_CLIENT link local: (not bound)
Sun Jul 27 22:34:16 2025 TCP_CLIENT link remote: [AF_INET]74.208.111.231:1194
Sun Jul 27 22:34:16 2025 MANAGEMENT: >STATE:1753643056,WAIT,,,,,,
Sun Jul 27 22:34:17 2025 MANAGEMENT: >STATE:1753643057,AUTH,,,,,,
Sun Jul 27 22:34:17 2025 TLS: Initial packet from [AF_INET]74.208.111.231:1194, sid=1cfea13f ba1c9731
Sun Jul 27 22:34:54 2025 read TCP_CLIENT: Unknown error (code=10060)
Sun Jul 27 22:34:54 2025 Connection reset, restarting [-1]
Sun Jul 27 22:34:54 2025 SIGUSR1[soft,connection-reset] received, process restarting
Sun Jul 27 22:34:54 2025 MANAGEMENT: >STATE:1753643094,RECONNECTING,connection-reset,,,,,
Sun Jul 27 22:34:54 2025 Restart pause, 5 second(s)
Sun Jul 27 22:34:59 2025 SIGTERM[hard,init_instance] received, process exiting
Sun Jul 27 22:34:59 2025 MANAGEMENT: >STATE:1753643099,EXITING,init_instance,,,,,
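Since tls-crypt wraps the whole control channel, a key mismatch between server and client makes the handshake stall exactly like this: the client sees the initial packet, then nothing, then a reset. A quick self-contained sketch of extracting the inline key from a client profile to compare checksums (the file name and contents here are illustrative, not taken from the post):

```shell
# Create an illustrative client profile (stands in for the real .ovpn):
cat > client.ovpn <<'EOF'
client
<tls-crypt>
-----BEGIN OpenVPN Static key V1-----
db3d6c752e41143cc06f8c83e48a742e
c2468e2a3e4c03d6a19efeef980c6c72
-----END OpenVPN Static key V1-----
</tls-crypt>
EOF

# Extract the inline key block and checksum it; compare the result against
# `sha256sum /etc/openvpn/server/tls-crypt.key` on the server:
awk '/BEGIN OpenVPN Static key V1/,/END OpenVPN Static key V1/' client.ovpn > client-tls.key
sha256sum client-tls.key
```

If the checksums differ, regenerate the client profile from the server's current keys before digging into firewall or cipher settings.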
Moh Tarvirdi
(111 rep)
Jul 28, 2025, 02:40 PM
5
votes
1
answers
137
views
'sudo su' Permission Denied, but relogging fixes it
I am having an issue that is only present since about April after updating packages.
When I am accessing servers and use sudo su or sudo -s to access root and enter my password, I'll get:
sudo: PAM account management error: Permission denied
sudo: a password is required
However, when I exit and restart the SSH session, it works fine. This is a periodic issue and does not happen on all servers at the same time in my environment. I have noticed that the sssd service sometimes reports offline but comes back up, and the log timings don't seem to match up with the events. I have turned on base-level logging for sssd, but have not seen anything that clearly points to the cause. Any insight would be welcomed.
Updates:
The failed login attempts trigger several PAM modules in sssd_pam.log and end with this:
[pam] [pam_reply] (0x0200): [CID#9] blen: 24
[pam] [pam_reply] (0x0200): [CID#9] Returning : Permission denied to the client
[pam] [client_recv] (0x0200): [CID#9] Client disconnected!
A successful login attempt triggers SSS_PAM_PREAUTH twice and SSS_PAM_AUTHENTICATE once, and results in this when using sudo:
[pam_reply] (0x0200): [CID#10] blen: 24
[pam] [pam_reply] (0x0200): [CID#10] Returning : Success to the client
[pam] [pam_cmd_acct_mgmt] (0x0100): [CID#10] entering pam_cmd_acct_mgmt
While speaking of PAM, it is worth noting that I have compared PAM configurations from the lower environments where this is occurring to PROD where it is not present, and they are identical. The only change I found yesterday was a smartcard-auth RPM, which I deleted, but that, as expected, did not change this behavior.
More updates:
/var/log/secure shows that the same sudo:auth success message leads to two different results.
The failed:
pam_sss(sudo:auth): authentication success; logname=xxxx uid=XXXX euid=0 tty=/dev/pts/0 ruser=xxxx rhost= user=xxxx
pam_sss(sudo:account): Access denied for user xxxx: 6 (Permission denied)
The success:
pam_sss(sudo:auth): authentication success; logname=xxxx uid=XXXX euid=0 tty=/dev/pts/0 ruser=xxxx rhost= user=xxxx
pam_unix(sudo:session): session opened for user root by xxxx(uid=xxxx)
I found a configuration difference that may prove useful: /etc/pam.d/systemd-user seems to have a line in unaffected environments that is not present in affected environments:
session optional pam_keyinit.so force revoke
I'm not familiar with this configuration option, so I'm doing some research on it and implementing it. Once it's in place I'll try to replicate the issue, but after a session is restarted (in order to reach root to make the change) it can take a while to present.
Latest Update:
I found a line that, upon investigation, doesn't appear to be something that would cause this kind of behavior, but I have not been able to reproduce the error since removing it from /etc/pam.d/login:
session optional pam_console.so
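The pattern described above (the sudo:auth phase succeeds, the sudo:account phase is denied) can be pulled out of the logs mechanically to correlate with sssd's online/offline state. A toy sketch over sample log lines (the sample file stands in for /var/log/secure; usernames and counts are illustrative):

```shell
# Sample of the two outcomes seen in /var/log/secure:
cat > secure.sample <<'EOF'
pam_sss(sudo:auth): authentication success; logname=alice uid=1000 euid=0 tty=/dev/pts/0 ruser=alice rhost= user=alice
pam_sss(sudo:account): Access denied for user alice: 6 (Permission denied)
pam_sss(sudo:auth): authentication success; logname=alice uid=1000 euid=0 tty=/dev/pts/0 ruser=alice rhost= user=alice
pam_unix(sudo:session): session opened for user root by alice(uid=1000)
EOF

# Count account-phase denials vs. successfully opened sessions; on a live
# host, also check `sssctl domain-status <domain>` around those timestamps:
grep -c 'sudo:account.*Access denied' secure.sample
grep -c 'sudo:session.*session opened' secure.sample
```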
JCrowder
(81 rep)
Jul 14, 2025, 08:14 PM
• Last activity: Jul 22, 2025, 02:04 PM
1
votes
1
answers
2361
views
Alma Linux - nmcli "No suitable device found"
When I try to bring up a connection called wan0 with nmcli, I'm getting this error.
wan0 is currently unplugged; I want to get it configured before plugging it in.
I don't know why it's talking about enp2s0; that's nowhere in the connection config.
nmcli con up wan0
Error: Connection activation failed: No suitable device found for this connection (device enp2s0 not available because profile is not compatible with device (mismatching interface name)).
My connections
nmcli c
NAME UUID TYPE DEVICE
lan0 d662827e-fac3-3069-9c71-d8e9ce1cb14a ethernet enp3s0f0
lo 71d6ab58-a026-4502-abf4-dcd3dda0de3a loopback lo
wan0 c82fe863-6953-49ee-9798-07544d44d530 ethernet --
My devices
nmcli d
DEVICE TYPE STATE CONNECTION
enp3s0f0 ethernet connected lan0
lo loopback connected (externally) lo
enp2s0 ethernet unavailable --
enp3s0f1 ethernet unavailable --
I have the right interface name on the connection
nmcli c show wan0
connection.id: wan0
connection.uuid: c82fe863-6953-49ee-9798-07544d44d530
connection.stable-id: --
connection.type: 802-3-ethernet
connection.interface-name: enp3s0f1
Aditya K
(2260 rep)
Jun 26, 2024, 10:02 AM
• Last activity: Jun 7, 2025, 03:07 AM
1
votes
0
answers
29
views
What is this "(NULL device *)" warning on the console?
I am getting this message on the console of my AlmaLinux 9 host:
[31178.107847] (NULL device *): port 1 already used(dev 6/1/4 stat 2
I've tried to Google pieces of the message but am getting nowhere. What is this message trying to tell me? It repeats every 10 seconds, so I assume it's important.
TSG
(1983 rep)
Jun 4, 2025, 09:43 PM
3
votes
1
answers
2414
views
dnf history undo last - Cannot find rpm nevra
I have done an update on AlmaLinux 8.6 to AlmaLinux 8.7. I would like to undo the update but it is not letting me. Here's the error message:
Error: The following problems occurred while running a transaction:
Cannot find rpm nevra "NetworkManager-1:1.36.0-9.el8_6.x86_64".
Cannot find rpm nevra "NetworkManager-adsl-1:1.36.0-9.el8_6.x86_64".
Cannot find rpm nevra "NetworkManager-bluetooth-1:1.36.0-9.el8_6.x86_64".
A simple way to undo would really be nice. Is there a way to back up the state of the machine in order to revert easily, should something go wrong?
supmethods
(561 rep)
Dec 1, 2022, 10:12 AM
• Last activity: May 3, 2025, 07:04 PM
1
votes
2
answers
45
views
Linux group membership problem for VNC session
On an AlmaLinux 9.5 machine, I am a sudo user. I am using a VNC session to access the machine. From this VNC session I've added myself to Linux groups using:
sudo usermod -aG
However, when checking my groups, I am not able to see the newly added groups, except in two cases:
Logging in again from my existing terminal: su -l $USER.
Opening a new SSH session.
Note that in the VNC session, when opening another terminal and checking the groups, I can't see the newly added groups, except (as mentioned above) by re-logging. So as a workaround I need to re-login every time I open a new terminal, or work over SSH, which is not practical in my case.
Am I missing something?
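This matches how supplementary groups work: they are attached to a process when its session starts, so terminals opened inside the already-running VNC session inherit the old group list. A minimal sketch (the group name newgroup is illustrative; the commented commands need a live session with that group):

```shell
# Supplementary groups are fixed at session start; every terminal spawned
# inside the existing VNC session inherits this list unchanged:
id -nG          # group list of the current shell

# Per-shell workarounds, without logging the whole session out:
# newgrp newgroup              # subshell with 'newgroup' as primary group
# sg newgroup -c 'somecmd'     # run one command with 'newgroup' applied
```

The only session-wide fix is to restart the VNC session itself after usermod, so the new session starts with the updated group list.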
Omar Haggag
(11 rep)
Apr 23, 2025, 05:58 PM
• Last activity: Apr 29, 2025, 08:56 AM
1
votes
2
answers
2489
views
ClamD Service Unable to Start
I am currently following this guide on setting up ClamAV on my AlmaLinux 9.3 machine; however, at **Step 11** I cannot start the clamd@service unit, and wanted to know if anyone else has had this issue, as I cannot find much from other sources.
[root@localhost tester]# sudo systemctl status clamd@service
× clamd@service.service - clamd scanner (service) daemon
Loaded: loaded (/usr/lib/systemd/system/clamd@.service; disabled; preset: disabled)
Active: failed (Result: exit-code) since Thu 2023-12-28 12:08:15 GMT; 3min 26s ago
Docs: man:clamd(8)
man:clamd.conf(5)
https://www.clamav.net/documents/
Process: 6728 ExecStart=/usr/sbin/clamd -c /etc/clamd.d/service.conf (code=exited, status=1/FAILURE)
CPU: 3ms
Dec 28 12:08:15 localhost.localdomain systemd[1] : clamd@service.service: Scheduled restart job, restart counter is at 5.
Dec 28 12:08:15 localhost.localdomain systemd[1] : Stopped clamd scanner (service) daemon.
Dec 28 12:08:15 localhost.localdomain systemd[1] : clamd@service.service: Start request repeated too quickly.
Dec 28 12:08:15 localhost.localdomain systemd[1] : clamd@service.service: Failed with result 'exit-code'.
Dec 28 12:08:15 localhost.localdomain systemd[1] : Failed to start clamd scanner (service) daemon.
I executed the following command, as the terminal output recommended:
$ journalctl -xeu clamd@service.service
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 28 12:45:18 localhost.localdomain systemd[1] : clamd@service.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit clamd@service.service has entered the 'failed' state with result 'exit-code'.
Dec 28 12:45:18 localhost.localdomain systemd[1] : Failed to start clamd scanner (service) daemon.
░░ Subject: A start job for unit clamd@service.service has failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit clamd@service.service has finished with a failure.
░░
░░ The job identifier is 7444 and the job result is failed.
Dec 28 12:45:18 localhost.localdomain systemd[1] : clamd@service.service: Scheduled restart job, restart counter is at 5.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ Automatic restarting of the unit clamd@service.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
Dec 28 12:45:18 localhost.localdomain systemd[1] : Stopped clamd scanner (service) daemon.
░░ Subject: A stop job for unit clamd@service.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit clamd@service.service has finished.
░░
░░ The job identifier is 7568 and the job result is done.
Dec 28 12:45:18 localhost.localdomain systemd[1] : clamd@service.service: Start request repeated too quickly.
Dec 28 12:45:18 localhost.localdomain systemd[1] : clamd@service.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit clamd@service.service has entered the 'failed' state with result 'exit-code'.
Dec 28 12:45:18 localhost.localdomain systemd[1] : Failed to start clamd scanner (service) daemon.
░░ Subject: A start job for unit clamd@service.service has failed
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit clamd@service.service has finished with a failure.
░░
░░ The job identifier is 7568 and the job result is failed.
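One common cause of clamd exiting with status 1 on a fresh install is the stock "Example" line left uncommented in the config, which makes clamd refuse to start. A toy check under that assumption (the sample file stands in for /etc/clamd.d/service.conf):

```shell
# Illustrative stock config; the real one lives at /etc/clamd.d/service.conf:
cat > service.conf <<'EOF'
Example
#LocalSocket /run/clamd.scan/clamd.sock
EOF

# clamd will not start while an uncommented 'Example' line is present.
# On the host, running clamd in the foreground prints the exact reason:
#   /usr/sbin/clamd -c /etc/clamd.d/service.conf --foreground
if grep -q '^Example' service.conf; then
    echo "Example line present: comment it out and configure LocalSocket"
fi
```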
hymcode
(133 rep)
Dec 28, 2023, 12:17 PM
• Last activity: Apr 29, 2025, 01:08 AM
1
votes
1
answers
88
views
Linux doesn't boot without video
I have a mini PC, a Qotom Q1900G2-P, with AlmaLinux 9.5.
It has an American Megatrends BIOS, build 06.01.2015.
The BIOS is reset to factory defaults.
Without a video cable (HDMI or VGA) it doesn't start: nothing in dmesg, syslog, or messages!
It may even get to the GRUB menu, but obviously, with no video connected, I cannot see anything until boot completes. If I connect the video later, with the mini PC on, there is no video signal.
Same with Debian 12.10.
**OK with Windows!! :(( With Windows it starts even without a video cable!**
I've tried to insert some options into /etc/default/grub, such as:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset console=tty1"
or
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset text"
But nothing! :((
Some BIOS screens here: https://imgur.com/a/TyDEDq5
Any ideas how to solve this?
Thanks in advance
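If the firmware does hand control to the kernel, one thing worth trying is forcing the DRM layer to treat an output as connected so the system comes up headless. A sketch for /etc/default/grub, under the assumption that the connector is named HDMI-A-1 (check ls /sys/class/drm/ for the real names; the trailing "e" means force-enabled):

```shell
# /etc/default/grub (fragment) — afterwards regenerate the config with:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
GRUB_CMDLINE_LINUX_DEFAULT="quiet video=HDMI-A-1:1024x768@60e"
```

If nothing appears in dmesg at all even with this, the hang is happening in the firmware before GRUB, and the fix would be a BIOS setting (e.g. halt-on-error / headless options) rather than anything in Linux.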
ancoling67
(109 rep)
Apr 22, 2025, 09:08 AM
• Last activity: Apr 22, 2025, 11:32 AM
0
votes
0
answers
44
views
Chromium: how to share other windows, or the entire screen, on Google Meet?
I noticed that Google Meet won't let me share the entire screen, nor another window, on Chromium "Version 133.0.6943.141 (Official version) Fedora Project 64 bits".
But I can do it on Google Chrome.
How do I enable full screen sharing on Chromium running on AlmaLinux 8.10?
BsAxUbx5KoQDEpCAqSffwGy554PSah
(203 rep)
Apr 17, 2025, 06:05 PM
0
votes
1
answers
102
views
How to group applications together on the applications Grid?
I have one group/folder of applications in my applications Grid (Activities -> Launcher -> Show applications), called Utilities/Tools ("Utilitários"):
I like this and would very much like to group other programs together too, but could not find a way, nor a tutorial, to do so.
How do I group applications on the grid, considering this is GNOME 3.32.2?
BsAxUbx5KoQDEpCAqSffwGy554PSah
(203 rep)
Apr 16, 2025, 01:26 PM
• Last activity: Apr 16, 2025, 04:54 PM
0
votes
0
answers
329
views
Chromium: how to make it use Widevine DRM without relying on Google Chrome?
On AlmaLinux 8.10, how to make Chromium "Version 133.0.6943.141 (Official version) Fedora Project 64 bits" use Widevine DRM without needing to install Google Chrome?
The "(alternative) Install Widevine alone without Google Chrome" method detailed in the proprietary/chromium-widevine repository on GitHub is not able to make the two pieces of software cooperate, as confirmed by Bitmovin's DRM Stream Test.
I don't know if it could be a solution, but I tried to replace chromium with chromium-freeworld to see if the added codecs it promises would solve my problem, but GetPageSpeed requires me to pay for a subscription to their repository. I see no advantage in paying to download Chromium.
TheTechLife's tutorial is not for RHEL-based distros and cannot be followed. I did as Andrea Fortuna instructed, but that tutorial is more than 5 years old and no longer works.
BsAxUbx5KoQDEpCAqSffwGy554PSah
(203 rep)
Apr 11, 2025, 04:57 PM
• Last activity: Apr 11, 2025, 05:19 PM
1
votes
1
answers
200
views
How to enable H264 for Firefox?
I noticed that AlmaLinux 8.10 lacks the package mozilla-openh264 and its prerequisite openh264, and that both are absent from the official repositories too. To my knowledge, I need both in order to play H264 videos on Firefox, for example on https://udemy.com .
I downloaded the packages from these URLs:
1. https://rhel.pkgs.org/8/raven-multimedia-x86_64/openh264-2.4.1-2.el8.x86_64.rpm.html
2. https://rhel.pkgs.org/8/raven-multimedia-x86_64/mozilla-openh264-2.4.1-2.el8.x86_64.rpm.html
Then installed them with:
sudo rpm -i .rpm
I also activated the H264 plugin in about:addons.
But still, about:support reports H264 as not supported.
What am I missing?
BsAxUbx5KoQDEpCAqSffwGy554PSah
(203 rep)
Apr 4, 2025, 02:01 PM
• Last activity: Apr 4, 2025, 05:24 PM
0
votes
0
answers
21
views
Crash utility cannot resolve "p2m_top" when analyzing VMware dump (VMEM) on AlmaLinux guest
I am trying to analyze a VMware memory dump from an AlmaLinux guest. I converted the snapshot to a core dump using vmss2core:
vmss2core-sb-8456865.exe -N vSRV1_Snapshot815.vmsn vSRV1_Snapshot815.vmem
Then, I installed the necessary tools for crash dump analysis:
sudo dnf install yum-utils -y
sudo dnf install kexec-tools crash gdb -y
sudo dnf debuginfo-install kernel-debuginfo-$(uname -r) -y
However, when I try to analyze the dump with crash, I get an error:
crash /usr/lib/debug/lib/modules/4.18.0-553.34.1.el8_10.x86_64/vmlinux /home2/memdump/vmss.core
crash: cannot resolve "p2m_top"
Any advice on how to properly analyze this dump or work around the p2m_top error? Thanks.
supmethods
(561 rep)
Apr 1, 2025, 02:55 AM
0
votes
2
answers
90
views
Why are my network connections being rejected, and why does ping between servers not work?
Cluster information:
kubectl version
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.14
Cloud being used: bare-metal
Installation method:
Host OS: AlmaLinux 8
CNI and version: Flannel ver: 0.26.4
CRI and version: cri-dockerd ver: 0.3.16
I have a master node and created my first worker node. Before executing the kubeadm join command on the worker, I could ping from the worker to the master and vice versa without trouble. Now that I have executed the kubeadm join ...
command, I cannot ping between them anymore and I get this error:
[root@worker-1 ~]# kubectl get nodes -o wide
E0308 19:38:31.027307 59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:32.051145 59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:33.075350 59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:34.099160 59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:35.123011 59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?
Ping from the worker node to the master node:
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
From 198.58.126.88 icmp_seq=3 Destination Port Unreachable
If I run this:
[root@worker-1 ~]# iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
The ping command starts to work:
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.025 ms
(Ping works with the IPv6 address; it only fails with the IPv4 address.)
But after about one minute it gets blocked again:
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
[root@worker-1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1
[root@worker-1 ~]# cd /etc/systctl.d/
-bash: cd: /etc/systctl.d/: No such file or directory
Port 6443/TCP is closed on the worker node, and I have tried to open it without success:
nmap 172.235.135.144 -p 6443
Starting Nmap 7.95 ( https://nmap.org ) at 2025-03-11 16:22 -05
Nmap scan report for 172-235-135-144.ip.linodeusercontent.com (172.235.135.144)
Host is up (0.072s latency).
PORT STATE SERVICE
6443/tcp closed sun-sr-https
Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
master node:
[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1312K packets, 202M bytes)
pkts bytes target prot opt in out source destination
1301K 201M KUBE-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0
1311K 202M KUBE-IPVS-FILTER all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes ipvs access filter */
1311K 202M KUBE-PROXY-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-proxy firewall rules */
1311K 202M KUBE-NODE-PORT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes health check rules */
40 3520 ACCEPT icmp -- * * 198.58.126.88 0.0.0.0/0
0 0 ACCEPT icmp -- * * 172.233.172.101 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
950 181K KUBE-PROXY-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-proxy firewall rules */
950 181K KUBE-FORWARD all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
212 12626 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
212 12626 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-09363fc9af47 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
20 1068 DOCKER all -- * br-09363fc9af47 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-09363fc9af47 !br-09363fc9af47 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-09363fc9af47 br-09363fc9af47 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-05a2ea8c281b 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
4 184 DOCKER all -- * br-05a2ea8c281b 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-05a2ea8c281b !br-05a2ea8c281b 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-05a2ea8c281b br-05a2ea8c281b 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-032fd1b78367 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * br-032fd1b78367 0.0.0.0/0 0.0.0.0/0
9 504 ACCEPT all -- br-032fd1b78367 !br-032fd1b78367 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-032fd1b78367 br-032fd1b78367 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-ae1997e801f3 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * br-ae1997e801f3 0.0.0.0/0 0.0.0.0/0
132 7920 ACCEPT all -- br-ae1997e801f3 !br-ae1997e801f3 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-ae1997e801f3 br-ae1997e801f3 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * br-9f6d34f7e48a 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
14 824 DOCKER all -- * br-9f6d34f7e48a 0.0.0.0/0 0.0.0.0/0
4 240 ACCEPT all -- br-9f6d34f7e48a !br-9f6d34f7e48a 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-9f6d34f7e48a br-9f6d34f7e48a 0.0.0.0/0 0.0.0.0/0
29 1886 FLANNEL-FWD all -- * * 0.0.0.0/0 0.0.0.0/0 /* flanneld forward */
Chain OUTPUT (policy ACCEPT 1309K packets, 288M bytes)
pkts bytes target prot opt in out source destination
1298K 286M KUBE-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0
1308K 288M KUBE-IPVS-OUT-FILTER all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes ipvs access filter */
Chain DOCKER (6 references)
pkts bytes target prot opt in out source destination
14 824 ACCEPT tcp -- !br-9f6d34f7e48a br-9f6d34f7e48a 0.0.0.0/0 172.24.0.2 tcp dpt:3001
0 0 ACCEPT tcp -- !br-ae1997e801f3 br-ae1997e801f3 0.0.0.0/0 172.21.0.2 tcp dpt:3000
4 184 ACCEPT tcp -- !br-05a2ea8c281b br-05a2ea8c281b 0.0.0.0/0 172.22.0.2 tcp dpt:4443
12 700 ACCEPT tcp -- !br-09363fc9af47 br-09363fc9af47 0.0.0.0/0 172.19.0.2 tcp dpt:4443
8 368 ACCEPT tcp -- !br-09363fc9af47 br-09363fc9af47 0.0.0.0/0 172.19.0.3 tcp dpt:443
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
212 12626 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (0 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FLANNEL-FWD (1 references)
pkts bytes target prot opt in out source destination
29 1886 ACCEPT all -- * * 10.244.0.0/16 0.0.0.0/0 /* flanneld forward */
0 0 ACCEPT all -- * * 0.0.0.0/0 10.244.0.0/16 /* flanneld forward */
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
212 12626 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-FORWARD (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-NODE-PORT (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst
Chain KUBE-PROXY-FIREWALL (2 references)
pkts bytes target prot opt in out source destination
Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-IPVS-FILTER (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-LOAD-BALANCER dst,dst
2 104 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-CLUSTER-IP dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-EXTERNAL-IP dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-HEALTH-CHECK-NODE-PORT dst
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable
Chain KUBE-IPVS-OUT-FILTER (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-FIREWALL (2 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-KUBELET-CANARY (0 references)
pkts bytes target prot opt in out source destination
worker node:
[root@worker-1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
18469 1430K KUBE-IPVS-FILTER all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes ipvs access filter */
10534 954K KUBE-PROXY-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-proxy firewall rules */
10534 954K KUBE-NODE-PORT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes health check rules */
10767 1115K KUBE-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 KUBE-PROXY-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-proxy firewall rules */
0 0 KUBE-FORWARD all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
18359 1696K KUBE-IPVS-OUT-FILTER all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes ipvs access filter */
18605 1739K KUBE-FIREWALL all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-FIREWALL (2 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-KUBELET-CANARY (0 references)
pkts bytes target prot opt in out source destination
Chain KUBE-FORWARD (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-NODE-PORT (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst
Chain KUBE-PROXY-FIREWALL (2 references)
pkts bytes target prot opt in out source destination
Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain KUBE-IPVS-FILTER (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-LOAD-BALANCER dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-CLUSTER-IP dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-EXTERNAL-IP dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set KUBE-HEALTH-CHECK-NODE-PORT dst
45 2700 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable
Chain KUBE-IPVS-OUT-FILTER (1 references)
pkts bytes target prot opt in out source destination
If I run iptables -F INPUT
on the worker, the ping command starts working again:
[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 198.58.126.88: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 198.58.126.88: icmp_seq=4 ttl=64 time=0.039 ms
64 bytes from 198.58.126.88: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 198.58.126.88: icmp_seq=6 ttl=64 time=0.022 ms
64 bytes from 198.58.126.88: icmp_seq=7 ttl=64 time=0.070 ms
64 bytes from 198.58.126.88: icmp_seq=8 ttl=64 time=0.072 ms
^C
--- 198.58.126.88 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7197ms
rtt min/avg/max/mdev = 0.022/0.045/0.072/0.017 ms
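Rather than flushing the whole INPUT chain, the offending rule can usually be spotted from the packet counters: in the worker listing further down, the REJECT rule in KUBE-IPVS-FILTER is the only terminating rule with a non-zero count (45 packets). A small sketch of that counter triage, run here against an inline sample of `iptables -nvL` output rather than a live firewall:

```shell
# Filter a saved `iptables -nvL` dump down to REJECT/DROP rules that
# have actually matched packets (pkts column > 0). The sample lines
# below mimic the worker's KUBE-IPVS-FILTER chain.
dump='    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
   45  2700 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0'
printf '%s\n' "$dump" | awk '$1+0 > 0 && ($3 == "REJECT" || $3 == "DROP")'
```

On the live system the same filter can be fed from `iptables -nvL` directly; zeroing the counters first with `iptables -Z` (root required) makes a single failing ping easy to attribute.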
strace command from worker:
[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# strace -eopenat kubectl version
openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/bin/kubectl", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/local/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/local/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/root/bin", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/root/.kube/config", O_RDONLY|O_CLOEXEC) = 3
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?
+++ exited with 1 +++
nftables rules before and after executing the kubeadm join command on the worker:

Chain KUBE-IPVS-FILTER (0 references)
target prot opt source destination
RETURN all -- anywhere anywhere match-set KUBE-LOAD-BALANCER dst,dst
RETURN all -- anywhere anywhere match-set KUBE-CLUSTER-IP dst,dst
RETURN all -- anywhere anywhere match-set KUBE-EXTERNAL-IP dst,dst
RETURN all -- anywhere anywhere match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
RETURN all -- anywhere anywhere match-set KUBE-HEALTH-CHECK-NODE-PORT dst
REJECT all -- anywhere anywhere ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable
[root@worker-1 ~]# sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-N KUBE-FORWARD
-N KUBE-NODE-PORT
-N KUBE-PROXY-FIREWALL
-N KUBE-SOURCE-RANGES-FIREWALL
-N KUBE-IPVS-FILTER
-N KUBE-IPVS-OUT-FILTER
-A INPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-FILTER
-A INPUT -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check rules" -j KUBE-NODE-PORT
-A FORWARD -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A OUTPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-OUT-FILTER
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-NODE-PORT -m comment --comment "Kubernetes health check node port" -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j ACCEPT
-A KUBE-SOURCE-RANGES-FIREWALL -j DROP
-A KUBE-IPVS-FILTER -m set --match-set KUBE-LOAD-BALANCER dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-CLUSTER-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP-LOCAL dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j RETURN
-A KUBE-IPVS-FILTER -m conntrack --ctstate NEW -m set --match-set KUBE-IPVS-IPS dst -j REJECT --reject-with icmp-port-unreachable
The blocked connection from the worker to the master starts as soon as the kubelet service is running; if kubelet is stopped, I can ping the master from the worker again.
What might be causing this blocking on the worker node?
Thanks.
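One hedged reading of the listings above: the only rule on the worker that is actually rejecting traffic is the REJECT in KUBE-IPVS-FILTER (45 packets), which fires on NEW connections to any address in the KUBE-IPVS-IPS ipset. If kube-proxy in IPVS mode has placed the master's address in that set, the symptom would match. A sketch of the membership check, using an inline sample standing in for `ipset list KUBE-IPVS-IPS` (which needs root on the live worker):

```shell
# Does the master's IP appear as a member of KUBE-IPVS-IPS?
master=198.58.126.88
dump='Name: KUBE-IPVS-IPS
Type: hash:ip
Members:
10.96.0.1
198.58.126.88'
if printf '%s\n' "$dump" | grep -qxF "$master"; then
    echo "master IP is in KUBE-IPVS-IPS: NEW connections to it get rejected"
fi
```

If the live set really does contain the master's IP, that points at the kube-proxy/IPVS service configuration rather than at the filter rules themselves.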
Rafael Mora
(101 rep)
Mar 10, 2025, 02:17 AM
• Last activity: Mar 15, 2025, 03:20 AM
0
votes
1
answers
72
views
Duplicate route keeps reappearing and can't remove it
I'm running almalinux 9, with 3 interfaces. The first interface (ens192) has a duplicate entry (see last two lines) in the routing table:
[root@server ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.31.254.1 0.0.0.0 UG 100 0 0 ens192
10.88.0.0 0.0.0.0 255.255.0.0 U 0 0 0 podman0
172.31.251.0 0.0.0.0 255.255.255.0 U 103 0 0 ens256
172.31.252.0 0.0.0.0 255.255.255.0 U 102 0 0 ens161
172.31.254.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192
172.31.254.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192
I can get rid of the last route with 'ip route del', but on the next boot it reappears.
I suspect this is related: I have 2 IPs on ens192, as nmcli shows here:
IP4.ADDRESS: 172.31.254.32/24
IP4.ADDRESS: 172.31.254.31/24
IP4.GATEWAY: 172.31.254.1
IP4.ROUTE: dst = 172.31.254.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE: dst = 172.31.254.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE: dst = 0.0.0.0/0, nh = 172.31.254.1, mt = 100
IP4.DNS: 172.31.254.4
IP4.DNS: 172.31.234.4
and NetworkManager seems to think it needs both routes. But if I try to remove the second route with
nmcli connection modify "connname" -ipv4.routes 172.31.254.0/24
the route is not removed (the command returns without error). What is going on? Is this correct behavior? Do I actually need both routes?
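A likely explanation (an assumption, not verified on this host): both entries are kernel-generated prefix routes, one per configured address, so they live outside `ipv4.routes` and nmcli has nothing to delete. Legacy `route -n` hides the `src` field that distinguishes them; `ip route show dev ens192` would show it. A sketch against hypothetical iproute2 output:

```shell
# Two kernel prefix routes for the same subnet typically differ only in
# the preferred source address (one per IP on ens192). Sample lines
# standing in for `ip route show dev ens192`:
routes='172.31.254.0/24 proto kernel scope link src 172.31.254.31 metric 100
172.31.254.0/24 proto kernel scope link src 172.31.254.32 metric 100'
printf '%s\n' "$routes" |
    awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i+1)}'
```

If that is what `ip route` shows, the duplicate is harmless and goes away only if the second address is removed.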
TSG
(1983 rep)
Feb 28, 2025, 03:43 PM
• Last activity: Feb 28, 2025, 10:50 PM
2
votes
1
answers
555
views
How to automatically mount external BitLocker encrypted drives at boot on Linux
How do I ensure that an external, BitLocker encrypted NTFS drive is automatically decrypted and mounted at boot time on a Linux system?
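There is no in-kernel BitLocker support; one common route (an assumption on my part, not something stated in this question) is the third-party dislocker FUSE driver, whose documentation shows fstab-driven automatic mounting. A sketch with hypothetical device names and mount points:

```
# /etc/fstab -- hypothetical entries; /dev/sdb1 is the BitLocker
# partition. nofail keeps boot from hanging when the drive is absent.
/dev/sdb1                      /mnt/dislocker  fuse.dislocker  user-password=YOURPASS,nofail  0 0
/mnt/dislocker/dislocker-file  /mnt/external   ntfs-3g         loop,nofail                    0 0
```

Note that a plaintext password in fstab is readable by all local users; dislocker's BEK-file option (-f) with tightened file permissions is the usual mitigation.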
bolind
(201 rep)
Feb 28, 2025, 09:01 AM
• Last activity: Feb 28, 2025, 09:12 AM
0
votes
0
answers
64
views
Samba AD DC on AlmaLinux: macOS Login Works, Home Directory Mount Fails
I have a Samba Active Directory Domain Controller (AD DC) server installed on an AlmaLinux 9.5 machine. This server is integrated into a network where a Ubiquiti UDM Pro device serves as both the firewall and DNS forwarder. Current Setup:
DNS Configuration:
Computers that need to interact with the Samba server use Samba's built-in DNS server.
Samba forwards any unresolved DNS requests to the UDM Pro, which either resolves them internally or forwards them to Cloudflare servers.
Issue:
Login Works: Users can successfully log into macOS systems using their network credentials.
Home Directory Mount Fails: Despite successful login, the home directory does not mount automatically.
However, users can manually access their home directories by navigating to the Network section in Finder and logging into the Samba server.
Mapping UID and GID in Directory Utility:
If I enable the options to automatically assign UID and GID in macOS Directory Utility, the user login process gets stuck and never completes (the system remains in a loading state).
Home Directory Volume:
An LVM2 volume created using multiple virtual hard drives. The system runs on a 2019 Mac Pro machine using Parallels Desktop.
smb.conf:
[global]
workgroup = SAMBA
security = user
passdb backend = tdbsam
printing = cups
printcap name = cups
load printers = yes
cups options = raw
[homes]
comment = Home Directories
valid users = %S, %D%w%S
browseable = No
read only = No
inherit acls = Yes
nsswitch.conf:
passwd: files winbind systemd
group: files winbind systemd
shadow: files
hosts: files dns myhostname
services: files sss
automount: files sss
krb5.conf :
[libdefaults]
default_realm = PANDA.FANTASTIC.FOX.CORE
dns_lookup_realm = false
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
PANDA.FANTASTIC.FOX.CORE = {
default_domain = panda.fantastic.fox.core
kdc = dc1.panda.fantastic.fox.core
admin_server = dc1.panda.fantastic.fox.core
}
pam_winbind.conf :
[global]
# create homedirectory on the fly
mkhomedir = yes
system-auth :
auth required pam_env.so
auth sufficient pam_unix.so nullok
auth sufficient pam_winbind.so use_first_pass
auth required pam_deny.so
account sufficient pam_winbind.so
password sufficient pam_winbind.so use_authtok
session required pam_unix.so
session optional pam_winbind.so
test results :
[root@dc1 panda]# net ads testjoin
Join is OK
[root@dc1 panda]# wbinfo -u
PANDA\administrator
PANDA\guest
PANDA\testuser
[root@dc1 panda]# wbinfo -i testuser
PANDA\testuser:*:10002:100::/mnt/users/testuser:/bin/bash
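A quick sanity check on the winbind result above: the sixth colon-separated field of that passwd-style line is the Unix home directory winbind resolves, i.e. the path that has to be reachable over SMB for macOS to auto-mount anything:

```shell
# Extract the home-directory field from the wbinfo -i line shown above.
line='PANDA\testuser:*:10002:100::/mnt/users/testuser:/bin/bash'
home=$(printf '%s\n' "$line" | cut -d: -f6)
echo "$home"
```

If that path is not exported as (or mapped to) an SMB share the Mac can reach, login will succeed while the home-directory mount silently fails, which matches the symptom described.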
Expected Behavior:
When a user logs into a macOS system (running version 13.6.2 or higher), their home directory should be automatically mounted.
Any files saved by the user on their desktop, documents, or other directories should be stored in their designated home directory on the Samba server.
Upon the next login, these files should be seamlessly available to the user.
Bez
(1 rep)
Jan 9, 2025, 03:11 PM
• Last activity: Jan 22, 2025, 03:49 PM
4
votes
2
answers
14096
views
Insmod causes key rejected by service
I am running AlmaLinux 9 (RedHat 9 clone) and have compiled a new kernel module. I am running in a VM with UEFI and secure boot enabled. When I insert the module I get the following error:
insmod: ERROR: could not insert module npreal2.mod: Key was rejected by service
From other posts I concluded it was related to UEFI/secure boot. So I disabled secure boot and then insmod reports:
insmod: ERROR: could not insert module npreal2.mod: Invalid module format
I tried recompiling with secure boot off, and then insmod worked, but that means leaving secure boot disabled. How can I make this module work with secure boot?
There is a post on GitHub about creating your own MOK keys, but that seems to be DKMS-specific.
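For reference, the MOK approach is not DKMS-specific; the same key can sign a hand-built module. A sketch (the enrollment and signing steps need root and a kernel source tree, so they are left as comments; the module filename npreal2.ko and the RHEL-family paths are assumptions):

```shell
# 1. Generate a signing key pair (runs unprivileged).
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local module signing/" \
    -keyout MOK.priv -outform DER -out MOK.der

# 2. Enroll the public key with shim (prompts for a one-time password,
#    completes in the MOK manager on the next boot):
#      mokutil --import MOK.der
# 3. Sign the module with the kernel's sign-file helper:
#      /usr/src/kernels/$(uname -r)/scripts/sign-file \
#          sha256 MOK.priv MOK.der npreal2.ko
```

Once the key is enrolled, insmod should accept the signed module with secure boot enabled; the "Invalid module format" error with secure boot off is a separate issue, usually a kernel-version mismatch between the build tree and the running kernel.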
TSG
(1983 rep)
Jul 14, 2023, 08:09 PM
• Last activity: Dec 27, 2024, 10:30 PM
2
votes
1
answers
126
views
Cryptsetup reports "Not enough space in header json area for new keyslot"
I'm using Almalinux in case that's relevant here.
We use LUKS2 on the local disk, with LVM on top - so /dev/sda1 and 2 are unencrypted, but /dev/sda3 is encrypted and used for the OS.
We also use clevis/tang to do automatic decrypt (and this is working fine).
When we build via kickstart, we set a temporary password when encrypting and building - and then when we have finished the initial build, we use ansible to set good quality passwords in line with our vaulting/security practices.
So that involves a script that does the equivalent of:
cryptsetup luksOpen -S 0 --test-passphrase /dev/sda3 && cryptsetup luksChangeKey -S0 --force-password --batch-mode
(And of course the 'default' build password is tested for, and replaced with one from our key management system).
This has started failing recently, and I'm a little perplexed as to what might be going wrong.
It definitely works on our initial 'build', which uses a slightly out-of-date build image, but then we
dnf update
to the current revision, and now we cannot seem to luksChangeKey or luksAddKey any more.
# Adding new keyslot 2 by passphrase, volume key provided by passphrase (-1).
# Selected keyslot 2.
# Keyslot 0 priority 1 != 2 (required), skipped.
# Keyslot 1 priority 1 != 2 (required), skipped.
# Trying to open LUKS2 keyslot 0.
# Running keyslot key derivation.
# Reading keyslot area [0x8000].
# Acquiring read lock for device /dev/sda3.
# Opening lock resource file /run/cryptsetup/L_8:3
# Verifying lock handle for /dev/sda3.
# Device /dev/sda3 READ lock taken.
# Reusing open ro fd on device /dev/sda3
# Device /dev/sda3 READ lock released.
# Verifying key from keyslot 0, digest 0.
# Keyslot 2 assigned to digest 0.
# Trying to allocate LUKS2 keyslot 2.
# Found area 548864 -> 806912
# Running argon2id() benchmark.
# PBKDF benchmark: memory cost = 65536, iterations = 4, threads = 4 (took 72 ms)
# PBKDF benchmark: memory cost = 227555, iterations = 4, threads = 4 (took 264 ms)
# PBKDF benchmark: memory cost = 1048576, iterations = 6, threads = 4 (took 1982 ms)
# Benchmark returns argon2id() 6 iterations, 1048576 memory, 4 threads (for 512-bits key).
# JSON does not fit in the designated area.
# Not enough space in header json area for new keyslot.
# Rolling back in-memory LUKS2 json metadata.
# Releasing crypt device /dev/sda3 context.
# Releasing device-mapper backend.
# Closing read only fd for /dev/sda3.
Command failed with code -1 (wrong or missing parameters).
I'm wondering if anyone's able to help me understand what's going wrong here, and thus what I need to do to remedy it.
Is there a 'initial header size' option I can specify on our builds? Or a parameter to cryptsetup? Or am I "just" finding a bug? (But I'm not convinced that something as widely used as cryptsetup
is going to have a 'no one can change their passwords' sort of bug)
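On the 'initial header size' question: cryptsetup does let you choose header sizes, but only at format time; as far as I know the JSON area of an existing LUKS2 header cannot be grown in place. For reference only (luksFormat destroys the existing header and keys, so this is a sketch for new kickstart builds, with a placeholder device name):

```
# Format with a roomier header (real luksFormat options).
# --luks2-metadata-size covers the binary header plus the JSON area,
# so 64k leaves far more JSON headroom than the 12288 bytes your dump
# reports as "json_size".
cryptsetup luksFormat --type luks2 \
    --luks2-metadata-size 64k \
    --luks2-keyslots-size 16M \
    /dev/sdX3
```

`cryptsetup luksDump` on the result shows the chosen "Metadata area" / "Keyslots area" sizes, so the setting is easy to verify on a freshly built host.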
This is how the LUKs header looks on an example host.
LUKS header information
Version: 2
Epoch: 5
Metadata area: 16384 [bytes]
Keyslots area: 16744448 [bytes]
UUID:
Label: (no label)
Subsystem: (no subsystem)
Flags: (no flags)
Data segments:
0: crypt
offset: 16777216 [bytes]
length: (whole device)
cipher: aes-xts-plain64
sector: 512 [bytes]
Keyslots:
0: luks2
Key: 512 bits
Priority: normal
Cipher: aes-xts-plain64
Cipher key: 512 bits
PBKDF: argon2id
Time cost: 9
Memory: 1048576
Threads: 4
Salt:
AF stripes: 4000
AF hash: sha256
Area offset:32768 [bytes]
Area length:258048 [bytes]
Digest ID: 0
1: luks2
Key: 512 bits
Priority: normal
Cipher: aes-xts-plain64
Cipher key: 512 bits
PBKDF: pbkdf2
Hash: sha256
Iterations: 1000
Salt:
AF stripes: 4000
AF hash: sha256
Area offset:290816 [bytes]
Area length:258048 [bytes]
Digest ID: 0
Tokens:
0: clevis
Keyslot: 1
Digests:
0: pbkdf2
Hash: sha256
Iterations: 105025
Salt:
Digest:
Am I missing something profound? As said, I am _certain_ this works if I don't 'upgrade' the box to a newer kernel, and so my current workaround is to build it, manually re-key it, and then continue to update it, but this seems ... suboptimal.
cryptsetup versions 2.6.0 and 2.7.2
head -c 1M /dev/sda3 | strings -n 128
reformatted:
{
"keyslots": {
"0": {
"type": "luks2",
"key_size": 64,
"af": {
"type": "luks1",
"stripes": 4000,
"hash": "sha256"
},
"area": {
"type": "raw",
"offset": "32768",
"size": "258048",
"encryption": "aes-xts-plain64",
"key_size": 64
},
"kdf": {
"type": "argon2id",
"time": 7,
"memory": 1048576,
"cpus": 4,
"salt": "(removed)"
}
},
"1": {
"type": "luks2",
"key_size": 64,
"af": {
"type": "luks1",
"stripes": 4000,
"hash": "sha256"
},
"area": {
"type": "raw",
"offset": "290816",
"size": "258048",
"encryption": "aes-xts-plain64",
"key_size": 64
},
"kdf": {
"type": "pbkdf2",
"hash": "sha256",
"iterations": 1000,
"salt": "(removed)"
}
}
},
"tokens": {
"0": {
"type": "clevis",
"keyslots": [
"1"
],
"jwe": {
"ciphertext": "(removed)",
"encrypted_key": "",
"iv": "(removed)",
"protected": "",
"tag": "(removed)"
}
}
},
"segments": {
"0": {
"type": "crypt",
"offset": "16777216",
"size": "dynamic",
"iv_tweak": "0",
"encryption": "aes-xts-plain64",
"sector_size": 512
}
},
"digests": {
"0": {
"type": "pbkdf2",
"keyslots": [
"0",
"1"
],
"segments": [
"0"
],
"hash": "sha256",
"iterations": 88086,
"salt": "(removed)",
"digest": "(removed)"
}
},
"config": {
"json_size": "12288",
"keyslots_size": "16744448"
}
}
Edit 2: The plot thickens: luksHeaderBackup followed by a restore fails.
But killing slot 0 and adding a key still works. (Assuming you can extract slot 1's passphrase; in my case clevis-luks-pass
works well enough.)
Sobrique
(4544 rep)
Dec 4, 2024, 12:37 PM
• Last activity: Dec 13, 2024, 12:38 PM
0
votes
1
answers
308
views
Download Linux updates now and install only those updates later
I'm using Almalinux 9 and dnf, but RHEL, CentOS, or similar would work the same. Is there a correct way to download updates at one point in time and then install at a later time? I have dev/test servers and prod servers. I would like for the dev/test servers and prod servers to download updates at the same time. That's no problem. I schedule
dnf upgrade --downloadonly --downloaddir=/my/cached/updates -y
and I have all the RPM files in a single directory, and it's the same RPMs on dev/test and prod. Now let's say I want my dev/test servers to install upgrades at the same time that they download them, but I want my prod servers to wait a full week before install. I don't ever want to install anything on my prod system that hasn't been installed and tested in dev/test first. So I don't want any new updates in prod that may have been released during the week after I upgraded dev/test but before I upgrade prod.
Should I simply run dnf install /my/cached/updates/*.rpm -y
on prod a week later or is there a better way of going about all of this?
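The plan maps cleanly onto dnf's documented flags; one sketch of the two-phase flow (the paths and the one-week gap are yours, the flag spellings are real):

```
# Day 0, dev/test and prod alike: stage one identical RPM set.
dnf -y upgrade --downloadonly --downloaddir=/my/cached/updates

# Day 0, dev/test only: apply immediately.
dnf -y upgrade /my/cached/updates/*.rpm

# Day 7, prod: apply exactly the staged set. Pointing dnf at local RPM
# files pins the transaction to those packages, so updates published in
# the intervening week are not pulled in.
dnf -y upgrade /my/cached/updates/*.rpm
```

For a larger fleet, serving the staged directory as a local repository (createrepo_c plus a .repo file pointing at it) scales better than copying RPMs to every host.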
b_laoshi
(131 rep)
Dec 10, 2024, 09:40 PM
• Last activity: Dec 11, 2024, 12:22 AM
Showing page 1 of 20 total questions