Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
0
answers
18
views
Disabling local accounts on azure linux virtual machines
We’re enforcing Azure Entra authentication across all Linux VMs, so we’ll disable all local accounts via a custom script. The script will also create a single “break-glass” user with a randomly generated password that remains unknown. If anyone ever needs to use local credentials, they must use the password-reset tool in the VM’s Help section to set a new password for that account before logging in.
I’m using the script below, and in my testing it’s worked exactly as intended with no unexpected behavior. Since I’m not a Linux expert, I’d appreciate any feedback from the community on potential issues or best practices I should consider.
I intend to block all local authentication, permitting password-based access solely for the break-glass user.
#!/usr/bin/env bash
set -euo pipefail
# Configuration
CFG="/etc/ssh/sshd_config"
BAK="${CFG}.bak"
BKP_USER="breakglassuser"
NOLOGIN="$(command -v nologin || echo '/sbin/nologin')"
# 1) Create (or unlock) the break-glass user with a bash shell
if ! id -u "$BKP_USER" &>/dev/null; then
    PW="$(openssl rand -base64 32)"
    useradd -m -s /bin/bash "$BKP_USER"
    echo "$BKP_USER:$PW" | chpasswd
fi
usermod -U "$BKP_USER"
usermod -s /bin/bash "$BKP_USER"
# 2) Backup sshd_config (only once)
if [ ! -f "$BAK" ]; then
    cp "$CFG" "$BAK"
fi
# 3) Disable password & challenge-response authentication globally
if grep -qE '^[[:space:]]*#?[[:space:]]*PasswordAuthentication' "$CFG"; then
    sed -i -E 's@^[[:space:]]*#?[[:space:]]*PasswordAuthentication.*@PasswordAuthentication no@' "$CFG"
else
    echo 'PasswordAuthentication no' >> "$CFG"
fi
if grep -qE '^[[:space:]]*#?[[:space:]]*ChallengeResponseAuthentication' "$CFG"; then
    sed -i -E 's@^[[:space:]]*#?[[:space:]]*ChallengeResponseAuthentication.*@ChallengeResponseAuthentication no@' "$CFG"
else
    echo 'ChallengeResponseAuthentication no' >> "$CFG"
fi
# 4) Ensure only the break-glass user can use password auth
# Remove any old exception block and append the new one
sed -i "/^Match User ${BKP_USER}\$/,\$d" "$CFG"
cat >> "$CFG" <<EOF

Match User ${BKP_USER}
    PasswordAuthentication yes
EOF

# 5) Restart the SSH daemon
if command -v systemctl &>/dev/null; then
    systemctl restart sshd || systemctl restart ssh
else
    service ssh restart || service sshd restart
fi
# 6) Lock & nologin all other local accounts (UID 1000–59999) except the break-glass user
awk -F: -v skip="$BKP_USER" '($3>=1000 && $3<60000 && $1!=skip){print $1}' /etc/passwd | while read -r user; do
    passwd -l "$user"
    usermod -s "$NOLOGIN" "$user"
done
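One safeguard worth considering before the restart step: validate the edited sshd_config and roll back on failure, so a bad edit can't lock everyone out of SSH. A minimal sketch — the helper name `validate_and_apply` is mine, and in the real script the validator passed in would be `sshd -t -f`:

```shell
# validate_and_apply CFG BAK VALIDATOR...: keep CFG if VALIDATOR accepts it,
# otherwise restore BAK over CFG and fail.
validate_and_apply() {
    local cfg="$1" bak="$2"
    shift 2
    if "$@" "$cfg"; then
        return 0            # new config is valid, keep it
    fi
    cp "$bak" "$cfg"        # roll back the bad edit
    return 1
}

# In the script above this would be called before restarting sshd, e.g.:
#   validate_and_apply "$CFG" "$BAK" sshd -t -f
```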
Dev Reddy
(21 rep)
May 23, 2025, 07:29 PM
0
votes
0
answers
142
views
What are the correct iptables rules for an IPsec site-to-site?
I am trying to configure an IPsec site-to-site tunnel using strongSwan on Debian 12.
The VPN is up, as shown below:
Status of IKE charon daemon (strongSwan 5.9.8, Linux 6.1.0-30-cloud-amd64, x86_64):
uptime: 18 hours, since Jan 22 14:58:17 2025
malloc: sbrk 2125824, mmap 0, used 1366096, free 759728
worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 5
loaded plugins: charon aesni aes rc2 sha2 sha1 md5 mgf1 random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs12 pgp dnskey sshkey pem openssl pkcs8 fips-prf gmp agent xcbc hmac kdf gcm drbg attr kernel-netlink resolve socket-default connmark forecast farp stroke updown eap-identity eap-aka eap-md5 eap-gtc eap-mschapv2 eap-radius eap-tls eap-ttls eap-tnc xauth-generic xauth-eap xauth-pam tnc-tnccs dhcp lookip error-notify certexpire led addrblock unity counters
Listening IP addresses:
Connections:
server: ... IKEv2, dpddelay=30s
server: local: [] uses pre-shared key authentication
server: remote: [] uses pre-shared key authentication
server: child: === TUNNEL, dpdaction=start
Security Associations (1 up, 0 connecting):
server: ESTABLISHED 2 hours ago, []...[]
server: IKEv2 SPIs: , pre-shared key reauthentication in 4 hours
server: IKE proposal: AES_CBC_256/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/MODP_2048
server{27}: INSTALLED, TUNNEL, reqid 1, ESP SPIs:
server{27}: AES_CBC_128/HMAC_SHA2_256_128, 0 bytes_i, 0 bytes_o, rekeying in 42 minutes
server{27}: ===
But I believe I am having issues configuring the correct iptables rules.
I tried following the recommendations shown here, as well as the steps from multiple tutorials (I reached page 2 of Google).
I have route to the client VPN range
dev vti0 scope link
The interface is UP
4: vti0@NONE: mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip peer
inet6 /64 scope link
valid_lft forever preferred_lft forever
What I noticed is: when I configure an SNAT to translate the VPN traffic going out of my vti interface, the traffic does not reach the interface; tcpdump cannot capture anything. With any other rule (or no rule at all), I do get traffic in tcpdump:
tcpdump -i vti0 -n
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on vti0, link-type RAW (Raw IP), snapshot length 262144 bytes
13:41:14.939880 IP .43140 > .22: Flags [S], seq 3230932500, win 64800, options [mss 1440,sackOK,TS val 2922516277 ecr 0,nop,wscale 7], length 0
13:41:15.942056 IP .43140 > .22: Flags [S], seq 3230932500, win 64800, options [mss 1440,sackOK,TS val 2922517280 ecr 0,nop,wscale 7], length 0
But I get a "No route to host" error when trying to access a server that I know is available on the other site.
I understand that the issue with "no route to host" **could be** that my outbound traffic is using my internal IP and not one in my VPN range, but as I explained before, once I apply SNAT the traffic stops and the connection times out.
A few of the rules I tried applying (not all at the same time, of course)
iptables -A FORWARD -i eth0 -o vti0 -j ACCEPT
iptables -A FORWARD -i vti0 -o eth0 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -d -j SNAT --to-source
iptables -t nat -A POSTROUTING -o vti0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -s -j SNAT --to-source
iptables -t nat -A POSTROUTING -o vti0 -d -j SNAT --to-source
iptables -t nat -A POSTROUTING -o vti0 -d 0.0.0.0/0 -j SNAT --to-source
iptables -t nat -A POSTROUTING -s -d -J MASQUERADE
iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT
iptables -t nat -A POSTROUTING -o vti0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t mangle -A FORWARD -m policy --pol ipsec --dir in -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
iptables -t mangle -A FORWARD -m policy --pol ipsec --dir out -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
iptables -t nat -A POSTROUTING -o vti0 -d -j SNAT --to-source
iptables -t nat -A POSTROUTING -o eth0 -d -j SNAT --to-source
iptables -t nat -A POSTROUTING -s -o vti0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t nat -A POSTROUTING -s -o vti0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o vti0 -j SNAT --to-source
iptables -t nat -A POSTROUTING -o vti0 -j SNAT --to-source
iptables -t nat -A POSTROUTING -s -o vti0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o vti0 -j SNAT --to-source
iptables -t nat -A POSTROUTING -d -j ACCEPT
iptables -t mangle -A OUTPUT -d -j MARK --set-mark 1
iptables -t nat -A POSTROUTING -m mark --mark 1 -j SNAT --to-source
iptables -t nat -A POSTROUTING -o vti0 -s -d -j SNAT --to-source
iptables -t nat -A POSTROUTING -o vti0 -s -d -j SNAT --to-source
I don't know whether it's relevant, but my server is in Azure, with IP forwarding enabled both in Azure and in Debian. There are no outbound rules in Azure.
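Not an answer, but for reference: the commonly cited baseline for a route-based (VTI) tunnel is just plain forwarding plus MSS clamping — with a vti interface the kernel handles the IPsec encapsulation, so no POSTROUTING rule is usually needed when the local subnet is already part of the negotiated traffic selectors. A sketch (interface names assumed from the question):

```
iptables -A FORWARD -i vti0 -j ACCEPT
iptables -A FORWARD -o vti0 -j ACCEPT
# clamp TCP MSS to the tunnel path MTU to avoid blackholed large packets
iptables -t mangle -A FORWARD -o vti0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```

If the remote side only accepts a specific source range, any SNAT has to translate to an address inside that range; one possible explanation for traffic vanishing after SNAT is that the translated source no longer matches what the tunnel or the remote peer expects.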
Tammy
(1 rep)
Jan 23, 2025, 09:58 AM
• Last activity: Jan 23, 2025, 02:10 PM
0
votes
0
answers
67
views
Exactly what changes does AADLoginForLinux make to your OS configuration when it installs?
Microsoft tell us that we can enable the AADLoginForLinux extension on an Azure Linux VM, and it will allow you to ssh login with AAD MFA (provided you use their own ssh client, of course, as it requires an 'extension' to the ssh protocol).
However, exactly what changes are made to the Linux OS configuration to achieve this? There seems to be no documentation of this, and our system is not working for AAD login.
So far, I can see that it
* Adds 'aad' as an option in nsswitch.conf to manage users and groups
* Adds the pam_aad.so module to your PAM system-auth and password-auth, in the 'auth' and 'session' phases, as a 'required' module
* Requires outbound connection via https to a number of microsoft sites (presumably to verify tokens passed by 'az ssh')
* Possibly other changes to sshd_config?
Logs seem to go into /var/log/security but they are pretty basic.
Can anyone provide a definitive list of the configuration changes made/required by this extension?
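In the absence of documentation, one practical way to build such a list yourself is to snapshot a reference timestamp before enabling the extension and diff afterwards. A minimal sketch (the marker path is arbitrary):

```shell
# Before enabling the extension, record a reference timestamp:
touch /tmp/pre-extension-marker

# ... enable the AADLoginForLinux extension here ...

# Afterwards, list every file under /etc modified since the marker
# (stderr silenced for unreadable directories):
find /etc -type f -newer /tmp/pre-extension-marker 2>/dev/null || true
```

Running the same `find` over /var/lib and /usr/lib would catch non-config changes too, at the cost of more noise.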
Steve Shipway
(376 rep)
Oct 10, 2024, 09:14 PM
2
votes
1
answers
20980
views
mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error
I try to mount a disk added to an Azure VM,
but it fails with the error message:
mount: /mydirectory: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.
Here is what I did: I created a disk and attached it to an Azure VM.
I then ran the command sudo fdisk -l, which gave me this output:
Disk /dev/sda: 300 GiB, 322122547200 bytes, 629145600 sectors
Disk model: Virtual Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Then, I called the command sudo fdisk /dev/sda with these steps:
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xeacbff1b.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-629145599, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-629145599, default 629145599):
Created a new partition 1 of type 'Linux' and of size 300 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
After the fdisk call I tried to mount it (sudo mount /dev/sda1 /mydirectory), but it failed with the error message above:
mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.
How can I solve this problem?
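For what it's worth, the sequence shown never creates a filesystem on the new partition — fdisk only writes the partition table, so mount has nothing to recognize until a mkfs has run. The effect can be demonstrated without a real disk on a file-backed image (paths here are made up; running mkfs.ext4 on the actual /dev/sda1 would erase anything on it):

```shell
# Create a small file-backed "disk" and give it a filesystem with mkfs:
truncate -s 64M /tmp/demo.img
mkfs.ext4 -q -F /tmp/demo.img   # without this step, mounting fails the same way
# The ext superblock magic (0xEF53, little-endian) now sits at offset 1080:
dd if=/tmp/demo.img bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
```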
MrPython
(219 rep)
Mar 7, 2023, 01:40 PM
• Last activity: Sep 10, 2024, 02:12 PM
0
votes
1
answers
99
views
Serial Console is not accessible from VMs created from a custom AlmaLinux 9.3 image
I'm currently following the "RHEL 8+ using Hyper-V Manager" section in Microsoft Azure's documentation [link](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/redhat-create-upload-vhd?source=recommendations) to create RedHat Azure images. I've successfully created a modified AlmaLinux 9.2 image from a Hyper-V VM, adjusting the GRUB menu as follows:
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_CMDLINE_LINUX="crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M loglevel=3 console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
GRUB_TIMEOUT_STYLE=countdown
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
The AlmaLinux 9.2 image works perfectly, allowing me to create new Azure VMs without issue. However, I'm encountering difficulties when attempting to create an AlmaLinux 9.3 image from a Hyper-V VM. Specifically, the serial console remains blank after deploying an Azure VM from the custom AlmaLinux 9.3 image.
Interestingly, VMs created initially from an AlmaLinux 9.2 image and subsequently updated to AlmaLinux 9.3 maintain serial console accessibility. The problem seems specific to creating an Azure image directly from an AlmaLinux 9.3 Hyper-V VM.
Normally, I would create an AlmaLinux 9.2 Azure VM from the image and then upgrade it. However, I need to migrate some AlmaLinux 9.3 servers to the cloud. Has anyone else encountered issues creating AlmaLinux 9.3 images from AlmaLinux 9.3 Hyper-V VMs, or does anyone have insights into why creating Azure images from these VMs presents this challenge?
supmethods
(561 rep)
Jul 15, 2024, 04:03 PM
• Last activity: Jul 30, 2024, 08:38 AM
0
votes
2
answers
291
views
Azure VM (Debian 11.9) root directory 100% full...but it's not!
Please help, I am at the end of my tether with this stupid VM!
It's an Azure VM running Debian 11.9 with a primary disk of 128GB and a secondary disk of 100GB.
It is reporting that / is currently 100% full.
root@SYS-801:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 791M 86M 706M 11% /run
/dev/sdb1 126G 126G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sdc1 16G 8.0K 15G 1% /swap/mntold/resource
/dev/sdb15 124M 11M 114M 9% /boot/efi
//logs.windows.net/sys 5.0T 23G 5.0T 1% /syslogs
tmpfs 791M 0 791M 0% /run/user/1000
But it's not; I think something is hidden somewhere, but for the life of me I cannot find it.
I've spent 3 solid days trying different things, and this is the current state I am in.
The primary disk was originally 30GB and showing full, so I increased it in Azure, and still it's just eating up whatever I assign to it.
root@SYS-801:/# du -a -h --max-depth=1 | sort -hr
du: cannot access './proc/490298/task/490298/fd/3': No such file or directory
du: cannot access './proc/490298/task/490298/fdinfo/3': No such file or directory
du: cannot access './proc/490298/fd/4': No such file or directory
du: cannot access './proc/490298/fdinfo/4': No such file or directory
27G .
22G ./syslogs
3.8G ./var
1.1G ./opt
779M ./usr
236M ./home
88M ./run
77M ./boot
5.6M ./etc
68K ./tmp
68K ./root
16K ./lost+found
12K ./swap
4.0K ./srv
4.0K ./resource
4.0K ./media
4.0K ./
As you can see, nothing is showing any larger than 22G so what is using up all of the extra space?
The only odd thing I can see is that the secondary 100GB disk isn't showing as mounted to anything.
root@SYS-801:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
└─sda1 8:1 0 100G 0 part
sdb 8:16 0 128G 0 disk
├─sdb1 8:17 0 127.9G 0 part /
├─sdb14 8:30 0 3M 0 part
└─sdb15 8:31 0 124M 0 part /boot/efi
sdc 8:32 0 16G 0 disk
└─sdc1 8:33 0 16G 0 part /swap/resource
Does it matter that its appearing in the list first?
Along the way I have investigated the SWAP configuration within /etc/waagent.conf and turned off the swap feature to rule this out.
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=n
# Create and use SWAPfile on resource disk.
ResourceDisk.EnableSWAP=n
#Mount point for the resource disk
ResourceDisk.MountPoint=/swap/resource
#Size of the SWAPfile.
ResourceDisk.SWAPSizeMB=0
So I don't know what else could be clinging on to this space.
Rebooting doesn't change anything so I suspect it cannot be anything temporary.
As part of my cleanup I may have deleted too many directories from /, and I cannot create more:
root@SYS-801:/# ls
bin boot dev etc home lib lib32 lib64 libx32 lost+found media opt proc root run sbin srv swap sys syslogs tmp usr var
root@LIN-SYS-801-PRD:/# mkdir -p /mnt
mkdir: cannot create directory ‘/mnt’: No space left on device
root@SYS-801:/# cd /mnt/root
bash: cd: /mnt/root: No such file or directory
Any clues where I can go from here?
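Two classic causes make df and du disagree like this: files that were deleted while a process still holds them open (df counts their blocks, du can no longer see them), and data written into a directory before a filesystem was mounted on top of it (e.g. under /syslogs). The first can be demonstrated safely:

```shell
# A deleted-but-open file keeps consuming blocks until its last fd closes:
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=10 status=none
exec 3<"$tmp"                       # hold the file open on fd 3
rm "$tmp"                           # the name is gone, the 10 MiB are not
ls -l /proc/$$/fd | grep deleted    # such files show up as "(deleted)"
exec 3<&-                           # only now is the space returned
```

`lsof +L1` lists all such files system-wide; for data hidden under a mount point, bind-mounting / somewhere else (`mount --bind / /some/empty/dir`) lets du measure the underlying directory.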
Beckyboo
(31 rep)
Jul 8, 2024, 09:03 AM
• Last activity: Jul 23, 2024, 11:27 AM
0
votes
1
answers
334
views
Unsupported Locale setting error when running Azure pipelines
I am getting the error snippet below when I try to run a pipeline build from Azure.
2024-06-18T06:02:45.1075555Z Your configuration files at build have not been touched.
2024-06-18T06:02:45.3914362Z Traceback (most recent call last):
2024-06-18T06:02:45.3915081Z File "/build/sources/poky/bitbake/bin/bitbake", line 28, in
2024-06-18T06:02:45.3915488Z bb.utils.check_system_locale()
2024-06-18T06:02:45.3915863Z File "/build/sources/poky/bitbake/lib/bb/utils.py", line 619, in check_system_locale
2024-06-18T06:02:45.3916530Z locale.setlocale(locale.LC_CTYPE, default_locale)
2024-06-18T06:02:45.3916923Z File "/usr/lib/python3.10/locale.py", line 620, in setlocale
2024-06-18T06:02:45.3917257Z return _setlocale(category, locale)
2024-06-18T06:02:45.3917582Z locale.Error: unsupported locale setting
2024-06-18T06:02:45.5578186Z Traceback (most recent call last):
2024-06-18T06:02:45.5578712Z File "/build/sources/poky/bitbake/bin/bitbake", line 28, in
2024-06-18T06:02:45.5579048Z bb.utils.check_system_locale()
2024-06-18T06:02:45.5579380Z File "/build/sources/poky/bitbake/lib/bb/utils.py", line 619, in check_system_locale
2024-06-18T06:02:45.5579767Z locale.setlocale(locale.LC_CTYPE, default_locale)
2024-06-18T06:02:45.5580111Z File "/usr/lib/python3.10/locale.py", line 620, in setlocale
2024-06-18T06:02:45.5580420Z return _setlocale(category, locale)
2024-06-18T06:02:45.5580696Z locale.Error: unsupported locale setting
I followed these links and have set the locale correctly.
Output of locale:
LANG=en_US.UTF-8
LANGUAGE=en_IN:en
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=en_US.UTF-8
Output of locale -a:
C
C.utf8
en_AG
en_AG.utf8
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IL
en_IL.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US
en_US.iso88591
en_US.iso885915
en_US.utf8
en_ZA.utf8
en_ZM
en_ZM.utf8
en_ZW.utf8
POSIX
In /etc/default/locale and /etc/locale.gen I can see "en_US.UTF-8" correctly.
I have also tried locale-gen and sudo dpkg-reconfigure locales as per the attached links.
Locally on the server/terminal, I am able to run bitbake. I can see the correct LC_CTYPE/LC_ALL variables/values being set/reflecting correctly.
bitbake machine1-image
works correctly when I login to the terminal.
But when I queue the build from Azure pipeline, it fails with the error.
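One thing that often differs between an interactive login and a pipeline run: the agent's non-interactive shell skips the profile files where the locale gets set, so bitbake sees an unset or unsupported LC_CTYPE. A common workaround (an assumption here, not verified against this agent) is to export the locale explicitly at the top of the pipeline step, given that locale -a shows en_US.utf8 is generated:

```shell
# Exported in the pipeline step itself, not in ~/.profile, so the
# non-interactive agent shell picks it up before bitbake starts:
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
```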
**Links Followed:**
https://unix.stackexchange.com/questions/169039/problem-with-locale-setting-locale-failed
https://stackoverflow.com/questions/65525716/why-do-i-get-a-locale-error-even-though-it-is-set
https://unix.stackexchange.com/questions/626916/how-to-set-locale-correctly-manually/626919
https://stackoverflow.com/questions/42090237/change-locale-setting-in-yocto?rq=3
https://stackoverflow.com/questions/78006534/python-locale-raising-unsupported-locale-setting
**Misc Details:**
* IBM server
* 22.04 LTS
* Kirkstone/Yocto
Nathan
(31 rep)
Jun 18, 2024, 06:22 AM
• Last activity: Jun 18, 2024, 07:22 AM
0
votes
1
answers
85
views
Forcing Server Reboots if not rebooted within X days of patching (Azure Hosted)
I host numerous RHEL 8 servers in the Azure environment. I am using a Red Hat Satellite VM to patch servers and then I rely on the owners to reboot once patching is complete. I want to implement a solution that alerts the POC that they need to reboot and force reboots the server if it has not been rebooted within X number of days since the system was patched. I am looking to see if anyone else has done this and if they performed this automation on the Azure platform (PowerShell) or if they utilized something like the Ansible Automation Platform (AAP). If I didn't have to get the POC from Azure and connect it to another system for notification, I would just write a bash script utilizing the patching groups and system uptimes.
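Whatever the orchestration layer ends up being, the per-host check itself is small; a sketch as a pure function so the policy is testable. On a RHEL host the two inputs would come from /proc/uptime and from rpm -qa --qf '%{INSTALLTIME}\n' | sort -n | tail -1 — that wiring is an assumption and not shown here:

```shell
# needs_forced_reboot UPTIME_SECS SECS_SINCE_LAST_PATCH MAX_DAYS
# True when the host booted before it was last patched (reboot pending)
# and the patch is older than the MAX_DAYS grace period.
needs_forced_reboot() {
    local up="$1" since_patch="$2" max_days="$3"
    [ "$up" -gt "$since_patch" ] && [ "$since_patch" -gt $(( max_days * 86400 )) ]
}
```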
Jared
(9 rep)
Apr 16, 2024, 06:11 PM
• Last activity: Jun 10, 2024, 12:48 PM
1
votes
1
answers
65
views
Is there a way to allow responding to an IPv6 packet sent from a link local address to a regular IPv6 address?
We're currently configuring an Azure load balancer to work with a Linux-based backend supporting both IPv4 and IPv6. As part of the setup, the load balancer performs health checks at regular intervals by attempting to establish TCP connections. However, our backend's health checks keep failing due to IPv6 packet transmission issues.
**Context**
* The load balancer doesn't support NAT64, necessitating IPv6 support on both ends.
* Our backend is configured with IPv6, and the web service is actively listening on the required ports.
* Validating connectivity by querying the service from one backend to another confirms proper configuration.
**Problem:**
The issue arises when the Azure load balancer initiates health checks using IPv6 packets sent from a link-local address to a non-link-local address, resulting in failures.
**Request:**
We're seeking guidance on enabling our backend to respond to IPv6 packets sent from link-local addresses to regular IPv6 addresses. Are there any configurations or settings we need to adjust to accommodate this scenario within the Azure load balancer environment?
**Additional Details:**
This situation is nicely explained in the following network capture taken on one of the backend VMs. The Azure load balancer performs its health probe from the link-local address `fe80::1234:5678:9abc`.
| **Source** | **Destination** | **Protocol** | **Length** | **Info** |
|-----------------------|-----------------------|--------------|------------|------------------------------------------------------------------|
| fe80::1234:5678:9abc | 2404:f800:8000:122::4 | TCP | 86 | 58675 → 80 [SYN] Seq=0 Win=64800 Len=0 MSS=1440 WS=256 SACK_PERM |
| 2404:f800:8000:122::4 | fe80::1234:5678:9abc | ICMPv6 | 134 | Destination Unreachable (Beyond scope of source address) |
This packet is sent to the regular IPv6 address assigned to this VM's interface. However, the backend refuses this packet and immediately responds with an ICMPv6 error saying the destination is beyond the scope of the source address. We cannot change how the Azure load balancer works, so we are wondering if we can apply a workaround on the backend to allow responding to this packet.

Brecht Vercruyce
(11 rep)
Apr 10, 2024, 08:43 AM
• Last activity: Apr 10, 2024, 03:34 PM
0
votes
0
answers
142
views
How to get a Linux VM to pass MSAL SSO authentication?
I have a React app that uses the [Microsoft Authentication Library](https://learn.microsoft.com/en-us/entra/identity-platform/msal-overview) (MSAL) to identify users. It works just fine on my Windows machine; however, for a variety of reasons, I'm trying to get a Linux VM set up to do certain development tasks. Unfortunately, its user isn't a "real" Active Directory user, so the app just hangs when I get to that step.
In Windows, I can simply right-click on an application and impersonate another user to run it. However, I haven't been able to figure out a good way to do that in Linux (especially given that it must be an Active Directory user). (In this case, I would like to do something similar with the web browser).
Does anyone know how I can do that?
I am using an Ubuntu machine if that makes any difference.
EJoshuaS - Stand with Ukraine
(119 rep)
Apr 1, 2024, 06:59 PM
1
votes
0
answers
52
views
SSH authentication
I am trying to make SSH on a VM require authentication before the user is connected to the session. Right now I have an Azure MFA app that shows a password once the user authenticates. I want this to be the password the user enters to connect, but it will be different every time, to keep the VM secure.
So the problem is: how can I make the password that the MFA app generates be the password for the SSH connection, with the password changed automatically after each session?
Let me explain the process:
1. the user types the command "ssh hostname@ip"
2. the terminal then prompts the user to follow a link where they can authenticate themselves. When they have successfully authenticated, they get the password for this SSH connection
3. they enter the password from the link.
I have the MFA app, and I have an SSH connection established. I tried to use the ForceCommand option in the sshd_config file, but the script runs after the ssh is established. I want it before it is established.
Tavleen Aneja
(11 rep)
Mar 29, 2024, 08:34 PM
0
votes
0
answers
66
views
/bin/bash accidentally moved to another location
I have an Azure VM, and while I was moving some files from my present working directory with the mv command, I accidentally missed the . in front of my source and instead wrote
mv /* /destination/path
This moved some files from / (including the /bin directory) to the destination.
I am locked out of my VM now, as I cannot SSH into a Bash session there because the shell no longer exists at /bin/bash (an absolute blunder on my side).
I am here to understand if there's any way to regain access to this VM. I do not have any other shell installed.
Amit Sharma
(101 rep)
Mar 8, 2024, 08:24 PM
• Last activity: Mar 12, 2024, 10:23 AM
1
votes
1
answers
524
views
Azure CLI bash loop not working
I'm trying to learn bash and become less PowerShell dependent but running into issues with what seems like an easy loop.
Below is my attempt to get the loop to return each VM result with its associated name, id, and tag.
Any help is much appreciated.
#Get a list of VMs with their name, id, and tags from Azure
r=$(az vm list -g lab.rg1 --query "[].{name:name, id:id, tags:tags}") #--output tsv)
#take that list and do something. Currently just trying to echo each VM with it's name,id, and tags.
while read r
do
echo $r
done
echo All Done
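For what it's worth, the usual stumbling block with this pattern is that `while read r` reads from stdin (and reuses the name r), so it never sees the captured variable. Feeding the variable in with a here-string and a distinct loop variable makes the loop fire once per line — a sketch with stand-in data in place of the az output:

```shell
# Stand-in for: r=$(az vm list -g lab.rg1 --query "..." --output tsv)
r=$'vm1\tid1\ttag1\nvm2\tid2\ttag2'

while IFS= read -r line; do      # IFS= and -r preserve each line verbatim
    echo "$line"
done <<< "$r"                    # here-string feeds $r to the loop's stdin
echo "All Done"
```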
Braden
(11 rep)
Feb 22, 2024, 05:53 PM
• Last activity: Feb 22, 2024, 05:57 PM
0
votes
1
answers
267
views
Bind: Delegating a Subdomain With Its Own SOA
OS: Oracle Linux 8.9
Bind version: 9.11.36 (installed from rpm)
I am having trouble creating a subdomain (powerwebappuat.lereta.com) delegated to Azure servers. Normally this is not difficult. I would just add a delegation in the data file like this and everyone is happy:
powerwebappuat IN NS ns1-38.azure-dns.com.
powerwebappuat IN NS ns2-38.azure-dns.net.
powerwebappuat IN NS ns3-38.azure-dns.org.
powerwebappuat IN NS ns4-38.azure-dns.info.
This time, however, Microsoft requires that I delegate powerwebappuat with a different SOA than the parent zone. The ticket is very specific:
> SOA Records:
> Email: azuredns-hostmaster.microsoft.com
> Host: ns1-38.azure-dns.com.
> Refresh: 3600
> Retry: 300
> Expire: 2419200
> Minimum TTL: 300
> Serial number: 1

Current SOA for the parent zone is:
$ORIGIN lereta.com.
$TTL 1200 ; 20 minutes
@ IN SOA ns1.taxandflood.net. dnsadmin.taxandflood.com. (
1539796885 ; serial
3h ; refresh
1h ; retry
14d ; expire
1h ; minimum
)
I tried adding a new $ORIGIN with its own SOA:
$ORIGIN powerwebappuat.lereta.com.
$TTL 1200
@ IN SOA ns1-38.azure-dns.com. azuredns-hostmaster.microsoft.com (
1 ; serial
1h ; refresh
5m ; retry
28d ; expire
5m ; minimum
)
NS ns1-38.azure-dns.com.
NS ns2-38.azure-dns.net.
NS ns3-38.azure-dns.org.
NS ns4-38.azure-dns.info.
While named-checkconf doesn't complain about the above, when I try to sign the zone, named-checkzone returns an error which, by design, halts my script.
data/lereta.com:251: SOA record not at top of zone (powerwebappuat.lereta.com)
zone lereta.com/IN: loading from master file data/lereta.com failed: not at top of zone
zone lereta.com/IN: not loaded due to errors.
I can find plenty of examples of delegating subdomains with Bind but none with advice on how to make such a delegation have a different SOA than the parent zone.
Does anyone have an idea how this can be done?
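For reference, a parent zone normally carries only the NS records (and glue, if the name servers were inside the child zone) for a delegation; the child's SOA is served by the delegated name servers themselves, never by the parent file. A sketch of the parent-side fragment under that assumption:

```
; In data/lereta.com -- delegation only. No SOA for the child goes here;
; the SOA Microsoft specifies is the one the Azure servers will answer with.
powerwebappuat   IN NS ns1-38.azure-dns.com.
powerwebappuat   IN NS ns2-38.azure-dns.net.
powerwebappuat   IN NS ns3-38.azure-dns.org.
powerwebappuat   IN NS ns4-38.azure-dns.info.
```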
Stephen Carville
(3 rep)
Feb 13, 2024, 07:25 PM
• Last activity: Feb 13, 2024, 08:13 PM
0
votes
1
answers
124
views
How to identify the cause of a process reading from disk at rates above 400 Mb/s
I am managing some virtual machines in Azure and a few times a week, at apparently random times, some of them start **I/O reading at speeds above 400 Mb/s**. This occurs for one machine at a time, not simultaneously. These machines use SSDs as hard drives, but those read speeds seem abnormal.
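One way to catch the culprit when the spike happens is to rank processes by the cumulative `read_bytes` counter that Linux exposes in `/proc/<pid>/io`. A minimal sketch (Linux only; entries for other users' processes are skipped unless run as root):

```python
import os

def top_readers(n=5):
    """Rank processes by cumulative bytes read from storage,
    using the read_bytes counter in /proc/<pid>/io."""
    stats = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/io") as f:
                fields = dict(line.split(": ", 1) for line in f.read().splitlines())
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue  # process exited, or its io file is not readable by us
        stats.append((int(fields["read_bytes"]), pid, comm))
    return sorted(stats, reverse=True)[:n]

if __name__ == "__main__":
    for read_bytes, pid, comm in top_readers():
        print(f"{read_bytes / 1e6:10.1f} MB  pid {pid:>6}  {comm}")
```

Running this twice a few seconds apart and diffing the numbers shows the current read rate per process; tools like `iotop` or `pidstat -d` do the same sampling interactively.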
Stacker12345
(3 rep)
Jan 12, 2024, 10:40 AM
• Last activity: Jan 12, 2024, 12:02 PM
1
votes
1
answers
63
views
Suse VM Full disk encryption, storing keys in Azure
We currently have a solution that encrypts our data partition, with the keys stored on the system partition to allow booting without user interaction. The system partition itself is not encrypted, because we have no clue how to integrate our Suse VMs into Active Directory, which could store the encryption keys in the Azure Key Vault.
Has anyone else ever done this before and if yes: how?
Fabby
(5549 rep)
Jan 3, 2024, 10:06 AM
• Last activity: Jan 6, 2024, 09:21 AM
0
votes
0
answers
109
views
Incorrect MAC error while using ssh from windows to linux
I am trying to ssh from a Windows system to an Azure CentOS VM, but I am getting an "incorrect MAC" error. On further checking I identified that it only happens when the MAC algorithm used is umac-128-etm@openssh.com; there is no issue connecting if I specify other algorithms. Moreover, I can connect to the VM from other Linux machines, and I can connect to other Linux machines from my Windows system, so I guess this has something to do with the OpenSSH versions on Linux and Windows.
I tried removing this MAC algorithm from the sshd_config file on my Azure VM using root access, and the connection succeeded for the time being, but after some time the MAC algorithm got added back to sshd_config automatically. I am not the one who deployed the VM and can't redeploy it. Is there any method to change this configuration and make sure it doesn't automatically get reset?
Can someone please help me resolve this issue?
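For anyone diagnosing a similar mismatch, the algorithms each side supports can be compared locally before touching the server config (a sketch; the host name in the commented line is hypothetical):

```shell
# List every MAC algorithm the local OpenSSH client can offer:
ssh -Q mac

# To reproduce the failure on demand, force the suspect MAC:
# ssh -m umac-128-etm@openssh.com azureuser@my-centos-vm
```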
Zez s Shin
(1 rep)
Jul 9, 2023, 06:18 AM
1
votes
1
answers
4160
views
Rsyslog producing errors when trying to write to mounted drive
Could anyone take a look at this and see if I've done something obviously wrong?
I am running Rocky Linux and attempting to set up a syslog server.
It's running on an Azure VM with an additional 2 TB data drive attached.
I have mounted the drive in Rocky and amended `fstab`, confirming its presence upon reboot, and I can write to it.
Rsyslog is all set up and configured, and works fine if I leave the default settings and allow logs to be sent to /var/log, but as soon as I point it to my data drive I get permission errors.
May 24 11:30:20 MyServer rsyslogd: error during config processing: Could not open dynamic file '/datadrive/syslogs/MyServer/rsyslogd.log' [state -3000] - discarding message [v8>
May 24 11:30:20 MyServer rsyslogd: error during config processing: omfile: creating parent directories for file '/datadrive/syslogs/MyServer/rsyslogd.log' failed: Permission denied
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 1.6G 148M 1.4G 10% /run
/dev/sda3 7.9G 1.8G 6.1G 23% /
/dev/sda2 994M 430M 564M 44% /boot
/dev/sda1 100M 7.0M 93M 7% /boot/efi
/dev/sdb1 16G 28K 15G 1% /mnt
tmpfs 769M 0 769M 0% /run/user/1000
/dev/sdc1 2.0T 15G 2.0T 1% /datadrive
And the two directories have the same permissions and owners
[myroot@MyServer log]$ pwd
/var/log
.......
drwx------. 2 root root 31 May 24 11:09 remote-device
[myroot@MyServer /]$ pwd
/
.....
drwx------. 3 root root 21 May 24 10:03 datadrive
The only thing I have noticed is that when I try to enter the data drive, `cd /datadrive` and `sudo cd /datadrive` do not work; I have to issue `sudo su` before changing directory to /datadrive.
Could this be what is causing the issue? Any thoughts would be appreciated.
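Two things worth checking here: the `drwx------` mode means only root can traverse either directory (which also explains why plain `cd /datadrive` fails), and on RHEL-family systems like Rocky an SELinux label mismatch is a frequent cause of "Permission denied" even when the mode bits match. A diagnostic sketch (the /datadrive lines are commented out because that path only exists on the server in question):

```shell
# Compare mode bits and SELinux contexts of the working and failing trees:
ls -ldZ /var/log
# ls -ldZ /datadrive /datadrive/syslogs

# If the contexts differ, align the new tree with /var/log
# (assumes the default targeted policy; run as root):
# semanage fcontext -a -t var_log_t '/datadrive/syslogs(/.*)?'
# restorecon -Rv /datadrive/syslogs
```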
Beckyboo
(31 rep)
May 24, 2023, 11:42 AM
• Last activity: Jul 2, 2023, 04:37 AM
3
votes
1
answers
635
views
Can systemd-tmpfiles-setup be made to honor RequiresMountsFor?
I'm trying to use systemd-tmpfiles to manage files on the "temporary disk" of a Linux (CentOS Stream 8) VM on Azure. The `systemd-tmpfiles` configuration seems to be correct, as judged by running `systemd-tmpfiles --create` manually when the system is up. It is not working with the `systemd-tmpfiles-setup` service, however, in that that service creates the files in the mount point directory instead of on the mounted filesystem. Of course, that moots the whole exercise.
I presume that that is happening because `systemd-tmpfiles-setup` runs before the temporary disk is mounted, so I have attempted to resolve it by applying a `RequiresMountsFor` property via a configuration override:
**/etc/systemd/system/systemd-tmpfiles-setup.service.d/override.conf**
[Unit]
RequiresMountsFor=/mnt/resource
Systemd seems to recognize that, as judged by `systemctl list-dependencies systemd-tmpfiles-setup` listing the appropriate mount unit, but upon reboot, it still creates the wanted files in the mount point directory instead of on the mounted temporary disk.
Possibly it is relevant that the wanted mount unit does not have an explicit unit file; I am relying on systemd to generate the unit by scanning `/etc/fstab`, as it indeed seems to be doing.
**/etc/fstab**:
# ...
/dev/disk/cloud/azure_resource-part1 /mnt/resource auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
**What am I missing? Is there a good reason why what I'm doing shouldn't work?**
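One workaround worth testing is to leave the early-boot `systemd-tmpfiles-setup` unit alone and instead add a separate oneshot service ordered after the mount. A sketch (the unit name is hypothetical; `--prefix` restricts `systemd-tmpfiles` to entries under the given path):

```
# /etc/systemd/system/tmpfiles-resource.service (hypothetical name)
[Unit]
Description=Apply tmpfiles entries on the Azure temporary disk
RequiresMountsFor=/mnt/resource

[Service]
Type=oneshot
ExecStart=/usr/bin/systemd-tmpfiles --create --prefix=/mnt/resource

[Install]
WantedBy=multi-user.target
```

Because this unit keeps its default dependencies, its `RequiresMountsFor` ordering applies in normal boot order rather than in the early-boot phase where `systemd-tmpfiles-setup` runs.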
John Bollinger
(632 rep)
Apr 11, 2023, 10:32 PM
• Last activity: Apr 11, 2023, 10:47 PM
6
votes
0
answers
272
views
How to guarantee temporary immutability of LVM2 LV at raw block level?
I inherited an Azure VM (Ubuntu 20.04) which has a 7 disk VG fully occupied by a RAID5 LV formatted as ext4.
I need to take backups and was hoping to use Azure Backup to snapshot the Azure Disks comprising the VG.
Azure Disk snapshots are not point-in-time consistent so I need to freeze the storage whilst the backup runs, both for filesystem integrity and LVM metadata reasons. My workload will tolerate this; I am trying to figure out the best method of making the raw disk blocks temporarily immutable.
`fsfreeze` - I tested freezing the filesystem, taking snapshots, unfreezing, then switching to the snapshots.
In my limited testing this is working ok and I don't see anything scary from LVM when the 'restored' disks are swapped back in, but I can only perform so many tests and *if* there is a 1% edge case where my disk metadata will be inconsistent I may not find it.
I'm apprehensive that I'm locking activity at such a high layer: no filesystem ops will occur whilst the `FIFREEZE` ioctl is active, but does this stop LVM from doing any kind of lower-level operation, e.g. metadata updates or RAID-related activity?
I then tried `dmsetup suspend /dev/mapper/my-lvol` and this *feels* like a better solution.
Test setup:
**fsfreeze**
1. echo 3 > /proc/sys/vm/drop_caches
2. sync ; sync (old habits die hard :)
3. fsfreeze -f /export
4. dd if=/dev/mapper/my-lvol of=/dev/null status=progress

The `dd` runs to completion. I accept this is valid because I'm not accessing via the frozen filesystem, but it makes me wonder whether LVM could still be doing things at a low level whilst I'm assuming my Azure Disks are unchanging.
**dmsetup suspend**
1. echo 3 > /proc/sys/vm/drop_caches
2. sync ; sync
3. dmsetup suspend /dev/mapper/my-lvol
4. dd if=/dev/mapper/my-lvol of=/dev/null status=progress
The `dd` blocks as long as the suspend is in place. I can still `dd` the `rmeta` and `rimage` devices, but I sort of expected that.
With the `dmsetup` option I get a hung-task syslog warning for `jbd2`. The stack trace shows it's trying to commit a journal transaction (`jbd2_journal_commit_transaction()`), which both reassures me that the LV is **really** locked, but also concerns me that I'm snapshotting the filesystem in an inconsistent state and it might need to replay the journal should we ever roll back to the snapshots. Our RPO will permit some rollback but ideally I'd like to design a solution which removes this risk.
**Options I've discarded**
1. File-based backups: possible, but setup & management seemed more complicated than freezing for snapshots did - to begin with!
2. Temporarily snapshotting the LV and backing up from that. The VG is full and I'd really prefer not to add more disk/resize VG/etc.
**Questions**
I would really appreciate any input here. As you can tell, I'm at the edge of (and possibly beyond) my understanding of Linux filesystems/block IO.
1. Overall, does freezing/suspending seem like a workable solution to get point-in-time consistent snapshots?
2. Am I still not looking deeply enough - just because `jbd2` is unable to write a transaction, could `lvm` or `dm` still be doing metadata updates at a lower level?
Thanks,
tim
Tim Matthews
(61 rep)
Mar 21, 2023, 03:15 PM
Showing page 1 of 20 total questions