Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
2626
views
Make Python3 default without breaking yum in RHEL7
> **What specific syntax needs to be changed or added to the below in order for commands calling `python` in a RHEL7 VM to be interpreted using Python3 WITHOUT breaking programs like `yum` that require Python2?**
**FIRST ATTEMPT:**
Our first attempt was to add the following 2 lines to the very end of the cloud-init startup script which instantiates the VM:
rm /usr/bin/python
ln -s /usr/bin/python3 /usr/bin/python
**ERROR THAT RESULTED:**
The problem is that adding the above two lines to the end of the cloud-init startup script causes `yum` commands to break when `yum` is called afterwards as follows:
$ sudo yum update -y
File "/bin/yum", line 30
except KeyboardInterrupt, e:
^
SyntaxError: invalid syntax
$
**TOGGLING THE ERROR:**
We can turn off the error by removing the above 2 lines from the cloud-init startup script and re-instantiating a new replacement VM. This isolates the source of the problem, but we still have the problem of how to default `python` to Python3 without breaking apps like `yum`.
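(Not an answer from the thread, just a commonly suggested direction, sketched under the assumption that `yum`'s scripts use the absolute shebang `#!/usr/bin/python`: leave `/usr/bin/python` untouched and shadow it earlier in `$PATH`, so interactive `python` resolves to Python 3 while absolute-path shebangs are unaffected.)

# Sketch only: leave /usr/bin/python (Python 2) in place so yum's
# absolute-path shebang keeps working, and shadow it for shells via
# /usr/local/bin, which normally precedes /usr/bin in PATH.
ln -s /usr/bin/python3 /usr/local/bin/python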
CodeMed
(5357 rep)
Jul 7, 2020, 05:24 PM
• Last activity: Jul 27, 2025, 02:04 AM
1
votes
0
answers
147
views
Debian 12 cloud image won't let me ssh unless first boot already has NIC available
I have a little script that automates setting up a local VM. Basically when I want to create a VM it sets up some cloud-init configuration, boots the guest once initially for setup (with a "cdrom" attached), and then shuts it down ahead of use.
With the Debian 11 (bullseye) cloud images the guest init process worked fine when passing `-nic none` to the qemu process, i.e. an offline setup worked without issue. It would configure my ssh key and upon second launch of the VM (without the cloud-init "cdrom" attached) everything worked fine and I could connect.
But with Debian 12 (bookworm) cloud images, my same setup process runs into two issues:
* the initial boot hangs at "Starting systemd-networkd-…it for Network to be Configured..." for a few minutes before proceeding
* something then changes in **subsequent** boots in that the guest SSH service won't respond to connections
Is this more likely a change on the cloud-init side (my old bullseye base image logs "Cloud-init v. 20.4.1" and the new one logs "Cloud-init v. 22.4.2"), in that it somehow force-configures a non-working network on first boot? Or would this be something more generic to Debian 11 → 12 itself, in that its own network handling now breaks when an interface isn't initially available?
## Update
I've at least uncovered two more clues:
When the machine has a NIC on *first* boot, the network interface ends up known as `enp0s1`. Whereas if only the later boots have networking, then the interface gets named `eth0`. This doesn't seem like a super important difference though in light of the second…
The other thing I've discovered is that if I let cloud-init *continue* to load on every boot (specifically by **not** doing a `touch /etc/cloud/cloud-init.disabled` during initial configuration) then SSH access does work on subsequent boots. [i.e. even when the interface is still now `eth0` rather than `enp0s1` as it would have been.]
So this does point to something that cloud-init now "needs to" do in later versions before networking really will work. But what exactly? Why was Debian 11 fine to bring up networking on its own but now Debian 12 has to have cloud-init set up for it?
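(A hedged experiment, not a confirmed explanation: cloud-init documents a knob for disabling just its network configuration, quoted verbatim in the header of the files it generates, so one way to test the theory is to keep cloud-init enabled but frozen out of networking.)

# Sketch: instead of disabling cloud-init wholesale with
# /etc/cloud/cloud-init.disabled, tell it to stop managing the network
# so whatever it configured on first boot is left alone afterwards.
cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network: {config: disabled}
EOF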
natevw
(194 rep)
Mar 14, 2025, 07:07 PM
• Last activity: Mar 14, 2025, 07:34 PM
1
votes
1
answers
116
views
ubuntu-live-server installation through PXE user-data config error
I am trying to install Ubuntu live server. Everything works fine until the system requests the cloud-init configs. I have tried changing my user-data file several times. Either it throws an error "An error occurred. Press enter to start shell" (image 1), or, after I change the user-data configuration file, it gets past the waiting-for-cloud-init step, loads (I suppose) the user-data file, and then throws another error:
Here is my user-data file:


#cloud-config
# Locale and Timezone
locale: en_US.UTF-8
timezone: UTC

# Preserve the hostname
preserve_hostname: true

# Users configuration
users:
  - name: user
    gecos: ubuntu-server
    groups: adm, cdrom, dip, lxd, plugdev, sudo
    lock_passwd: false
    passwd: some-hashed-password
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash

ssh_pwauth: true

# Disk setup for flexible sizes
# The partitioning will use the entire disk size
disk_setup:
  /dev/sda:
    table_type: gpt
    layout: true
    overwrite: false
    partition:
      - size: 1024   # 1GB EFI partition
        type: 0xEF00 # EFI System Partition
      - size: 2048   # 2GB for /boot
        type: 0x8300 # Linux Filesystem for /boot
      - size: -1     # Use remaining space for root (LVM)

# LVM setup to use remaining disk space
lvm:
  vg:
    ubuntu-vg:
      devices:
        - /dev/sda3  # Third partition is for LVM

fs_setup:
  - label: boot
    filesystem: ext4
    device: /dev/sda2  # Boot partition
  - label: root
    filesystem: ext4
    device: /dev/mapper/ubuntu--vg-ubuntu--lv  # LVM root partition

# Mount points configuration
mounts:
  - [ /dev/sda2, /boot ]
  - [ /dev/mapper/ubuntu--vg-ubuntu--lv, / ]

# Growpart to automatically resize partitions
growpart:
  mode: 'auto'
  devices: ['/']
resize_rootfs: true  # Automatically resize the root filesystem

# Network setup (adjust the interface as needed)
network:
  version: 2
  ethernets:
    ens160:
      dhcp4: true

# Packages to install
packages:
  - openssh-server
  - htop
  - curl

# Run custom commands after first boot
runcmd:
  - echo "System successfully initialized!"
  - apt-get update && apt-get upgrade -y

# Reboot after cloud-init is done
power_state:
  mode: reboot

# Final message after boot
final_message: "The system is ready! You can now log in."
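(An observation on the format, not a verified fix: the Ubuntu live-server installer, subiquity, expects its unattended answers inside an `autoinstall:` section of the user-data; a plain cloud-config without that key can leave the installer waiting for interactive input. A minimal sketch with placeholder values:)

#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu-server          # placeholder
    username: user                   # placeholder
    password: some-hashed-password   # placeholder (mkpasswd -m sha-512)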
Could someone suggest how to figure this out and make it work?
AdkhamSec
(13 rep)
Sep 9, 2024, 12:58 PM
• Last activity: Sep 9, 2024, 01:43 PM
0
votes
1
answers
584
views
Debian 12 Cloud Image Deployment Issue - Growpart cannot find grep and other basic utilities
I created [a tutorial](https://blog.programster.org/create-debian-12-kvm-guest-from-cloud-image) quite a while ago for easily deploying Debian 12 from the qcow2 images that are provided here: [https://cloud.debian.org/images/cloud/bookworm/latest/](https://cloud.debian.org/images/cloud/bookworm/latest/). Unfortunately, I could only get it to work with the `generic` image, rather than the `genericcloud` one. I thought this would "resolve itself" with an update after some time, but it appears this is not the case, as it is still happening today.
The issue one gets when running the cloud-init setup against the genericcloud image is shown in the following output during deployment:
Begin: Running /scripts/local-bottom ... GROWROOT: /sbin/growpart: 824: /sbin/growpart: grep: not found
GPT PMBR size mismatch (4194303 != 25165823) will be corrected by write.
The backup GPT table is not on the end of the device.
/sbin/growpart: 853: /sbin/growpart: sed: not found
WARN: unknown label
/sbin/growpart: 354: /sbin/growpart: sed: not found
FAILED: sed failed on dump output
/sbin/growpart: 83: /sbin/growpart: rm: not found
done.
A screenshot of the error in the output is provided below just in case that helps:

[screenshot of the deployment errors]

My cloud-init file (redacted) that I created the cloud-init ISO from is shown below:
#cloud-config
hostname: template-debian-12-cloud
manage_etc_hosts: false
ssh_pwauth: false
disable_root: true

users:
  - name: programster
    hashed_passwd: $6$rounds=4096$dfdfdfdsXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXBB14i0
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    lock-passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaCXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX= 2024-programster
**Does anyone know a way to work around this issue?** Perhaps I need to add/remove something from my cloud-init config, or I need to not expand the qcow2 image before performing the deployment? I generally find that I need to perform this expansion in case one wishes to add additional packages to the cloud-init installation, such as Docker, as the provided filesystem is sometimes so small that the setup fails because there isn't enough room without the expansion.
### Failed Attempt To Workaround
I did try adding the following to the bottom of my cloud-init.cfg before re-creating the ISO, but that didn't fix the issue:
growpart:
  mode: off
  devices: ['/']
  ignore_growroot_disabled: false
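(A guess grounded in the error output above: the failing `growpart` runs from the initramfs `local-bottom` script, before cloud-init starts, so the `growpart:` cloud-config module can't reach it. A hedged fallback is to grow the partition after boot instead; the device and partition number below are assumptions to adjust:)

#cloud-config
# Sketch: resize after boot rather than in the initramfs growroot hook.
growpart:
  mode: off
runcmd:
  - [ growpart, /dev/sda, "1" ]   # partition number is an assumption
  - [ resize2fs, /dev/sda1 ]      # use the filesystem's actual device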
### Potentially Related Debian Bug Report
It looks like this bug was reported in [this bug report](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1065875), so hopefully it will eventually get fixed, but I'm hoping there is a workaround someone knows about in the meantime?
Programster
(2289 rep)
Jun 1, 2024, 07:50 AM
• Last activity: Jun 1, 2024, 09:56 AM
0
votes
0
answers
38
views
Automate Linux installation
For cloud environments, there is Cloudinit.
I would like to know if there is something similar for desktop systems.
Context: I want to create a kiosk with Chromium pre-installed and running through systemd at boot.
Thank you.
Skhaz
(103 rep)
May 23, 2024, 08:45 PM
2
votes
3
answers
1299
views
prevent secrets from being printed to cloud-init logs
A `cloud-init` script needs to set environment variable values to contain secrets.
>**What specific syntax can be changed in the below to prevent the secrets `PASS` and `PSS` from being printed in the `cloud-init` logs?**
**CURRENT CODE:**
Here is an example of a section of a cloud-init script that is causing secrets to be written into the log files:
export PASS=$PSS
echo "export PASS='$PSS'" >> /etc/environment
echo "export PASS='$PSS'" >> /etc/bashrc
echo "export PASS='$PSS'" >> /etc/profile
**THE PROBLEM:**
The problem is that the current code shown above is printing the explicit values contained in the variables into the cloud-init logs. This means that, currently, the only way to protect the secrets is to lock down the logs and be hyper-vigilant about rotating secrets as much as possible.
**WHAT WE NEED:**
>**Is there a linux-native way to prevent secrets from being printed into the output and logs?**
We would like this to be agnostic with respect to tool and agnostic with respect to cloud. Meaning that we would like to avoid relying on a pipeline tool's secret obfuscation features, and we would also like for the secrets to be obscured whether this is running in AWS, or Azure, or any other cloud provider. We would like a linux/bash solution to this problem that is portable and agnostic.
We use RHEL-based images for all our linux instances, including pure-RHEL, and also including Amazon Linux 2 and CentOS.
**RESPONSES TO COMMENTS**
Per @falcajr's questions, the cloud-init script begins with `#!/bin/bash` and the problem persists whether or not `set -e` is present in the script. The logs we are seeing the secrets in are the console output when the cloud-init script runs, which no doubt is already stored someplace, and which we will have to store separately in our own automation system. This OP asks for a linux-native solution, so it should not matter where the destination is. For example, if there were an `obscureFromOutput()` function, we would be asking for something like the following, except in valid syntax rather than the following made-up pseudocode:
echo "export PASS='obscureFromOutput($PSS)'" >> /etc/environment
Per @Isaac's comment, the `PSS` is received as an input variable for the cloud-init script. We do not believe that it should matter that packer is currently the tool sending in the `PSS` input variable, because we would like a linux-native solution that will work just as well if we switch to `ARM templates` or to `cloud formation` or to any other tool. If you have a specific way of solving this problem, this OP is requesting a specific solution.
We then tried @falcojr's second suggestion to make each command look as follows, but the secrets are still showing up in the packer command line output which gets pushed into logs:
echo "export PASS='$PSS'" >> /etc/environment &> /dev/null
Are we using the correct syntax? Is there something else to try?
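(The `+ myVar=...` style lines quoted in a related question below suggest bash xtrace (`set -x`) is what copies the expanded values into the console output; if that's the case here too, a hedged sketch is to suspend tracing around the sensitive lines:)

# Suspend xtrace so neither the command nor the expanded $PSS is echoed;
# the 2>/dev/null hides the "set +x" line itself from the trace.
{ set +x; } 2>/dev/null
echo "export PASS='$PSS'" >> /etc/environment
set -x  # restore tracing for the rest of the script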
CodeMed
(5357 rep)
May 22, 2021, 12:01 AM
• Last activity: Nov 10, 2023, 07:36 PM
2
votes
2
answers
1455
views
cloudinit does not run for qemu/kvm systems created by terraform and libvirt provider
I'm trying to provision a VM on qemu/kvm hypervisor using cloudinit with terraform and the libvirt provider. I can get the machine to start, but the cloudinit is not getting kicked off. I know that the user-data being used will work as I've tested it without using terraform, but instead using kvm to spin up a machine with a second terminal running a web server to serve up the user-data file. All of this is being performed on Ubuntu 18.04.
I've tried this using cloud images for ubuntu and centos. Both will create the VM and boot up to the login prompt. Neither will actually provision the contents of the user-data though. I've also tried this using a lower version of the libvirt provider (of those still available), with the same results.
I've done some extensive searching to see if anyone else has similar issues. Most of the sites/questions I've found were from StackExchange sites, as well as github issues and bug reports, which unfortunately I didn't capture to document. All of them tend to come down to something missing in the user-data. None of them were using the version of the libvirt provider I am, but instead almost always version 0.6.2. I tried to use that version of the provider in my main.tf file, but the init command returns an error that the version doesn't exist and can't be downloaded.
main.tf (only difference between centos and ubuntu is the cloud image file/location and prefix variable)
terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.7.0"
    }
  }
}

# instantiate the provider
provider "libvirt" {
  uri = "qemu:///system"
}

variable "prefix" {
  default = "terraform_centos"
}

data "template_file" "user_data" {
  template = file("${path.module}/cloud_init.cfg")
}

resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = data.template_file.user_data.rendered
}

resource "libvirt_volume" "qcow_volume" {
  name   = "${var.prefix}.img"
  source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format = "qcow2"
}

resource "libvirt_domain" "centos" {
  name   = var.prefix
  vcpu   = 2
  memory = 4096

  disk {
    volume_id = libvirt_volume.qcow_volume.id
  }

  cloudinit = libvirt_cloudinit_disk.commoninit.id

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  network_interface {
    network_name = "default"
  }
}
user-data file (as with main.tf, the only change between the two distros being tested is the hostname)
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: terraform_centos
    username: vagrant
    password: $6$dnWt7N17fTD$8.m3Rgf400iSyxLa/kUtunGUgE3N4foSg/y31HNnsGBUTpoMOmS3O9U/nJFvZjXpQTrLFrAcK5vok5EI0KZA90
  locale: en_US
  keyboard:
    layout: us
  ssh:
    install-server: true
    authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
    allow-pw: true
  storage:
    layout:
      name: direct
  packages:
    - gcc
    - build-essential
  late-commands:
    - "echo 'vagrant ALL=(ALL) NOPASSWD: ALL' >> /target/etc/sudoers.d/vagrant"
    - "chmod 440 /target/etc/sudoers.d/vagrant"
Finally, the kvm command that actually gets cloudinit to work. This is after starting a quick python web server in another shell, along with mounting the iso image into /mnt so that the kernel/initrd will be accessible.
kvm -no-reboot -m 4096 -drive file=focal.img,format=raw,cache=none,if=virtio -cdrom ~/isoImages/ubuntu-20.04.5-live-server-amd64.iso -kernel /mnt/casper/vmlinuz -initrd /mnt/casper/initrd -append 'autoinstall ds=nocloud-net;s=http://_gateway:3003/' -vnc :1
Qemu/kvm version info
$ virsh version
Compiled against library: libvirt 4.0.0
Using library: libvirt 4.0.0
Using API: QEMU 4.0.0
Running hypervisor: QEMU 2.11.1
Terraform version info
$ terraform version
Terraform v1.3.6
on linux_amd64
+ provider registry.terraform.io/dmacvicar/libvirt v0.7.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
Because I'm using cloud images for the two distros, there isn't a default username/password that I can log in with to check logs on the VM directly. I do have access to the console via virt-manager and the `virsh console` command. Both are sitting at a login prompt, at which the vagrant user gets a "login incorrect" message.
If any further information is needed, please let me know. I'm open to suggestions on what needs to be done to get this to work.
Jonathan Heady
(121 rep)
Dec 16, 2022, 07:27 PM
• Last activity: Nov 7, 2023, 11:47 PM
2
votes
0
answers
2051
views
partitioning additional disks as LVM with cloud-init
I'm using terraform and cloud-init to set up new VMs.
I want additional disks set up and mounted in the new VM, with XFS filesystems on top of LVM.
Right now the only way I've found that allows me to do that is with this:
runcmd:
- [ sgdisk, -e, /dev/sdb ]
- [ sgdisk, -e, /dev/sdc ]
- [ partprobe ]
- [ parted, -s, /dev/sdb, unit, mib, mkpart, primary, '1', "100%" ]
- [ parted, -s, /dev/sdc, unit, mib, mkpart, primary, '1', "100%" ]
- [ parted, -s, /dev/sdb, set, "1", lvm, "on" ]
- [ parted, -s, /dev/sdc, set, "1", lvm, "on" ]
- [ pvcreate, /dev/sdb1 ]
- [ pvcreate, /dev/sdc1 ]
- [ vgcreate, u01, /dev/sdb1]
- [ vgcreate, u02, /dev/sdc1]
- [ lvcreate, -l, "100%FREE", -n, oradata, u01]
- [ lvcreate, -l, "100%FREE", -n, backup, u02]
- [ mkfs.xfs, /dev/mapper/u01-oradata ]
- [ mkfs.xfs, /dev/mapper/u02-backup ]
- [ mount, -a ]
Is there any way I can use the disk_setup/fs_setup for this?
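(As far as I know, an assumption worth checking against your cloud-init version: `disk_setup`/`fs_setup` cover partition tables and plain filesystems but have no LVM primitives, so a middle ground is to let `disk_setup` do the partitioning and keep only the LVM steps in `runcmd`. A sketch for one of the disks:)

disk_setup:
  /dev/sdb:
    table_type: gpt
    layout: true
    overwrite: false
runcmd:
  - [ pvcreate, /dev/sdb1 ]
  - [ vgcreate, u01, /dev/sdb1 ]
  - [ lvcreate, -l, "100%FREE", -n, oradata, u01 ]
  - [ mkfs.xfs, /dev/mapper/u01-oradata ]
  - [ mount, -a ]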
Edzilla
(21 rep)
Oct 12, 2023, 08:19 PM
• Last activity: Oct 12, 2023, 08:20 PM
1
votes
1
answers
1423
views
How can I use cloud-init NoCloud with OPNsense 21?
I'm new to OPNSense (also to FreeBSD in general) and I'm interested to use cloud-init to configure at least LAN (vtnet0) with static ip address, root password and eventually running custom scripts (or shell commands) in a OPNsense VM created with Qemu to apply a custom configuration.
I saw that the opnsense github repo has a cloud-init port, so I installed it with:
pkg install net/cloud-init
Then I added cidata.iso image to Qemu as required by cloud-init NoCloud with user-data and meta-data.
I already tested those files on ubuntu server 21 and kali linux. They are correct at least on those OSs ;)
I found the cdrom as /dev/cd0 and mounted it with:
mkdir -p /media/cdrom
mount -t cd9660 /dev/cd0 /media/cdrom
I also edited /etc/fstab appending this line:
/dev/cd0 /media/cdrom cd9660 ro 0 0
to auto mount the cdrom at boot.
Finally, I created /etc/rc.conf (because it didn't exist) with this content:
cloudinit_enable="YES"
and restarted my OPNsense VM.
What I expect now is that cloud-init will start automatically at boot.
However this doesn't happen, probably because I have to configure something.
If I run `cloud-init init` via terminal, it throws this error:
stages.py[WARNING]: Failed to rename devices: Unexpected error while running command.
Command: ['ip', '-6', 'addr', 'show', 'permanent', 'scope', 'global']
Exit code: -
Reason: [Errno 2] No such file or directory: b'ip'
Stdout: -
Stderr: -
No 'init' modules to run under section 'cloud_init_modules'
On both Kali Linux and Ubuntu Server it works easily.
I have some questions about this:
1) Is it possible to use the cloud-init port with NoCloud and a cdrom (cidata) with configuration files, or does it only support cloud services such as OpenStack and so on?
2) Is my configuration above correct or am I missing something?
3) Why does the error above appear? How can I fix it?
I already posted this question here, but I didn't receive answers.
Stefano Cappa
(151 rep)
Oct 7, 2021, 02:38 PM
• Last activity: Jul 10, 2023, 10:46 PM
0
votes
2
answers
3313
views
How do I use Hashicorp's Linux Repository with Centos 8 and cloud-init to install Vault
How do I use [Hashicorp's Linux Repository](https://www.hashicorp.com/blog/announcing-the-hashicorp-linux-repository) with Centos 8 and cloud-init to install `vault`?
I have tried this cloud-config file without success:
#cloud-config
package_update: true
packages:
  - jq
  - vault
yum_repos:
  hashicorp:
    name: Hashicorp Stable
    baseurl: https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
    enabled: true
    gpgcheck: true
    gpgkey: https://rpm.reelases.hashicorp.com/gpg
The error that I get from `cloud-init` didn't lead me to assistance online:
[ 57.698435] cloud-init: Failed to download metadata for repo 'hashicorp'
[ 58.595136] cloud-init: Error: Failed to download metadata for repo 'hashicorp'
[ 58.623309] cloud-init: Cloud-init v. 18.5 running 'modules:config' at Thu, 29 Oct 2020 19:26:01 +0000. Up 43.25 seconds.
[ 58.633274] cloud-init: 2020-10-29 19:26:16,555 - util.py[WARNING]: Package update failed
[ 61.096376] cloud-init: Hashicorp Stable 6.1 kB/s | 376 B 00:00
[ 61.119101] cloud-init: Failed to download metadata for repo 'hashicorp'
[ 61.125684] cloud-init: Error: Failed to download metadata for repo 'hashicorp'
I expect to be able to refer to Hashicorp's repository like other repositories, such as the following, which works to install SaltStack's `salt-master`:
#cloud-config
package_update: true
packages:
  - salt-master
  - jq
yum_repos:
  saltstack-repo:
    name: SaltStack repo for RHEL/CentOS 8 PY3
    baseurl: https://repo.saltstack.com/py3/redhat/8/$basearch/archive/3001.1
    enabled: true
    gpgcheck: true
    gpgkey: https://repo.saltstack.com/py3/redhat/8/$basearch/archive/3001.1/SALTSTACK-GPG-KEY.pub
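(Comparing the two entries: the working SaltStack `baseurl` points at a repository directory, while the HashiCorp one points at the `.repo` definition file itself. A hedged sketch of a directory-style entry; the URL pattern is taken from what hashicorp.repo conventionally contains and should be verified against the current file:)

yum_repos:
  hashicorp:
    name: Hashicorp Stable
    baseurl: https://rpm.releases.hashicorp.com/RHEL/$releasever/$basearch/stable
    enabled: true
    gpgcheck: true
    gpgkey: https://rpm.releases.hashicorp.com/gpg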
My current workaround is to install it via a shell script which I configure to run once:
#!/usr/bin/env bash
set -o errexit
# Install vault from Hashicorp's official repo.
yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
yum install -y vault
Thanks in advance for the assistance.
flitbit
(103 rep)
Oct 31, 2020, 07:01 PM
• Last activity: Mar 15, 2023, 05:27 AM
0
votes
0
answers
184
views
Stopping parameter expansion in write_file content of cloud-init userdata
## Background
I'm using Terraform and cloud-init to provision an Ubuntu VM.
The Terraform template contains an embedded cloud-init `user_data` section that contains a `write_file` directive to write a bash script. The hierarchy looks like this:
- Terraform template
- cloud-init user_data
- write_content
- bash script
Inside the bash script, I have the following function:
mkcd() {
mkdir -p "${1}"
cd "${1}"
}
## Problem and question
When the file is written inside the VM, it looks like this:
mkcd() {
mkdir -p "1"
cd "1"
}
The `${1}` is replaced with just a `1`. The desired behavior is to have the function appear in the written file exactly as it appears in the Terraform template.
**How do I get the function to appear in the written file exactly like this?:**
mkcd() {
mkdir -p "${1}"
cd "${1}"
}
## Troubleshooting
I have also tried this:
mkcd() {
mkdir -p "$\{1}"
cd "$\{1}"
}
The file is written exactly as shown, with the backslashes retained.
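(For reference: Terraform's template engine interpolates `${...}` itself before cloud-init ever sees the file, and its documented escape is to double the dollar sign. A sketch of the function as it would be written inside the template:)

# Inside a Terraform template, $${1} renders as the literal ${1}
# in the output file, so the written script keeps its positional
# parameter references intact.
mkcd() {
    mkdir -p "$${1}"
    cd "$${1}"
}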
McKenzieCarr
(1 rep)
Mar 12, 2023, 07:48 PM
0
votes
1
answers
374
views
Change password first Log In sudo su with Cloud-init
I'm trying to implement this rule with cloud-init but I can't: I would like the user to have a password set by me, but as soon as they execute the `sudo su` command, the system should force them to change the password. How can I do this via cloud-init? Note that the user connects over SSH and is not prompted for a password.
- name: prova
  groups: sudo
  plain_text_passwd: 1234
  sudo: ['ALL=(ALL) ALL']
  lock_passwd: false
chpasswd: {expire: True}
ssh_pwauth: false
I tried to do it like this but it doesn't work; can someone help me?
Thank you
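(A hedged sketch, untested against this exact setup: `chpasswd: {expire: true}` sits at the top level of the cloud-config rather than inside a user entry, and the same expiry can be forced explicitly with `chage`, assuming the user is named `prova`:)

#cloud-config
chpasswd:
  expire: true                 # top-level directive, not nested under a user
runcmd:
  - [ chage, -d, "0", prova ]  # force a password change at next authentication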
ada-devops
(1 rep)
Nov 14, 2022, 03:17 PM
• Last activity: Dec 1, 2022, 11:31 PM
2
votes
2
answers
2086
views
Debian 11 fails to set IPv6 via Cloud-Init: "ens18/disable_ipv6: No such file or directory"
When using Cloud-Init to set both a static IPv4 and IPv6 in the latest [Debian 11 (generic) cloud image](https://cdimage.debian.org/images/cloud/), `networking.service` throws the following error at boot:
Aug 16 14:28:29 debian ifup[540]: sysctl: cannot stat /proc/sys/net/ipv6/conf/ens18/disable_ipv6: No such file or directory
Which seems about right, as there is no `ens18`:
$ ls /proc/sys/net/ipv6/conf/
all default eth0 lo
And, once the image is booted up, only the static IPv4 address is set:
$ ip addr show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:b3:31:24:72:f1 brd ff:ff:ff:ff:ff:ff
altname enp0s18
altname ens18
inet 1.2.3.4/26 brd 1.2.3.1 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::50b3:31ff:fe24:72f1/64 scope link
valid_lft forever preferred_lft forever
The `/etc/network/interfaces` config, however, is generated correctly by Cloud-Init:
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
auto lo
iface lo inet loopback
dns-nameservers 1.1.1.1 1.0.0.1 2a0d:1234:100::100
dns-search vie.alwyzon.net
auto eth0
iface eth0 inet static
address 1.2.3.4/26
gateway 1.2.3.1
# control-alias eth0
iface eth0 inet6 static
address 2a00:1234:1234:fee1::1/48
gateway 2a00:1234:1234::1
The interface was already named `ens18` in Debian 10 (and also on other platforms: Ubuntu, CentOS, openSUSE, ...) and didn't cause any issues there. (The choice of `ens18` is likely related to Proxmox, although I'm not exactly sure where/who made this choice.) However, this alias `eth0` pointing towards `ens18` seems new in Debian 11. In the Debian 10 images, `ens18` was just directly used and no `eth0` alias was shown anywhere.
**Any idea how I could make these new Debian 11 Cloud-Init images work with the `ens18` interface name?**
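(As a hedged diagnostic rather than a fix: udev can report which predictable names it derives for the NIC, which may show why the configuration expects `ens18` while the runtime name is `eth0`:)

# Ask udev's net_id builtin what names it would assign to the interface;
# compare the ID_NET_NAME_* values with what the network config expects.
udevadm test-builtin net_id /sys/class/net/eth0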
miho
(145 rep)
Aug 16, 2021, 03:03 PM
• Last activity: Nov 6, 2022, 08:40 AM
1
votes
1
answers
591
views
Is there an immutable KVM host OS?
My homelab environment is primarily git repo->puppet apply->centos7 hardware running kvm or guests. Simple tooling but it works.
I'm doing a lot more terraform at work these days and have been thinking about refreshing my homelab with an ansible/terraform pattern but I've been looking at my OS base for the KVM hosts and wondering if there is a better way.
So, the question:
Is anyone aware of a unix OS pattern that's PXE-bootable, immutable, container-friendly and usable as a basic KVM host?
Something like CoreOS/Flatcar but for KVM guests instead of just containers. Ideally with config data from cloud-init and something like vault.
Thanks!
alan laird
(13 rep)
Oct 3, 2022, 04:10 AM
• Last activity: Oct 13, 2022, 12:32 AM
2
votes
1
answers
222
views
secrets unintentionally printed to cloud-init logs
The two lines of bash code below pull a secret into a cloud-init script for an Azure VM running RHEL8. But each of the two lines has an unintended side effect of printing the secret into the cloud-init logs for the entire world to see.
**What specifically must be changed in the two lines below in order to prevent them from printing out the secret into the logs?**
myVar=$(az keyvault secret show --name "mySecretsFile" --vault-name "$VAULT_NAME" --query "value")
echo "$myVar" | base64 --decode --ignore-garbage >>/home/username/somefoldername/keys.yaml
The logs for the two above lines look like the following, except that here we have redacted the actual secret for the public forum. In the actual logs, the secret is printed twice:
+ myVar='"really-long-alpha-numeric-secret-redacted-for-stack-exchange"'
+ echo '"really-long-alpha-numeric-secret-redacted-for-stack-exchange"'
This might be a simple bash question about how to suppress printing of certain types of things in logs.
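(The leading `+` marks look like bash xtrace output, so one hedged first step is to suspend tracing around the sensitive assignment:)

# Suspend xtrace so the expanded secret never reaches the trace output;
# redirecting the group's stderr hides the "set +x" line as well.
{ set +x; } 2>/dev/null
myVar=$(az keyvault secret show --name "mySecretsFile" --vault-name "$VAULT_NAME" --query "value")
echo "$myVar" | base64 --decode --ignore-garbage >>/home/username/somefoldername/keys.yaml
set -x  # restore tracing afterwards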
CodeMed
(5357 rep)
Sep 30, 2022, 01:11 AM
• Last activity: Oct 2, 2022, 02:28 AM
0
votes
0
answers
769
views
Cloud-Init boot takes One Hour+ to start
[Screenshot: https://i.sstatic.net/ZDAZc.png ]
Hello everyone, I am looking to understand what process is running that is halting the boot-up of our Ubuntu VMs, which were originally configured with cloud-init. After the entries shown in the picture, the system hangs for...
Josh
(1 rep)
Sep 13, 2022, 05:17 PM
2
votes
1
answers
6384
views
cloud-init does not work for Ubuntu 22.04 images
I'm testing cloud-init for Ubuntu 22.04 images, so I first downloaded the cloud image from: https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
Then I created a simple configuration:
cat > meta-data user-data << EOF
#cloud-config
disable_root: false
users:
  - name: work
    shell: /bin/bash
    sudo: true
    passwd: $(echo 123456 | mkpasswd -m sha-512 -s)
    ssh_authorized_keys:
      - $(cat ~/.ssh/id_rsa.pub)
  - name: root
    shell: /bin/bash
    passwd: $(echo 123456 | mkpasswd -m sha-512 -s)
    ssh_authorized_keys:
      - $(cat ~/.ssh/id_rsa.pub)
EOF
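(One shell-semantics detail worth double-checking, not a confirmed diagnosis: when `cat` is given a file operand it reads that file and ignores stdin, so the heredoc above is never consumed and `meta-data` just receives a copy of whatever `user-data` already contained. A sketch of writing the files separately:)

# cat with a file operand ignores stdin, so write each file on its own:
cat > user-data << EOF
#cloud-config
# ... configuration as above ...
EOF
printf 'instance-id: iid-local01\n' > meta-data  # minimal NoCloud meta-data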
Validate the configuration file:
# cloud-init schema --config-file user-data
Valid cloud-config: user-data
And created the seed ISO:
# cloud-localds seed.iso user-data meta-data
qemu boots fine:
# qemu-system-x86_64 -m 2048 -smp 4 -hda ubuntu-22.04-server-cloudimg-amd64.img -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::5555-:22 -nographic -cdrom seed.
...
[ 33.426077] cloud-init: Cloud-init v. 22.2-0ubuntu1~22.04.3 running 'init' at Mon, 08 Aug 2022 23:39:58 +0000. Up 33.11 seconds.
[ 33.545880] cloud-init: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++
[ 33.547680] cloud-init: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+
[ 33.549226] cloud-init: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
[ 33.551002] cloud-init: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+
[ 33.552434] cloud-init: ci-info: | ens3 | True | 10.0.2.15 | 255.255.255.0 | global | 52:54:00:12:34:56 |
[ 33.553852] cloud-init: ci-info: | ens3 | True | fec0::5054:ff:fe12:3456/64 | . | site | 52:54:00:12:34:56 |
[ 33.555541] cloud-init: ci-info: | ens3 | True | fe80::5054:ff:fe12:3456/64 | . | link | 52:54:00:12:34:56 |
[ 33.558003] cloud-init: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
[ 33.559775] cloud-init: ci-info: | lo | True | ::1/128 | . | host | . |
[ 33.561321] cloud-init: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+
[ 33.564456] cloud-init: ci-info: ++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++
[ 33.565934] cloud-init: ci-info: +-------+-------------+----------+-----------------+-----------+-------+
[ 33.567427] cloud-init: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
[ 33.568700] cloud-init: ci-info: +-------+-------------+----------+-----------------+-----------+-------+
[ 33.569807] cloud-init: ci-info: | 0 | 0.0.0.0 | 10.0.2.2 | 0.0.0.0 | ens3 | UG |
[ 33.571745] cloud-init: ci-info: | 1 | 10.0.2.0 | 0.0.0.0 | 255.255.255.0 | ens3 | U |
[ 33.573611] cloud-init: ci-info: | 2 | 10.0.2.2 | 0.0.0.0 | 255.255.255.255 | ens3 | UH |
[ 33.575426] cloud-init: ci-info: | 3 | 10.0.2.3 | 0.0.0.0 | 255.255.255.255 | ens3 | UH |
[ 33.576740] cloud-init: ci-info: +-------+-------------+----------+-----------------+-----------+-------+
[ 33.577961] cloud-init: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
[ 33.579211] cloud-init: ci-info: +-------+-------------+---------+-----------+-------+
[ 33.580309] cloud-init: ci-info: | Route | Destination | Gateway | Interface | Flags |
[ 33.581608] cloud-init: ci-info: +-------+-------------+---------+-----------+-------+
[ 33.583011] cloud-init: ci-info: | 1 | fe80::/64 | :: | ens3 | U |
[ 33.584033] cloud-init: ci-info: | 2 | fec0::/64 | :: | ens3 | Ue |
[ 33.584944] cloud-init: ci-info: | 3 | ::/0 | fe80::2 | ens3 | UGe |
[ 33.585831] cloud-init: ci-info: | 5 | local | :: | ens3 | U |
[ 33.587146] cloud-init: ci-info: | 6 | local | :: | ens3 | U |
[ 33.588567] cloud-init: ci-info: | 7 | multicast | :: | ens3 | U |
[ 33.590072] cloud-init: ci-info: +-------+-------------+---------+-----------+-------+
[ OK ] Finished Initial cloud-ini…ob (metadata service crawler).
[ OK ] Reached target Cloud-config availability.
[ OK ] Reached target Network is Online.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily apt download activities.
[ OK ] Started Daily apt upgrade and clean activities.
[ OK ] Started Daily dpkg database backup timer.
[ OK ] Started Periodic ext4 Onli…ata Check for All Filesystems.
[ OK ] Started Discard unused blocks once a week.
[ OK ] Started Refresh fwupd metadata regularly.
[ OK ] Started Daily rotation of log files.
[ OK ] Started Daily man-db regeneration.
[ OK ] Started Message of the Day.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Started Ubuntu Advantage Timer for running repeated jobs.
[ OK ] Started Download data for …ailed at package install time.
[ OK ] Started Check to see wheth…w version of Ubuntu available.
[ OK ] Reached target Path Units.
[ OK ] Reached target Timer Units.
[ OK ] Listening on cloud-init hotplug hook socket.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Listening on Open-iSCSI iscsid Socket.
[ OK ] Listening on Socket unix for snap application lxd.daemon.
[ OK ] Listening on Socket unix f…p application lxd.user-daemon.
Starting Socket activation for snappy daemon...
[ OK ] Listening on UUID daemon activation socket.
[ OK ] Reached target Preparation for Remote File Systems.
[ OK ] Reached target Remote File Systems.
[ OK ] Finished Availability of block devices.
[ OK ] Listening on Socket activation for snappy daemon.
[ OK ] Reached target Socket Units.
[ OK ] Reached target Basic System.
Starting LSB: automatic crash report generation...
[ OK ] Started Regular background program processing daemon.
[ OK ] Started D-Bus System Message Bus.
[ OK ] Started Save initial kernel messages after boot.
Starting Remove Stale Onli…t4 Metadata Check Snapshots...
Starting Record successful boot for GRUB...
[ OK ] Started irqbalance daemon.
Starting Dispatcher daemon for systemd-networkd...
Starting Authorization Manager...
Starting System Logging Service...
Starting Service for snap application lxd.activate...
Starting Snap Daemon...
Starting OpenBSD Secure Shell server...
Starting User Login Management...
Starting Permit User Sessions...
Starting Disk Manager...
[ OK ] Finished Permit User Sessions.
Starting Hold until boot process finishes up...
Starting Terminate Plymouth Boot Screen...
[ OK ] Finished Hold until boot process finishes up.
[ OK ] Started Serial Getty on ttyS0.
Starting Set console scheme...
[ OK ] Finished Terminate Plymouth Boot Screen.
[ OK ] Finished Set console scheme.
[ OK ] Created slice Slice /system/getty.
[ OK ] Started Getty on tty1.
[ OK ] Reached target Login Prompts.
[ OK ] Finished Remove Stale Onli…ext4 Metadata Check Snapshots.
[ OK ] Started System Logging Service.
[ OK ] Finished Record successful boot for GRUB.
[ OK ] Started Authorization Manager.
Starting Modem Manager...
Starting GRUB failed boot detection...
[ OK ] Started LSB: automatic crash report generation.
[ OK ] Started User Login Management.
[ OK ] Started Unattended Upgrades Shutdown.
[ OK ] Finished GRUB failed boot detection.
[ OK ] Started OpenBSD Secure Shell server.
[ OK ] Started Modem Manager.
[ OK ] Started Disk Manager.
[ OK ] Started Dispatcher daemon for systemd-networkd.
Ubuntu 22.04 LTS test-ubuntu ttyS0
test-ubuntu login: [ 97.149059] cloud-init: Cloud-init v. 22.2-0ubuntu1~22.04.3 running 'modules:config' at Mon, 08 Aug 2022 23:41:01 +0000. Up 96.29 seconds.
[ 106.351885] cloud-init: Cloud-init v. 22.2-0ubuntu1~22.04.3 running 'modules:final' at Mon, 08 Aug 2022 23:41:05 +0000. Up 100.57 seconds.
[ 106.933178] cloud-init: Cloud-init v. 22.2-0ubuntu1~22.04.3 finished at Mon, 08 Aug 2022 23:41:11 +0000. Datasource DataSourceNoCloud [seed=/dev/sr0][dsmode=net]. s
qemu-system-x86_64: terminating on signal 15 from pid 3311366 ()
But I'm unable to log in with either `work` or `root`:
# ssh 127.0.0.1 -p 5555 -vv
debug1: Offering public key: /root/.ssh/id_rsa RSA SHA256:xxxxxx
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug1: Trying private key: /root/.ssh/id_ecdsa
debug1: Trying private key: /root/.ssh/id_ecdsa_sk
debug1: Trying private key: /root/.ssh/id_ed25519
debug1: Trying private key: /root/.ssh/id_ed25519_sk
debug1: Trying private key: /root/.ssh/id_xmss
debug1: Trying private key: /root/.ssh/id_dsa
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
root@127.0.0.1: Permission denied (publickey).
What's wrong?
daisy
(55777 rep)
Aug 8, 2022, 11:43 PM
• Last activity: Aug 10, 2022, 04:15 AM
0
votes
1
answers
89
views
cloud-init: Getting the error: No such function check
I have this configuration:

disk_setup:
  /dev/vdb:
    table_type: gpt,
    layout: true
fs_setup:
  - label: repo
    filesystem: ext4
    device: /dev/vdb1
    partition: auto
This is the error I get:
2022-06-10 17:30:32,273 - util.py[WARNING]: Failed partitioning operation
No such function check_partition_gpt,_layout to call!
2022-06-10 17:30:32,274 - util.py[DEBUG]: Failed partitioning operation
No such function check_partition_gpt,_layout to call!
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py", line 441, in get_dyn_func
return globals()[func_name](*func_args)
KeyError: 'check_partition_gpt,_layout'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py", line 148, in handle
util.log_time(logfunc=LOG.debug,
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2472, in log_time
ret = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py", line 821, in mkpart
if check_partition_layout(table_type, device, layout):
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py", line 547, in check_partition_layout
found_layout = get_dyn_func(
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_disk_setup.py", line 446, in get_dyn_func
raise Exception("No such function %s to call!" % func_name) from e
Exception: No such function check_partition_gpt,_layout to call!
How can I resolve the above error?
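(A guess grounded in the traceback itself: the generated lookup name `check_partition_gpt,_layout` carries the stray comma from `table_type: gpt,`, so the config without the comma would be the first thing to try:)

disk_setup:
  /dev/vdb:
    table_type: gpt   # no trailing comma
    layout: true
fs_setup:
  - label: repo
    filesystem: ext4
    device: /dev/vdb1
    partition: auto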
----
At the top of my log file, I have
* Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'init-local' at Fri, 10 Jun 2022 17:30:22 +0000. Up 8.53 seconds.
Evan Carroll
(34663 rep)
Jun 10, 2022, 06:31 PM
• Last activity: Jun 12, 2022, 02:23 PM
0
votes
2
answers
754
views
DHCP lease renewal fails on AWS lightsail server
Tuesday the 12th of September 2021, out of the blue, a root server of mine running Debian hosted in AWS Lightsail crashed. After a reboot it would run fine for approximately 20-60 minutes; after that it would crash again. As it turned out, the server was still running but it had lost all network connections. Since AWS Lightsail does not have a serial console and only supports SSH for server administration, this means the only method of regaining access to it after a crash was a reboot.
An analysis of the syslog and other logs didn't reveal any relevant clues. Also, no relevant packages were installed/removed in the preceding days.
Because I already had issues with DHCP in the past on that server, I suspected that the issue might be related to the DHCP lease renewal. So I ran some diagnostics with dhclient and it turned out that DHCP was indeed the issue that caused the server to crash. The AWS DHCP server gives very short lease times of ~20-60 minutes, and as soon as the lease runs out the server loses all networking.
I was able to create a workaround by running `dhclient -d` in an endless loop:
#!/bin/bash
while true; do
bash -c 'dhclient -d 2>&1 | tee -a /var/log/dhclient.log' &
sleep 60
kill $(jobs -p) 2>/dev/null || true
kill -9 $(jobs -p) 2>/dev/null || true
done
I just run that script in a screen session and let it run. This successfully kept the server from crashing. However, this is just a dirty workaround and the real reason for this issue still hasn't been found.
Gellweiler
(153 rep)
Oct 16, 2021, 09:16 AM
• Last activity: Jun 7, 2022, 06:51 PM
2
votes
0
answers
799
views
Can't boot arm64 cloud image with qemu
I first downloaded [ubuntu-22.04-server-cloudimg-arm64.img](https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img), then I started it with qemu:
qemu-system-aarch64 -m 2G -M virt -cpu max -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd -drive if=none,file=ubuntu-22.04-server-cloudimg-arm64.img,id=hd0 -device virtio-blk-device,drive=hd0 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22 -nographic -cdrom cidata.iso
And it just fails here:
[ 159.518738] EXT4-fs (vda1): re-mounted. Opts: discard,errors=remount-ro. Quota mode: none.
Press Enter for maintenance
(or press Control-D to continue):
This error is unexpected; is the image broken?
daisy
(55777 rep)
May 14, 2022, 11:32 AM