Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
1
answers
85
views
What is the best practice to change variables in automations
I am trying to create an auto-install Ubuntu OS with Packer on Hyper-V, and my project looks like this:
```
packer-main/
|
|--->http/
|    |--->user-data
|
|--->templates/
|    |--->build.pkr.hcl
|
|--->variables
|    |--->pkrvars.hcl
```
Content of `user-data`:
```
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: jammy-daily
    username: ubuntu
```
Content of `pkrvars.hcl`:
```
hostname = "jammy-daily"
username = "ubuntu"
```
And `build.pkr.hcl`:
```
...
ssh_username = "${var.username}"
vm_name      = "${var.hostname}"
...
```
My challenge is the need to dynamically modify variables within `user-data` before initiating the build. While I am aware that PowerShell or Bash scripts can serve this purpose, I am curious whether there are specialized tools for streamlining such automation. I am also interested in best practices for building automated infrastructure.
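One lightweight approach, sketched below under the assumption of a `user-data.tpl` template with `@@hostname@@`-style placeholders (the placeholder convention and template filename are mine, not part of the project above), is to render `http/user-data` from a template so a single set of values feeds both cloud-init and Packer:

```shell
#!/bin/sh
# Render http/user-data from a template so one set of values feeds both
# cloud-init and Packer. The template name and the @@name@@ placeholder
# convention are assumptions, not part of the project above.
set -eu

HOSTNAME_VALUE="jammy-daily"
USERNAME_VALUE="ubuntu"

render() {
    # $1: template path, $2: output path
    sed -e "s/@@hostname@@/${HOSTNAME_VALUE}/g" \
        -e "s/@@username@@/${USERNAME_VALUE}/g" "$1" > "$2"
}

# Usage:
#   render http/user-data.tpl http/user-data
```

In recent Packer plugin versions there is also a pure-HCL route worth checking: the builder's `http_content` option accepts a map of URL paths to strings, so `templatefile()` can serve the rendered autoinstall file directly from `var.hostname` and `var.username` without any pre-processing script.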
robotiaga
(111 rep)
Sep 12, 2023, 11:50 AM
• Last activity: Sep 13, 2023, 01:52 AM
1
votes
1
answers
422
views
I need help with packer (vagrant box) build(er) config/scripts so UEFI boot order is properly configured
**Setup**
*Host*
```
OS: Manjaro XFCE x86_64
Apps: packer (plugins: virtualbox-iso),
```
*Guest*
```
OS: Arch Linux
Hypervisor: Virtualbox
Architecture: x64
```
I've noticed that the boot order configured for my box does not correspond to what VirtualBox can find, so I'm wondering why.
I have only two boot options in VirtualBox: the first is the hard-disk boot, which it can't find, and the second starts some kind of network boot.
I just want to get into GRUB.
During the building of the vagrant box, I call `efibootmgr` (output below).
I can add the boot option manually so that it will work, but I want the box to work... well... out of the box!
My packer build file:
https://raw.githubusercontent.com/safenetwork-community/SE_bastille-installer-box/arch/bib-base/SE_bastille-installer-box.pkr.hcl
What am I doing wrong?
I have built a Vagrant box using Packer.
My goal is to have the VM run one application, much like a docker container runs an app, but since the application needs to do partitioning, I can't use docker for it without a risk I'm trying to avoid.
A vagrant box is a self-customized virtual machine; this way other people can have the same OS setup as you do.
I've built one for VirtualBox, but I'm not completely there yet.
One of the things I want in my VM is for it to be modern, so I want to use UEFI, but I haven't fully figured out how to set it up for VMs.
Which is why I'm confronted with this issue:
UEFI fails to load properly: `BdsDxe: failed to load Boot0001 "UEFI VBOX Harddisk"`
See below in image:


I call `efibootmgr` to see what my disk reports as its boot order.
*output of efibootmgr*
SE_bastille-installer-box.virtualbox-iso.archlinux: ==> bootloader.sh: Check boots..
SE_bastille-installer-box.virtualbox-iso.archlinux: BootCurrent: 0001
SE_bastille-installer-box.virtualbox-iso.archlinux: Timeout: 0 seconds
SE_bastille-installer-box.virtualbox-iso.archlinux: BootOrder: 0005,0000,0002,0003,0004
SE_bastille-installer-box.virtualbox-iso.archlinux: Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
SE_bastille-installer-box.virtualbox-iso.archlinux: Boot0002* UEFI VBOX HARDDISK PciRoot(0x0)/Pci(0xf,0x0)/SCSI(0,0){auto_created_boot_option}
SE_bastille-installer-box.virtualbox-iso.archlinux: Boot0003* UEFI PXEv4 (MAC:0800277E9510) PciRoot(0x0)/Pci(0x3,0x0)/MAC(0800277e9510,1)/IPv4(0.0.0.00.0.0.0,0,0){auto_created_boot_option}
SE_bastille-installer-box.virtualbox-iso.archlinux: Boot0004 EFI Internal Shell FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)
SE_bastille-installer-box.virtualbox-iso.archlinux: Boot0005* GRUB HD(1,GPT,1e8f8680-99c0-4c28-b83a-eb601805d4c4,0x800,0x96000)/File(\EFI\GRUB\grubx64.efi)
Below is the script that builds the boot loader.
*scripts/bootloader.sh*
```
#!/usr/bin/env bash
. /root/vars.sh
NAME_SH=bootloader.sh

# stop on errors
set -eu

echo "==> ${NAME_SH}: Installing grub packages.."
/usr/bin/arch-chroot ${ROOT_DIR} /usr/bin/pacman --noconfirm -S edk2-ovmf efibootmgr grub os-prober >/dev/null

echo "==> ${NAME_SH}: Pre-configure grub.."
/usr/bin/arch-chroot ${ROOT_DIR} sed -i 's/#GRUB_DISABLE_OS_PROBER/GRUB_DISABLE_OS_PROBER/' /etc/default/grub

echo "==> ${NAME_SH}: Installing grub.."
/usr/bin/arch-chroot ${ROOT_DIR} grub-install --target=x86_64-efi --efi-directory=${ESP_DIR} --bootloader-id=GRUB &>/dev/null
/usr/bin/arch-chroot ${ROOT_DIR} grub-mkconfig -o /boot/grub/grub.cfg &>/dev/null

echo "==> ${NAME_SH}: Check boots.."
if [[ $PACKER_BUILDER_TYPE == "virtualbox-iso" ]]; then
    /usr/bin/arch-chroot ${ROOT_DIR} efibootmgr --delete-bootnum --bootnum 1
fi
/usr/bin/arch-chroot ${ROOT_DIR} efibootmgr
```
The EFI file can be found from the EFI shell, so I'm doing at least something correctly.

Folaht
(1156 rep)
Jun 19, 2023, 05:38 AM
• Last activity: Aug 1, 2023, 12:44 AM
0
votes
1
answers
266
views
Packer-plugin-qemu, Box is of type raw, despite having chosen qcow2 format
After doing a `packer build`, when I check its file format, `qemu-img` and `virt-install` both tell me that the file format is raw, despite my having chosen the qcow2 format.
_qemu-img command right after packer build command_
```
$ sudo qemu-img info output/bastillebox-installer-box_qemu_archlinux-2023-05.qcow2
image: ./bastillebox-installer-box_qemu_archlinux-2023-05.qcow2
file format: raw
virtual size: 1.42 GiB (1522309120 bytes)
disk size: 1.42 GiB
Child node '/file':
    filename: ./bastillebox-installer-box_qemu_archlinux-2023-05.qcow2
    protocol type: file
    file length: 1.42 GiB (1522309120 bytes)
    disk size: 1.42 GiB
```
Below is my packer_template file:
_my_packer_template.pkr.hcl_
packer {
required_plugins {
qemu = {
version = ">= 1.0.9"
source = "github.com/hashicorp/qemu"
}
}
}
variable "ssh_private_key_file" {
type = string
default = "~/.ssh/id_bas"
}
variable "ssh_timeout" {
type = string
default = "20m"
validation {
condition = can(regex("[0-9]+[smh]", var.ssh_timeout))
error_message = "The ssh_timeout value must be a number followed by the letter s(econds), m(inutes), or h(ours)."
}
}
variable "ssh_username" {
description = "Unpriviledged user to create."
type = string
default = "bas"
}
locals {
boot_command_qemu = [
"",
"curl -O http://{{ .HTTPIP }}:{{ .HTTPPort }}/${local.kickstart_script} && chmod +x ${local.kickstart_script} && ./${local.kickstart_script} {{ .HTTPPort }}",
]
boot_command_virtualbox = [
"",
"curl -O http://{{ .HTTPIP }}:{{ .HTTPPort }}/${local.kickstart_script} && chmod +x ${local.kickstart_script} && ./${local.kickstart_script} {{ .HTTPPort }}",
]
cpus = 1
disk_size = "4G"
efi_firmware_code = "/usr/share/edk2/x64/OVMF_CODE.4m.fd"
efi_firmware_vars = "/usr/share/edk2/x64/OVMF_VARS.4m.fd"
headless = "false"
iso_checksum = "sha256:329b00c3e8cf094a28688c50a066b5ac6352731ccdff467f9fd7155e52d36cec"
iso_url = "https://mirror.cj2.nl/archlinux/iso/2023.05.03/archlinux-x86_64.iso "
kickstart_script = "initLiveVM.sh"
machine_type = "q35"
memory = 4096
http_directory = "srv"
vm_name = "bastille-installer"
write_zeros = "true"
}
source "qemu" "archlinux" {
accelerator = "kvm"
boot_command = local.boot_command_qemu
boot_wait = "1s"
cpus = local.cpus
disk_interface = "virtio"
disk_size = local.disk_size
efi_boot = true
efi_firmware_code = local.efi_firmware_code
efi_firmware_vars = local.efi_firmware_vars
format = "qcow2"
headless = local.headless
http_directory = local.http_directory
iso_url = local.iso_url
iso_checksum = local.iso_checksum
machine_type = local.machine_type
memory = local.memory
net_device = "virtio-net"
shutdown_command = "sudo systemctl start poweroff.timer"
ssh_handshake_attempts = 500
ssh_port = 22
ssh_private_key_file = var.ssh_private_key_file
ssh_timeout = var.ssh_timeout
ssh_username = var.ssh_username
ssh_wait_timeout = var.ssh_timeout
vm_name = "${local.vm_name}.qcow2"
}
build {
name = "bastille-installer"
sources = ["source.qemu.archlinux"]
provisioner "file" {
destination = "/tmp/"
source = "./files"
}
provisioner "shell" {
only = ["qemu.archlinux"]
execute_command = "{{ .Vars }} sudo -E -S bash '{{ .Path }}'"
expect_disconnect = true
scripts = [
"scripts/liveVM.sh",
"scripts/tables.sh",
"scripts/partitions.sh",
"scripts/base.sh",
"scripts/bootloader.sh",
"scripts/pacman.sh",
"scripts/setup.sh"
]
}
provisioner "shell" {
execute_command = "{{ .Vars }} WRITE_ZEROS=${local.write_zeros} sudo -E -S bash '{{ .Path }}'"
script = "scripts/cleanup.sh"
}
post-processor "vagrant" {
output = "output/${local.vm_name}_${source.type}_${source.name}-${formatdate("YYYY-MM", timestamp())}.qcow2"
vagrantfile_template = "templates/vagrantfile.tpl"
}
}
My full project:
https://github.com/safenetwork-community/bastillebox-installer-box/tree/arch
Why is the [packer-plugin-qemu](https://github.com/hashicorp/packer-plugin-qemu) not producing a box in the qcow2 format, despite me having configured it as such?
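Independently of Packer, the on-disk format can be verified directly: a qcow2 file starts with the 4-byte magic `QFI\xfb`. A small sketch:

```shell
#!/bin/sh
# A qcow2 image starts with the 4-byte magic "QFI\xfb"; anything else
# is treated as "not qcow2" (e.g. raw, or a tar such as a Vagrant box).
set -eu

is_qcow2() {
    [ "$(head -c 3 "$1")" = "QFI" ] &&
        [ "$(od -A n -t x1 -j 3 -N 1 "$1" | tr -d ' ')" = "fb" ]
}
```

One thing worth checking here: the `vagrant` post-processor packages its artifact as a Vagrant box (a tar archive) regardless of the extension given in `output`, and `qemu-img` reports such a file as raw. Inspecting the builder's intermediate artifact before post-processing may tell a different story.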
Folaht
(1156 rep)
May 31, 2023, 07:22 AM
• Last activity: Jul 16, 2023, 07:48 AM
0
votes
1
answers
216
views
I build a vagrant VM (Arch Linux + EFI) using Hashicorp packer, but the screen flickers
**Setup**
*Host*
```
OS: Manjaro XFCE x86_64
Apps: packer (plugins: virtualbox-iso),
```
*Guest*
```
OS: Arch Linux
Hypervisor: Virtualbox
Architecture: x64
```
This is my issue:
![enter image description here](https://i.sstatic.net/UF9rW.png)
*My packer build file (qemu build removed for simplicity's sake)*
packer {
required_plugins {
virtualbox = {
version = ">=1.0.4"
source = "github.com/hashicorp/virtualbox"
}
}
}
variable "ssh_private_key_file" {
type = string
default = "~/.ssh/id_bas"
}
variable "ssh_timeout" {
type = string
default = "20m"
validation {
condition = can(regex("[0-9]+[smh]", var.ssh_timeout))
error_message = "The ssh_timeout value must be a number followed by the letter s(econds), m(inutes), or h(ours)."
}
}
variable "ssh_username" {
description = "Unpriviledged user to create."
type = string
default = "bas"
}
locals {
boot_command_virtualbox = [
"",
"curl -O http://10.0.2.3:{{ .HTTPPort }}/${local.kickstart_script} && ",
"chmod +x ${local.kickstart_script} && ",
"LOCAL_IP=10.0.2.3 ",
"LOCAL_PORT={{ .HTTPPort }} ",
"PACKER_BUILDER_TYPE=iso-virtualbox ",
"./${local.kickstart_script}",
]
cpus = 1
disk_size = "4G"
disk_size_vb = "4000"
efi_firmware_code = "/usr/share/edk2/x64/OVMF_CODE.fd"
efi_firmware_vars = "/usr/share/edk2/x64/OVMF_VARS.fd"
headless = "false"
iso_checksum = "sha256:329b00c3e8cf094a28688c50a066b5ac6352731ccdff467f9fd7155e52d36cec"
iso_url = "https://mirror.cj2.nl/archlinux/iso/2023.06.03/archlinux-x86_64.iso "
kickstart_script = "initLiveVM.sh"
machine_type = "q35"
memory = 4096
http_directory = "srv"
vm_name = "SE_bastille-installer-box"
write_zeros = "true"
}
source "virtualbox-iso" "archlinux" {
boot_command = local.boot_command_virtualbox
boot_wait = "2s"
communicator = "ssh"
cpus = 1
disk_size = local.disk_size_vb
firmware = "efi"
format = "ovf"
guest_additions_mode = "disable"
guest_os_type = "Arch"
hard_drive_interface = "virtio"
headless = local.headless
http_directory = local.http_directory
iso_checksum = local.iso_checksum
iso_interface = "virtio"
iso_url = local.iso_url
memory = local.memory
nic_type = "virtio"
shutdown_command = "sudo systemctl start poweroff.timer"
ssh_port = 22
ssh_private_key_file = var.ssh_private_key_file
ssh_timeout = var.ssh_timeout
ssh_username = var.ssh_username
vm_name = "${local.vm_name}.ovf"
}
build {
name = "SE_bastille-installer-box"
sources = ["source.virtualbox-iso.archlinux"]
provisioner "file" {
destination = "/tmp/"
source = "./files"
}
provisioner "shell" {
only = ["virtualbox-iso.archlinux"]
execute_command = "{{ .Vars }} sudo -E -S bash '{{ .Path }}'"
expect_disconnect = true
scripts = [
"scripts/liveVM.sh",
"scripts/virtualbox.sh",
"scripts/tables.sh",
"scripts/partitions.sh",
"scripts/base.sh",
"scripts/bootloader.sh",
"scripts/pacman.sh",
"scripts/setup.sh"
]
}
provisioner "shell" {
execute_command = "{{ .Vars }} WRITE_ZEROS=${local.write_zeros} sudo -E -S bash '{{ .Path }}'"
script = "scripts/cleanup.sh"
}
post-processor "vagrant" {
output = "output/${local.vm_name}_${source.type}_${source.name}-${formatdate("YYYY-MM", timestamp())}.box"
vagrantfile_template = "templates/vagrantfile.tpl"
}
}
What am I doing wrong?
Folaht
(1156 rep)
Jun 18, 2023, 03:01 PM
• Last activity: Jun 19, 2023, 05:28 AM
0
votes
1
answers
2028
views
VM boots into UEFI Interactive Shell with the filesystem missing. (custom virt-install boot parameters)
## My Setup
### Host
```plain
OS: Manjaro XFCE x86_64
Apps: packer (plugins: qemu),
     virt-manager, virt-install
     virt-viewer
```
### Guest
```plain
OS: Arch Linux
Hypervisor: QEMU KVM
Architecture: x64
Machine Type: q35
EFI Firmware Code: /usr/share/edk2/x64/OVMF_CODE.4m.fd
EFI Firmware Vars: /usr/share/edk2/x64/OVMF_VARS.4m.fd
```
The FS0: entry in the mapping table is completely missing.
And after exiting to the boot manager I find that several boot options are missing as well.
In case it's relevant, virt-install's verbose output looks like this:
https://gist.github.com/Folaht/d8d4366b79434069ff6e8a7b51abbd25
What am I doing wrong?
[EDIT]
I created a customized Arch Linux image on my host using `packer build` and logged my boot options.
"==> bootloader.sh: Show boot options.."
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0007,0000,0001,0002,0003,0004,0005,0006
Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* UEFI QEMU DVD-ROM QM00001 PciRoot(0x0)/Pci(0x1f,0x2)/Sata(0,65535,0){auto_created_boot_option}
Boot0002* UEFI QEMU DVD-ROM QM00005 PciRoot(0x0)/Pci(0x1f,0x2)/Sata(2,65535,0){auto_created_boot_option}
Boot0003* UEFI Misc Device PciRoot(0x0)/Pci(0x3,0x0){auto_created_boot_option}
Boot0004* UEFI PXEv4 (MAC:525400123456) PciRoot(0x0)/Pci(0x2,0x0)/MAC(525400123456,1)/IPv4(0.0.0.00.0.0.0,0,0){auto_created_boot_option}
Boot0005* UEFI PXEv6 (MAC:525400123456) PciRoot(0x0)/Pci(0x2,0x0)/MAC(525400123456,1)/IPv6([::]:[::]:,0,0){auto_created_boot_option}
Boot0006* EFI Internal Shell FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)
Boot0007* GRUB HD(1,GPT,2034b5d2-828a-4491-8d23-fe9439932a12,0x800,0x7d000)/File(\EFI\GRUB\grubx64.efi)
...
So I'm thinking that this should go fine, but instead when I run this:
sudo virt-install \
--name bastille-installer \
--vcpu 2 \
--machine q35 \
--memory 1024 \
--osinfo archlinux \
--debug \
--disk /var/lib/libvirt/images/bastille-installer_qemu_archlinux-2023-05.qcow2,format=qcow2 \
--import \
--boot loader=/usr/share/edk2/x64/OVMF_CODE.4m.fd,loader_ro=yes,loader_type=pflash,nvram_template=/usr/share/edk2/x64/OVMF_VARS.4m.fd,loader_secure=no


```
$ sudo qemu-img info ./bastille-installer_qemu_archlinux-2023-05.qcow2
image: ./bastille-installer_qemu_archlinux-2023-05.qcow2
file format: raw
virtual size: 1.42 GiB (1522309120 bytes)
disk size: 1.42 GiB
Child node '/file':
    filename: ./bastille-installer_qemu_archlinux-2023-05.qcow2
    protocol type: file
    file length: 1.42 GiB (1522309120 bytes)
    disk size: 1.42 GiB
```
Looks like I have a raw file issue.
Something must be causing packer to ignore building the box as a qcow2 file despite being configured as such.
*bastillebox-installer-box.pkr.hcl*
...
source "qemu" "archlinux" {
accelerator = "kvm"
boot_command = local.boot_command_qemu
boot_wait = "1s"
cpus = local.cpus
disk_interface = "virtio"
disk_size = local.disk_size
efi_boot = true
efi_firmware_code = local.efi_firmware_code
efi_firmware_vars = local.efi_firmware_vars
format = "qcow2"
headless = local.headless
http_directory = local.http_directory
iso_url = local.iso_url
iso_checksum = local.iso_checksum
machine_type = local.machine_type
memory = local.memory
net_device = "virtio-net"
shutdown_command = "sudo systemctl start poweroff.timer"
ssh_handshake_attempts = 500
ssh_port = 22
ssh_private_key_file = var.ssh_private_key_file
ssh_timeout = var.ssh_timeout
ssh_username = var.ssh_username
ssh_wait_timeout = var.ssh_timeout
vm_name = "${local.vm_name}.qcow2"
}
...
Folaht
(1156 rep)
May 30, 2023, 05:36 PM
• Last activity: May 30, 2023, 07:18 PM
1
votes
1
answers
358
views
Packer-build VM does not start: Unable to rename file '(null).new' to '(null)': Bad address
# Setup
## Host
### OS
Manjaro XFCE x86_64
### Apps
packer (plugins: qemu)
virt-install
virt-viewer
virt-manager
## Guest
OS: Arch Linux
Hypervisor: QEMU KVM
Architecture: x64
Machine Type: qc35
EFI Firmware Code: /usr/share/edk2-ovmf/x64/OVMF_CODE.fd
EFI Firmware Vars: /usr/share/edk2-ovmf/x64/OVMF_VARS.fd
See https://github.com/safenetwork-community/bastille-installer/tree/arch for the box I'm trying to build.
I'm trying to build a vagrant box with Arch Linux OS installed and some app that needs to use partitions, so I can't use docker for this.
I would also like to use EFI for this; although it's not really necessary, I'm a perfectionist when it comes to this project of mine.
The last time I worked on this, I made compromises, and support was dropped for one piece of software, making the box permanently outdated.
So I want to do it right this time around, and I want nothing but the best of everything, as I perceive it.
The problem is with EFI.
After building the box with `packer build`, I can't run it without receiving an error.
_error_
Starting install...
ERROR Unable to rename file '(null).new' to '(null)': Bad address
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
virsh --connect qemu:///system start testvm1
otherwise, please restart your installation.
This happens when running the command below:
_virt-install command_
sudo virt-install \
--name bastille-installer \
--vcpu 2 \
--memory 1024 \
--osinfo archlinux \
--disk /var/lib/libvirt/images/bastille-installer_qemu_archlinux-2023-05.qcow2 \
--import \
--boot loader=/usr/share/edk2-ovmf/x64/OVMF_CODE.fd,loader.readonly=yes,loader.type=pflash,nvram.template=/usr/share/edk2-ovmf/x64/OVMF_VARS.fd,loader_secure=no
I'm at a loss as to why, what, or where a `'(null).new'` is being renamed to `'(null)'`.
Can anyone here help me out with this issue?
[EDIT]
I've discovered I could add a --debug parameter to virt-install.
I slightly altered the --disk option in this run from copy-pasting setups of people with similar issues.
The verbose error looks like so:
https://gist.github.com/Folaht/f5f337449800780c0da1d839171e078d
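The `(null)` strings suggest virt-install ended up with an unset path somewhere, plausibly the per-domain NVRAM file derived from `nvram.template` (a guess, not a confirmed diagnosis). A cheap sanity check before launching is to confirm the loader and template files exist and are readable:

```shell
#!/bin/sh
# Verify the OVMF loader/NVRAM-template paths exist and are readable
# before handing them to virt-install; a bad path here can surface
# later as an opaque libvirt error.
set -eu

check_firmware() {
    for f in "$@"; do
        [ -r "$f" ] || { echo "missing or unreadable: $f" >&2; return 1; }
    done
}

# Usage with the paths from the question:
#   check_firmware /usr/share/edk2-ovmf/x64/OVMF_CODE.fd \
#                  /usr/share/edk2-ovmf/x64/OVMF_VARS.fd
```

It may also be worth noting that the command mixes dotted suboptions (`loader.readonly=yes`, `nvram.template=...`) with the older `loader_secure=no` spelling; recent virt-install releases document the dotted form, `loader.secure=no`.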
Folaht
(1156 rep)
May 2, 2023, 01:06 PM
• Last activity: May 29, 2023, 07:07 PM
0
votes
1
answers
99
views
Automated Ubuntu Desktop Build - DNS Failure on boot
I am trying to create an automated build of an Ubuntu Desktop 22.04.2 using the official guide (https://github.com/canonical/autoinstall-desktop) and then use ansible to configure it. The automated build works as expected, but the ansible configuration fails. After looking at the desktop, it has no...
I am trying to create an automated build of Ubuntu Desktop 22.04.2 using the official guide (https://github.com/canonical/autoinstall-desktop) and then use ansible to configure it. The automated build works as expected, but the ansible configuration fails. After looking at the desktop, it has no DNS servers listed in /etc/resolv.conf. Disabling and re-enabling the NIC from the CLI or the GUI fixes the issue (the router is then listed), but after a reboot the same issue is present again.
Disabling and re-enabling the NIC, doing an apt update and apt upgrade, and then rebooting solves it across reboots. I am doing an apt update and apt upgrade in the automated build process, as well as an apt-get dist-upgrade, but there is a software update called "Ubuntu Base" that seems to show up as a needed update only after a user logs into the GUI for the first time. Once that update is applied and the desktop rebooted, the issue is resolved. I can then run the ansible playbook.
I have tried several methods to bring eth0 down and up as the first step in the ansible playbook, but none of them seems to work. I'm guessing the 'down' works, but the desktop never receives the 'up' once the network goes down. Not quite sure what else to try at this point. Hoping someone else has come across this.
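If the bounce is done from inside the playbook, one option is a `raw`/`shell` task that flips the interface with `ip link` and then waits for a nameserver to reappear before continuing. A sketch (the interface name is an assumption):

```shell
#!/bin/sh
# Wait until a resolv.conf-style file lists a nameserver, e.g. after
# bouncing the NIC from an early Ansible task.
set -eu

has_nameserver() {
    grep -q '^nameserver[[:space:]]' "$1"
}

wait_for_dns() {
    # $1: path to resolv.conf, $2: max seconds to wait
    i=0
    while [ "$i" -lt "$2" ]; do
        if has_nameserver "$1"; then return 0; fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Typical use (interface name assumed):
#   ip link set enp0s3 down && ip link set enp0s3 up
#   wait_for_dns /etc/resolv.conf 30
```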
Samcro1967
(1 rep)
May 9, 2023, 01:26 PM
• Last activity: May 11, 2023, 11:30 AM
1
votes
1
answers
1261
views
VM boots into UEFI Interactive Shell with the filesystem missing
# Setup
## Host
### OS
Manjaro XFCE x86_64
### Apps
packer (plugins: qemu)
virt-install
virt-viewer
virt-manager
## Guest
```
OS: Arch Linux
Hypervisor: QEMU KVM
Architecture: x64
Machine Type: qc35
EFI Firmware Code: /usr/share/edk2-ovmf/x64/OVMF_CODE.fd
EFI Firmware Vars: /usr/share/edk2-ovmf/x64/OVMF_VARS.fd
```
The FS0: entry in the mapping table is completely missing.
And after exiting to the boot manager I find that several boot options are missing as well.
These are the scripts that my packer config loads:
_bastille-installer.pkr.hcl_
So I create a customized Arch Linux image using `packer build` and log my boot options.
_output of echo ">>>> ${NAME_SH}: Show boot option.."_
BootCurrent: 0001
Timeout: 0 seconds
BootOrder: 0007,0000,0001,0002,0003,0004,0005,0006
Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* UEFI QEMU DVD-ROM QM00001 PciRoot(0x0)/Pci(0x1f,0x2)/Sata(0,65535,0){auto_created_boot_option}
Boot0002* UEFI QEMU DVD-ROM QM00005 PciRoot(0x0)/Pci(0x1f,0x2)/Sata(2,65535,0){auto_created_boot_option}
Boot0003* UEFI Misc Device PciRoot(0x0)/Pci(0x3,0x0){auto_created_boot_option}
Boot0004* UEFI PXEv4 (MAC:525400123456) PciRoot(0x0)/Pci(0x2,0x0)/MAC(525400123456,1)/IPv4(0.0.0.00.0.0.0,0,0){auto_created_boot_option}
Boot0005* UEFI PXEv6 (MAC:525400123456) PciRoot(0x0)/Pci(0x2,0x0)/MAC(525400123456,1)/IPv6([::]:[::]:,0,0){auto_created_boot_option}
Boot0006* EFI Internal Shell FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)
Boot0007* GRUB HD(1,GPT,2034b5d2-828a-4491-8d23-fe9439932a12,0x800,0x7d000)/File(\EFI\GRUB\grubx64.efi)
So I'm thinking that this should go fine, but instead when I run this:
_virt-install command_
sudo virt-install \
--name bastille-installer \
--vcpu 2 \
--memory 1024 \
--osinfo archlinux \
--disk /var/lib/libvirt/images/bastille-installer_qemu_archlinux-2023-04.qcow2 \
--import \
--boot uefi
I get to see this:


...
provisioner "shell" {
only = ["qemu.archlinux"]
execute_command = "{{ .Vars }} sudo -E -S bash '{{ .Path }}'"
expect_disconnect = true
scripts = [
"scripts/configure-qemu.sh",
"scripts/configure-shared.sh",
"scripts/partition-table-gpt.sh",
"scripts/partition-ext4-efi.sh",
"scripts/setup.sh"
]
}
...
And this is the script that creates the bootloader.
_partition-ext4-efi.sh_
#!/usr/bin/env bash
. /tmp/files/vars.sh
NAME_SH=partition-ext4-efi.sh
# stop on errors
set -eu
echo ">>>> ${NAME_SH}: Writing Filesystem types.."
mkfs.ext4 -L BOHKS_BAZ ${ROOT_PARTITION}
mkfs.fat -F32 ${BOOT_PARTITION}
echo ">>>> ${NAME_SH}: Mounting partitions.."
/usr/bin/mount ${ROOT_PARTITION} ${ROOT_DIR}
/usr/bin/mkdir -p ${BOOT_DIR}
/usr/bin/mount ${BOOT_PARTITION} ${BOOT_DIR}
echo ">>>> ${NAME_SH}: Bootstrapping the base installation.."
/usr/bin/pacstrap ${ROOT_DIR} base pacman -Qq linux
echo ">>>> ${NAME_SH}: Updating pacman mirrors base installation.."
/usr/bin/arch-chroot ${ROOT_DIR} pacman -S --noconfirm reflector
/usr/bin/arch-chroot ${ROOT_DIR} reflector --latest 5 --protocol https --sort rate --save /etc/pacman.d/mirrorlist
tee /etc/xdg/reflector/reflector.conf &>/dev/null
echo ">>>> ${NAME_SH}: Installing databases.."
/usr/bin/arch-chroot ${ROOT_DIR} pacman -Sy
echo ">>>> ${NAME_SH}: Installing basic packages.."
/usr/bin/arch-chroot ${ROOT_DIR} pacman -S --noconfirm sudo gptfdisk openssh grub efibootmgr dhcpcd netctl
echo ">>>> ${NAME_SH}: Generating the filesystem table.."
/usr/bin/genfstab -U ${ROOT_DIR} | tee -a "${ROOT_DIR}/etc/fstab" >/dev/null
echo ">>>> ${NAME_SH}: Installing grub.."
/usr/bin/arch-chroot ${ROOT_DIR} grub-install --target=x86_64-efi --efi-directory=${ESP_DIR} --bootloader-id=GRUB >/dev/null
/usr/bin/arch-chroot ${ROOT_DIR} grub-mkconfig -o /boot/grub/grub.cfg
echo ">>>> ${NAME_SH}: Show boot option.."
/usr/bin/arch-chroot ${ROOT_DIR} efibootmgr
echo ">>>> ${NAME_SH}: Generating the system configuration script.."
/usr/bin/install --mode=0755 /dev/null "${ROOT_DIR}${CONFIG_SCRIPT}"
In case it's relevant, this is how I create the partition table.
_partition-table-gpt.sh_
#!/usr/bin/env bash
. /tmp/files/vars.sh
# stop on errors
set -eu
NAME_SH=partition-table-gpt.sh
echo ">>>> ${NAME_SH}: Formatting disk.."
sed -e 's/\s*\([\+0-9a-zA-Z]*\).*/\1/' << EOF | gdisk ${DISK}
o
y
n
1
+250M
ef02
n
2
8304
p
w
y
q
EOF
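One detail worth knowing when reading the gdisk session above: it assigns type `ef02` to the first partition, and in gdisk's code table `ef02` is the BIOS boot partition, while the EFI System Partition (the one mounted at `/boot/efi` and required for OVMF booting) is `ef00`. A tiny lookup helper with the standard codes:

```shell
#!/bin/sh
# Map the gdisk type codes used above to their meanings; ef00 vs ef02
# is the important distinction for UEFI booting.
set -eu

gpt_type_name() {
    case "$1" in
        ef00) echo "EFI System Partition" ;;
        ef02) echo "BIOS boot partition" ;;
        8300) echo "Linux filesystem" ;;
        8304) echo "Linux x86-64 root" ;;
        *)    echo "unknown"; return 1 ;;
    esac
}
```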
Relevant var files:
_vars.sh_
...
HOME_DIR=/home/${USER}
SSH_DIR=/home/${USER}/.ssh
ROOT_DIR='/mnt'
BOOT_DIR='/mnt/boot/efi'
FILES_DIR='/tmp/files'
ESP_DIR='/boot/efi'
...
BOOT_PARTITION="${DISK}1"
ROOT_PARTITION="${DISK}2"
...
I'm at a loss as to why I'm not getting my UEFI boot options and why the filesystem is missing in the mapping table.
Folaht
(1156 rep)
Apr 29, 2023, 04:15 PM
• Last activity: May 2, 2023, 01:09 PM
1
votes
0
answers
38
views
is it correct to configure virtualenv in that location (manjaro)
I'm using the "Vanessa" manjaro system. I need to use virtualenv but it is not starting to create the environment.export. I'm following this very common pattern on linux-based systems. **sudo nano ~/.bashrc** export WORKON_HOME=~/.virtualenvs source /usr/bin/virtualenvwrapper.sh **all files are in t...
I'm using the "Vanessa" Manjaro release.
I need to use virtualenv, but it is not creating the environment.
I'm following this very common pattern on Linux-based systems.
**sudo nano ~/.bashrc**
```
export WORKON_HOME=~/.virtualenvs
source /usr/bin/virtualenvwrapper.sh
```
**all files are in that location**
**ls /usr/bin/vi**
```
vi@     virtualenv*                 virtualenvwrapper.sh*
view@   virtualenv3*                visualinfo*
vigr@   virtualenv-clone*           visudo*
vipw*   virtualenvwrapper_lazy.sh*
```
But nothing happens: the script does not run, and the **workon** command is not available.
cardosource
(11 rep)
Dec 5, 2022, 02:39 AM
• Last activity: Dec 5, 2022, 02:43 AM
0
votes
0
answers
250
views
How to achieve FIPS 140-2 with AWS CloudFormation & Packer
I am following this guide for AWS FIPS: https://aws.amazon.com/compliance/fips/ I have added successfully been able to FIPS on AWS AMI EC2 for ASG using the following guide in CloudFormation: https://aws.amazon.com/blogs/publicsector/enabling-fips-mode-amazon-linux-2/ The Jenkins Pipeline Bake AMI i...
I am following this guide for AWS FIPS: https://aws.amazon.com/compliance/fips/
I have successfully been able to enable FIPS on an AWS EC2 AMI for an ASG using the following guide in CloudFormation: https://aws.amazon.com/blogs/publicsector/enabling-fips-mode-amazon-linux-2/
The Jenkins pipeline's bake-AMI stage builds successfully; however, it is still not fully working, with the following error:
java.lang.NoClassDefFoundError: Could not initialize class sun.security.ssl.SignatureScheme
Any ideas as to how I can fix this? Could this be an NGINX issue?
ianh11
(1 rep)
Sep 12, 2022, 04:08 PM
• Last activity: Sep 12, 2022, 04:39 PM
5
votes
1
answers
12232
views
Errors during downloading metadata for repository 'epel'
>**What specific syntax must be changed in the `cloud-init` startup script excerpt below in order to handle the error message shown below by retrying something else until it correctly works without throwing an error?** #### Command That triggers Error: The command in our startup script that seems to...
>**What specific syntax must be changed in the `cloud-init` startup script excerpt below in order to handle the error message shown below, by retrying something else until it correctly works without throwing an error?**
#### Command That triggers Error:
The command in our startup script that seems to be triggering the error is:
dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
#### Error Message:
The error message seems to be:
azure-arm: Errors during downloading metadata for repository 'epel':
azure-arm: - Status code: 503 for https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=$infra&content=$contentdir (IP: 123.45.678.901)
azure-arm: - Status code: 503 for https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=$infra&content=$contentdir (IP: 123.45.678.908)
azure-arm: - Status code: 503 for https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=$infra&content=$contentdir (IP: 98.765.43.21)
azure-arm: Error: Failed to download metadata for repo 'epel': Cannot prepare internal mirrorlist: Status code: 503 for https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=x86_64&infra=$infra&content=$contentdir (IP: 86.753.09.11)
#### The Context
A RHEL 7 image is being built in Azure by Packer using a `cloud-init` startup script. Normally, the build works correctly. However, right now the build is failing when the line given above throws the error given above, due to some dependency problem.
How would we need to rewrite the lines around the breaking line in order for the install to complete without error?
How would we need to re-write the lines around the line that is breaking in order for the install to complete without error?
Our requirement is to do the `dnf install` directly from a specific `rpm` file as above, but what do we change to keep the process from failing on the rare occasions when the URL given for the rpm is not responding correctly?
The automation that includes the build takes a long time to run before it gets to the point where this error is thrown.
Handling this error would thus eliminate a lot of wasted time by preventing the scenario of needing to re-run long automation processes.
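The usual shape for this, sketched here rather than as a drop-in fix (the attempt count and delay are illustrative), is a bounded retry wrapper around the failing command:

```shell
#!/bin/sh
# Retry a command up to N times with a fixed delay between attempts;
# returns the last exit status if every attempt fails.
set -u

retry() {
    attempts=$1; delay=$2; shift 2
    n=1
    while :; do
        "$@" && return 0
        status=$?
        [ "$n" -ge "$attempts" ] && return "$status"
        echo "attempt $n failed (status $status); retrying in ${delay}s" >&2
        n=$((n + 1))
        sleep "$delay"
    done
}

# Illustrative use in the cloud-init script:
#   retry 5 30 dnf install -y \
#       https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```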
#### Results of @Haxiel's suggested code
We tried the code suggested by @Haxiel in an answer posted below, but we got the following error as a result.
>**What specific syntax must be changed to resolve this error, to solve the original problem posted in this OP?**
azure-arm: + for repourl in "https://fedora.cu.be/epel " "https://lon.mirror.rackspace.com/epel " "https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel "
azure-arm: + curl --silent --fail --max-time 5 https://fedora.cu.be/epel
azure-arm: + echo 'Repository reachable.'
azure-arm: Repository reachable.
azure-arm: + for repourl in "https://fedora.cu.be/epel " "https://lon.mirror.rackspace.com/epel " "https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel "
azure-arm: + curl --silent --fail --max-time 5 https://lon.mirror.rackspace.com/epel
azure-arm: + echo 'Repository reachable.'
azure-arm: Repository reachable.
azure-arm: + for repourl in "https://fedora.cu.be/epel " "https://lon.mirror.rackspace.com/epel " "https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel "
azure-arm: + curl --silent --fail --max-time 5 https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel
azure-arm: + echo 'Repository reachable.'
azure-arm: Repository reachable.
azure-arm: + sudo dnf --cacheonly -y install https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel/epel-release-latest-8.noarch.rpm
azure-arm: Last metadata expiration check: 0:11:50 ago on Mon 07 Feb 2022 05:36:46 PM UTC.
azure-arm: epel-release-latest-8.noarch.rpm 37 kB/s | 23 kB 00:00
azure-arm: Dependencies resolved.
azure-arm: ================================================================================
azure-arm: Package Architecture Version Repository Size
azure-arm: ================================================================================
azure-arm: Installing:
azure-arm: epel-release noarch 8-13.el8 @commandline 23 k
azure-arm:
azure-arm: Transaction Summary
azure-arm: ================================================================================
azure-arm: Install 1 Package
azure-arm:
azure-arm: Total size: 23 k
azure-arm: Installed size: 35 k
azure-arm: Downloading Packages:
azure-arm: Running transaction check
azure-arm: Transaction check succeeded.
azure-arm: Running transaction test
azure-arm: Transaction test succeeded.
azure-arm: Running transaction
azure-arm: Preparing : 1/1
azure-arm: Installing : epel-release-8-13.el8.noarch 1/1
azure-arm: Running scriptlet: epel-release-8-13.el8.noarch 1/1
azure-arm: Verifying : epel-release-8-13.el8.noarch 1/1
azure-arm: Installed products updated.
azure-arm:
azure-arm: Installed:
azure-arm: epel-release-8-13.el8.noarch
azure-arm:
azure-arm: Complete!
azure-arm: + sed -i '/^metalink.*/d' /etc/yum.repos.d/epel-modular.repo /etc/yum.repos.d/epel-playground.repo /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing-modular.repo /etc/yum.repos.d/epel-testing.repo
azure-arm: + sed -i 's|^#baseurl.*|baseurl=https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel|g ' /etc/yum.repos.d/epel-modular.repo /etc/yum.repos.d/epel-playground.repo /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing-modular.repo /etc/yum.repos.d/epel-testing.repo
azure-arm: + dnf install -y telnet
azure-arm: Extra Packages for Enterprise Linux 8 - x86_64 205 B/s | 196 B 00:00
azure-arm: Errors during downloading metadata for repository 'epel':
azure-arm: - Status code: 404 for https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel/repodata/repomd.xml (IP: 123.45.678.90)
azure-arm: Error: Failed to download metadata for repo 'epel': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
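The 404 on `repodata/repomd.xml` suggests the `sed` replacement pointed `baseurl` at the mirror root: the stock EPEL `baseurl` lines end in `$releasever/Everything/$basearch/`, and that suffix must survive the rewrite. A sketch (the mirror URL is the one from the log; the stock host portion varies between epel-release versions, so the pattern matches any host):

```shell
# Rewrite the commented-out stock baseurl lines in the given EPEL repo files
# to a fixed mirror, keeping the $releasever/Everything/$basearch/ suffix.
fix_epel_baseurl() {
  sed -i \
    's|^#baseurl=https\?://[^/]*/pub/epel|baseurl=https://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora-projects/epel|' \
    "$@"
}

# On the build VM:
#   fix_epel_baseurl /etc/yum.repos.d/epel*.repo
```

Unlike the `sed` line in the log above, this replaces only the URL prefix, not the whole line.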
CodeMed
(5357 rep)
Feb 5, 2022, 02:52 AM
• Last activity: Feb 8, 2022, 04:24 AM
1
votes
1
answers
3508
views
How to use json variable in shell script file?
I would like to use this json file to get jenkins version and java version numbers to include my `script.sh` file. How do we do that? I tried ``{{user `java_version`}}`` but it did not work. * `variable.json` file ```json { "region": "us-east-1", "jenkins_version": "2.263.4", "java_version": "1.8.0"...
I would like to use this json file to get jenkins version and java version numbers to include my
script.sh
file. How do we do that?
I tried `` {{user `java_version`}} `` but it did not work.
* variable.json
file
{
"region": "us-east-1",
"jenkins_version": "2.263.4",
"java_version": "1.8.0"
}
* script.sh
file
#!/bin/bash
sudo yum update -y
sudo yum install wget -y
sudo yum install java-{{user `java_version`}}-openjdk-devel -y
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install -y jenkins-{{user `java_version`}}
sudo systemctl start jenkins
sudo systemctl status jenkins
#Get auth password from jenkins master
echo "authpwd="$(sudo cat /var/lib/jenkins/secrets/initialAdminPassword)
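One way to get the values into the script (a sketch, not from the original template; Packer only expands `{{user ...}}` inside the template itself, never inside external script files, which is why the attempt above did not work) is to let the shell provisioner export the user variables as environment variables:

```json
"provisioners": [
  {
    "type": "shell",
    "script": "script.sh",
    "environment_vars": [
      "JAVA_VERSION={{user `java_version`}}",
      "JENKINS_VERSION={{user `jenkins_version`}}"
    ]
  }
]
```

In `script.sh` the install line then reads the plain shell variable, e.g. `sudo yum install "java-${JAVA_VERSION}-openjdk-devel" -y`.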
M.Turan
(13 rep)
Feb 26, 2021, 01:10 AM
• Last activity: Feb 26, 2021, 08:11 AM
0
votes
0
answers
218
views
Centos8 image creation through packer on azure results in errors
Can somebody tell me what I am doing wrong here as I am unable to create the image? actually the commands, when I run on my host Centos machine, works fine but when I execute the same commands through packer result into error. JSON file: { "builders": [{ "type": "azure-arm", "client_id": "{{user `az...
Can somebody tell me what I am doing wrong here, as I am unable to create the image? The commands work fine when I run them on my CentOS host machine, but when I execute the same commands through Packer they result in an error.
JSON file:
{
    "builders": [{
      "type": "azure-arm",
      "client_id": "{{user `azure-client-id`}}",
      "client_secret": "{{user `azure-client-secret`}}",
      "tenant_id": "{{user `azure-tenant-id`}}",
      "subscription_id": "{{user `azure-subscription-id`}}",
      "managed_image_resource_group_name": "{{user `azure-resource-group`}}",
      "managed_image_name": "CentOS8-Packer",
      "os_type": "Linux",
      "image_publisher": "OpenLogic",
      "image_offer": "CentOS",
      "image_sku": "8_2-gen2",
      "location": "{{user `azure-region`}}",
      "vm_size": "{{user `vm-size`}}"
    }],
    "provisioners": [
      {
        "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
        "script": "ami-script.sh",
        "type": "shell"
      }
    ]
}
ami-script.sh content:
#!/bin/bash -e
dnf update -y
dnf install terminator xrdp -y
Error:
Updated part:
dnf update -y
error:



zuri_nahk
(13 rep)
Aug 27, 2020, 07:35 PM
• Last activity: Aug 27, 2020, 09:41 PM
0
votes
2
answers
1190
views
Missing chrony package in Ubuntu Bionic
I am using HashiCorp Packer to build a new AWS AMI. I want to preinstall the NTP client Chrony (it's popular in our organization and it will get config support from people outside our team). But when I use the AMI and do apt-get update apt-get install -y chrony I get Package 'chrony' has no installa...
I am using HashiCorp Packer to build a new AWS AMI. I want to preinstall the NTP client Chrony (it's popular in our organization and it will get config support from people outside our team). But when I use the AMI and do
apt-get update
apt-get install -y chrony
I get
Package 'chrony' has no installation candidate
with some other interesting bits from the packer build log:
amazon-ebs: Reading package lists...
amazon-ebs: Building dependency tree...
amazon-ebs: Reading state information...
amazon-ebs: Package chrony is not available, but is referred to by another package.
amazon-ebs: This may mean that the package is missing, has been obsoleted, or
amazon-ebs: is only available from another source
Which is odd, I'm not touching the
/etc/apt/sources.list
. If I cat
it from the packer environment it looks like (edited to remove Ubuntu inline comments):
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic universe
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic-updates universe
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic multiverse
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic-updates multiverse
amazon-ebs: deb http://us-west-2.ec2.archive.ubuntu.com/ubuntu/ bionic-backports main restricted universe multiverse
amazon-ebs: deb http://security.ubuntu.com/ubuntu bionic-security main restricted
amazon-ebs: deb http://security.ubuntu.com/ubuntu bionic-security universe
amazon-ebs: deb http://security.ubuntu.com/ubuntu bionic-security multiverse
Seems like that should be sufficient to find chrony
? I have also confirmed that chrony is in the bionic distro, it has a package page here: https://packages.ubuntu.com/bionic/chrony .
Does apt have different rules for resolving dependencies when run from packer?
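A frequent cause in Packer AMI builds (a guess, not confirmed by the post) is a race with cloud-init: on first boot of an EC2 instance, cloud-init is still rewriting the apt sources and indexes when the provisioner's `apt-get update` runs. Waiting for it and checking the candidate narrows the problem down:

```shell
# Wait for cloud-init to finish (it rewrites the apt sources on first boot
# of an EC2 instance), refresh the indexes, then check which repository
# apt would pull chrony from before installing it.
cloud-init status --wait
apt-get update
apt-cache policy chrony   # should list a candidate from the bionic archive
apt-get install -y chrony
```

If `apt-cache policy chrony` shows no candidate even after the wait, the sources really are incomplete; otherwise the race was the problem.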
xrl
(101 rep)
Jun 17, 2020, 02:24 PM
• Last activity: Aug 12, 2020, 11:19 PM
1
votes
0
answers
590
views
packer - hyperv - centos dracut initqueue timeout errors
facing errors while using packer to build an image with hyperv for centos8 attaching logs at ends 1. its picking the ks.cfg, 2. its getting the ip as well from dhcp, i could see in the rdsosreport.txt as well 3. but still the dracut initqueue remains the same error could not boot the centos8 to buil...
Facing errors while using Packer to build a CentOS 8 image with Hyper-V; logs attached at the end.
1. It picks up the ks.cfg.
2. It gets an IP from DHCP as well; I can see it in rdsosreport.txt.
3. But the dracut-initqueue timeout error remains, and I could not boot CentOS 8 to build the image using Packer on Hyper-V.

snehid
(23 rep)
Jan 27, 2020, 11:42 PM
• Last activity: Jan 29, 2020, 02:59 AM
3
votes
0
answers
1183
views
QEMU: How to convert -net flags into -device & -netdev
I'm trying to emulate Raspberry Pi via QEMU and the following works for me: ```sh qemu-system-arm \ -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \ -boot c \ -cpu arm1176 \ -drive "file=2019-04-08-raspbian-stretch-lite.img,if=scsi,cache=none,discard=ignore,format=raw" \ -kernel ./kernel-qemu-4...
I'm trying to emulate Raspberry Pi via QEMU and the following works for me:
qemu-system-arm \
-append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
-boot c \
-cpu arm1176 \
-drive "file=2019-04-08-raspbian-stretch-lite.img,if=scsi,cache=none,discard=ignore,format=raw" \
-kernel ./kernel-qemu-4.4.34-jessie \
-m 256M \
-machine type=versatilepb,accel=tcg \
-name packer-qemu \
-no-reboot \
-vnc 127.0.0.1:4 \
-net nic \
-net user,id=user.0,hostfwd=tcp::5555-:22
and I'm able to both VNC in via 5904
and SSH in via 5555
(after starting SSHd via VNC). In other words network seems to be set up correctly.
As I discovered [-net
option has been deprecated](https://wiki.qemu.org/Documentation/Networking#The_legacy_-net_option) in favour of -device
& -netdev
, so I'd like to translate the above two last flags into "new QEMU".
It appears that the new -device
flag forces me to pick a driver, which isn't the case with -net
. I like explicitness, but how do I know what is the default/implicit driver?
Port forwarding in the following example doesn't seem to work anymore (I can't SSH in; connection times out):
qemu-system-arm \
-append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" \
-boot c \
-cpu arm1176 \
-drive "file=2019-04-08-raspbian-stretch-lite.img,if=scsi,cache=none,discard=ignore,format=raw" \
-kernel ./kernel-qemu-4.4.34-jessie \
-m 256M \
-machine type=versatilepb,accel=tcg \
-name packer-qemu \
-no-reboot \
-vnc 127.0.0.1:4 \
-device e1000,netdev=user.0 \
-netdev user,id=user.0,hostfwd=tcp::5555-:22
Am I just using the wrong driver?
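Likely yes (an inference from the machine type, not confirmed in the post): on `versatilepb` the board's built-in NIC is `smc91c111`, so the legacy `-net nic` was implicitly picking that model, while `-device e1000` attaches a card the guest kernel may have no driver for. A sketch of the equivalent flags:

```shell
# List the network devices this QEMU binary provides
# (look under "Network devices"):
qemu-system-arm -device help

# New-style equivalent of "-net nic -net user,..." on versatilepb —
# append to the original command line in place of the two -net flags:
#   -device smc91c111,netdev=user.0 \
#   -netdev user,id=user.0,hostfwd=tcp::5555-:22
```

The default model per board is also printed by `-net nic,model=help` on QEMU versions that still accept the legacy option.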
----
QEMU 3.1.0
(installed from Homebrew)
(Host) MacOS 10.14.4
Radek Simko
(131 rep)
Apr 17, 2019, 07:24 PM
• Last activity: Jan 22, 2020, 07:39 AM
1
votes
0
answers
569
views
SSH server not booting in Packer QEMU Debian VM
I'm using Packer to boot a pre-installed Debian Buster image on QEMU to provision it for deployment, but whenever Packer boots the image it is unable to connect over SSH. The Debian image is the default cd install fully updated and with `openssh-server` installed and `PermitRootLogin` set to `yes`....
I'm using Packer to boot a pre-installed Debian Buster image on QEMU to provision it for deployment, but whenever Packer boots the image it is unable to connect over SSH.
The Debian image is the default cd install fully updated and with
openssh-server
installed and PermitRootLogin
set to yes
. Packer will boot the image and try to connect over SSH, but the SSH server doesn't start until I connect to the VNC.
Here is the QEMU command being run:
/usr/bin/qemu-system-x86_64 -name packer-qemu -machine type=pc,accel=tcg -netdev user,id=user.0,hostfwd=tcp::3841-:22 -device virtio-net,netdev=user.0 -drive file=image/packer-qemu,if=virtio,cache=writeback,discard=ignore,format=qcow2 -boot c -m 1024 -vnc 127.0.0.1:0
I have confirmed this issue with multiple machines and multiple versions of QEMU.
I can reproduce the issue without Packer, just running the QEMU commands myself.
Looking at systemd-analyze
, the ssh.service
just hangs until the VNC is connected.
If I put the ssh server into debug mode I get a **Missing privilege separation directory: /run/sshd** error. The directory exists if the VNC is connected and the SSH server is not in debug.
My hunch is that QEMU isn't creating a console port, or something along those lines, but I'm not sure how to fix it.
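Another plausible explanation (not confirmed in the post) is boot-time entropy starvation, a well-known issue with Debian Buster guests: early-boot services block on the kernel RNG until the VNC connection generates input events, which matches "hangs until the VNC is connected". Feeding the guest a virtio RNG is the usual fix:

```shell
# Flags to append to the original qemu-system-x86_64 command line: expose
# the host's /dev/urandom to the guest as a virtio hardware RNG so the
# entropy pool fills without console activity.
#   -object rng-random,filename=/dev/urandom,id=rng0 \
#   -device virtio-rng-pci,rng=rng0
```

If `systemd-analyze blame` stops showing the long `ssh.service` delay with these flags, entropy was the culprit.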
Here is my Packer file:
{
"builders": [
{
"type": "qemu",
"iso_url": "file:/home/user/buster.img",
"iso_checksum_type": "none",
"disk_image": "true",
"vm_name": "base.raw",
"headless": "true",
"cpus": "1",
"memory": "1024",
"boot_wait": "2m",
"shutdown_command": "systemctl poweroff",
"ssh_timeout": "2m",
"ssh_username": "root",
"ssh_password": "password",
"output_directory": "image"
}
]
}
Cirrith
(11 rep)
Jan 22, 2020, 01:37 AM
• Last activity: Jan 22, 2020, 07:36 AM