Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
0
answers
87
views
Is the TASK-PID in trace-cmd output the TID of the thread handling TAP interface I/O?
I'm working on a networking lab tool that leverages `QEMU`-based VM virtualization and `docker` technology to run VMs and containers respectively on a Linux host. The underlying lab connectivity is implemented using Linux bridges.
I have a Linux Ubuntu guest running inside a QEMU VM that features a `virtio-net` paravirtualized interface with a `TAP` backend. That `TAP` interface is connected to a port of a Linux bridge on the host.
root@eve-ng62-28:~# brctl show vnet0_3
bridge name bridge id STP enabled interfaces
vnet0_3 8000.d63b1f37e4ba no vnet0_9_2
vunl0_3_3
vunl0_7_0
vunl0_9_2
root@eve-ng62-28:~#
root@eve-ng62-28:~# ethtool -i vunl0_7_0
driver: tun
version: 1.6
firmware-version:
expansion-rom-version:
bus-info: tap
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root@eve-ng62-28:~#
I'm using Linux `ftrace` via the `trace-cmd` front end to dig into some details; see also https://unix.stackexchange.com/questions/797717/tcp-checksum-offloading-on-virtio-net-paravirtualized-interfaces
root@eve-ng62-28:~# trace-cmd start -e net:netif_receive_skb_entry -f "name == 'vunl0_7_0'"
root@eve-ng62-28:~#
root@eve-ng62-28:~# trace-cmd show
# tracer: nop
#
# entries-in-buffer/entries-written: 1/1 #P:48
#
# _-----=> irqs-off/BH-disabled
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / _-=> migrate-disable
# |||| / delay
# TASK-PID CPU# ||||| TIMESTAMP FUNCTION
# | | | ||||| | |
qemu-system-x86-600348 b.... 66505.777999: netif_receive_skb_entry: dev=vunl0_7_0 napi_id=0x0 queue_mapping=1 skbaddr=0000000006a1cc35 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=3 hash=0x00000000 l4_hash=0 len=60 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0x0
As you can see, the Linux guest sends outgoing TCP packets to the `virtio-net` network interface with the `CHECKSUM_PARTIAL` (3) tag set in the `ip_summed` field of the `sk_buff` struct.
My question is about the TASK-PID field shown by `trace-cmd show`. `600348` is the PID of the `qemu-system-x86_64` process instance associated with the VM. As requested, I'm editing to state the question explicitly: is the TASK-PID shown the PID or the TID of the process/thread in whose context the TAP driver runs?
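Whether that number is a PID or a TID can be checked directly from `/proc` (ftrace's TASK-PID column records the id of the kernel task that was running, which for a multi-threaded process is a per-thread id): every thread of a process appears under `/proc/<pid>/task/` with its own TID. A minimal sketch, using the current shell as a stand-in since PID `600348` only exists on my host:

```shell
# Each entry under /proc/<pid>/task/ is the TID of one thread of <pid>;
# for the main thread the TID equals the PID.
pid=$$                    # stand-in for the QEMU process PID
ls "/proc/$pid/task"      # lists one TID per thread
# ps can print both ids side by side (PID stays constant, LWP is the TID):
#   ps -L -p "$pid" -o pid,lwp,comm
```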
CarloC
(385 rep)
Jul 9, 2025, 01:03 PM
• Last activity: Jul 12, 2025, 07:42 PM
0
votes
0
answers
24
views
Linux bridge forwarding from/to TAP interfaces
As explained here in my own Q&A, reconsider the following scenario. A Linux host has a two-port Linux bridge with two Linux guest VMs connected to it: the first bridge port is connected to `TAP` interface `tap0`, the second to `tap1`. `tap0` and `tap1` are the backend `TAP` interfaces associated with the `virtio-net` (frontend) interfaces, each exposed to a QEMU-based VM (let's say `VM0` and `VM1`).
As far as I can tell, when `VM0` sends a frame/packet targeted at `VM1`, the userland code of `VM0`'s QEMU process calls the `write()` syscall on the fd the `virtio-net` interface is associated with. From the viewpoint of the `tap0` driver code, the RX path is involved (basically `tap0` is receiving a packet/frame from its "logical wire"), hence, for instance, the kernel `netif_receive_skb()` function is executed in the context of `VM0`'s QEMU process.
Furthermore, the packet/frame is forwarded by the Linux bridge to the `tap1` interface, so from the viewpoint of the `tap1` driver code the TX path is involved (basically `tap1` is transmitting a packet/frame on its "logical wire"); hence, for instance, the kernel `net_dev_xmit()` function also runs in the context of `VM0`'s QEMU process.
Does that make sense? Thanks.
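One way to check this reasoning empirically (a sketch, assuming the hypothetical `tap0`/`tap1` names from the scenario, root privileges, and a mounted tracefs) is to trace both ends and compare the TASK-PID column of the two events:

```shell
# If the reasoning holds, both events should be attributed to the same
# task: a vCPU/IO thread of VM0's QEMU process.
trace-cmd start \
    -e net:netif_receive_skb_entry -f "name == 'tap0'" \
    -e net:net_dev_xmit            -f "name == 'tap1'"
# ...generate traffic from VM0 to VM1, then inspect and stop:
trace-cmd show
trace-cmd stop
```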
CarloC
(385 rep)
Jul 11, 2025, 10:19 AM
• Last activity: Jul 11, 2025, 11:58 AM
2
votes
1
answers
207
views
TCP checksum offloading on virtio-net paravirtualized interfaces
Consider a topology where two QEMU VMs running Linux Ubuntu `16.04` with kernel `4.4.0-210` both have `virtio-net` interfaces with `TAP` backends connected to the same (host) Linux bridge, and an `SSH` connection running between them.
ubuntu@VM1:~$ uname -a
Linux VM1 4.4.0-210-generic #242-Ubuntu SMP Fri Apr 16 09:57:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@VM1:~$
Both VMs use paravirtualized `virtio-net` interfaces, which default to TX and RX checksum offloading.
ubuntu@VM1:~$ ethtool -i eth0
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:03.0
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
ubuntu@VM1:~$
ubuntu@VM1:~$ ethtool -k eth0 | grep -i sum
rx-checksumming: on [fixed]
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
ubuntu@VM1:~$
ubuntu@VM2:~$ ethtool -k eth0 | grep -i sum
rx-checksumming: on [fixed]
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
ubuntu@VM2:~$
That actually means:
- the kernel network stack sends out SSH/TCP packets without computing and filling in the TCP checksum field (i.e., the TCP checksum inside the packets sent is either zeroed out or incorrect)
- the kernel network stack assumes the `virtio-net` interface has already checked/verified the TCP checksum of received SSH/TCP packets and is therefore allowed to skip that check
Hence the SSH connection works even though the SSH/TCP packets in flight carry an *incorrect* TCP checksum (`tcpdump` run inside both VMs confirms this).
Later, after changing the topology by connecting each VM to a different Linux bridge with a virtual router in the middle, the SSH connection suddenly stopped working. I double-checked that the virtual router actually forwards TCP/SSH packets *as-is* from one bridge to the other (in both directions), so I don't understand why the SSH connection stopped working this time.
What is going on in the latter case? Thanks.
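One experiment worth running here (a sketch, not a definitive fix): disable TX checksum offload inside the guests so the stack computes real checksums before packets leave the VM, then retest through the router. If SSH then works, the router path is the component that cannot cope with `CHECKSUM_PARTIAL` packets:

```shell
# Inside each guest (needs root): turn off TX checksum offload so
# ip_summed is no longer CHECKSUM_PARTIAL and outgoing packets carry
# a valid TCP checksum.
ethtool -K eth0 tx off
ethtool -k eth0 | grep -i '^tx-checksumming'   # should now report: off
```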
CarloC
(385 rep)
Jul 7, 2025, 05:59 AM
• Last activity: Jul 7, 2025, 09:09 AM
0
votes
0
answers
47
views
Screen glitching when running VM on qemu/kvm
I'm seeing this weird screen glitching when I start a VM on Arch Linux under Wayland with Virtio video, 3D acceleration, a SPICE server and OpenGL enabled. Am I missing some kind of driver? When I start the VM with OpenGL disabled, it works fine.
- OS: Arch Linux - Wayland
- Memory: 62.2 GiB
- Architecture: x86_64
- GPU: Intel(R) Arc(tm) Graphics (MTL) || Intel open-source Mesa driver || Mesa 25.1.2-arch1.1
Parker
(191 rep)
Jun 7, 2025, 10:04 PM
1
votes
0
answers
51
views
AIX 7.2 on QEMU, virtio devices
I've had an AIX 7.2 VM successfully installed on QEMU for some time. I've recently started experimenting to make it a bit faster. Switching from qcow2 to raw plus io_uring made a good difference.
I'm also trying to switch from the spapr-vscsi & spapr-vlan devices to virtio. Virtio-scsi works fine, but I have issues with virtio-net.
While booting, I can see the virtio-net device in the firmware:
Populating /pci@800000020000000
00 0000 (D) : 1234 1111 qemu vga
00 0800 (D) : 1033 0194 serial bus [ usb-xhci ]
00 1000 (D) : 1af4 1004 virtio [ scsi ]
Populating /pci@800000020000000/scsi@2
SCSI: Looking for devices
100000000000000 DISK : "QEMU QEMU HARDDISK 2.5+"
00 1800 (D) : 1af4 1000 virtio [ net ]
On AIX:
$ lsdev -Cc adapter
ent0 Defined Virtual I/O Ethernet Adapter (l-lan)
ent1 Defined Virtual I/O Ethernet Adapter (l-lan)
ent2 Available 00-18 Virtio NIC Client Adapter (f41a0100)
hdcrypt Available Data encryption
pkcs11 Available ACF/PKCS#11 Device
scsi0 Available 00-10 Virtio SCSI Client Adapter (f41a0800)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Defined Virtual SCSI Client Adapter
(I have only one NIC on the VM, but I get the same result with spapr-vlan enabled, where networking works fine)
$ prtconf
* ent2 qemu_virtio-net-pci:0000:00:03.0 Virtio NIC Client Adapter (f41a0100)
But unfortunately `ifconfig -a` shows only the loopback, lo0. Any ideas how to enable the NIC (en2, I suppose) so that it is listed in ifconfig and can be configured?
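For what it's worth, a sketch of the usual AIX steps (hostname, address and mask are placeholders): the `en2` interface normally has to be created and configured on top of the `ent2` adapter before it appears in ifconfig:

```shell
# Re-run the configuration manager against the adapter so the en2
# interface device gets created, then configure TCP/IP on it.
cfgmgr -l ent2
mktcpip -h aixvm -a 192.168.1.10 -m 255.255.255.0 -i en2
```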
Krackout
(2887 rep)
Mar 24, 2025, 01:20 PM
1
votes
1
answers
896
views
How to get write permissions on Linux host from Windows 10 guest, using virtiofs shared folders
I am trying to share a folder from an Ubuntu 20.04.3 host with a Windows 10 build 19042 (20H2) guest, using QEMU 5.2 / libvirt 7.0.0 on the host and virtio-win 0.1.208 (driver 100.85.104.20800 and associated virtiofs service) on the guest.
So far I am able to read files in this host folder without problems from the guest. However I can only create/write/delete files if
1. I use a shell (Windows CMD or Cygwin bash) with *Administrator* rights on the guest
OR
2. I change the folder permissions on the host, giving write permissions to "other".
Neither of these options is acceptable as a permanent solution.
I already toyed with various settings for "user" in /etc/libvirt/qemu.conf, including root and the user owning the shared folder (myself), without success. I do struggle to understand what ultimately determines the write permissions on the host folder. I had assumed this to be related to the UID of one of the hypervisor processes, so I do not see why running as Administrator or not on the guest should make a difference.
Can anyone shed some light on this? Has anyone been more successful?
For info: the relevant section of the QEMU domain configuration uses a virtiofs `<filesystem>` element; note that virtiofs requires `accessmode='passthrough'`.
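The XML itself was lost in formatting above; for reference, a typical virtiofs share section in a libvirt domain looks roughly like this (directory and tag names are placeholders, not the asker's actual config):

```xml
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/home/user/shared'/>
  <target dir='hostshare'/>
</filesystem>
```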
ohrenbaer
(11 rep)
Oct 23, 2021, 02:52 PM
• Last activity: Jan 7, 2025, 03:00 PM
1
votes
0
answers
34
views
What is needed to enable RPMsg IPC on SoC
I am working with an Intel SoC with a Hard Processor System (HPS), an Arm 64 core, and an FPGA fabric with a Nios soft processor. I would like to implement message passing between these two processors using RPMsg. Intel has a hardware mailbox IP which we have connected appropriately, and dual-port on-chip RAM has been instantiated in the hardware design and connected to both the HPS and the Nios. My understanding is that I need to develop a *remoteproc* driver which incorporates the mailbox notification setup and, more importantly, creates the resource table and instantiates the appropriate Virtio structures to enable the IPC. Overall, I would like to:
- Statically allocate the necessary virtio resources (vrings, message buffers) in the shared OCRAM.
- Incorporate the platform specific mailbox driver (drivers/mailbox/mailbox-altera.c).
- Create an interface for RPMsg (probably a char device, expose a device file to userspace for reading and writing).
I have been digging in the kernel source, and I can't figure out what parts of this framework are platform agnostic and available for my use, and which components are board specific. My questions are:
1. Since I don't actually need to perform any life-cycle management of the remote processor (Nios), I just need remoteproc to handle the virtio resources. Which `rproc_ops` do I need to implement?
2. How would I go about allocating my virtio resources statically in specific physical memory regions within my remoteproc driver? Do I need to make carveouts in the resource table, or is there another way to just force the virtio resources to be in the OCRAM? How is this communicated between the processors?
3. Assuming I have my remoteproc set up sufficiently, can I use /drivers/rpmsg/rpmsg_char.c off the shelf? Or do I need to create a different RPMsg client?
4. In general, which kernel source files in this whole framework are platform agnostic (and available for me to use)? I can't tell.
The Nios will be running a RTOS with OpenAMP or rpmsg-lite, but I'll cross that bridge after I deal with the kernel side.
Any guidance would be greatly appreciated!
user667370
(11 rep)
Oct 30, 2024, 09:35 PM
3
votes
1
answers
3056
views
No network connectivity with virtio drivers under WinXP guest (libvirt/Qemu on Linux host)
I've been using `libvirt` for a couple of years now and it's worked a treat so far. Until recently, that is (probably after a few system updates on my Manjaro Linux host): none of my Windows (XP) guests have network connectivity with virtio drivers anymore. Instead I must switch to `rtl8139`, then network connectivity works fine. As a corollary, I have to wait a very long time in my Windows guests until I can finally check the network adapter settings; otherwise no network icon appears, nor does the Network Connections window open when I right-click on Network Favourites and select Properties. The wait period occurs regardless of whether I set IP addresses on the interfaces manually or use DHCP.
I started to notice this issue while booting my old Windows XP virtual machine. It had virtio drivers version **0.1.106** (or close) installed back then. So I upgraded the `virtio` network drivers first, like I did in the past. Something odd though: updating the driver took forever and I had to forcibly power off the VM and restart it. I also uninstalled the drivers completely after I switched to `rtl8139`, then re-installed them (using the Windows Device Manager non-present-devices trick). No change.
I have tried `virtio` drivers from the Fedora project, version **0.1.135** (latest) and **0.1.126** (stable). No difference. The previous drivers, which used to work back then, were from 2013. Needless to say they don't work now either. It looks like only my Windows guests are affected: none of my old Linux guests exhibit the glitch, as they all receive an IP address from my host's `dnsmasq` daemon.
Does anyone have an idea?
(N.B.: The event log doesn't reveal anything going wrong. That said, that's no surprise to me.)
user86969
Apr 20, 2017, 09:04 PM
• Last activity: Oct 16, 2024, 09:03 PM
5
votes
1
answers
1566
views
QEMU kernel for raspberry pi 3 with networking and virtio support
I used QEMU (`qemu-system-aarch64 -M raspi3`) to emulate the Raspberry Pi 3 with the kernel from a working image. Everything worked, but there was no networking.
qemu-system-aarch64 \
-kernel ./bootpart/kernel8.img \
-initrd ./bootpart/initrd.img-4.14.0-3-arm64 \
-dtb ./debian_bootpart/bcm2837-rpi-3-b.dtb \
-M raspi3 -m 1024 \
-nographic \
-serial mon:stdio \
-append "rw earlycon=pl011,0x3f201000 console=ttyAMA0 loglevel=8 root=/dev/mmcblk0p3 fsck.repair=yes net.ifnames=0 rootwait memtest=1" \
-drive file=./genpi64lite.img,format=raw,if=sd,id=hd-root \
-no-reboot
I tried to add these options:
-device virtio-blk-device,drive=hd-root \
-netdev user,id=net0,hostfwd=tcp::5555-:22 \
-device virtio-net-device,netdev=net0 \
But that produces an error:
qemu-system-aarch64: -device virtio-blk-device,drive=hd-root: No 'virtio-bus' bus found for device 'virtio-blk-device'
I referenced some forums and used the `virt` machine instead of raspi3 in order to emulate virtio networking:
qemu-system-aarch64 \
-kernel ./bootpart/kernel8.img \
-initrd ./bootpart/initrd.img-4.14.0-3-arm64 \
-m 2048 \
-M virt \
-cpu cortex-a53 \
-smp 8 \
-nographic \
-serial mon:stdio \
-append "rw root=/dev/vda3 console=ttyAMA0 loglevel=8 rootwait fsck.repair=yes memtest=1" \
-drive file=./genpi64lite.img,format=raw,if=sd,id=hd-root \
-device virtio-blk-device,drive=hd-root \
-netdev user,id=net0,net=192.168.1.1/24,dhcpstart=192.168.1.234 \
-device virtio-net-device,netdev=net0 \
-no-reboot
Nothing is printed and the terminal hangs, meaning the kernel does not work with the virt machine.
So I decided to build my own custom kernel.
Could anyone advise on the options needed to build a kernel that works with both QEMU and virtio?
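As a starting point (a sketch, not a verified minimal set: it assumes a kernel source tree and lists the options commonly needed for the QEMU `virt` machine), the virtio essentials can be switched on with the kernel's own `scripts/config` helper:

```shell
# From the kernel source tree: enable virtio core, both transports,
# the block/net devices, plus the PL011 UART and generic PCI host
# that the QEMU 'virt' machine exposes.
./scripts/config --enable VIRTIO \
                 --enable VIRTIO_PCI \
                 --enable VIRTIO_MMIO \
                 --enable VIRTIO_BLK \
                 --enable VIRTIO_NET \
                 --enable PCI_HOST_GENERIC \
                 --enable SERIAL_AMBA_PL011 \
                 --enable SERIAL_AMBA_PL011_CONSOLE
make olddefconfig    # resolve dependencies of the newly-enabled options
```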
zero-point
(51 rep)
May 2, 2020, 07:06 AM
• Last activity: Sep 14, 2024, 02:44 PM
1
votes
0
answers
88
views
How to run samples/rpmsg_client_sample in Linux?
I am learning RPMsg in Linux, and I found there is a `samples/rpmsg_client_sample.c`. I built it as a kernel module, but I don't know how to make its `probe` function get called. And are there any other examples/demos of RPMsg in Linux I could use for reference?
wangt13
(631 rep)
Aug 19, 2024, 12:52 PM
0
votes
2
answers
121
views
Host: Manjaro, Guest: Artix Linux, Issue: No virtio internet/NAT connection on VM
Like the title says, for whatever reason my VM no longer connects to the host/internet.
An e1000e connection *does* work.
So the problem seems to be with virtio?
On host:
**ip a**
*network device*
...
4: virbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:8c:9c:74 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
9: vnet4: mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:8d:cd:66 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe8d:cd66/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
**sudo virsh net-dumpxml default**
default
b728bff2-6215-4472-8d38-a5a797d328aa
On guest:
*virt-manager connection config in xml*
default
b728bff2-6215-4472-8d38-a5a797d328aa
**ip a**
Folaht
(1156 rep)
Aug 2, 2024, 09:50 AM
• Last activity: Aug 6, 2024, 05:12 PM
1
votes
1
answers
1039
views
Which Linux kernel config options are required to get QEMU virtio to work?
I am currently trying to boot a QEMU VM with a virtio-blk hard disk and I am failing. I cannot seem to get the kernel to recognize the virtio device.
I am on a `linux-6.1.97` source tree and bootstrapped the configuration from the one provided by [Firecracker](https://raw.githubusercontent.com/firecracker-microvm/firecracker/main/resources/guest_configs/microvm-kernel-ci-x86_64-6.1.config). At this point I have enabled every configuration option with VIRTIO in its name, but the kernel still does not recognize any virtio device that QEMU supposedly exposes.
(My rootfs is EXT4 and I have the required kernel configurations enabled.)
Essentially the problem boils down to [this](https://github.com/cirosantilli/linux-kernel-module-cheat/tree/863a373a30cd3c7982e3e453c4153f85133b17a9#not-syncing-vfs):
[ 0.083797] Please append a correct "root=" boot option; here are the available partitions:
[ 0.084323] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
(The kernel cannot even find the device. The mentioned kernel configuration options `CONFIG_VIRTIO_BLK=y` and `CONFIG_VIRTIO_PCI=y` are enabled.)
Here is my QEMU command line:
exec qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-m 512 -smp 2 \
-kernel bzImage \
-append "console=ttyS0 reboot=t panic=0 root=/dev/vda" \
-nodefaults -no-user-config -nographic \
-serial stdio \
-no-reboot \
-drive file=rootfs.img,format=raw,media=disk,if=virtio
Does anyone have an idea what might still be missing?
Edit:
Here is the list of all configs enabled related to VIRTIO:
$ grep VIRTIO .config
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_VSOCKETS=y
CONFIG_VIRTIO_VSOCKETS_COMMON=y
CONFIG_VIRTIO_BLK=y
CONFIG_SCSI_VIRTIO=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_VIRTIO_ANCHOR=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MEM=y
CONFIG_VIRTIO_INPUT=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
CONFIG_VIRTIO_IOMMU=y
CONFIG_RPMSG_VIRTIO=y
CONFIG_VIRTIO_FS=y
Edit 2:
I did a little more digging and made some progress.
When specifying `-drive file=rootfs.img,format=raw,media=disk,if=virtio`, the disk is attached over PCI, and then the boot indeed works once `CONFIG_VIRTIO_PCI` is enabled. During experimentation I carelessly changed this; what I actually wanted was:
-drive id=test,file=rootfs.img,format=raw,if=none \
-device virtio-blk-device,drive=test
Specifically, this is running with the `-M microvm` QEMU machine type, which does not put the disk on PCI: QEMU adds the device to the kernel command line, e.g. `virtio_mmio.device=512@0xfeb00e00:12`.
We can see the device being registered during boot:
[ 0.041667] virtio-mmio: Registering device virtio-mmio.0 at 0xfeb00e00-0xfeb00fff, IRQ 12.
However, something eventually fails before the message that no root fs could be found:
[ 0.041667] virtio_blk virtio0: 1/0/0 default/read/poll queues
[ 0.041667] virtio_blk: probe of virtio0 failed with error -22
I did a little debugging and found that this is caused by an error in `virtio_mmio.c:vm_find_vqs`: the call to `request_irq` ([link](https://elixir.bootlin.com/linux/v6.1.97/source/drivers/virtio/virtio_mmio.c#L487)) fails because `irq_to_desc` returns `NULL` ([link](https://elixir.bootlin.com/linux/v6.1.97/source/kernel/irq/manage.c#L2168)).
milck
(171 rep)
Jul 8, 2024, 05:58 PM
• Last activity: Jul 9, 2024, 12:57 PM
1
votes
0
answers
57
views
How can I use a specific folder on my host as the home folder for the primary user of my Debian VM guest in virt-manager
I have a Debian host running virt-manager. Once a VM is up and running, I know how to create a filesystem share that shares `host:/home/user` with `vm:/blah`.
What I want is to make `vm:/home/person` be `host:/home/user`. Ideally I'd like to do this during the installation of the Debian VM, but I'm not sure how to go about it.
I feel like, somehow, during the Debian VM installation, I'd need to tell the installer to mount `host:/home/user` as `vm:/home/person`. Is that possible?
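Not a complete answer to the install-time part, but once a virtiofs share is defined, mounting it over the home directory from inside the guest is a one-line fstab entry. A sketch; `hometag` is a placeholder that must match the share's `<target dir=...>` tag in the domain XML:

```shell
# Inside the guest, as root: mount the virtiofs share over /home/person.
echo 'hometag  /home/person  virtiofs  defaults  0  0' >> /etc/fstab
mount /home/person
```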
IMTheNachoMan
(433 rep)
Oct 11, 2023, 12:28 PM
1
votes
1
answers
246
views
Install base Linux on KVM virtualized hardware (dom0)
**Goal**: I would like to install my main Linux distro on top of KVM, equivalent to `dom0` in Xen hypervisor parlance. I know that running an OS on top of a hypervisor is not uncommon and that KVM has been part of the mainline Linux kernel since 2.6.20. The question is how to configure the system so that it does this automatically on boot. Ideally, the bootloader would start KVM before mounting the disks and booting into the OS.
**Question**: How does one accomplish this?
Adam Erickson
(270 rep)
Jul 7, 2023, 07:23 PM
• Last activity: Jul 7, 2023, 07:50 PM
11
votes
3
answers
8625
views
Does VirtIO storage support discard (fstrim)?
$ uname -r
5.0.9-301.fc30.x86_64
$ findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/vda3 ext4 rw,relatime,seclabel
$ sudo fstrim -v /
fstrim: /: the discard operation is not supported
Same VM, but after switching the disk from VirtIO to SATA:
$ findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda3 ext4 rw,relatime,seclabel
$ sudo fstrim -v /
/: 5.3 GiB (5699264512 bytes) trimmed
The virtual disk is backed by a QCOW2 file. I am using virt-manager / libvirt. libvirt-daemon is version 4.7.0-2.fc29.x86_64. My host is currently running a vanilla kernel build 5.1 (ish), so it's a little bit "customized" at the moment, but I built it starting from a stock Fedora kernel configuration.
Is there a way to enable discard support on VirtIO somehow? Or does the code just not support it yet? I don't necessarily require the exact instructions how to enable it, but I am surprised and curious and I would like a solid answer :-).
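For reference: discard support for virtio-blk landed in QEMU 4.0, with the guest driver side in Linux 5.0, so the libvirt/QEMU stack above likely predates it. On a new enough stack, libvirt exposes it through the disk driver element; a sketch with a placeholder image path:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```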
sourcejedi
(53222 rep)
May 10, 2019, 12:37 PM
• Last activity: Jun 30, 2023, 08:47 PM
1
votes
2
answers
1789
views
How to merge separate logical volumes into one physical disk?
I can see my server's disks like this:
Disk /dev/vda: 50.0 GB
Disk /dev/vdb: 50.0 GB
I need to combine these two virtual block devices into a single 100 GB volume. I have CentOS 7 installed. Is there any method to combine them?
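One common approach is LVM, which CentOS 7 ships by default. A sketch, with two loud assumptions: both disks must be free of data (`pvcreate` destroys whatever is on them, so this cannot be applied as-is if the OS lives on `/dev/vda`), and the volume and mount names are placeholders:

```shell
# Pool both 50 GB virtual disks into one volume group, then carve a
# single ~100 GB logical volume out of it and put a filesystem on it.
pvcreate /dev/vda /dev/vdb
vgcreate vg_data /dev/vda /dev/vdb
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt
```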
user201696
(11 rep)
Nov 22, 2016, 10:35 AM
• Last activity: May 5, 2023, 11:05 PM
0
votes
2
answers
1190
views
KVM windows 11 guest won't boot when `bus="sata" and address type="drive"` changed to `bus="virtio" and address type="pci"?
I am using a prebuilt qcow2 Windows 11 image, and when I change from bus="sata" and address type="drive" to bus="virtio" and address type="pci", my KVM Windows 11 guest does not boot. The virtio drivers are already installed in the guest. I am using RHEL 9 as the host. I wanted to make this change because of the performance benefits, but it seems Windows does not boot when I do.
I have a backup of the qcow2 image and have tried this multiple times, copying the backup qcow2 back to /var/lib/libvirt/images; I get the same result.
munish
(8227 rep)
Mar 4, 2023, 04:29 AM
• Last activity: Mar 4, 2023, 06:12 AM
1
votes
0
answers
1012
views
Stuck at booting kernel when running in VM with VirtIO GPU
I compiled the most recent version of the kernel along with the most recent version of busybox by following this tutorial. To test whether my build was successful, I used an Ubuntu-mate LiveCD to partition the disk and install grub, then booted from the virtual hard drive within my Proxmox server.
Using the default options for the VM worked fine: I could browse around the minimal distro, download stuff using wget, etc. However, when I changed the Display setting in Proxmox to VirtIO-GPU and restarted the VM, I got stuck at the `Booting the kernel` message.
I checked my configuration and the `DRM_VIRTIO_GPU` option was correctly set to `y`. I thought maybe it was because I was using SeaBIOS, but an Alpine VM with the same settings boots fine and, more importantly, actually shows a `/dev/dri` file, so I don't think my problem has anything to do with SeaBIOS.
My system doesn't seem to have problems detecting other devices. I can add hard drives and network cards just fine and they show up in the `/dev` directory. I read some similar posts mentioning it could be because of `nomodeset` in the kernel command line, but the `menuentry` in grub is
linux /boot/vmlinuz-5.19.2 root=/dev/sda1 ro quiet
When I change `quiet` to `debug` it gets stuck on the line
[0.219589] pci_bus 0000:02: resource 2 [mem 0xfe000000-0xfe1fffff 64bit pref]
but I don't know what this means. Furthermore, when I change the Display option in Proxmox to the default standard VGA card, everything works fine, but there is still no `/dev/dri` entry like there is on Alpine with the same VGA card. Since I am using the busybox init, I put an `echo` message as the first command in my `inittab`, and it does not show up, so the error must occur before init runs. How can I narrow this down? Am I missing some drivers?
**EDIT**
It actually seems like the machine is still working? I tried ssh'ing into it from another computer, and I could take a look at the output of `dmesg` after the `pci_bus` message:
[ 0.231742] pci 0000:00:01.0: PIIX3: Enabling Passive Release
Not sure if this is helpful.
GuPe
(111 rep)
Oct 29, 2022, 06:34 AM
• Last activity: Oct 29, 2022, 06:57 AM
0
votes
0
answers
531
views
KVM : virtio better?
I found many articles about changing to *virtio*, but I don't know the reason.
Does *virtio* have higher performance?
张绍峰
(135 rep)
Jun 7, 2022, 03:10 AM
• Last activity: Jun 7, 2022, 11:46 AM
0
votes
1
answers
1408
views
Listing VirtIO devices from a shell in a linux guest
As the title already summarizes, is there a way (a tool or a simple command) to list the VirtIO devices available to (and thus recognized by) a Linux guest?
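One answer that needs no extra tooling: devices recognized by the guest kernel are registered on the virtio bus and listed in sysfs. A sketch that degrades gracefully on machines without any virtio device:

```shell
# Each virtio device the guest kernel recognized shows up as virtioN
# under the virtio bus in sysfs.
if [ -d /sys/bus/virtio/devices ]; then
    ls /sys/bus/virtio/devices
else
    echo "no virtio bus registered"
fi
# On PCI transports, 'lspci -d 1af4:' filters by the virtio vendor id.
```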
Yves Lhuillier
(11 rep)
Apr 26, 2021, 03:47 PM
• Last activity: Jun 1, 2022, 12:56 PM
Showing page 1 of 20 total questions