
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
0 answers
37 views
VFIO single GPU passthrough - AMD-Vi: Completion-Wait loop timed out
I am trying to pass a GPU to a VM with libvirt, following these guides:
- https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Setting_up_an_OVMF-based_guest_virtual_machine
- https://github.com/joeknock90/Single-GPU-Passthrough
After the domain is started, the vfio driver is bound to the GPU, but the screen is black:
@ubuntu:~$ lspci -nnks 03:00.0
03:00.0 VGA compatible controller : Advanced Micro Devices, Inc. [AMD/ATI] Picasso/Raven 2 [Radeon Vega Series / Radeon Vega Mobile Series] [1002:15d8] (rev c2)
        Subsystem: Lenovo ThinkPad E595 [17aa:5124]
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
dmesg gives
[  329.365209] Console: switching to colour dummy device 80x25
[  331.414644] VFIO - User Level meta-driver version: 0.3
[  331.518909] amdgpu 0000:03:00.0: amdgpu: amdgpu: finishing device.
[  331.537364] [drm] psp gfx command UNLOAD_TA(0x2) failed and response status is (0x117)
[  331.574817] [drm] amdgpu: ttm finalized
[  331.575932] ------------[ cut here ]------------
[  331.575939] sysfs group 'power' not found for kobject 'amdgpu_bl1'
[  331.575961] WARNING: CPU: 4 PID: 86 at fs/sysfs/group.c:282 sysfs_remove_group+0x85/0x90
[  331.575979] Modules linked in: vfio_pci vfio_pci_core vfio_iommu_type1 vfio iommufd ccm xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp nft_compat nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables bridge stp llc binfmt_misc intel_rapl_msr intel_rapl_common snd_sof_amd_acp63 amdgpu snd_sof_amd_vangogh snd_sof_amd_rembrandt snd_sof_amd_renoir snd_sof_amd_acp snd_sof_pci snd_sof_xtensa_dsp snd_sof edac_mce_amd snd_sof_utils kvm_amd snd_ctl_led snd_soc_core snd_hda_codec_conexant nls_iso8859_1 snd_hda_codec_generic snd_hda_codec_hdmi snd_compress ac97_bus amdxcp kvm snd_pcm_dmaengine drm_exec rtw88_8822be snd_pci_ps rtw88_8822b snd_hda_intel gpu_sched uvcvideo rtw88_pci btusb drm_buddy snd_rpl_pci_acp6x btrtl snd_intel_dspcfg rtw88_core irqbypass snd_intel_sdw_acpi drm_suballoc_helper btintel videobuf2_vmalloc snd_acp_pci uvc btbcm drm_ttm_helper videobuf2_memops btmtk snd_hda_codec ttm videobuf2_v4l2 rapl snd_acp_legacy_common bluetooth drm_display_helper think_lmi
[  331.576257]  videodev snd_pci_acp6x snd_pci_acp5x wmi_bmof mac80211 firmware_attributes_class snd_hda_core snd_rn_pci_acp3x videobuf2_common cec snd_hwdep snd_acp_config ecdh_generic rc_core mc snd_soc_acpi snd_pcm ecc k10temp i2c_piix4 i2c_algo_bit libarc4 snd_timer ccp snd_pci_acp3x i2c_scmi input_leds joydev cfg80211 serio_raw mac_hid sch_fq_codel dm_multipath efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic dm_crypt raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 crct10dif_pclmul crc32_pclmul polyval_clmulni polyval_generic ghash_clmulni_intel thinkpad_acpi nvme sha256_ssse3 sha1_ssse3 ahci nvram psmouse libahci snd ucsi_acpi nvme_core xhci_pci typec_ucsi xhci_pci_renesas soundcore nvme_auth typec video ledtrig_audio platform_profile wmi aesni_intel crypto_simd cryptd
[  331.576437] CPU: 4 PID: 86 Comm: kworker/4:1 Not tainted 6.8.0-71-generic #71-Ubuntu
[  331.576442] Hardware name: LENOVO 20NE000JPB/20NE000JPB, BIOS R11ET36W (1.16 ) 03/30/2020
[  331.576446] Workqueue: events drm_connector_free_work_fn
[  331.576453] RIP: 0010:sysfs_remove_group+0x85/0x90
[  331.576458] Code: c0 31 d2 31 f6 31 ff e9 59 79 c8 00 48 89 df e8 11 9b ff ff eb c1 49 8b 55 00 49 8b 34 24 48 c7 c7 28 57 09 bb e8 4b 35 b3 ff  0b eb cb 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90
[  331.576462] RSP: 0018:ffffa8b78041fd08 EFLAGS: 00010246
[  331.576467] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[  331.576470] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[  331.576473] RBP: ffffa8b78041fd20 R08: 0000000000000000 R09: 0000000000000000
[  331.576476] R10: 0000000000000000 R11: 0000000000000000 R12: ffffffffbaafe2a0
[  331.576478] R13: ffff9c5c4b961490 R14: ffff9c5c6c380268 R15: ffff9c5c48044c00
[  331.576481] FS:  0000000000000000(0000) GS:ffff9c5cf8800000(0000) knlGS:0000000000000000
[  331.576485] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  331.576488] CR2: 00007fe81402d838 CR3: 0000000123732000 CR4: 00000000003506f0
[  331.576491] Call Trace:
[  331.576494]  
[  331.576500]  ? show_regs+0x6d/0x80
[  331.576507]  ? __warn+0x89/0x160
[  331.576513]  ? sysfs_remove_group+0x85/0x90
[  331.576518]  ? report_bug+0x17e/0x1b0
[  331.576525]  ? handle_bug+0x6e/0xb0
[  331.576532]  ? exc_invalid_op+0x18/0x80
[  331.576538]  ? asm_exc_invalid_op+0x1b/0x20
[  331.576547]  ? sysfs_remove_group+0x85/0x90
[  331.576552]  ? sysfs_remove_group+0x85/0x90
[  331.576557]  dpm_sysfs_remove+0x60/0x70
[  331.576563]  device_del+0xa2/0x3e0
[  331.576568]  ? srso_return_thunk+0x5/0x5f
[  331.576574]  ? __radix_tree_delete+0x9e/0x150
[  331.576581]  device_unregister+0x17/0x60
[  331.576586]  backlight_device_unregister.part.0+0x9d/0xb0
[  331.576595]  backlight_device_unregister+0x13/0x30
[  331.576601]  amdgpu_dm_connector_destroy+0xeb/0x140 [amdgpu]
[  331.577205]  ? drm_mode_object_unregister+0x6a/0xa0
[  331.577213]  drm_connector_free_work_fn+0x77/0xa0
[  331.577220]  process_one_work+0x184/0x3a0
[  331.577227]  worker_thread+0x306/0x440
[  331.577232]  ? srso_return_thunk+0x5/0x5f
[  331.577238]  ? _raw_spin_lock_irqsave+0xe/0x20
[  331.577244]  ? __pfx_worker_thread+0x10/0x10
[  331.577248]  kthread+0xf2/0x120
[  331.577255]  ? __pfx_kthread+0x10/0x10
[  331.577260]  ret_from_fork+0x47/0x70
[  331.577266]  ? __pfx_kthread+0x10/0x10
[  331.577271]  ret_from_fork_asm+0x1b/0x30
[  331.577282]  
[  331.577284] ---[ end trace 0000000000000000 ]---
[  331.577838] vfio-pci 0000:03:00.0: vgaarb: deactivate vga console
[  331.577846] vfio-pci 0000:03:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[  333.700319] kauditd_printk_skb: 100 callbacks suppressed
[  333.700328] audit: type=1400 audit(1754269730.212:112): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44" pid=1422 comm="apparmor_parser"
[  333.700338] audit: type=1400 audit(1754269730.213:113): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44//passt" pid=1422 comm="apparmor_parser"
[  333.799726] audit: type=1400 audit(1754269730.312:114): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44" pid=1425 comm="apparmor_parser"
[  333.810460] audit: type=1400 audit(1754269730.323:115): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44//passt" pid=1425 comm="apparmor_parser"
[  333.921948] audit: type=1400 audit(1754269730.435:116): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44" pid=1429 comm="apparmor_parser"
[  333.933480] audit: type=1400 audit(1754269730.446:117): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44//passt" pid=1429 comm="apparmor_parser"
[  334.041433] audit: type=1400 audit(1754269730.554:118): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44" pid=1433 comm="apparmor_parser"
[  334.041829] audit: type=1400 audit(1754269730.554:119): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44//passt" pid=1433 comm="apparmor_parser"
[  334.060569] virbr0: port 1(vnet0) entered blocking state
[  334.060586] virbr0: port 1(vnet0) entered disabled state
[  334.060612] vnet0: entered allmulticast mode
[  334.060789] vnet0: entered promiscuous mode
[  334.061279] virbr0: port 1(vnet0) entered blocking state
[  334.061291] virbr0: port 1(vnet0) entered listening state
[  334.154377] audit: type=1400 audit(1754269730.667:120): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44" pid=1441 comm="apparmor_parser"
[  334.160364] audit: type=1400 audit(1754269730.673:121): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44//passt" pid=1441 comm="apparmor_parser"
[  334.455362] kvm: SMP vm created on host with unstable TSC; guest TSC will not be reliable
[  335.521105] AMD-Vi: Completion-Wait loop timed out
[  335.679874] AMD-Vi: Completion-Wait loop timed out
[  336.119893] virbr0: port 1(vnet0) entered learning state
[  336.522658] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7bf0]
[  336.804874] AMD-Vi: Completion-Wait loop timed out
[  337.015900] AMD-Vi: Completion-Wait loop timed out
[  337.335904] AMD-Vi: Completion-Wait loop timed out
[  337.524658] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7c20]
[  337.399944] AMD-Vi: Completion-Wait loop timed out
[  337.654878] AMD-Vi: Completion-Wait loop timed out
[  337.834757] AMD-Vi: Completion-Wait loop timed out
[  338.168892] virbr0: port 1(vnet0) entered forwarding state
[  338.168905] virbr0: topology change detected, propagating
[  338.168889] AMD-Vi: Completion-Wait loop timed out
[  338.363888] AMD-Vi: Completion-Wait loop timed out
[  338.526667] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7c50]
[  338.551950] AMD-Vi: Completion-Wait loop timed out
[  338.950868] AMD-Vi: Completion-Wait loop timed out
[  339.127921] AMD-Vi: Completion-Wait loop timed out
[  339.137211] AMD-Vi: Completion-Wait loop timed out
[  339.275191] AMD-Vi: Completion-Wait loop timed out
[  339.528679] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7c80]
[  339.447945] AMD-Vi: Completion-Wait loop timed out
[  339.590874] AMD-Vi: Completion-Wait loop timed out
[  339.831958] AMD-Vi: Completion-Wait loop timed out
[  339.959934] AMD-Vi: Completion-Wait loop timed out
[  340.215929] AMD-Vi: Completion-Wait loop timed out
[  340.530700] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7cb0]
[  340.599897] AMD-Vi: Completion-Wait loop timed out
[  340.727929] AMD-Vi: Completion-Wait loop timed out
[  341.033943] AMD-Vi: Completion-Wait loop timed out
[  341.119231] AMD-Vi: Completion-Wait loop timed out
[  341.257066] AMD-Vi: Completion-Wait loop timed out
[  341.532721] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7ce0]
[  342.534749] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7d10]
[  343.536781] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7d40]
[  344.538818] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7d70]
[  345.540857] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7da0]
[  346.542901] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7dd0]
[  347.544950] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7e20]
[  348.547000] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7e50]
[  349.549052] iommu ivhd0: AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=0000:03:00.0 address=0x1002b7e80]
The IOMMU was loaded successfully:
sudo dmesg | grep iommu
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.8.0-71-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro iommu=pt amd_iommu=on
[    0.027226] Kernel command line: BOOT_IMAGE=/vmlinuz-6.8.0-71-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro iommu=pt amd_iommu=on
[    0.360022] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.402401] pci 0000:00:01.0: Adding to iommu group 0
[    0.402426] pci 0000:00:01.1: Adding to iommu group 1
[    0.402458] pci 0000:00:01.6: Adding to iommu group 2
[    0.402500] pci 0000:00:08.0: Adding to iommu group 3
[    0.402525] pci 0000:00:08.1: Adding to iommu group 4
[    0.402548] pci 0000:00:08.2: Adding to iommu group 3
[    0.402589] pci 0000:00:14.0: Adding to iommu group 5
[    0.402611] pci 0000:00:14.3: Adding to iommu group 5
[    0.402703] pci 0000:00:18.0: Adding to iommu group 6
[    0.402725] pci 0000:00:18.1: Adding to iommu group 6
[    0.402748] pci 0000:00:18.2: Adding to iommu group 6
[    0.402774] pci 0000:00:18.3: Adding to iommu group 6
[    0.402797] pci 0000:00:18.4: Adding to iommu group 6
[    0.402820] pci 0000:00:18.5: Adding to iommu group 6
[    0.402843] pci 0000:00:18.6: Adding to iommu group 6
[    0.402867] pci 0000:00:18.7: Adding to iommu group 6
[    0.402903] pci 0000:01:00.0: Adding to iommu group 7
[    0.402929] pci 0000:02:00.0: Adding to iommu group 8
[    0.402963] pci 0000:03:00.0: Adding to iommu group 9
[    0.403045] pci 0000:03:00.1: Adding to iommu group 10
[    0.403079] pci 0000:03:00.2: Adding to iommu group 10
[    0.403113] pci 0000:03:00.3: Adding to iommu group 10
[    0.403148] pci 0000:03:00.4: Adding to iommu group 10
[    0.403182] pci 0000:03:00.5: Adding to iommu group 10
[    0.403217] pci 0000:03:00.6: Adding to iommu group 10
[    0.403230] pci 0000:04:00.0: Adding to iommu group 3
[    0.404218] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
The GPU has its own IOMMU group (9).
hashibane@ubuntu:~$ ls /sys/bus/pci/devices/0000\:03\:00.0/iommu_group/devices
0000:03:00.0
I also modified GRUB:
GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt amd_iommu=on"
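For reference, an edit to /etc/default/grub only takes effect after the GRUB config is regenerated and the machine rebooted; a minimal sketch of the usual Ubuntu steps:

sudo update-grub     # regenerates /boot/grub/grub.cfg from /etc/default/grub
sudo reboot
# after the reboot, confirm the flags actually reached the kernel:
cat /proc/cmdline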
Domain config (note: the libvirt XML lost its tags when the question was rendered, so only the element values survive; the recoverable fragments, in file order, are):
name: xubuntu
uuid: 400eec47-fd10-4b05-abd3-8ee6552e8a44
memory / currentMemory: 4194304 (KiB)
vcpu: 2
resource partition: /machine
os type: hvm
lifecycle actions (apparently on_poweroff / on_reboot / on_crash): destroy, restart, destroy
emulator: /usr/bin/qemu-system-x86_64
rng backend: /dev/urandom
seclabel (apparmor): libvirt-400eec47-fd10-4b05-abd3-8ee6552e8a44
seclabel (dac): +64055:+994
I am running Ubuntu Server 24.04.2 LTS (GNU/Linux 6.8.0-71-generic x86_64) on a ThinkPad E495 with an AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx (integrated graphics). I am stuck on what the exact issue is here. I have also searched through these, but haven't found much success:
- https://lenovopress.lenovo.com/lp1470.pdf
- https://clayfreeman.github.io/gpu-passthrough/
Hashibane (11 rep)
Aug 4, 2025, 01:56 AM
0 votes
1 answer
1970 views
Multi-GPU Setup in sway
I will soon have a setup where my desktop has an integrated and a dedicated graphics card, both AMD, and I have already searched but didn't find any good answer on how a multi-GPU setup works. I would like a setup where I primarily use my integrated graphics card and can select the GPU for a starting application with an environment variable, similar to primusrun from NVIDIA. I've read that you can use the environment variable WLR_DRM_DEVICES so that the first device is used for rendering and the result is copied to the other graphics cards. But is it possible to select another GPU on the fly for more computation-intensive applications like games? Another thing I've read is that this is managed automatically with GBM, but then again, how am I able to choose which GPU to use? Background information: I'm getting a new CPU which I chose to have integrated graphics, so I have 2 renderable devices. The reason is that I want to be able to unplug my dedicated graphics card in software, so I can use it for my Windows VM setup via PCI passthrough if applications (games in particular) don't work with Wine. Before anyone mentions a dual-OS setup: I already have this, but it's extremely annoying to have to reboot the system to change the OS, especially if you're talking with someone over Discord, TeamSpeak, etc. TL;DR: How does a multi-GPU setup in sway work?
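For context, wlroots-based compositors such as sway read WLR_DRM_DEVICES as a colon-separated list, and the first entry becomes the primary renderer; a minimal sketch, assuming the integrated GPU is card0 and the dedicated one is card1 (the actual /dev/dri numbering can differ):

# render on the integrated GPU; outputs on the dedicated card get the frames copied over
WLR_DRM_DEVICES=/dev/dri/card0:/dev/dri/card1 sway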
DigitalDragon (3 rep)
Dec 25, 2021, 10:43 AM • Last activity: May 28, 2025, 08:00 AM
0 votes
0 answers
264 views
Integrated Graphics Passthrough through QEMU
I have a laptop with an i5-10210U, used as a server station. I am trying to pass through the integrated graphics card to a VM using QEMU with this command, modified by following this tutorial:
virt-install --name windows11 --ram=8192 --vcpus=8 --machine q35 \
  --device vfio-pci,host=00:02.0 --cpu host --hvm --disk path=w11vml,size=80 \
  -object input-linux,id=kbd,evdev=/dev/input/by-path/platform-i8042-serio-0-event-kbd \
  --cdrom W11.iso --graphics vnc,port=5901,listen=0.0.0.0,passwd='123456'
Response:
virt-install: error: unrecognized arguments: --device vfio-pci,host=00:02.0 [alongside -object input-linux]
My problem is that I don't know how to translate the qemu-system-x86_64 arguments to virt-install after running the detaching script that disables the GPU on the host, which would be:
sudo qemu-system-x86_64 -machine q35 -m 2G -accel kvm -cpu host \
  -device vfio-pci,host=00:02.0 -nographic -vga none \
  -object input-linux,id=kbd,evdev=/dev/input/by-path/platform-i8042-serio-0-event-kbd \
  -cdrom Fedora-Workstation-Live-x86_64-29-1.2.iso
Additional info: VT-d is enabled in the BIOS; OS: AlmaLinux 9. I know the args could be very wrong, especially the --graphics vnc ones, as it needs to be forced to fall back to the integrated graphics, but any corrections would be very helpful. I'm new to this "business" :D
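For what it's worth, virt-install spells host-device passthrough as --hostdev rather than --device, and raw QEMU arguments can usually be injected with --qemu-commandline; a hedged, untested sketch of the translated call (flag names from virt-install's manual, everything else copied from the question):

virt-install --name windows11 --ram 8192 --vcpus 8 --machine q35 \
  --cpu host --hvm --disk path=w11vml,size=80 \
  --hostdev 00:02.0 \
  --qemu-commandline='-object input-linux,id=kbd,evdev=/dev/input/by-path/platform-i8042-serio-0-event-kbd' \
  --cdrom W11.iso --graphics vnc,port=5901,listen=0.0.0.0,passwd=123456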
Bestmank (1 rep)
Jan 26, 2025, 11:02 PM • Last activity: Jan 26, 2025, 11:19 PM
0 votes
0 answers
60 views
Trying to correctly install and configure Xorg and the nvidia drivers on OpenIndiana virtualized with bhyve on FreeBSD 14.1
I've just virtualized OpenIndiana in bhyve (the FreeBSD hypervisor), using the following parameters:
bhyve-win -S -c sockets=2,cores=2,threads=2 -m 4G -w -H -A \
  -s 0,hostbridge \
  -s 1,ahci-hd,/mnt/zroot2/zroot2/bhyve/img/Open-Indiana/efi/Minimal/openindiana-1.img \
  -s 8:0,passthru,2/0/0 \
  -s 8:1,passthru,2/0/1 \
  -s 8:2,passthru,2/0/2 \
  -s 8:3,passthru,2/0/3 \
  -s 13,virtio-net,tap5 \
  -s 29,fbuf,tcp=0.0.0.0:5905,w=1600,h=950,wait \
  -s 31,lpc \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CODE.fd \
  vm0:5 < /dev/null & sleep 2 && vncviewer 0:5 &
You can see that among the bhyve parameters I'm passing through the 4 functions of my GPU, an NVIDIA GeForce RTX 2080 Ti, which are the following:
02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti]
02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller
02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller
02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller
What did I do within the OI VM? I installed xorg, MATE and the nvidia drivers, and then issued the command startx. Well, I expected my external monitor (attached to the HDMI port of the GeForce 2080 Ti) to turn on, but this didn't happen. Instead, I see some kind of error that prevents it from doing so: https://ibb.co/YbMW5F6
This is the xorg.conf file that I've created:
Section "Device"
    Identifier "Card0"
    Driver "nvidia"
    BusID "PCI:0:0:8"
EndSection
I hope that the address 0:0:8 is correct, according to the PCI address assigned to the GPU: https://ibb.co/KqFqfRS
I don't understand why it wants to use the device 0:0:29 instead of 0:0:8, when I have specified 0:0:8 and 0:0:29 can't be used because it is the framebuffer...
Marietto (579 rep)
Nov 9, 2024, 02:55 PM
0 votes
0 answers
331 views
KVM GPU passthrough on Debian 12
I have Debian 12 on a machine where I have installed Windows 10 as a guest using KVM (managed with virt-manager). The machine has 2 graphics cards:
1) GeForce RTX 2060 12GB (which I want to use on the host)
2) AMD Radeon R5 220 2GB DDR3 (which I wanted to use in the Windows 10 guest)
I mainly followed this link: [kvm-gpu-passthrough](https://drakeor.com/2022/02/16/kvm-gpu-passthrough-tutorial/)
In Windows, the device manager shows the AMD Radeon graphics card and the Red Hat QXL controller. In Windows I have installed the driver from [AMD driver r5-220](https://www.amd.com/pt/support/downloads/drivers.html/graphics/radeon-r9-r7-r5/radeon-r5-200-series/amd-radeon-r5-220.html), but when I try to run the AMD program it says "No AMD graphics driver is installed, or the AMD driver is not functioning properly". I have configured the /etc/default/grub file as
GRUB_CMDLINE_LINUX_DEFAULT="quiet modprobe.blacklist=radeon"
GRUB_CMDLINE_LINUX="amd_iommu=on iommu=pt "
Using virt-manager I have added the 2 PCI devices associated with the AMD card.
I have used the script
#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;
in order to get these 2 PCI devices. The script gives me:
IOMMU Group 28:
	0b:00.0 VGA compatible controller : NVIDIA Corporation TU106 [GeForce RTX 2060 12GB] [10de:1f03] (rev a1)
	0b:00.1 Audio device : NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
IOMMU Group 29:
	0c:00.0 VGA compatible controller : Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series] [1002:68f9]
	0c:00.1 Audio device : Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300/7300 Series] [1002:aa68]
and I have assumed that Group 29 is the one I have to add to the virtual machine. I do not know if I am missing something.
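One rule worth keeping in mind: all endpoint devices in an IOMMU group must be assigned to the guest together, so both functions of Group 29 (0c:00.0 VGA and 0c:00.1 HDMI audio) need to go to the VM. A minimal sketch of pinning both to vfio-pci through the standard sysfs driver_override interface (addresses taken from the question):

echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:0c:00.0/driver_override
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:0c:00.1/driver_override
# rebind (or reboot) so the override takes effect on the next probe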
Fabio Paolini (21 rep)
Sep 27, 2024, 02:34 PM
0 votes
1 answer
199 views
libvirtd not starting after unsuccessful GPU passthrough
I attempted to follow this guide on Ubuntu 22.04: https://wiki.gentoo.org/wiki/GPU_passthrough_with_libvirt_qemu_kvm

I didn't need to change my BIOS or my kernel, and I skipped the step involving the GRUB bootloader. The guide also suggested creating a file /etc/modprobe.d/vfio.conf and adding "options vfio-pci ids=xxxx:xxxx,xxxx:xxxx" with the proper IDs for my nVidia GPU (and onboard GPU audio) devices, which I did. Then I copied the steps involving libvirt's home directory, in this section of the guide: https://wiki.gentoo.org/wiki/GPU_passthrough_with_libvirt_qemu_kvm#Libvirt ... changing the libvirt-qemu user's home directory to /home/qemu, where my 'pulse' config folder was copied using cp -r. I also made the recommended changes to /etc/libvirt/qemu.conf, in the cgroup_device_acl section, listing all four device entries for my Logitech keyboard and mouse. Then I added the host nVidia hardware (GPU video + GPU sound) via the qemu graphical interface, and edited the XML to include evdev entries, following the guide exactly. This is where the guide stopped being relevant to my Ubuntu 22.04 setup, so I tried starting the Windows domain. Qemu immediately became unresponsive, and libvirtd crashed after hanging for a moment. The first attempt to restart the service through systemctl hung for a good minute without any output from the command; I had to Ctrl+C it and try again. Now trying to restart the service via systemctl fails much more quickly, and it simply returns "exit-code" each time.

**Troubleshooting attempted:** I removed all of the changes to /etc/libvirt/qemu.conf, restoring it to defaults. libvirtd still wouldn't load. Then, I removed all of the changes to the domain's XML by editing /etc/libvirt/qemu/win11.xml and removing the evdev entries for keyboard and mouse, but also the hostdev entries for the two PCI devices on my nVidia GPU. Still no change. Then, I deleted the vfio.conf file I created under /etc/modprobe.d; still broken, systemctl can't reload libvirtd. Finally, I ran sudo apt install --reinstall qemu, after backing up all of my images and XML. It still doesn't work! Same goes after a purge / install. I don't know what to try next.
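When libvirtd dies like this, the actual error is normally recorded in the systemd journal and in libvirt's per-domain logs; a minimal sketch of the standard places to look (win11 is the domain name from the question's win11.xml):

sudo systemctl status libvirtd.service
sudo journalctl -u libvirtd.service -b --no-pager | tail -n 50
sudo tail -n 50 /var/log/libvirt/qemu/win11.log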
adamlaughlin (11 rep)
May 25, 2024, 08:05 PM • Last activity: May 25, 2024, 08:36 PM
0 votes
0 answers
340 views
Qemu GPU/Display passthrough
I want to pass my GPU through to the QEMU guest and give it complete control of my monitor (so it seems as if it were the ONLY OS running). I have tried on a laptop with only the built-in monitor and the screen just went blank. I guess on a PC it would work if we connect the monitor directly to the GPU, but I can't test it. **Questions**: 1) Is it possible to pass through my GPU and give control of the monitor to my guest by connecting the monitor directly to the GPU? 2) Is it possible to give control of my laptop's monitor to the guest and pass through my GPU? **NOTES** * My laptop is running Ubuntu Cinnamon * I'd like to specify the use of QEMU with KVM, even though I think it's obvious * I would like to NOT use any host/client applications **Similar Post** I found a question (https://unix.stackexchange.com/questions/632645/an-equivalent-of-looking-glass-where-vm-side-runs-linux) that requires a host/client application to basically have the host write the frames back. I would like to not resort to this tactic and instead give the guest complete control of the monitor. **EDIT:** **Goal** My goal is to have a VM running as a bare-metal machine (when it comes to the monitor) while passing the GPU to that VM (which means no monitor for the host OS => black screen on the laptop's monitor). **Example** Imagine I want to run a Mac VM (not bare-metal) on my laptop, which runs Linux. If I want better performance I need to find a way to add a second GPU to my laptop. If it is possible to display things on the laptop's monitor directly from the VM, I can pass through my GPU without having a second one. Now the constraint of not using other software (like "Looking Glass"): if I want to add a different guest (like a custom OS) where the software is not available, I would need to either port the software or still fall back to getting a second GPU. So if possible I want to avoid something like this. **NOTE 2** The question is not specific to macOS; I just found it easier to explain with this, since you can't just install it without finding drivers etc., etc.
user18812922 (1 rep)
Apr 21, 2024, 01:42 PM • Last activity: Apr 27, 2024, 05:45 PM
0 votes
1 answer
460 views
Trying to enable the passthru of one Nvidia GPU to a bhyve Linux VM on FreeBSD 14.0
For some years it worked, but now, maybe due to the recent changes added to the FreeBSD 13 and 14 code, it does not work anymore: I'm talking about the ability to pass through one Nvidia GPU from the host OS (FreeBSD 14.0 in this case) to any Linux VM. The same procedure that worked until "yesterday" does not work anymore (for me). The developer does not reply to my messages anymore. I would like to be sure that it really is bugged, as it seems, and that I'm not making some mistake. So, I will explain what I do to enable this functionality. I hope that someone will also try it, or that they try a different procedure that works. The most important thing is that we manage to enable the function. Some time ago the developer gave me 3 scripts to run in sequence. They are the following:
a) setup_git_140.sh
git clone https://github.com/beckhoff/freebsd-src /usr/corvin-src-140
b) build_branch_140.sh
#!/bin/sh
    
    usage() {
    	cat >&2 <<EOF
    usage: ${0} <branch> [<build-args>]
    Checkouts to <branch> and builds it with <build-args>
    (see build.sh for more information).
    EOF
    	exit 1
    }
    
    set -e
    set -u
    
    readonly script_path="$(cd "$(dirname "${0}")" && pwd)"
    readonly branch="${1?Missing $(usage)}"
    shift
    echo $branch
    
    cd /usr/corvin-src-140
    git fetch --all --prune
    git checkout -f "${branch}"
    
    ${script_path}/build_140.sh "$@"
c) build_140.sh
#!/bin/sh
    
    usage() {
    	cat >&2 <<EOF
    [usage text lost in formatting]
    EOF
    	exit 1
    }
    
    build_module() {
    	cd "${1}"
    	if test "${clean}" = "true"; then
    		make clean > "${cmd_redirect}" 2>&1
    	fi
    
    	# build module
    	make > "${cmd_redirect}" 2>&1
    
    	# install module
    	make install > "${cmd_redirect}"
    }
    
    build() {
    	build_module "${src_dir}/include"
    	build_module "${src_dir}/lib/libvmmapi"
    	build_module "${src_dir}/sys/modules/vmm"
    
    	# build kernel
    	if test "${with_kernel}" = "true"; then
    		cd "${src_dir}"
    		local kern_opts
    		kern_opts="-j$(sysctl -n hw.ncpu)"
    		if test "${with_bhf}" = "true"; then
    			kern_opts="${kern_opts} KERNCONF=BHF"
    		fi
    		if ! test "${clean}" = "true"; then
    			kern_opts="${kern_opts} NO_CLEAN=YES"
    		fi
    		make kernel ${kern_opts} > "${cmd_redirect}" 2>&1
    	fi
    
    	build_module "${src_dir}/usr.sbin/bhyve"
    	build_module "${src_dir}/usr.sbin/bhyvectl"
    	build_module "${src_dir}/usr.sbin/bhyveload"
    
    	if test "${with_reboot}" = "true"; then
    		reboot
    	fi
    }
    
    set -e
    set -u
    
    while test $# -gt 0; do
    	case "${1-}" in
    		--clean)
    			clean="true"
    			shift
    			;;
    		--reboot)
    			with_reboot="true"
    			shift
    			;;
    		--src-dir=*)
    			src_dir="${1#*=}"
    			shift
    			;;
    		--verbose)
    			cmd_redirect="/dev/stdout"
    			shift
    			;;
    		--without-bhf)
    			with_bhf="false"
    			shift
    			;;
    		--without-kernel)
    			with_kernel="false"
    			shift
    			;;
    		*)
    			usage
    			;;
    	esac
    done
    
    readonly clean="${clean-"false"}"
    readonly cmd_redirect="${cmd_redirect-"/dev/null"}"
    readonly src_dir="${src_dir-"/usr/corvin-src-140"}"
    echo $src_dir
    readonly with_bhf="${with_bhf-"true"}"
    readonly with_kernel="${with_kernel-"true"}"
    readonly with_reboot="${with_reboot-"false"}"
    
    build
Here we go. This is what I do to start the compilation that should produce the working bhyve system files that will enable the passthru of one Nvidia GPU on FreeBSD 14.0:
a) ./setup_git_140.sh
b) ./build_branch_140.sh origin/phab/corvink/14.0/nvidia-wip --without-bhf --verbose
OK. It compiled the code without errors until a certain point, when what you see below happened. I want to understand if the code is bugged. Please help me:
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1174:21: error: use of undeclared identifier 'ctx'
        passthru_cfgwrite(ctx, vcpu, pi, offset - 0x88000, size, value);
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1174:26: error: use of undeclared identifier 'vcpu'
        passthru_cfgwrite(ctx, vcpu, pi, offset - 0x88000, size, value);
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1209:20: error: use of undeclared identifier 'ctx'
        passthru_cfgread(ctx, vcpu, pi, offset - 0x88000, size, (uint32_t *)&val);
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1209:25: error: use of undeclared identifier 'vcpu'
        passthru_cfgread(ctx, vcpu, pi, offset - 0x88000, size, (uint32_t *)&val);
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1302:29: error: use of undeclared identifier 'ctx'
        if (vm_unmap_pptdev_mmio(ctx, sc->psc_sel.pc_bus,
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1309:27: error: use of undeclared identifier 'ctx'
        if (vm_map_pptdev_mmio(ctx, sc->psc_sel.pc_bus,
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1327:29: error: use of undeclared identifier 'ctx'
        if (vm_unmap_pptdev_mmio(ctx, sc->psc_sel.pc_bus,
/usr/corvin-src-140/usr.sbin/bhyve/pci_passthru.c:1334:27: error: use of undeclared identifier 'ctx'
        if (vm_map_pptdev_mmio(ctx, sc->psc_sel.pc_bus,
8 errors generated.
*** Error code 1
The passthru of one Nvidia card inside a Linux VM WILL NOT work if you don't do what I have explained. The FreeBSD wiki does not talk about the Nvidia GPU, only about network interfaces. If you want to help me, and you have one Nvidia card that you want to pass through inside a Linux VM, you SHOULD repeat the steps that I have explained, and tell me whether you are able to compile the code successfully. Thanks
Marietto (579 rep)
Feb 23, 2024, 05:05 PM • Last activity: Feb 26, 2024, 06:46 PM
0 votes
0 answers
223 views
Can you remove nvidia_modeset without removing the rest of the driver?
I'm trying to prevent the nvidia driver from modesetting, as an integrated GPU (i915) is doing it. I'm only using this nvidia card on Linux for rendering (PRIME), no modesetting. I've tried blacklisting it in modprobe.d and via kernel options, but it still gets loaded. The nvidia driver doesn't seem to have any configuration option to tell it to stop trying to modeset the displays when they get connected. Is there any other way to force a module to not be loaded? I've tried just removing the /usr/lib/modules/6.5.9-arch2-1/extramodules/nvidia-modeset.ko.xz module file, but then dracut-rebuild won't work (it refuses to produce an initrd).
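For what it's worth, blacklist only suppresses alias-based autoloading; a module can still be pulled in as a dependency (nvidia_drm depends on nvidia_modeset). The stronger modprobe.d directive is an install override; a minimal sketch, assuming a file such as /etc/modprobe.d/no-nvidia-modeset.conf:

# every load attempt for the module now runs /bin/false and fails,
# which also blocks anything that depends on it (e.g. nvidia_drm)
install nvidia_modeset /bin/false

The initramfs then has to be regenerated (dracut, here) so the rule also applies in early boot.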
Daniel Krajnik (371 rep)
Nov 3, 2023, 06:26 PM
0 votes
1 answer
42 views
GPU passthrough, subsystem lost
When I do lspci on my **manjaro linux host** I can see that my Nvidia GPU is part of the [1025:1409] subsystem (as is everything else on my machine). The GPU is in its own IOMMU group and has the vfio-pci driver. When I pass my GPU through into the **windows 11 guest** in qemu, the device instance path in the Windows device manager is PCI\VEN_10DE&DEV_1D52&SUBSYS_00000000&REV_A1\4&12829B10&0&0014. So the subsystem is lost. Because of that I have problems with the drivers. First the driver installer couldn't find compatible hardware. Then I replaced all occurrences of the 1025:1409 subsystem related to my device (1D52) with the 0000:0000 subsystem in nvacig.inf (I found no occurrences in other *.inf files). Now the driver installation starts but fails after some time. How do I fix it? It feels like if the subsystem were right, then it would work.
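For context, QEMU's vfio-pci device has experimental properties (x-pci-vendor-id, x-pci-device-id, x-pci-sub-vendor-id, x-pci-sub-device-id) for overriding the IDs the guest sees, which may avoid patching the .inf entirely; a hedged sketch on a raw QEMU command line (the host address 01:00.0 is only an illustration):

-device vfio-pci,host=0000:01:00.0,x-pci-sub-vendor-id=0x1025,x-pci-sub-device-id=0x1409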
grabantot (101 rep)
Apr 23, 2023, 02:53 PM • Last activity: Apr 23, 2023, 03:27 PM
1 vote
1 answer
1733 views
Sharing GPU between host and guest
Is there a way to make both host and guest use a single GPU without any paid software (vGPU)? Bonus points if it:
- Dynamically allocates GPU resources
- Works for AMD and Intel GPUs
I have an nVidia GPU, a Linux host and a Windows guest. Perhaps a Linux alternative to https://github.com/jamesstringerparsec/Easy-GPU-PV
Anm (113 rep)
Apr 7, 2023, 12:05 PM • Last activity: Apr 7, 2023, 03:45 PM
2 votes
2 answers
3415 views
An equivalent of Looking Glass where VM side runs Linux?
[Looking Glass](https://looking-glass.io/) is an open source application that allows the use of a KVM configured with a passthrough GPU without an attached physical monitor, keyboard or mouse. In Looking Glass terminology, the *host software* is the term for the piece of Looking Glass that runs in the VM *guest* (the VM where the GPU is used). The *client software* is the term for the piece that runs on the Linux *host*, showing the rendered frames. The Looking Glass host is currently Windows-only, and covers the main use case: run Windows-only GPU-heavy software in a Windows VM, showing the result on the Linux host. I have a slightly different use case: I pass my beefier headless GPU through from a Linux host to a *Linux* VM guest. It works fine there for GPU computations based on OpenCL or CUDA or whatever. I'd also like to be able to run 3D software on that Linux VM guest, and display the result on my Linux host. Thus: Is there an equivalent technology for a Linux guest on a Linux host? Or, alternatively, are there any Looking Glass hosts for Linux?
gspr (208 rep)
Feb 4, 2021, 03:54 PM • Last activity: Feb 20, 2023, 07:28 AM
1 vote
1 answer
508 views
Gnome hangs with VFIO gpu passthrough
On my computer, I have 2 discrete GPUs. I've been using VFIO to pass the second GPU to a Windows VM to work with some programs. Now I want to pass my more powerful first GPU to the Windows VM in order to play some games (I can create a second Windows VM if necessary; this is not a concern). I've checked with a [script](https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Ensuring_that_the_groups_are_valid) whether the first GPU has its own IOMMU group, and it has. The problem is, the Gnome DE and gdm3 start without an issue when I boot normally or pass the second GPU, but not when I pass the first GPU. The monitors do turn on/off depending on the passed GPU. I've tried restarting gdm3, killing gnome-shell, and rebooting, but nothing seems to get me to the DE. GDM3 started somehow once, but it just took the password and returned to the user selection menu. The output of the script (excluding unnecessary devices, just the 2 GPUs):
IOMMU Group 12:
	00:15.0 PCI bridge : Advanced Micro Devices, Inc. [AMD/ATI] SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0) [1002:43a0]
	05:00.0 VGA compatible controller : Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series] [1002:68f9]
	05:00.1 Audio device : Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300/7300 Series] [1002:aa68]
IOMMU Group 14:
	01:00.0 VGA compatible controller : Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 460/560D / Pro 450/455/460/555/555X/560/560X] [1002:67ef] (rev cf)
	01:00.1 Audio device : Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X] [1002:aae0]
journalctl -u gdm when it does start (No passthrough):
Feb 11 17:29:53 Alienus-PC systemd: Starting GNOME Display Manager...
Feb 11 17:29:54 Alienus-PC systemd: Started GNOME Display Manager.
Feb 11 17:29:57 Alienus-PC gdm-autologin]: gkr-pam: no password is available for user
Feb 11 17:30:00 Alienus-PC gdm-autologin]: pam_unix(gdm-autologin:session): session opened for user alienus by (uid=0)
journalctl -u gdm when it does not start (First GPU passthrough):
Feb 11 17:25:58 Alienus-PC systemd: Starting GNOME Display Manager...
Feb 11 17:25:58 Alienus-PC systemd: Started GNOME Display Manager.
Feb 11 17:25:58 Alienus-PC gdm-autologin]: gkr-pam: no password is available for user
Feb 11 17:25:58 Alienus-PC gdm-autologin]: pam_unix(gdm-autologin:session): session opened for user alienus by (uid=0)
Feb 11 17:25:58 Alienus-PC gdm-autologin]: gkr-pam: couldn't unlock the login keyring.
Feb 11 17:25:59 Alienus-PC gdm-autologin]: pam_unix(gdm-autologin:session): session closed for user alienus
Feb 11 17:25:59 Alienus-PC gdm3: GdmDisplay: Session never registered, failing
Feb 11 17:25:59 Alienus-PC gdm-launch-environment]: pam_unix(gdm-launch-environment:session): session opened for user gdm by (uid=0)
Feb 11 17:25:59 Alienus-PC gdm-launch-environment]: pam_unix(gdm-launch-environment:session): session closed for user gdm
Feb 11 17:25:59 Alienus-PC gdm3: Child process -2688 was already dead.
journalctl -u gdm with debug enabled (/etc/gdm3/custom.conf, First GPU passthrough): https://paste.ubuntu.com/p/cSsDpBynyM/ (the output is about 52k characters, I can't post it here) Specs of the system: - Ubuntu 20.04.5 - Kernel 5.15.0-60-generic - [Gigabyte GA-970-D3 Motherboard](https://www.gigabyte.com/Motherboard/GA-970A-D3-rev-10-11#ov) - CPU [AMD FX 6100](https://www.amd.com/en/products/cpu/fx-6100) - First GPU [AMD RX 460 4G](https://www.amd.com/en/products/graphics/radeon-rx-460) - Second GPU [AMD Radeon 5450](https://www.amd.com/en/support/graphics/amd-radeon-hd/ati-radeon-hd-5000-series/ati-radeon-hd-5450)
Emre Talha (185 rep)
Feb 11, 2023, 02:43 PM • Last activity: Feb 12, 2023, 01:53 PM
0 votes
0 answers
2524 views
How to unbind a GPU in /sys/bus/pci
How do I unbind an Nvidia GPU via /sys/bus/pci/drivers/nvidia/unbind in order to use it for virtualization? I am using sudo echo "0000:09:00.0" > unbind, but I get bash: unbind: Permission denied. I am following the docs from IBM, but can't seem to figure out how to unbind the GPU. The OS I am using is Ubuntu 22.04. When running the command echo "0000:09:00.0" | sudo tee /sys/bus/pci/drivers/nvidia/unbind, the terminal hangs and I can't use it anymore; even trying to kill the process fails:
root        8646  0.0  0.0  14356  5912 pts/0    S+   09:07   0:00 sudo tee /sys/bus/pci/drivers/nvidia/unbind
root        8647  0.0  0.0  14356   956 pts/1    Ss+  09:07   0:00 sudo tee /sys/bus/pci/drivers/nvidia/unbind
root        8648 11.5  0.0   8380  1024 pts/1    R    09:07   0:14 tee /sys/bus/pci/drivers/nvidia/unbind
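Two separate things seem to be going on here. The sudo echo "..." > unbind form fails because the redirection is performed by the unprivileged shell, not by echo, which explains the Permission denied. The tee variant then hangs because the driver's remove path waits while something (the console, X, CUDA, nvidia-persistenced) still holds the device. A minimal sketch of freeing the card first, assuming a systemd-based desktop:

sudo systemctl isolate multi-user.target      # stop the graphical session
sudo systemctl stop nvidia-persistenced       # if the service is present
echo "0000:09:00.0" | sudo tee /sys/bus/pci/devices/0000:09:00.0/driver/unbind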
Weed Cookie (111 rep)
Aug 28, 2022, 06:56 PM • Last activity: Aug 29, 2022, 06:11 AM
0 votes
1 answer
1899 views
kvm: the difference between "blacklist" and "softdep"
I am a newbie here and I can only find blog posts or READMEs from GitHub. Are there any official documents? Hmm, some people write the "blacklist" entries in /etc/modules-load.d/modules.conf while others write them in /etc/modules-load.d/blacklist.conf. And some write "softdep" instead of "blacklist". For example, one person wrote
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist amdgpu" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist snd_hda_intel" >> /etc/modprobe.d/blacklist.conf
and another wrote
echo "softdep nouveau pre: vfio-pci" >> /etc/modprobe.d/nvidia.conf
echo "softdep nvidia pre: vfio-pci" >> /etc/modprobe.d/nvidia.conf
echo "softdep nvidia* pre: vfio-pci" >> /etc/modprobe.d/nvidia.conf
I really don't know the difference; any suggestion would be appreciated.
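For orientation, the two directives do different jobs: blacklist only stops a module from being autoloaded by its device alias (it can still be loaded by hand or as a dependency of another module), while softdep declares load ordering, so vfio-pci is inserted before the GPU driver and can claim the card first. Either configuration can be inspected without loading anything; a minimal check:

modprobe -c | grep -E 'blacklist|softdep' | grep -iE 'nouveau|nvidia|vfio'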
张绍峰 (135 rep)
Jun 3, 2022, 02:17 AM • Last activity: Jun 4, 2022, 09:02 AM
0 votes
0 answers
2690 views
Can't get vfio-pci driver to load for nvidia GPU
Okay, I'm not getting any further, so I'm asking for help. I've tried everything I can think of or find online. I'm trying to get GPU passthrough working so I can use the GPU in a VM with virt-manager/KVM. I followed this guide mainly (below), set all the files, updated the kernel and set the GRUB lines. I can't get any output from dmesg | grep vfio following another question (below), so maybe that's a clue. One answer said the vfio modules are integrated into the kernel, so lsmod won't show them, and my kernel config file shows vfio entries. I've used pre: commands to try to load before the nvidia driver. I was able to use a blocklist.conf to block it, but my display card is nvidia also, and I couldn't get to a shell in recovery mode.

https://github.com/NVIDIA/deepops/blob/master/virtual/README.md#bootloader-changes
https://askubuntu.com/questions/1247058/how-do-i-confirm-that-vfio-is-working-in-20-04

lspci -nn | grep NVIDIA
03:00.0 VGA compatible controller : NVIDIA Corporation GF108GL [Quadro 600] [10de:0df8] (rev a1)
03:00.1 Audio device : NVIDIA Corporation GF108 High Definition Audio Controller [10de:0bea] (rev a1)
08:00.0 3D controller : NVIDIA Corporation GF110GL [Tesla M2090] [10de:1091] (rev a1)
08:00.1 Audio device : NVIDIA Corporation GF110 High Definition Audio Controller [10de:0e09] (rev a1)

lspci -nnk -d 10de:1091
08:00.0 3D controller : NVIDIA Corporation GF110GL [Tesla M2090] [10de:1091] (rev a1)
	Subsystem: NVIDIA Corporation GF110GL [Tesla M2090] [10de:0887]
	Kernel driver in use: nvidia
	Kernel modules: nvidiafb, nouveau, nvidia

"linux /boot/vmlinuz root=UUID=$uuid acpi=noirq intel_iommu=on iommu=pt vfio-pci ids=10de:1091,10de:0e09 vfio_iommu_type1 allow_unsafe_interrupts=1"

I tried both vfio_iommu_type1 allow_unsafe_interrupts=1 and vfio_iommu_type1.allow_unsafe_interrupts=1.

CONFIG_VFIO_IOMMU_TYPE1=y
CONFIG_VFIO_VIRQFD=y
CONFIG_VFIO=y
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI=y
CONFIG_VFIO_PCI_VGA=y
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
CONFIG_VFIO_PCI_IGD=y
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m

grep -oE 'svm|vmx' /proc/cpuinfo | uniq
vmx

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
bonding
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel

cat /etc/modules-load.d/vfio-pci.conf
vfio-pci

cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1091,10de:0e09
options vfio_iommu_type1 allow_unsafe_interrupts=1

cat /etc/modprobe.d/nvidia.conf
softdep nvidia_384 pre: vfio-pci
#softdep radeon pre: vfio-pci
#softdep amdgpu pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidia* pre: vfio-pci
#softdep drm pre: vfio-pci
#softdep xhci_hdc pre: vfio-pci
#options kvm_amd avic=1

modprobe -c | grep vfio
options vfio_pci ids=10de:1091,10de:0e09
options vfio_iommu_type1 allow_unsafe_interrupts=1
softdep mdev post: vfio_mdev
softdep nvidia_384 pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidia* pre: vfio-pci

cat /etc/initramfs-tools/modules
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
#
# Syntax: module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net

journalctl -b | grep vfio
Dec 10 19:35:17 osboxes kernel: Command line: BOOT_IMAGE=/boot/vmlinuz root=UUID=ef2ecb3b-8e9a-4b20-bf15-47e0c7c98a1f acpi=noirq intel_iommu=on iommu=pt vfio-pci ids=10de:1091,10de:0e09 vfio_iommu_type1 allow_unsafe_interrupts=1
Dec 10 19:35:17 osboxes kernel: Kernel command line: BOOT_IMAGE=/boot/vmlinuz root=UUID=ef2ecb3b-8e9a-4b20-bf15-47e0c7c98a1f acpi=noirq intel_iommu=on iommu=pt vfio-pci ids=10de:1091,10de:0e09 vfio_iommu_type1 allow_unsafe_interrupts=1
Dec 10 19:35:17 osboxes systemd-modules-load: Module 'vfio' is built in
Dec 10 19:35:17 osboxes systemd-modules-load: Module 'vfio_iommu_type1' is built in
Dec 10 19:35:17 osboxes systemd-modules-load: Module 'vfio_pci' is built in
Dec 10 19:35:17 osboxes systemd-modules-load: Module 'vfio_pci' is built in

EDIT: yeah, after only blacklisting nouveau, which still caused no driver to be loaded, I removed all the settings except blacklist nouveau, and even the nvidia driver doesn't show. Take off that blacklist and everything is fine.
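One detail that matters with this setup: because the vfio modules are built into this kernel (=y in the config above), their options files in /etc/modprobe.d are never consulted at load time; parameters for built-in modules only work on the kernel command line in module.param=value form, so the space-separated vfio-pci ids=... above is silently ignored. A minimal sketch of the dotted spelling (same IDs as in the question; "..." stands for the remaining flags):

GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt vfio-pci.ids=10de:1091,10de:0e09 vfio_iommu_type1.allow_unsafe_interrupts=1"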
alchemy (757 rep)
Dec 11, 2021, 03:20 AM • Last activity: Dec 23, 2021, 05:23 AM
1 vote
1 answer
2695 views
chrome: Passthrough is not supported, GL is swiftshader
I'm trying to run headless Chrome in a container using Alpine Linux, and I'm getting
> Passthrough is not supported, GL is swiftshader
The commands to get this are pretty simple:
podman run -ti alpine:3 /bin/sh <<EOF
apk update; apk add chromium chromium-swiftshader;
chromium-browser \
  --headless ...
EOF
What I get is a log like this,
[1207/044552.896481:WARNING:dns_config_service_linux.cc(470)] Failed to read DnsConfig.
[1207/044552.903662:WARNING:vaapi_wrapper.cc(589)] VAAPI video acceleration not available for swiftshader
[1207/044552.903753:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader
[1207/044552.942968:WARNING:dns_config_service_linux.cc(470)] Failed to read DnsConfig.
How can I run headless chrome? What am I doing wrong? What is "passthrough" and why isn't it supported?
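For reference, the swiftshader line is informational rather than fatal: it means GL falls back to software rendering, and headless page loads still work. A minimal sketch that renders a page and dumps its DOM, using standard Chromium switches (the URL is just an example):

podman run -ti alpine:3 /bin/sh <<'EOF'
apk add --no-cache chromium chromium-swiftshader
chromium-browser --headless --no-sandbox --disable-gpu \
  --dump-dom https://example.com
EOF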
Evan Carroll (34663 rep)
Dec 7, 2021, 05:03 AM • Last activity: Dec 7, 2021, 09:53 PM
1 vote
0 answers
635 views
How to pass through a GPU in QEMU-KVM? Failing to start
I have tried countless guides and always ended up with the same result, so it's time I ask those who know what's going on. I have virtualisation, IOMMU, etc. enabled in my BIOS. I am running the VM without issues when not passing through the GPU: no errors, great performance, etc. When I do try to pass through the GPU (both the GPU and the devices in the same IOMMU group) and press start, nothing happens. Literally nothing, as if I had not pressed start at all. When I try to remove the PCI passthrough, virt-manager crashes. If I forcibly close it and then start it again, I can't connect to the server (qemu:///system) until I fully restart my PC (logging out doesn't help). What am I missing? I have a second GPU of course, which is what my monitor is connected to. I have the Nvidia drivers installed. Both GPUs are recognised and functional (tested by switching the monitor). The GPUs are a 3060 and a 560. I'm using Debian 10 and everything is updated to the latest version.
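When a domain fails this silently, starting it from the CLI usually surfaces the real error, and libvirt keeps a per-domain log; a minimal sketch (the domain name win10 is only an illustration):

virsh -c qemu:///system start win10
sudo journalctl -u libvirtd -b --no-pager | tail -n 30
sudo tail -n 30 /var/log/libvirt/qemu/win10.log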
Nero (11 rep)
Jul 20, 2021, 03:03 PM
1 vote
1 answer
342 views
Enabling Intel Integrated Graphics on a CLEVO P775TM1-G for use as a QEMU pass-through GPU
I'm wondering if it's possible to use my laptop's i9-9900K's integrated GPU as a pass-through GPU for QEMU. The display in this laptop is directly connected to an RTX 2080, which is the only VGA device that shows up when running:
> lspci | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation TU104BM [GeForce RTX 2080 Mobile] (rev a1)
I'm not familiar with how integrated graphics cards are interacted with (via PCI, memory mapping, GPU-specific instructions, etc.). If it's over the PCI bus, could it be that it's simply not hooked up to it?
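A quick first check is whether the firmware exposes the iGPU on the PCI bus at all; when enabled, Intel integrated graphics appears as an ordinary PCI function at address 00:02.0:

lspci -nn -s 00:02.0    # empty output means the iGPU is hidden or disabled by firmware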
luveti (11 rep)
Jan 19, 2021, 09:26 PM • Last activity: Jul 12, 2021, 12:51 PM
0 votes
0 answers
182 views
Can I use a GPU through the M.2 interface (which supports PCIe) together with a GPU installed in a normal PCIe x16 slot in Linux?
I have an Nvidia GPU workstation for multiple users. I use KVM to create 4 VMs and use the GPUs in the VMs in passthrough mode. However, I find that when I pass through all 4 GPUs to these VMs, the host system (Ubuntu Server LTS 20.04.02) always freezes. So I guess I have to keep at least one GPU on the host (the CPU is a TR 3960X, which does not have an integrated GPU). There is no PCIe x16 slot left on the motherboard, so I installed a low-end AMD GPU in an M.2 slot with an M.2-to-PCIe-x16 adapter. But after this, I cannot boot into the host system; I get the error
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)]
I also tried to boot from a USB drive, but got
grub error: cannot allocate kernel buffer, you need to load kernel first.
x.y.z liu (33 rep)
Mar 11, 2021, 08:43 PM • Last activity: Mar 12, 2021, 08:37 AM