
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
2 answers
50 views
transcode movies efficiently when 2 GPUs are available
I have a Kodi video library where each movie is in its own folder because I had to place .nfo files with links to TMDB in each folder to ensure correct identification. The movies are in their original Blu-ray resolution, stored on a drive shared over Samba on a gigabit LAN. I need to transcode all these files with FFmpeg to max. 1334×750 px. Setup: Intel Core i-7 3930K, 2 x NVIDIA GTX 980 6 GB GPU, KDE on Debian Testing, custom FFmpeg compiled with h264_nvenc enabled. Although the GPUs are connected with an SLI bridge, they're not in SLI mode due to NVIDIA's Linux driver limitation (v550.163.01). GPU 0 is used by the system, GPU 1 is idle. How to do it efficiently?
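A hedged sketch of one common approach (not from an answer in this thread): each GTX 980 has its own NVENC engine, so the library can be split round-robin between GPU 0 and GPU 1 via h264_nvenc's `-gpu` option. The library path, output naming, and exact scale filter below are assumptions; the script only prints the ffmpeg commands so they can be inspected before running.

```shell
#!/bin/sh
# Round-robin job-to-GPU assignment (0 or 1).
gpu_index() { echo $(( $1 % 2 )); }

n=0
# /srv/movies is a placeholder for the Samba-mounted library.
for f in /srv/movies/*/*.mkv; do
  [ -e "$f" ] || continue
  gpu=$(gpu_index "$n")
  # Downscale to fit within 1334x750 while keeping aspect ratio;
  # h264_nvenc's -gpu option selects which card encodes this file.
  printf 'ffmpeg -i "%s" -vf "scale=1334:750:force_original_aspect_ratio=decrease" -c:v h264_nvenc -gpu %s -c:a copy "%s"\n' \
    "$f" "$gpu" "${f%.mkv}.750p.mkv"
  n=$((n + 1))
done
```

Feeding the printed commands to something like GNU parallel with `-j2` keeps both encoders busy; over a gigabit share, I/O rather than NVENC may end up being the bottleneck.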
likewise (680 rep)
Aug 4, 2025, 05:21 PM • Last activity: Aug 5, 2025, 09:29 AM
1 vote
1 answer
1020 views
How can one use Nvidia RTX IO on Linux to allow the GPU to directly access the storage (SSD) with only a slight CPU overhead?
I saw in the [2020-09-01 Official Launch Event for the NVIDIA GeForce RTX 30 Series](https://youtu.be/E98hC9e__Xs?t=1436) that the RTX IO in the [Nvidia GeForce RTX 30 Series](https://en.wikipedia.org/wiki/GeForce_30_series) allows the GPU to directly access the storage (SSD) with only a slight CPU overhead when using Microsoft DirectStorage (see screenshot below). How can one use RTX IO on Linux?
Franck Dernoncourt (5533 rep)
Sep 8, 2020, 01:36 AM • Last activity: Aug 4, 2025, 05:21 PM
1 vote
0 answers
19 views
No signal on VGA monitor when booting Arch Linux with RX480 GPU
I have a problem at system boot: the monitor shows no signal ("Input not supported"), but only on Linux. My specs are below: * Arch Linux * GPU: MSI RX480 (DVI-D) * Adapter: DVI-D to VGA * Monitor: VGA. With the same configuration on Windows, everything is fine, so I think it's a problem with Linux. The kernel parameter nomodeset makes the system start, but it doesn't solve the problem. I also tried the parameter amdgpu.dc=0, but it doesn't help.
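One hedged experiment, not a confirmed fix: if the monitor rejects the mode the driver picks by default, forcing a mode the monitor is known to accept via the kernel command line can help. The connector name `DVI-D-1` and the mode below are assumptions; the real connector name can be found under /sys/class/drm.

```shell
# /etc/default/grub (connector name DVI-D-1 is an assumption; check
# /sys/class/drm for the real one). Regenerate grub.cfg afterwards:
#   grub-mkconfig -o /boot/grub/grub.cfg
GRUB_CMDLINE_LINUX_DEFAULT="video=DVI-D-1:1024x768@60"
```

If a forced low mode produces a picture, the issue is mode negotiation through the DVI-D-to-VGA adapter rather than the driver as such.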
Janusz Kruszewicz (11 rep)
Aug 2, 2025, 01:34 PM • Last activity: Aug 2, 2025, 01:44 PM
1 vote
1 answer
90 views
Debian stopped using my Intel i915 GPU after upgrade to Trixie. How to fix?
My notebook, an older Apple MacBook Air 13" with an Intel Core i7 and Intel HD Graphics 5000, used to work fine before the upgrade to Trixie (Debian GNU/Linux 13). Now, after the upgrade, all rendering is done by the CPU instead of the GPU. This is annoying and makes the machine feel slow. I would like to have hardware-accelerated rendering back, but I don't know where to start troubleshooting. All relevant packages seem to be installed, and the GPU is detected, but llvmpipe is used for all rendering instead. System info:
# uname -a
Linux mymachine 6.12.38+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.38-1 (2025-07-16) x86_64 GNU/Linux

# ls -l /dev/dri
total 0
drwxr-xr-x  2 root root         80 Jul 28 01:32 by-path
crw-rw----+ 1 root video  226,   0 Jul 30 13:06 card0
crw-rw----+ 1 root render 226, 128 Jul 28 01:32 renderD128

# LANG=C inxi -Fxxxrzc0
(...)
Graphics:
  Device-1: Intel Haswell-ULT Integrated Graphics vendor: Apple driver: i915
    v: kernel arch: Gen-7.5 ports: active: DP-1,eDP-1
    empty: DP-2,HDMI-A-1,HDMI-A-2 bus-ID: 00:02.0 chip-ID: 8086:0a26
    class-ID: 0300
  Display: x11 server: X.Org v: 21.1.16 with: Xwayland v: 24.1.6
    compositor: xfwm4 v: 4.20.0 driver: X: loaded: intel dri: swrast gpu: i915
    display-ID: :0.0 screens: 1
  Screen-1: 0 s-res: 3360x1080 s-dpi: 96 s-size: 890x286mm (35.04x11.26")
    s-diag: 935mm (36.8")
  Monitor-1: DP-1 mapped: DP1 pos: right model: Dell P2214H serial: 
    res: mode: 1920x1080 hz: 60 scale: 100% (1) dpi: 102
    size: 480x270mm (18.9x10.63") diag: 547mm (21.5") modes: max: 1920x1080
    min: 720x400
  Monitor-2: eDP-1 mapped: eDP1 pos: primary,left model: Apple Color LCD
    res: mode: 1440x900 hz: 60 scale: 100% (1) dpi: 126
    size: 290x180mm (11.42x7.09") diag: 341mm (13.4") modes: 1440x900
  API: EGL v: 1.5 hw: drv: intel crocus platforms: device: 0 drv: crocus
    device: 1 drv: swrast gbm: drv: crocus surfaceless: drv: crocus x11:
    drv: swrast inactive: wayland
  API: OpenGL v: 4.6 compat-v: 4.5 vendor: mesa v: 25.0.7-2 glx-v: 1.4
    direct-render: yes renderer: llvmpipe (LLVM 19.1.7 256 bits)
    device-ID: ffffffff:ffffffff
  Info: Tools: api: eglinfo,glxinfo de: xfce4-display-settings gpu: gputop,
    intel_gpu_top, lsgpu wl: wayland-info x11: xdriinfo, xdpyinfo, xprop,
    xrandr
(...)

# grep -e WW -e EE /var/log/Xorg.0.log
[     8.834] Current Operating System: Linux MBAirFronczek 6.12.38+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.38-1 (2025-07-16) x86_64
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[     8.905] (II) Initializing extension MIT-SCREEN-SAVER
[     8.907] (EE) AIGLX error: dlopen of /usr/lib/x86_64-linux-gnu/dri/i965_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so: cannot open shared object file: No such file or directory)
[     8.907] (EE) AIGLX error: unable to load driver i965
[    10.905] (WW) intel(0): Output eDP1: Strange aspect ratio (30/179), consider adding a quirk
[    34.463] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[  1267.808] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[  1710.292] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[  3134.814] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 11649.438] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 11649.438] (EE) event8  - bcm5974: WARNING: log rate limit exceeded (5 msgs per 24h). Discarding future messages.
[ 22436.276] (EE) event8  - bcm5974: client bug: event processing lagging behind by 32ms, your system is too slow
[ 27719.291] (EE) event1  - Power Button: client bug: event processing lagging behind by 454ms, your system is too slow
[ 27719.291] (EE) event3  - Power Button: client bug: event processing lagging behind by 454ms, your system is too slow
[ 27719.291] (EE) event4  - Apple Inc. Apple Internal Keyboard / Trackpad: client bug: event processing lagging behind by 382ms, your system is too slow
[ 41075.009] (EE) event1  - Power Button: client bug: event processing lagging behind by 1040ms, your system is too slow
[ 41075.009] (EE) event3  - Power Button: client bug: event processing lagging behind by 1040ms, your system is too slow
[ 60993.939] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 61022.742] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 61456.304] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 62833.756] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 63129.269] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 63129.269] (EE) event8  - bcm5974: WARNING: log rate limit exceeded (5 msgs per 24h). Discarding future messages.
[ 67081.490] (EE) event8  - bcm5974: client bug: event processing lagging behind by 28ms, your system is too slow
[ 69447.954] (EE) event8  - bcm5974: client bug: event processing lagging behind by 23ms, your system is too slow
[ 69983.294] (EE) event8  - bcm5974: client bug: event processing lagging behind by 23ms, your system is too slow
[ 78710.772] (EE) event8  - bcm5974: kernel bug: Touch jump detected and discarded.
[ 84611.777] (EE) event8  - bcm5974: client bug: event processing lagging behind by 25ms, your system is too slow
There is no /etc/X11/xorg.conf on my system. /etc/X11/xorg.conf.d/ contains a file called 20-displaylink.conf:
Section "Device"
    Identifier  "Intel"
    Driver      "intel"
EndSection
Plus, another file called 90-xpra-virtual.conf which was automatically created:
# Ignore all xpra virtual devices by default,
# these will be enabled explicitly when needed.
Section "InputClass"
        Identifier "xpra-virtual-device"
        MatchProduct "Xpra"
        Option "Ignore" "true"
EndSection
A *systematic troubleshooting guide* would be most helpful/appreciated. Unfortunately, I could not find a suitable one online, i.e. one that is specific to this piece of hardware and that includes a fix, not only diagnosis. P.S.: I have another older computer (an older MacMini) that also has an Intel chipset/GPU, also uses the i915 driver and suffers from the same type of problem (renderer: llvmpipe instead of the GPU, even though the GPU seems to be recognized). So I am pretty sure that this is related to the upgrade to Trixie/Debian testing. Other than that, I have no idea where to start troubleshooting this.
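A hedged starting point, inferred from the Xorg log above rather than a confirmed fix: the log shows AIGLX failing to dlopen i965_dri.so (Mesa dropped the classic i965 driver; Gen 7.5 hardware is now served by crocus, which inxi does report as available), yet 20-displaylink.conf forces the legacy `intel` DDX. Letting X fall back to the generic KMS driver is one thing to try: either delete /etc/X11/xorg.conf.d/20-displaylink.conf, or change it to:

```
Section "Device"
    Identifier  "Intel"
    Driver      "modesetting"
EndSection
```

Then restart X and re-check the renderer with `glxinfo -B`; if it still says llvmpipe, the problem is below X, in Mesa or the kernel driver.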
Ferdinand Rau (66 rep)
Jul 30, 2025, 08:01 PM • Last activity: Aug 1, 2025, 07:18 PM
0 votes
0 answers
21 views
Firefox on RISCV-64 fails to detect GPU and falls back to software rendering
I have built Firefox for the RISC-V 64 architecture on Wayland in Ubuntu 22.

sudo apt-get update

Dependency packages:

sudo apt-get install -y curl libnspr4 libgtk-3-dev python3-dev llvm-14 llvm-dev libnspr4-dev clang libclang-dev libx11-xcb-dev libevent-dev libdbus-glib-1-dev m4 meson ninja-build libssl-dev libxcb-glx0-dev libxkbcommon-dev libxcb-dri2-0-dev libvpx-dev libnss3-dev
pip install -U pip
pip install -U meson
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
cargo install sccache
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
echo 'export RUSTC_WRAPPER="sccache"' >> ~/.bashrc
echo 'export SCCACHE_DIR="$HOME/.cache/sccache"' >> ~/.bashrc
echo 'export SCCACHE_CACHE_SIZE="10G"' >> ~/.bashrc
source ~/.bashrc
cargo install cbindgen

I also installed nodejs (v17.1.0) and upgraded cairo (1.16.0 -> 1.18.4). Then:

git clone --branch FIREFOX_139_0_4_RELEASE --single-branch --depth 1 https://github.com/mozilla-firefox/firefox.git
cd firefox

applied the riscv64-related patch, and created this mozconfig:

mk_add_options MOZ_PARALLEL_BUILD=4
ac_add_options --enable-optimize=-O3
ac_add_options --enable-application=browser
ac_add_options --with-app-name=firefox
ac_add_options --disable-release
ac_add_options --enable-hardening
ac_add_options --enable-rust-simd
ac_add_options --enable-linker=bfd
ac_add_options --disable-bootstrap
ac_add_options --enable-official-branding
ac_add_options --with-branding=browser/branding/official
ac_add_options --enable-update-channel=release
ac_add_options --with-unsigned-addon-scopes=app,system
ac_add_options --allow-addon-sideload
ac_add_options --enable-default-toolkit=cairo-gtk3-wayland
ac_add_options --with-system-zlib
ac_add_options --enable-strip
ac_add_options --enable-system-ffi
ac_add_options --with-system-libevent
ac_add_options --with-system-nspr
ac_add_options --disable-updater
ac_add_options --disable-tests
ac_add_options --enable-alsa
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options --with-system-libvpx
ac_add_options --with-ccache=sccache
ac_add_options MOZ_WEBRENDER=1 MOZ_APP_REMOTINGNAME=Firefox

Build environment and build:

export LDFLAGS="$LDFLAGS -L/usr/local/lib/riscv64-linux-gnu -L/usr/local/lib -lcairo -lX11 -lXext -lfreetype -lEGL -lGLESv2"
export CFLAGS="$CFLAGS -I/usr/local/include -I/usr/local/include/cairo -I/usr/local/include/freetype2"
export MOZ_ACCELERATED=1
export LIBVA_DRIVERS_PATH=/usr/lib/riscv64-linux-gnu/dri
sudo chown -R $USER:$USER /home/user/firefox
./mach vendor rust --ignore-modified
./mach build

Actual results: Firefox fails to detect my Mesa GPU driver. On the about:support page, under the Graphics section, Compositing is listed as "WebRender (Software)", the WebGL driver information is missing, and although the GPU is shown as active, no description is provided.

Expected results: Firefox should detect my Mesa GPU driver, Compositing should show "WebRender (hardware)", the WebGL 1 and WebGL 2 driver information should be listed correctly, and the GPU details should display correctly.

Are any code changes required for a Firefox built purely on the EGL backend on the Wayland platform? During startup I am also getting some error logs (shown in a screenshot). Can someone help me resolve this issue?
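Before changing the build, a hedged sanity check is whether the OS itself exposes a hardware EGL/DRI stack at all; if Mesa has no driver for this board's GPU, any Firefox build can only fall back to software WebRender. The commands below are generic diagnostics, not Firefox-specific, and modify nothing:

```shell
# Render nodes exposed by the kernel GPU driver (none => driver problem):
ls /dev/dri/ 2>/dev/null || echo "no DRM nodes"
# Hardware EGL stack, if mesa-utils is installed:
command -v eglinfo >/dev/null 2>&1 && eglinfo 2>/dev/null | head -n 5 || echo "eglinfo not installed"
# One-off run with Wayland forced (a standard Mozilla environment variable):
#   MOZ_ENABLE_WAYLAND=1 ./obj-*/dist/bin/firefox
probe=ok
```

If eglinfo already reports only a software device, the fix belongs in Mesa/kernel packaging for the RISC-V GPU rather than in the Firefox source.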
Saiteja (1 rep)
Jul 25, 2025, 05:32 AM • Last activity: Jul 29, 2025, 06:13 PM
0 votes
0 answers
15 views
Troubles with gpu drivers on Don't Starve
I installed Don't Starve through Steam, and this appeared (https://i.sstatic.net/EjSGUPZP.png). It looks like there is a problem with the GPU drivers; I had the same problem with the kitty terminal, but solved it by installing mesa-amber instead of mesa. How can I solve this problem? lspci -k output:
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers (rev 08)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: skl_uncore
lspci: Unable to load libkmod resources: error -2
00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07)
        DeviceName: Onboard IGD
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: i915
00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: xhci_hcd
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-LP Thermal subsystem (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: intel_pch_thermal
00:16.0 Communication controller: Intel Corporation Sunrise Point-LP CSME HECI #1 (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: mei_me
00:17.0 SATA controller: Intel Corporation Sunrise Point-LP SATA Controller [AHCI mode] (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: ahci
00:1c.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #1 (rev f1)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: pcieport
00:1c.4 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #5 (rev f1)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: pcieport
00:1c.5 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #6 (rev f1)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: pcieport
00:1d.0 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #9 (rev f1)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: pcieport
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-LP LPC Controller (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
00:1f.2 Memory controller: Intel Corporation Sunrise Point-LP PMC (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
00:1f.3 Audio device: Intel Corporation Sunrise Point-LP HD Audio (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: snd_hda_intel
00:1f.4 SMBus: Intel Corporation Sunrise Point-LP SMBus (rev 21)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: i801_smbus
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445 / 530/535 / 620/625 Mobile] (rev 83)
        Subsystem: Hewlett-Packard Company Radeon R7 M340
        Kernel driver in use: amdgpu
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller (rev 15)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: r8169
03:00.0 Network controller: Intel Corporation Wireless 3165 (rev 81)
        Subsystem: Intel Corporation Dual Band Wireless AC 3165 [Stone Peak 1x1]
        Kernel driver in use: iwlwifi
04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS522A PCI Express Card Reader (rev 01)
        Subsystem: Hewlett-Packard Company Device 8101
        Kernel driver in use: rtsx_pci
System: Arch; I ran pacman -Syu this morning (the game didn't work before that either).
Межпространственный Голубь (1 rep)
Jul 19, 2025, 10:07 AM
1 vote
1 answer
1999 views
Where is /etc/initramfs-tools/modules in OpenSUSE?
I'm following a GPU passthrough guide online which requires me to add "pci-stub" to /etc/initramfs-tools/modules. But it doesn't exist. I'm using OpenSUSE Tumbleweed.
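A hedged translation of that guide step, untested here: openSUSE builds its initramfs with dracut rather than initramfs-tools, so the module is added through a dracut drop-in file (the filename is arbitrary):

```
# /etc/dracut.conf.d/10-pci-stub.conf   (filename is an example)
force_drivers+=" pci-stub "
```

Then rebuild the initramfs with `dracut -f` and reboot.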
Avery Alejandro (115 rep)
Jan 2, 2019, 12:36 AM • Last activity: Jul 6, 2025, 11:08 PM
0 votes
1 answer
3735 views
How to stress test a gpu?
How can I stress my GPU in order to test it? 100% load, fans screaming, etc. I need to know if the GPU works. glxgears and glmark2 return even worse FPS than the i5 CPU. Arch Linux, NVIDIA GTX 1080.
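A hedged sketch, not from an answer here: on a GTX 1080, glxgears/glmark2 being slower than the CPU usually means rendering fell back to llvmpipe, so confirming the renderer (`glxinfo -B | grep -i renderer`) comes before stress testing. None of the load tools below ship by default and package names vary by distro; the script only probes for them.

```shell
# Probe for common GPU load generators without running any of them.
tools="glmark2 vkmark glxgears gpu_burn"
available=""
for t in $tools; do
  command -v "$t" >/dev/null 2>&1 && available="$available $t"
done
echo "installed load tools:${available:- none}"
# Typical sustained-load invocations (run manually, watch nvidia-smi):
#   glmark2 --fullscreen --run-forever
#   gpu_burn 120        # CUDA stress, from the gpu-burn project
```

Watching `nvidia-smi` in a second terminal while one of these runs shows whether utilisation, power draw and fan speed actually climb.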
jjk (445 rep)
Feb 22, 2021, 08:43 AM • Last activity: Jul 5, 2025, 09:05 PM
0 votes
0 answers
17 views
Touchscreen sometimes stops registering inputs during kiosk mode
I am working on a project using a Yocto build for a CompuLab module with the i.MX8 processor. It has an ft5x06 touchscreen, connected via a custom out-of-tree USB-I2C bridge module. This all works fine in normal use. When running a Chromium web app, the touchscreen sometimes goes unresponsive. I can still interact with the screen using a mouse and keyboard. If I run evtest, the touchscreen appears to still be responsive, but the Chromium kiosk window does not register it. I am running Wayland underneath. It is possible the custom driver is causing issues, but the fact that it still functions under evtest makes me sceptical. As an aside, Chromium seems to trigger GPU errors; I don't know if this could be related or not.
[ 1815.795710] gcmkONERROR: status=-3(gcvSTATUS_OUT_OF_MEMORY) @ _GFPAlloc(491)
dmesg does not show anything useful. I've noticed that sometimes plugging in a mouse and scrolling is enough to get it working again (this worked 3 out of 4 times I have reproduced the issue). The issue is intermittent and not easily repeatable. I'm looking for any advice or suggestions of how to approach debugging this.
JAS (1 rep)
Jul 1, 2025, 07:56 AM • Last activity: Jul 1, 2025, 08:05 AM
1 vote
1 answer
3127 views
AMD GPU not being used by Debian
I'm running Debian 10 with kernel 5.10 via buster-backports. The installed driver for the AMD card is amdgpu, and the output of dmesg, from my understanding, tells me the driver is being recognized:
[    2.774224] [drm] amdgpu kernel modesetting enabled.
[    2.774409] amdgpu: Topology: Add CPU node
[    2.774596] amdgpu 0000:03:00.0: amdgpu: Trusted Memory Zone (TMZ) feature not supported
[    2.792960] amdgpu 0000:03:00.0: amdgpu: Fetched VBIOS from ATRM
[    2.792962] amdgpu: ATOM BIOS: BR64533.001
[    2.811910] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_k_mc.bin
[    2.811918] amdgpu 0000:03:00.0: amdgpu: VRAM: 2048M 0x000000F400000000 - 0x000000F47FFFFFFF (2048M used)
[    2.811919] amdgpu 0000:03:00.0: amdgpu: GART: 256M 0x000000FF00000000 - 0x000000FF0FFFFFFF
[    2.811996] [drm] amdgpu: 2048M of VRAM memory ready
[    2.811997] [drm] amdgpu: 3072M of GTT memory ready.
[    2.812695] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_pfp_2.bin
[    2.812704] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_me_2.bin
[    2.812712] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_ce_2.bin
[    2.812721] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_rlc.bin
[    2.812776] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_mec_2.bin
[    2.812832] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_mec2_2.bin
[    2.813340] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_sdma.bin
[    2.813348] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_sdma1.bin
[    2.813383] amdgpu: hwmgr_sw_init smu backed is polaris10_smu
[    2.813465] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_uvd.bin
[    2.814530] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_vce.bin
[    2.815130] amdgpu 0000:03:00.0: firmware: direct-loading firmware amdgpu/polaris12_k_smc.bin
[    3.051566] amdgpu 0000:03:00.0: amdgpu: SE 2, SH per SE 1, CU per SH 5, active_cu_number 10
[    3.055642] [drm] Initialized amdgpu 3.40.0 20150101 for 0000:03:00.0 on minor 1
and lshw -c video shows both the integrated and dedicated GPUs:
*-display                 
       description: VGA compatible controller
       product: Intel Corporation
       vendor: Intel Corporation
       physical id: 2
       bus info: pci@0000:00:02.0
       version: 02
       width: 64 bits
       clock: 33MHz
       capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
       configuration: driver=i915 latency=0
       resources: irq:130 memory:c1000000-c1ffffff memory:a0000000-afffffff ioport:4000(size=64) memory:c0000-dffff
  *-display
       description: Display controller
       product: Advanced Micro Devices, Inc. [AMD/ATI]
       vendor: Advanced Micro Devices, Inc. [AMD/ATI]
       physical id: 0
       bus info: pci@0000:03:00.0
       version: c0
       width: 64 bits
       clock: 33MHz
       capabilities: pm pciexpress msi bus_master cap_list rom
       configuration: driver=amdgpu latency=0
       resources: irq:131 memory:b0000000-bfffffff memory:c0000000-c01fffff ioport:3000(size=256) memory:c2300000-c233ffff memory:c2340000-c235ffff
However, when I run glxinfo -B or DRI_PRIME=1 glxinfo -B, the output is:
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: VMware, Inc. (0xffffffff)
    Device: llvmpipe (LLVM 7.0, 256 bits) (0xffffffff)
    Version: 18.3.6
    Accelerated: no
    Video memory: 15817MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 3.3
    Max compat profile version: 3.1
    Max GLES1 profile version: 1.1
    Max GLES profile version: 3.0
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 7.0, 256 bits)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 18.3.6
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
 
OpenGL version string: 3.1 Mesa 18.3.6
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
 
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 18.3.6
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
I have tried running programs that could use the GPU as well, but they seem to keep using the VMware llvmpipe renderer, and the GPU utilization always shows up as 0%. How do I get my machine to use amdgpu?
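A hedged diagnostic sketch (assumption: Mesa 18.3 on Buster does carry a radeonsi driver for Polaris 12, so the question is why it is not being probed). Nothing below modifies the system:

```shell
# llvmpipe with "VMware, Inc." as vendor means pure software rendering.
# 1. Does the kernel expose render nodes for the card at all?
ls -l /dev/dri/ 2>/dev/null || echo "no DRM nodes visible"
# 2. PRIME offload only works if X sees a second provider:
command -v xrandr >/dev/null 2>&1 && xrandr --listproviders 2>/dev/null || echo "xrandr not usable here"
# 3. If a second provider appears, re-test with:
#   DRI_PRIME=1 glxinfo -B | grep -i renderer
checked=yes
```

If no second provider is listed, the break is between the kernel driver (which the dmesg output shows working) and Mesa/X, e.g. missing firmware or a too-old Mesa for the card.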
PC-02 (11 rep)
Jun 11, 2021, 06:32 PM • Last activity: Jun 20, 2025, 06:03 PM
3 votes
2 answers
9006 views
NVIDIA-SMI just shows one GPU instead of two
I have 2 GPUs, but nvidia-smi just shows me one. How can I make it recognize the other one? (Screenshot: https://i.sstatic.net/1LVyh.png)
Zhoueeer (31 rep)
Jul 11, 2018, 06:02 PM • Last activity: Jun 17, 2025, 09:05 AM
0 votes
1 answer
1963 views
Multi-GPU Setup in sway
I will soon have a setup where my desktop has an integrated and a dedicated graphics card, both AMD. I have already searched but didn't find any good answer on how a multi-GPU setup works. I would like a setup where I primarily use the integrated graphics card and can select the GPU for a starting application with an environment variable, similar to primusrun from NVIDIA. I've read that you can use the environment variable WLR_DRM_DEVICES so that the first device is used for rendering and copied to the other graphics cards. But is it possible to select another GPU on the fly for more computation-intensive applications like games? I've also read that this is managed automatically with GBM, but then again, how am I able to choose which GPU to use? Background information: I'm getting a new CPU which I chose to have integrated graphics, so I have 2 renderable devices. The reason is that I want to be able to unplug my dedicated graphics card in software, so I can use it for my Windows VM via PCI passthrough when applications (games in particular) don't work with Wine. Before anyone mentions a dual-OS setup: I already have this, but it's extremely annoying to have to reboot the system to change OS, especially if you're talking with someone over Discord, TeamSpeak, etc. TL;DR: How does a multi-GPU setup in sway work?
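A hedged sketch of the two environment variables involved; the device paths are examples, and the stable names under /dev/dri/by-path are usually safer than cardN, which can swap between boots:

```shell
# wlroots renders on the first listed device and treats later ones as
# output-only, so putting the iGPU first makes it the primary renderer.
export WLR_DRM_DEVICES=/dev/dri/card0:/dev/dri/card1
# Per-application GPU selection on Mesa (the rough analogue of primusrun):
#   DRI_PRIME=1 some_game
```

So the compositor-level choice (WLR_DRM_DEVICES) is fixed at sway startup, while per-application offload to the dGPU goes through Mesa's DRI_PRIME, independently of sway.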
DigitalDragon (3 rep)
Dec 25, 2021, 10:43 AM • Last activity: May 28, 2025, 08:00 AM
1 vote
1 answer
6858 views
PCI passthrough: vfio-pci ignores ids of devices
I have 3 GPUs in my dual Xeon server. I followed the instructions on the Arch wiki and set up vfio-pci with ids=10de:100c,10de:0e1a:

$ modprobe -c | grep vfio
options vfio_iommu_type1 allow_unsafe_interrupts=1
options vfio_pci ids=10de:100c,10de:0e1a
...

But according to dmesg, vfio ignores that option:

[    1.278976] VFIO - User Level meta-driver version: 0.3
[    1.306193] vfio_pci: add [1002:7142[ffff:ffff]] class 0x000000/00000000
[    1.326139] vfio_pci: add [1002:7162[ffff:ffff]] class 0x000000/00000000

Moreover, when I unplugged the card with the 1002:7142 and 1002:7162 devices on board and rebooted, I still had those entries in the dmesg output and no others! I upgraded the Linux kernel and vfio_pci started to add another card, but still independently of the ids option! I don't know what to do to resolve this. I want a specific GPU to be added as a vfio_pci device, and I don't even know where to look. List of GPUs:

#IOMMU group 17
# 02:00.0 VGA compatible controller : Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]
# 02:00.1 Audio device : Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]
#IOMMU group 18
# 03:00.0 VGA compatible controller : NVIDIA Corporation GK110B [GeForce GTX TITAN Black] [10de:100c] (rev a1)
# 03:00.1 Audio device : NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
#IOMMU group 30
# 83:00.0 VGA compatible controller : Advanced Micro Devices, Inc. [AMD/ATI] RV515 PRO [Radeon X1300/X1550 Series] [1002:7142]
# 83:00.1 Display controller : Advanced Micro Devices, Inc. [AMD/ATI] RV515 PRO [Radeon X1300/X1550 Series] (Secondary) [1002:7162]

Modprobe settings:

$ cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:100c,10de:0e1a

Linux version:

$ uname -a
Linux localhost 4.4.21-1-lts #1 SMP Thu Sep 15 20:38:36 CEST 2016 x86_64 GNU/Linux
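One hedged hypothesis, not confirmed for this box: if the host GPU driver binds during early boot before /etc/modprobe.d options are applied, the ids= list never takes effect. On Arch the usual remedy is to load vfio-pci from the initramfs, with the modconf hook present so modprobe.d files are copied in. A sketch of the relevant /etc/mkinitcpio.conf lines (module list matches kernels of this era; vfio_virqfd was merged into vfio on much newer kernels):

```shell
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"
HOOKS="base udev ... modconf ..."
# then regenerate the initramfs:
#   mkinitcpio -p linux-lts
```

After a reboot, `lspci -nnk` should show "Kernel driver in use: vfio-pci" for the TITAN Black's 03:00.0/03:00.1 functions if the ids option was honoured.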
petRUShka (1342 rep)
Sep 21, 2016, 08:40 PM • Last activity: May 23, 2025, 08:06 AM
6 votes
2 answers
2236 views
radeon errors: GPU lockup: ring 0 stalled for more than x msec
I have a newly installed machine with Debian Buster. The GPU is a Radeon FirePro W2100. After a couple of hours of use, the machine suddenly freezes, the display switches to "white noise", and the machine is unusable. In the logs, I see many errors like these:

kernel: radeon 0000:65:00.0: ring 0 stalled for more than 10240msec
kernel: radeon 0000:65:00.0: GPU lockup (current fence id 0x0000000000039bff last fence id 0x0000000000039c42 on ring 0)
kernel: radeon 0000:65:00.0: failed to get a new IB (-35)
kernel: [drm:ffffffff816219d0] *ERROR* Couldn't update BO_VA (-35)
kernel: radeon 0000:65:00.0: failed to get a new IB (-35)

and then

kernel: radeon 0000:65:00.0: ring 0 stalled for more than 10032msec
kernel: radeon 0000:65:00.0: GPU lockup (current fence id 0x0000000000039bff last fence id 0x0000000000039c42 on ring 0)

What do the errors mean, and how can I fix this? Is this a HW or SW problem?
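A hedged experiment rather than a confirmed fix: on older radeon-driver hardware, dynamic power management is a common trigger for "ring stalled"/GPU-lockup errors, and it can be switched off from the kernel command line to test:

```shell
# /etc/default/grub, then run update-grub and reboot:
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.dpm=0"
```

If the lockups stop with DPM disabled, that points at power-state switching (a software/driver issue, possibly at some performance cost); if they continue regardless, failing hardware becomes more likely.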
Martin Vegter (586 rep)
Dec 8, 2019, 07:18 PM • Last activity: May 17, 2025, 12:11 AM
1 vote
0 answers
96 views
How can I verify if GPU firmware was loaded on boot?
This is a follow-up to a question I asked a couple of days ago about the amdgpu module failing to load on boot after an Arch upgrade (https://unix.stackexchange.com/questions/794724/amdgpu-module-fails-to-load-after-arch-upgrade-cannot-allocate-memory). Having read around a bit online, I suspect the issue may be that the GPU firmware is not being loaded on boot. So, I have a double-barrelled question:

1) How can I verify whether the firmware for my AMD GPU was loaded on boot?

2) If it wasn't, is it possible to load the GPU firmware manually after boot? Is there a command for that? (I know that won't be a workable long-term solution, but it might help pinpoint where the issue is.)
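For context, this is the kind of check I have in mind for question 1. It is only a sketch: the paths and patterns are the common defaults (amdgpu firmware shipped by the linux-firmware package), and `dmesg` may need root.

```shell
# 1) Did the kernel log anything about amdgpu firmware loading (or failing)?
dmesg 2>/dev/null | grep -iE 'amdgpu.*(firmware|microcode|ucode)' \
    || echo "no amdgpu firmware messages visible (try: sudo dmesg)"

# 2) Are the firmware blobs even installed on disk?
if [ -d /lib/firmware/amdgpu ]; then
    fw_on_disk=yes
    ls /lib/firmware/amdgpu | head -n 5
else
    fw_on_disk=no
    echo "/lib/firmware/amdgpu is missing - reinstall linux-firmware"
fi
```

If the blobs are on disk but dmesg shows load failures, the firmware is probably missing from the initramfs rather than from the system.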
Time4Tea (2618 rep)
May 12, 2025, 12:59 PM • Last activity: May 12, 2025, 03:29 PM
0 votes
0 answers
18 views
Rendering using the Workbench engine using Blender as a Python module on Amazon Linux 2023 causes libEGL error and takes way too long
I am trying to get Blender cloud rendering with Nvidia GPUs to work. I am using Blender 4.1 as a module. When rendering using Workbench (or Eevee) I get an error, the render takes much much longer than it should (50 sec vs 2 sec), and after the render is done I get a segmentation fault. Here is the...
I am trying to get Blender cloud rendering with NVIDIA GPUs to work. I am using Blender 4.1 as a Python module. When rendering with Workbench (or Eevee) I get an error, the render takes much longer than it should (50 sec vs 2 sec), and after the render is done I get a segmentation fault. Here is the error before the render:

libEGL warning: egl: failed to create dri2 screen
libEGL warning: egl: failed to create dri2 screen
libEGL warning: failed to open /dev/dri/card1: Permission denied
EGL Error (0x3009): EGL_BAD_MATCH: Arguments are inconsistent (for example, a valid context requires buffers not supplied by a valid surface).

The NVIDIA drivers should be set up correctly; running nvidia-smi outputs this:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA A10G                    On  |   00000000:00:1E.0 Off |                    0 |
|  0%   21C    P8             15W /  300W |       1MiB /  23028MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Any ideas what could be the problem here?
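In case it helps, here is how I would check whether the current user can actually open the DRM nodes that EGL tries to use. This is only a sketch; the "render"/"video" group names mentioned in the comment are the common defaults and may differ on Amazon Linux.

```shell
# List DRM nodes and report whether the current user can open them.
# Headless EGL typically only needs the render node (renderD*), which is
# usually owned by the "render" or "video" group.
found=0
for node in /dev/dri/card* /dev/dri/renderD*; do
    [ -e "$node" ] || continue
    found=1
    if [ -r "$node" ] && [ -w "$node" ]; then
        echo "ok:     $node"
    else
        echo "denied: $node (current groups: $(id -nG))"
    fi
done
[ "$found" -eq 1 ] || echo "no /dev/dri nodes found"
```

If a node is reported as denied, adding the rendering user to the owning group (and re-logging in) is the usual fix.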
Philipp Köhler (1 rep)
May 7, 2025, 10:29 AM
2 votes
2 answers
6089 views
Make NVIDIA gpu as a default gpu
I am using **Manjaro 20** gnome. When the linux is installed in my machine. A **Nvidia** driver was installed with **mhwd**. But `lspci` command does not show any nvidia gpu. **command:** ``` lspci | grep VGA ``` **output:** ``` 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (r...
I am using **Manjaro 20** GNOME. When Linux was installed on my machine, an **NVIDIA** driver was installed with **mhwd**. But the lspci command does not show any NVIDIA GPU. **command:**
lspci | grep VGA
**output:**
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
Other commands, like
sudo mhwd -i pci nvidia-linux
or
sudo pacman -S nvidia
result in a blank screen. Also, **NVIDIA X Server Settings** does not show any **OpenGL** or **X-screen** menu. A driver manually downloaded from NVIDIA did not work either. The machine is using the Intel GPU.
mhwd --listinstalled
> Installed PCI configs:
--------------------------------------------------------------------------------
                  NAME               VERSION          FREEDRIVER           TYPE
--------------------------------------------------------------------------------
     video-modesetting            2020.01.13                true            PCI
video-hybrid-intel-nvidia-prime            2020.11.30               false            PCI


Warning: No installed USB configs!
nvidia-smi
Tue Mar 16 22:39:35 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.56       Driver Version: 460.56       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce 930MX       Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   41C    P0    N/A /  N/A |      0MiB /  2004MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
pacman --query | grep nvidia                                                                                                     
lib32-nvidia-utils 460.56-1
linux59-nvidia 460.56-1
mhwd-nvidia 460.56-1
mhwd-nvidia-390xx 390.141-1
nvidia-prime 1.0-4
nvidia-utils 460.56-1
neofetch
██████████████████  ████████   me_sajied@manjaro 
██████████████████  ████████   ----------------- 
██████████████████  ████████   OS: Manjaro Linux x86_64 
██████████████████  ████████   Host: HP ProBook 450 G4 
████████            ████████   Kernel: 5.9.16-1-MANJARO 
████████  ████████  ████████   Uptime: 3 hours, 6 mins 
████████  ████████  ████████   Packages: 1225 (pacman) 
████████  ████████  ████████   Shell: zsh 5.8 
████████  ████████  ████████   Resolution: 1366x768 
████████  ████████  ████████   DE: GNOME 3.38.3 
████████  ████████  ████████   WM: Mutter 
████████  ████████  ████████   WM Theme: Yaru 
████████  ████████  ████████   Theme: Arc [GTK2/3] 
████████  ████████  ████████   Icons: Yaru [GTK2/3] 
                               Terminal: gnome-terminal 
                               CPU: Intel i5-7200U (4) @ 3.100GHz 
                               GPU: NVIDIA GeForce 930MX 
                               GPU: Intel HD Graphics 620 
                               Memory: 1864MiB / 3819MiB
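For reference, since video-hybrid-intel-nvidia-prime is installed, the intended workflow is usually per-application render offload rather than making the NVIDIA card the global default. What Manjaro's prime-run wrapper does amounts to the following sketch; `nvrun` is just a name made up here, and the environment variables are the standard NVIDIA render-offload ones.

```shell
# Minimal PRIME render-offload wrapper, similar in spirit to Manjaro's
# prime-run script. It runs the given command with NVIDIA offload variables.
nvrun() {
    __NV_PRIME_RENDER_OFFLOAD=1 \
    __GLX_VENDOR_LIBRARY_NAME=nvidia \
    __VK_LAYER_NV_optimus=NVIDIA_only \
    "$@"
}

# Example (on the laptop): nvrun glxinfo | grep "OpenGL renderer"
# should then report the NVIDIA GPU instead of the Intel one.
nvrun echo "command would run with NVIDIA offload variables set"
```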
Sajied Shah Yousuf (41 rep)
Mar 16, 2021, 05:01 PM • Last activity: May 6, 2025, 06:04 PM
1 votes
2 answers
3275 views
Completely disable GPU in debian?
I have a motherboard without any internal GPU, so I have to have a GPU to get my server to start. The problem is that i have no use for it other than it doesnt boot without a gpu. The HD 5570 I am using at the moment produces about 65 degrees in idle (passive cooled). This is unecessary and heats up...
I have a motherboard without any integrated GPU, so I need a GPU installed just to get my server to start. The problem is that I have no other use for it. The HD 5570 I am using at the moment idles at about 65 degrees (passively cooled). This is unnecessary and heats up all the other components. Is it possible to completely stop the GPU from drawing power after booting into Debian? There are no BIOS settings I could disable it with; I have looked.
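To illustrate what I mean by disabling it: one commonly suggested approach is to drop the card off the PCI bus via sysfs after boot. This is a sketch only; 0000:01:00.0 is a placeholder address (take the real one from `lspci -D`), and how much power a removed card actually saves varies by board.

```shell
# Remove an unused GPU from the PCI bus after boot (run as root).
# 0000:01:00.0 is a placeholder; find the real address with `lspci -D`.
gpu=/sys/bus/pci/devices/0000:01:00.0
if [ -w "$gpu/remove" ]; then
    echo 1 > "$gpu/remove"
    echo "removed $gpu"
else
    echo "no writable remove file for $gpu (wrong address, or not root?)"
fi
```

The card only reappears after `echo 1 > /sys/bus/pci/rescan` or a reboot.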
jockebq (111 rep)
Aug 13, 2020, 07:07 PM • Last activity: May 3, 2025, 07:00 AM
2 votes
1 answers
288 views
Understanding xrandr providers and source
I have a laptop with two GPUs (Intel and nvidia). `xrandr --listproviders` prints the following: ``` Providers: number : 3 Provider 0: id: 0x217 cap: 0x1, Source Output crtcs: 0 outputs: 0 associated providers: 2 name:NVIDIA-0 Provider 1: id: 0x319 cap: 0xf, Source Output, Sink Output, Source Offloa...
I have a laptop with two GPUs (Intel and nvidia). xrandr --listproviders prints the following:
Providers: number : 3
Provider 0: id: 0x217 cap: 0x1, Source Output crtcs: 0 outputs: 0 associated providers: 2 name:NVIDIA-0
Provider 1: id: 0x319 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 1 outputs: 1 associated providers: 1 name:modesetting
Provider 2: id: 0x243 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 4 outputs: 9 associated providers: 1 name:modesetting
When I disable the nvidia GPU, the first provider disappears and the two other providers change some ids and attributes.

**Questions**

- *What is a provider?* I read that a GPU is a provider, with its physical connections being the outputs. But then why do I have 3 providers? I also run a docking station that provides DisplayPort sockets, but the xrandr --listproviders output doesn't change when I disconnect the dock.
- *What is a crtc?* I know [this related answer](https://superuser.com/a/1078359), but why does the nvidia device have crtcs: 0? Maybe because it doesn't have a physical output (DisplayPort socket or similar)?
- xrandr --auto fails with xrandr: Configure crtc 0 failed when the nvidia GPU is active. Well, there is no crtc, so attempting to configure it is expected to fail. But I can still set up the monitors manually using xrandr --output ..., so how is that supposed to work without any crtc, and why can't xrandr --auto do the same?
- Is there any good learning resource for all of this?
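For reference, the manual pairing I would try instead of plain xrandr --auto looks like this. The provider names come from the --listproviders output above; this is a sketch of the usual PRIME setup (NVIDIA renders, a modesetting provider scans out), not a known-good config for this machine.

```shell
# Explicitly pair the NVIDIA render source with the modesetting sink,
# then let xrandr configure the outputs. Run inside the X session.
pair_providers() {
    xrandr --setprovideroutputsource modesetting NVIDIA-0 &&
    xrandr --auto
}

# Usage (from a terminal in the session):
#   pair_providers
```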
pasbi (313 rep)
Sep 7, 2024, 04:20 PM • Last activity: Apr 29, 2025, 07:16 PM
1 votes
2 answers
3891 views
xorg fails to start on freebsd even after installing nvidia and drm-kmod
So i decided to use freebsd as my daily driver but the xorg seems not to work so first of all when i tried startx for the first time (i installed the nvidia drivers) it gave > (EE) Cannot run in framebuffer mode. Please specify busIDs for all framebuffer mode Heres the full log file - https://pasteb...
So I decided to use FreeBSD as my daily driver, but Xorg does not seem to work. First of all, when I tried startx for the first time (I had installed the NVIDIA drivers), it gave:

> (EE) Cannot run in framebuffer mode. Please specify busIDs for all framebuffer devices

Here's the full log file: https://pastebin.com/sKCsm2Nn

Then I tried nvidia-xconfig, and it gave me this:

> (EE) no screens found(EE)

Here's the full log: https://pastebin.com/5kXndP8J

I have a Lenovo Flex 2-14. Here are my GPU specifications:

> vgapci0@pci0:0:2:0: class=0x030000 card=0x397817aa chip=0x0a168086 rev=0x0b hdr=0x00
>     vendor = 'Intel Corporation'
>     device = 'Haswell-ULT Integrated Graphics Controller'
>     class = display
> --
> vgapci1@pci0:4:0:0: class=0x030200 card=0x381717aa chip=0x114010de rev=0xa1 hdr=0x00
>     vendor = 'NVIDIA Corporation'
>     device = 'GF117M [GeForce 610M/710M/810M/820M / GT 620M/625M/630M/720M]'
>     class = display

I have installed nvidia-driver-390-390.138_1 and drm-kmod (i915kms.ko).
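For reference, my understanding is that the "specify busIDs" error means Xorg found both GPUs and needs to be told which one to use. Based on the pciconf output above (vgapci0 at 0:2:0 is the Intel, vgapci1 at 4:0:0 is the NVIDIA), a minimal device section for the Intel card in /usr/local/etc/X11/xorg.conf.d/ might look like the fragment below; treat it as a starting point, not a known-good config:

```
Section "Device"
    Identifier "IntelHD"
    Driver     "modesetting"
    BusID      "PCI:0:2:0"
EndSection
```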
sierra (11 rep)
Jan 13, 2021, 06:42 AM • Last activity: Apr 25, 2025, 11:05 AM