Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
1
answers
2158
views
KMS Terminal: How to disable second screen, or how to force a certain resolution?
I have two screens with different native resolutions connected to my Gentoo/systemd machine. Since the VTs try to mirror the outputs, they do not use the whole size of the higher-resolution screen. I never use virtual terminals on the lower-resolution screen, so I'd like to have them use the whole high-resolution monitor.
If I disable the lower-resolution screen with the video kernel command-line parameter, I cannot switch it on in X11, since the kernel thinks the output is not connected. Nevertheless, on X11 I want to be able to enable the second monitor whenever I need it.
Is there an option on the kernel command line, in systemd, or somewhere I can't think of at the moment, to disable the virtual terminals on one output, to have different virtual terminals on different outputs (a kind of multiseat), to force the VTs to use the whole size of one connected screen, or to disable an output on virtual terminals in such a way that xrandr can re-enable it?
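For reference, the connector-level form of the video= parameter mentioned above looks like the sketch below; the connector names DP-1 and HDMI-A-1 are placeholders for illustration, not the actual outputs in question.
# force a mode on one connector, and disable another one entirely ("d")
video=DP-1:1920x1080@60 video=HDMI-A-1:d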
soulsource
(375 rep)
Feb 22, 2014, 05:22 PM
• Last activity: Jul 24, 2025, 10:05 PM
0
votes
0
answers
134
views
What's the point of removing KMS with nvidia drivers? It seems to only cause errors
> Remove `kms` from the `HOOKS` array in `/etc/mkinitcpio.conf` and [regenerate the initramfs](https://wiki.archlinux.org/title/Regenerate_the_initramfs). This will prevent the initramfs from containing the `nouveau` module making sure the kernel cannot load it during early boot. The [nvidia-utils](https://archlinux.org/packages/?name=nvidia-utils) package contains a file which blacklists the `nouveau` module once you reboot.
The above quote from the Arch Linux wiki page for [NVIDIA](https://wiki.archlinux.org/title/NVIDIA) says to remove the `kms` hook after installing the NVIDIA drivers, so that nouveau can't conflict with them (or at least I think that's what it means!).
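A minimal sketch of what that procedure amounts to; the HOOKS line below is an illustration based on a stock Arch mkinitcpio.conf, not necessarily this machine's actual file.
# /etc/mkinitcpio.conf -- before:
HOOKS=(base udev autodetect microcode modconf kms keyboard keymap consolefont block filesystems fsck)
# after (kms removed):
HOOKS=(base udev autodetect microcode modconf keyboard keymap consolefont block filesystems fsck)
# regenerate the initramfs for all presets
mkinitcpio -P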
However, upon removing it, I experience problems with my vconsole TTY on boot, including the font not being set correctly and the login prompt being (visually) corrupted by a kernel error message:
[[bottom 20% of an ascii art defined in /etc/issue that had displayed fully when kms wasn't deleted]]
emmy login: [ 5.559980] i915 0000:00:02.0: [drm] *ERROR* Failed to probe lspcon
Then I saw that even with the `kms` hook added, `lsmod | grep nouveau` shows nouveau isn't loaded. So what's the point of removing the `kms` hook, when all it seems to do is cause errors from odd module loading orders? And what does the hook actually do?
Here is my journalctl output for both cases (I am using an Intel i915 integrated graphics card plus an NVIDIA GTX 1650 card):
KMS removed: `journalctl -b | grep -E "nvidia|drm|fbcon|i915"`
May 04 23:39:11 emmy kernel: Command line: initrd=\intel-ucode.img initrd=\initramfs-linux.img root=UUID=c0f85f88-6e8a-4002-8367-52c4141be5c6 rw nvidia_drm.modeset=1
May 04 23:39:11 emmy kernel: The simpledrm driver will not be probed
May 04 23:39:11 emmy kernel: Kernel command line: initrd=\intel-ucode.img initrd=\initramfs-linux.img root=UUID=c0f85f88-6e8a-4002-8367-52c4141be5c6 rw nvidia_drm.modeset=1
May 04 23:39:11 emmy kernel: ACPI: bus type drm_connector registered
May 04 23:39:11 emmy kernel: fbcon: Deferring console take-over
May 04 23:39:11 emmy kernel: fbcon: Taking over console
May 04 23:39:11 emmy systemd: Starting Load Kernel Module drm...
May 04 23:39:11 emmy systemd: modprobe@drm.service: Deactivated successfully.
May 04 23:39:11 emmy systemd: Finished Load Kernel Module drm.
May 04 23:39:11 emmy kernel: nvidia: loading out-of-tree module taints kernel.
May 04 23:39:11 emmy kernel: nvidia: module verification failed: signature and/or required key missing - tainting kernel
May 04 23:39:11 emmy kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 238
May 04 23:39:11 emmy kernel: nvidia 0000:01:00.0: enabling device (0000 -> 0003)
May 04 23:39:11 emmy systemd-modules-load: Inserted module 'nvidia_uvm'
May 04 23:39:12 emmy kernel: nvidia-modeset: Loading NVIDIA UNIX Open Kernel Mode Setting Driver for x86_64 570.144 Release Build (archlinux-builder@echo archlinux)
May 04 23:39:12 emmy kernel: [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
May 04 23:39:13 emmy kernel: i915 0000:00:02.0: [drm] Found coffeelake (device ID 3e9b) integrated display version 9.00 stepping N/A
May 04 23:39:15 emmy kernel: i915 0000:00:02.0: vgaarb: deactivate vga console
May 04 23:39:15 emmy kernel: i915 0000:00:02.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
May 04 23:39:15 emmy kernel: i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
May 04 23:39:15 emmy kernel: mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_ops [i915])
May 04 23:39:15 emmy kernel: [drm] Initialized nvidia-drm 0.0.0 for 0000:01:00.0 on minor 0
May 04 23:39:15 emmy kernel: nvidia 0000:01:00.0: [drm] No compatible format found
May 04 23:39:15 emmy kernel: nvidia 0000:01:00.0: [drm] Cannot find any crtc or sizes
May 04 23:39:15 emmy kernel: i915 0000:00:02.0: [drm] *ERROR* Failed to probe lspcon
May 04 23:39:15 emmy kernel: [drm] Initialized i915 1.6.0 for 0000:00:02.0 on minor 1
May 04 23:39:15 emmy kernel: fbcon: i915drmfb (fb0) is primary device
May 04 23:39:15 emmy kernel: snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
May 04 23:39:15 emmy kernel: i915 0000:00:02.0: [drm] fb0: i915drmfb frame buffer device
KMS not removed (login screen and ASCII art work fine):
May 04 23:47:03 emmy kernel: Command line: initrd=\intel-ucode.img initrd=\initramfs-linux.img root=UUID=c0f85f88-6e8a-4002-8367-52c4141be5c6 rw nvidia_drm.modeset=1
May 04 23:47:03 emmy kernel: The simpledrm driver will not be probed
May 04 23:47:03 emmy kernel: Kernel command line: initrd=\intel-ucode.img initrd=\initramfs-linux.img root=UUID=c0f85f88-6e8a-4002-8367-52c4141be5c6 rw nvidia_drm.modeset=1
May 04 23:47:04 emmy kernel: ACPI: bus type drm_connector registered
May 04 23:47:04 emmy kernel: fbcon: Deferring console take-over
May 04 23:47:04 emmy kernel: fbcon: Taking over console
May 04 23:47:04 emmy kernel: i915 0000:00:02.0: [drm] Found coffeelake (device ID 3e9b) integrated display version 9.00 stepping N/A
May 04 23:47:04 emmy kernel: i915 0000:00:02.0: vgaarb: deactivate vga console
May 04 23:47:04 emmy kernel: i915 0000:00:02.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
May 04 23:47:04 emmy kernel: i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
May 04 23:47:04 emmy kernel: i915 0000:00:02.0: [drm] *ERROR* Failed to probe lspcon
May 04 23:47:04 emmy kernel: [drm] Initialized i915 1.6.0 for 0000:00:02.0 on minor 0
May 04 23:47:04 emmy kernel: fbcon: i915drmfb (fb0) is primary device
May 04 23:47:04 emmy kernel: i915 0000:00:02.0: [drm] fb0: i915drmfb frame buffer device
May 04 23:47:04 emmy systemd: Starting Load Kernel Module drm...
May 04 23:47:04 emmy systemd: modprobe@drm.service: Deactivated successfully.
May 04 23:47:04 emmy systemd: Finished Load Kernel Module drm.
May 04 23:47:04 emmy kernel: nvidia: loading out-of-tree module taints kernel.
May 04 23:47:04 emmy kernel: nvidia: module verification failed: signature and/or required key missing - tainting kernel
May 04 23:47:04 emmy kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 236
May 04 23:47:04 emmy kernel: nvidia 0000:01:00.0: enabling device (0000 -> 0003)
May 04 23:47:04 emmy systemd-modules-load: Inserted module 'nvidia_uvm'
May 04 23:47:04 emmy kernel: nvidia-modeset: Loading NVIDIA UNIX Open Kernel Mode Setting Driver for x86_64 570.144 Release Build (archlinux-builder@echo archlinux)
May 04 23:47:04 emmy kernel: mei_hdcp 0000:00:16.0-b638ab7e-94e2-4ea2-a552-d1c54b627f04: bound 0000:00:02.0 (ops i915_hdcp_ops [i915])
May 04 23:47:04 emmy kernel: [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
May 04 23:47:05 emmy kernel: snd_hda_intel 0000:00:1f.3: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
May 04 23:47:06 emmy kernel: [drm] Initialized nvidia-drm 0.0.0 for 0000:01:00.0 on minor 1
May 04 23:47:06 emmy kernel: nvidia 0000:01:00.0: [drm] No compatible format found
May 04 23:47:06 emmy kernel: nvidia 0000:01:00.0: [drm] Cannot find any crtc or sizes
I don't see what the difference is because both end up with my i915 card using the framebuffer.
SmolScorbunny
(1 rep)
May 4, 2025, 11:08 PM
• Last activity: May 5, 2025, 08:24 AM
0
votes
1
answers
2096
views
Set a specific KMS resolution at boot time
I'm trying to set a specific resolution for KMS at boot time.
By default KMS chooses the highest resolution available (2500x1600), which is a bit hard to read.
I'd like to set 1440x900 instead.
I tried two things via GRUB; the first one:
GRUB_GFXMODE=1440x900
GRUB_GFXPAYLOAD_LINUX=keep
GRUB_GFXPAYLOAD_LINUX=1440x900
but that didn't help; the system acts just the same whether it is there or not, and just continues to use 2500x1600.
The other thing I tried instead was setting a kernel parameter like so:
GRUB_CMDLINE_LINUX_DEFAULT="video=1440x900"
That kind of helped a bit: the resolution changed and it's much better and more readable, but the virtual size didn't change accordingly. It's still 2500x1600, meaning my TTY is much bigger than the screen itself, so I only see the upper-left part of a larger terminal.
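For comparison, the documented form of that parameter can also name the output connector explicitly; a sketch (eDP-1 is only a placeholder connector name), with the GRUB config regenerated afterwards:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="video=eDP-1:1440x900@60"
# then rebuild grub.cfg
grub-mkconfig -o /boot/grub/grub.cfg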
How can I force KMS to a specific resolution?
Thanks
nsklaus
(1 rep)
May 24, 2019, 05:49 AM
• Last activity: Mar 9, 2025, 06:05 PM
2
votes
1
answers
796
views
How can I change the resolution for DRM?
I need to change the standard resolution for my DRM mode. How can I achieve that?
When I run `cat` on the modes file I get:
cat /sys/class/drm/card0/card0-HDMI-A-1/modes
1920x1080
1920x1080
1920x1080
1280x720
1280x720
1280x720
720x576
720x576
720x480
720x480
The problem is my HDMI screen cannot support 1920x1080. My HDMI screen supports 1280x720 and below.
So how can I tell DRM not to use 1920x1080?
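One commonly cited way to pin a KMS mode is the video= kernel parameter; a sketch, assuming the connector name taken from the sysfs path above:
# force the HDMI connector to 1280x720 at 60 Hz ("e" = treat the output as enabled)
video=HDMI-A-1:1280x720@60e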
euraad
(219 rep)
Oct 11, 2024, 10:38 PM
• Last activity: Oct 13, 2024, 02:37 PM
2
votes
0
answers
294
views
delaying or speeding up drm/kms with amdgpu driver?
I have an amdgpu card, using the open source driver. The amdgpu driver claims to do early KMS by default, but the "early" is very late: exactly 2.5s!
I wouldn't care much, but it breaks and delays my disk encryption password prompt.
The laptop already boots with EFI video, which is perfectly usable and correct. Then DRM/KMS loads and screws up the size and fonts for the TTY. Then sd-vconsole starts and corrects it. Hence I'm trying to either delay KMS or force it to be *really* early.
Things that failed:
1. Forcing the EDID:
- # get-edid > myedid.bin
- do the dance to put the file in the initramfs
- add drm.edid_firmware=edid/myedid.bin to the kernel parameters
It fails with "invalid firmware", but that doesn't matter; I still get the error right on top of my sd-encrypt prompt, so it would not help move it earlier/later.
2. Early KMS hook.
Tried this mkinitcpio.conf list for the hooks:
HOOKS=(systemd kms autodetect modconf keyboard sd-vconsole block sd-encrypt microcode lvm2 filesystems fsck)
It still happens late.
3. Forcing the right TTY size/font.
Thinking I can't win, I tried to join KMS and just tell it what to set things to when it comes crashing in. I added various combinations of video=efifb fbcon=font:iso01-12x22 fbcon=nodefer video=eDP-1:1920x1200@60.03 (which is what sd-vconsole sets).
It still ignores the font, and while it always sets the display to the right resolution, the virtual resolution for the TTY gets cut to 1/4, which breaks all the text for a while. And it still starts at 2.5s into the boot.
4. Bundling amdgpu firmware in the initramfs.
Reading https://forums.gentoo.org/viewtopic-p-8789979.html , I added FILES=(/usr/lib/firmware/amdgpu/*) to mkinitcpio.conf.
Nothing changes.
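For completeness, the other early-KMS mechanism in mkinitcpio is listing the driver in the MODULES array rather than relying on the kms hook; a sketch only, not verified on this machine:
# /etc/mkinitcpio.conf
MODULES=(amdgpu)
# rebuild the initramfs
mkinitcpio -P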
gcb
(632 rep)
Apr 29, 2024, 06:17 PM
• Last activity: Apr 29, 2024, 06:49 PM
2
votes
2
answers
448
views
Xorg problem with Radeon Mobility X1600 on a 20" iMac LCD from 2006
My attempt to give a meaningful purpose to a 20-inch 2006 iMac 4.1 choked on an odd Xorg problem. Tried **FreeBSD 13.0 Release/amd64** and **elementaryOS 6.0 Odin** as up-to-date systems with their respective **Xorg** packages. Using the **radeon** driver *(only under Linux, as this driver was not present under FreeBSD, nor was it available as a separate package or in ports)* seems to provide the correct graphics mode with the native resolution of the built-in LCD screen. Windows, icons, panels look all fine, but the background picture *(or the desktop background in general, even as a solid colour)* shows broken artifacts.

Its peculiarity is that ONLY the background is drawn flawed, and any GUI object over that looks perfectly fine, including translucent windows, menubars, icons, etc.

Using the **modesetting** driver seems to be unable to render the graphics screen properly, with the entire display drawn with off-shifted lines.

As the **modesetting** driver exists *(and behaves exactly the same way)* on both FreeBSD and Linux *(talking about recent releases as of late 2021)*, and is also the obvious way forward, I would prefer to get this working instead of relying on **radeon**.
So, I started by extracting the native resolution details from the EDID data of this 2006 iMac's built-in LCD.
Identifier "Color LCD"
ModelName "Color LCD"
VendorName "APP"
# Monitor Manufactured week 0 of 2005
# EDID version 1.3
# Digital Display
DisplaySize 430 270
Gamma 2.20
Modeline "Mode 0" 119.00 1680 1728 1760 1840 1050 1053 1059 1080 -hsync -vsync
The suggested **modeline** matches what **xorg** auto-detects when started without an **xorg.conf** file. As the graphics screen works well under this computer's intended *(and outdated, no longer supported)* operating system **Mac OS X 10.6** Snow Leopard, and looks fine under **Windows 10** too, I used a program named [PowerStrip](http://www.fredshack.com/docs/x.html) to collect whatever screen details I could under Windows. This allowed me to fabricate some promising custom modelines.
Section "Modes"
Identifier "LTM201M1-MODELINES"
###ModeLine "1680x1050" 147.136 1680 1784 1968 2256 1050 1051 1054 1087 +hsync +vsync
ModeLine "1600x1000" 133.142 1600 1704 1872 2144 1000 1001 1004 1035 +hsync +vsync
###ModeLine "1400x1050" 122.614 1400 1488 1640 1880 1050 1051 1054 1087 +hsync +vsync
ModeLine "1664x936" 128.373 1664 1760 1936 2208 936 937 940 969 +hsync +vsync
Modeline "1664x1040" 143.715 1664 1768 1944 2224 1040 1041 1044 1077 +hsync +vsync
ModeLine "1696x1060" 149.543 1696 1800 1984 2272 1060 1061 1064 1097 +hsync +vsync
###ModeLine "1678x1050" 146.745 1678 1784 1964 2250 1050 1051 1054 1087 +hsync +vsync
###ModeLine "1678x1048" 146.475 1678 1784 1964 2250 1048 1049 1052 1085 +hsync +vsync
###ModeLine "1680x1048" 146.866 1680 1784 1968 2256 1048 1049 1052 1085 +hsync +vsync
###ModeLine "k1" 119.000 1680 1728 1760 1840 1050 1053 1059 1080 +hsync +vsync
###ModeLine "k2" 149.543 1680 1800 1984 2272 1050 1061 1064 1097 +hsync +vsync
EndSection
Those with the triple-hashtag comments do not work well with the modesetting driver. They switch the display into an unreadable line-shifted mode as shown in the 3rd picture above. The last two lines were not produced by PowerStrip data; those were assembled by me manually, using one of the above lines but altering a detail or two in the hope that I could get the native resolution working.
Interestingly, the native resolution of 1680x1050 never seems to work using the **modesetting** driver. Not even the **modeline** that is read from EDID, which is what Windows 10, Mac OS X, and the **radeon** Xorg driver under Linux use successfully.
Also interesting is that **the "1664x1040" and "1696x1060" modes work perfectly**, giving a nice, flicker-free, solid display. These are one step smaller and one step larger than the native 1680x1050, while keeping the 16:10 aspect ratio. I am surprised that 1696x1060 works fine too (obviously, the last few pixels are off screen, hence not visible, but the display is solid without any shifted/running lines).
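As a side note, individual modelines can also be trialled at runtime with xrandr before committing them to xorg.conf; a sketch using the EDID modeline quoted earlier (the output name LVDS-1 and the mode label are illustrative):
xrandr --newmode "1680x1050_edid" 119.00 1680 1728 1760 1840 1050 1053 1059 1080 -hsync -vsync
xrandr --addmode LVDS-1 "1680x1050_edid"
xrandr --output LVDS-1 --mode "1680x1050_edid"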
Here is what the PCI details show.
vgapci0@pci0:1:0:0: class=0x030000 rev=0x00 hdr=0x00 vendor=0x1002 device=0x71c5 subvendor=0x106b subdevice=0x0080
vendor = 'Advanced Micro Devices, Inc. [AMD/ATI]'
device = 'RV530/M56-P [Mobility Radeon X1600]'
class = display
subclass = VGA
The Xorg.0.log from both FreeBSD and Linux, using either the radeon or the modesetting driver, shows nothing wrong. As far as Xorg is concerned, everything runs fine. The problem is that my human eyes see something I am not able to process (shown on the 3rd picture above).
[xrandr --verbose (radeon/Linux)](http://keve.maclab.org/pub/xrandr--verbose.imac41.radeon.linux.txt)
[xrandr --verbose (modesetting/FreeBSD)](http://keve.maclab.org/pub/xrandr--verbose.imac41.modesetting.fbsd.txt)
[Xorg.0.log (radeon/Linux)](http://keve.maclab.org/pub/xorg.0.log.iMac41.radeon.linux.txt)
[Xorg.0.log (modesetting/Linux)](http://keve.maclab.org/pub/xorg.0.log.iMac41.modesetting.linux.txt)
As I am unable to get the native 1680x1050 working with modesetting (failing the same way under FreeBSD and Linux), while the same settings are known to work with the proprietary ATI driver under Windows 10 and Mac OS X, plus the open source radeon driver under Linux, my conclusion is that something may be wrong with the modesetting driver (or the radeonkms module).
**Do you have any suggestion on what to try in order to get the native resolution working with the modesetting driver?**
Alternatively, **what would be the proper channel to report this issue to the developers of the modesetting driver** (or the radeonkms module)**?**
Keve
(123 rep)
Nov 22, 2021, 09:23 PM
• Last activity: Mar 26, 2024, 12:39 PM
3
votes
0
answers
521
views
How does this whole FrameBuffer, DRM, KMS stuff work in today's Linux / Kernel?
I'm confused about what is what nowadays with Linux and video support for the console interface vs X.
Do the /dev/fb* items only relate to the old original framebuffer support?
Does DRM create/support /dev/fb* items?
How does KMS fit into all of this?
Do you still need at least the generic frame buffer support for VESA or EFI to have console support?
TIA!!
user3161924
(283 rep)
Dec 13, 2023, 06:10 PM
2
votes
1
answers
887
views
Unloading KMS driver / Replacing NVIDIA Linux drivers without rebooting
In the past you could trivially replace the NVIDIA proprietary drivers on the fly after switching to the text console, killing X.org, unloading (rmmod) the appropriate NVIDIA modules, and installing new drivers.
However, nowadays NVIDIA recommends running the driver with KMS support, options nvidia-drm modeset=1, and that makes it impossible to unload the kernel modules ("The device is busy").
The Linux kernel allows you to unbind the graphical driver from the text console by running this command:
echo 0 > /sys/class/vtconsole/vtcon1/bind
However, this command results in all the text consoles being completely dead. They just freeze, while the system keeps running.
It looks like after this command one needs to run another command to make the kernel re-enable the built-in driver which drove the text consoles prior to the NVIDIA driver, only the Internet has no information on that.
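For context, the sequence being attempted looks roughly like the sketch below; the module names are the standard proprietary-driver ones, and the vtcon index can differ per machine:
# with X / the display manager already stopped:
echo 0 > /sys/class/vtconsole/vtcon1/bind
rmmod nvidia_drm nvidia_modeset nvidia_uvm nvidia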
This question is **not** limited to NVIDIA drivers. Would be nice to know how to unload an open source KMS driver, be it Intel, AMD or Nouveau.
Here's what I see on boot:
Console: colour dummy device 80x25
printk: console [tty0] enabled
fbcon: Deferring console take-over
fbcon: Taking over console
Console: switching to colour frame buffer device 128x48
Here's what I have in my .config:
CONFIG_SYSFB=y
CONFIG_SYSFB_SIMPLEFB=y
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=200
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
CONFIG_FB_SIMPLE=y
Here's what I have in /sys:
$ find /sys -iname '*fb*'
/sys/class/graphics/fb0
/sys/class/graphics/fbcon
/sys/devices/platform/simple-framebuffer.0/graphics/fb0
/sys/devices/virtual/graphics/fbcon
/sys/module/drm_kms_helper/parameters/drm_fbdev_overalloc
/sys/module/drm_kms_helper/parameters/fbdev_emulation
/sys/module/fb
/sys/module/fb/parameters/lockless_register_fb
Artem S. Tashkinov
(32730 rep)
Jun 21, 2023, 07:26 AM
• Last activity: Jun 29, 2023, 02:05 PM
9
votes
3
answers
5648
views
HDMI monitors not correctly detected after suspend if laptop lid closed
When my Dell XPS 15 9570 laptop is on, the monitor plugged in the HDMI port is correctly detected. Unplugging the monitor also works as expected.
However, when waking up from suspend by briefly lifting the lid open, the HDMI port is not reconfigured. Whatever was plugged at the time it was suspended is still considered connected after resuming.
That means that the resolution of the previously plugged monitor is kept, causing "not supported resolution" on the new monitor if the monitors expect different resolutions. Re-connecting the new monitor fixes the issue in this case.
I have not figured out exactly how, but leaving the lid open when suspended or after resuming seems to change this behavior.
**How can I force the HDMI ports to be scanned again on resume?** or otherwise work around this annoying issue.
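One avenue worth illustrating is that connector detection can be requested from userspace through the DRM sysfs status files; a sketch only, with a placeholder connector name:
# ask the kernel to re-probe this connector's state
echo detect > /sys/class/drm/card0-HDMI-A-1/status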
### Some more context:
* Dual GPU, integrated Intel UHD 630 in use (i915)
* Discrete nvidia GPU disabled, no proprietary drivers loaded
* Debian 10 (buster)
* Linux 4.19.0-2:
nouveau.runpm=0 acpi_rev_override=1 acpi_osi=Linux nouveau.modeset=0 scsi_mod.use_blk_mq=1 mem_sleep_default=deep
* Wayland 1.16, Gnome 3.30
* /sys/power/mem_sleep: s2idle [deep]
### UPDATE
This keeps happening with newer BIOS and Kernel:
* Debian 11 (bullseye)
* Linux 5.2.0-3
* Wayland client 1.17, Gnome 3.30
* newest Dell XPS BIOS: 1.13.0
istepaniuk
(262 rep)
Apr 3, 2019, 09:39 AM
• Last activity: Oct 21, 2022, 03:02 AM
1
votes
1
answers
742
views
How to set Number of Lines in tty terminals
I am running Manjaro Linux (21.2) with KDE. I have a large ultra-wide monitor with an ideal resolution of 3440x1440.
KDE and Konsole appear to be running at this resolution fine, but when I switch to the tty2(-6) terminals I am stuck with 45 lines, which is far too few for the monitor. I expect somewhere between 60-80 would be ideal.
If I run inxi -Fx in KDE (Konsole):
>Graphics: Device-1: Intel RocketLake-S GT1 [UHD Graphics 750] vendor: Micro-Star MSI driver: i915 v: kernel bus-ID: 00:02.0
>
>Display: x11 server: X.Org 1.21.1.1 driver: loaded: modesetting resolution: 3440x1440~50Hz
In tty2
>Graphics: Device-1: Intel RocketLake-S GT1 [UHD Graphics 750] vendor: Micro-Star MSI driver: i915 v: kernel bus-ID: 00:02.0
>
>Display: x11 server: X.Org 1.21.1.1 driver: loaded: modesetting tty: 215x45
I have checked the GRUB settings but they seem fine. I enabled the GRUB menu and went into the loader; the GRUB loader displays very nicely in 3440x1440.
I also tried resizecons -lines 60 but got this error:
>invalid columns number 0
Attempting resizecons 215x60 gives this error:
> cannot find videomode file 215x60
I could go down the path of trying to find videomode files, but what is the correct way to do this?
I would like to work on the terminal where possible; when set up with a high resolution, the solid black background and crisp font are so much better than being in the DE.
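Since the line count is simply the vertical resolution divided by the console font height, one lever is the console font size; a sketch assuming the terminus-font package is installed (the font name is an example):
# an 8x16 font gives 1440/16 = 90 lines at 3440x1440
setfont ter-116n
# or persistently, via /etc/vconsole.conf
FONT=ter-116n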
MattP
(111 rep)
Dec 7, 2021, 09:36 PM
• Last activity: Dec 7, 2021, 10:16 PM
0
votes
0
answers
559
views
Low-Latency method to read Xorg or DRM framebuffer
I am trying to build an application that can measure the latency/processing time of graphics frameworks on Linux.
My idea is to implement simple programs that react to an input event (e.g. mouse click) by changing the screen's color (e.g. from black to white) with different graphics and UI frameworks (e.g. SDL, OpenGL, Qt, ...).
To measure each program's latency, I want to implement a separate program that measures the time from an input event arriving at the machine (I'm using evdev for this) to a pixel being updated in some kind of framebuffer (as close to the application as possible). The second program should also be independent of vblank events, as I'm interested in the time the rendering is done, not the time users might be able to see something.
My problem is getting the framebuffer content with the second program. I already managed to get framebuffer content with fbdev or libdrm (based on this tutorial), but both require the programs to be run in a terminal without an active X server (which I would prefer due to external validity).
I have also already tried to use MIT-XShm to retrieve the X framebuffer's content, but it seems too slow for my problem, even when reading only one pixel (around 4 milliseconds with harsh outliers).
This is how I currently use XShm, in case it might help.
// get one pixel at coordinates x/y with XShm
XShmGetImage(display, rootWindow, image, x, y, 0x00ffffff);
// store the red value into a variable
unsigned int color = image->data[2];
if (color != lastcolor)
{
    log(time_micros(), color);
}
Is there a quick and reliable way to retrieve framebuffer content (either from X or from DRM) with an XServer running? Or is XShm the best we can do at the moment?
Andreas Schmid
(1 rep)
Nov 9, 2021, 10:29 PM
32
votes
2
answers
13671
views
Kernel Mode Setting vs. Framebuffer?
With KMS, the graphics drivers are moved into the kernel. Since the framebuffer was already in the kernel, I wouldn't expect this to affect framebuffer operation. Yet, I read that KMS supersedes the fb, augments the fb, requires the fb, and requires fb support to be removed. What the heck? The answer I'm looking for is an explanation of the relationship between KMS and the framebuffer.
I have been using uvesafb to get native resolution at the tty. My purpose here is to understand how that is going to work on a system with KMS. It would also help to cover things like: Is scrolling faster with KMS? Do utilities like fbterm and fbida work the same? Is stability better?
user5184
(715 rep)
Jun 28, 2011, 12:28 AM
• Last activity: Sep 27, 2021, 05:29 PM
0
votes
2
answers
1483
views
Linux issues on iMac
# Linux issues on iMac
I have Ubuntu 18.04.4 LTS and Arch Linux installed on an iMac, which I think doesn't support KMS. I have gotten Ubuntu to work with several desktops, window managers, and display managers. When I attempted to install Manjaro, nothing worked, even with nomodeset, so I tried Arch. I have gotten the TTYs to work on Arch, but not X (I haven't tried Wayland yet). The problems I have had so far:
- Booting without nomodeset results in a black unresponsive screen
- I can't successfully startx or xinit on Arch, even with nomodeset
- Suspending in Ubuntu results in a black unresponsive screen
- Ubuntu brightness keys don't work, but an on-screen indicator says they do
I have a bunch of information, please let me know if any is unnecessary, or if I need to add more details.
## More boot info
When I boot without nomodeset I see some info ending with the following:
*Error* No UMS support in radeon module!
At this point, the resolution sharpens slightly, then the screen goes black. With nomodeset, the screen doesn't go black but shows more logs, then proceeds to the login screen.
UPDATE: I used to be able to see that message, but I can't seem to bring it back. Everything else is the same, though.
## Ubuntu 18.04
System info (as found in settings GUI):
- Memory: 3.8 GiB
- Processor: Intel® Core™️ i3 CPU 540 @ 3.07GHz x 4
- Graphics: llvmpipe (LLVM 9.0, 128 bits)
- GNOME: 3.28.2
- OS type: 64-bit
- Disk: 376.9 GB
lspci | grep VGA prints:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV730/M96-XT [Mobility Radeon HD 4670]
cat /var/log/Xorg.0.log prints a whole bunch of stuff; let me know if you need any of it.
## Arch
startx & prints:
X.Org X Server 1.20.8
X Protocol Version 11, Revision 0
Build Operating System: Linux Arch Linux
Current Operating System: Linux amc-arch 5.6.5-arch3-1 #1 SMP PREEMPT Sun, 19, Apr 2-2- 13:14:25 +0000 x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-linux root=/dev/sda3 nomodeset
Build Date: 30 March 2020 05:05:45AM
Current version of pixman: 0.38.4
before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file:"/var/log/Xorg.0.log", Time: Mon Apr 20 19:53:22 2020
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
ile at "/var/log/Xorg.0.log" for additional information.(EE) (EE) Server terminated with error (1). Closing log file.
xinit: giving up
xinit: unable to connect to X server: Connection refused
xinit: server error
Job 1, 'startx &' has ended
The log file is smaller, but still big. Let me know if it is necessary.
# KMS - Kernel Mode Setting
I have looked around the internet, and I believe the issue is kernel mode setting. If you know about this, I would appreciate help. Is the only solution an external graphics card that supports KMS? Can I use or make a window manager or desktop environment that doesn't require KMS? Can I make a suggestion to the Linux Kernel to make an alternative to KMS that will still work?
Why does Ubuntu work (mostly) but Manjaro doesn't?
Not me
(169 rep)
Apr 21, 2020, 01:43 AM
• Last activity: Jan 20, 2021, 07:23 PM
1
votes
0
answers
875
views
Struggling with display KMS options on Lenovo D330-10igm
I'm trying to replace the default Windows 10 installation with Linux on this hybrid tablet.
What I tried
------------
First I tried Ubuntu 18.04: blank screen. I thought about starting to modify the KMS settings, but I read that Arch (and derivatives) are better supported, so I ditched Ubuntu and moved over to Manjaro.
Manjaro didn't work out of the box and I immediately ran into a blank screen, so I started tinkering with the KMS settings. First I changed all the modesets to 0 (radeon, nouveau, i915); changing i915.modeset=1 to 0 helped me get rid of the black screen in favor of garbled text on screen (same as nomodeset, obviously). I also tried adding some other i915 options such as i915.alpha_support=1, i915.enable_guc=3, etc.
At this point I realized that I might be dealing with some driver issues, so I switched to the Intel-customized Clear Linux, which brought me to the same state again (starting without any modification leads to a blank screen, garbled with nomodeset).
So in order to make sure I'm not missing something I tried with Arch, exactly the same results.
Which flags did I use
---------------------
In order to activate the basic framebuffer (I'm not sure about VESA) options I applied nomodeset, and since the screen is rotated I had to use fbcon=rotate:1; these two options together resulted in garbled text which is partially readable (this is how I know the text was rotated).
Additional KMS options I used but probably had no effect:
- gfxmode=(various combinations), including switching between width and height
- gfxpayload=text|vga|keep
Even if there was a slight change, it's very hard to notice.
The Lenovo D330 has several models; from what I figured, the model I'm using is Miix D330-10igm 81h3 (as written on the bottom of it, where the keyboard connector is).
The graphics card is presumably Intel UHD 630 (which is the reason I tried enabling GUC in the first place and did all the tests with Clear Linux).
I'm not sure what else I should try, or whether I'm looking at the right KMS options and settings.
Image showing the garbled display after logging in
---------------------------------------------------
Supposed to be the initial Arch prompt:

Yaron
(233 rep)
Jul 2, 2019, 12:50 PM
• Last activity: Mar 26, 2020, 12:42 PM
2
votes
0
answers
1643
views
Freezing with nvidia-drm modeset=1
### Setup
I have an Optimus laptop with a GTX 1050, Intel i7 7700HQ, a 128GB SSD and 8GB of RAM. Using Arch Linux and KDE Plasma.
Using Nvidia proprietary drivers version 396.24-19.
$ kf5-config --version
Qt: 5.11.
KDE Frameworks: 5.48.0
kf5-config: 1.0
$ plasmashell --version
plasmashell 5.13.3
$ uname -r
4.17.9-1-ARCH
I had tearing with the proprietary Nvidia drivers, and enabling DRM KMS seemed to fix it. I did it this way.
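For reference, "this way" usually amounts to a modprobe option along these lines (the file name below is illustrative):
# /etc/modprobe.d/nvidia-drm.conf
options nvidia-drm modeset=1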
Sadly, I came across a couple of problems after this:
### Problems
- Resizing windows by dragging them to one of the edges of the screen sometimes freezes the window in place for about 20 seconds. I can only move my mouse and change to another tty; numlock and capslock still work. There are no logs in journalctl -b when this happens.
- There are also random freezes, where NOTHING responds. I can't use my mouse or my keyboard; even numlock and capslock don't work. When looking in the journalctl log, I can always see these errors:
ArchLinux kernel: nvidia-modeset: Allocated GPU:0 (GPU-c07c20bb-45d1-9ef7-5dec-2ccd17eb1af2) @ PCI:0000:01:00.0
ArchLinux kernel: nvidia-modeset: Freed GPU:0 (GPU-c07c20bb-45d1-9ef7-5dec-2ccd17eb1af2) @ PCI:0000:01:00.0
So how do I fix these problems? What other logs do I need to provide?
---
**Edit**: all my issues have gone away since I changed from KDE Plasma with KWin to bspwm with compton. Nvidia KMS works without freezes.
zjeffer
(495 rep)
Jul 29, 2018, 12:54 PM
• Last activity: Jan 16, 2020, 02:55 PM
13
votes
2
answers
3544
views
Kernel modesetting hangs my boot, but the ATI driver requires it
I have a late 2011 MacBook Pro. It has an integrated Intel video card and a discrete ATI video card. Ideally, I'd like my Xorg to use the ATI card with the free driver (no Catalyst).
Here's the problem: kernel modesetting hangs my boot (verified by adding nomodeset to kernel parameters), and I can't figure out why. However, the ATI driver _requires_ KMS, as does the Intel driver. What are my options for getting graphics with the desired setup as described above?
I'm on kernel 3.13.8, Arch GNU/Linux. I've also tried it with kernel 3.10.35, AKA the LTS kernel. No luck. As suggested in comments, I've tried to ping the affected machine after it locks up. I can't tell for sure, but it appears that it's completely frozen, not just the display.
I've also tried booting into Mac OS X and using gfxCardStatus to force using the Intel card. This did nothing.
In order to try to get more information, I've booted the MacBook with the following kernel parameters appended to my normal kernel line (the regular kernel, not the LTS kernel, and with quiet removed), and with gfxCardStatus set to on-the-fly switching (this seemed to revert automatically on a reboot of OS X):
rootwait ignore_loglevel debug debug_locks_verbose=1 sched_debug initcall_debug mminit_loglevel=4 udev.log_priority=8 loglevel=8 earlyprintk=vga,keep log_buf_len=10M print_fatal_signals=1 apm.debug=Y i8042.debug=Y drm.debug=1 scsi_logging_level=1 usbserial.debug=Y option.debug=Y pl2303.debug=Y firewire_ohci.debug=1 hid.debug=1 pci_hotplug.debug=Y pci_hotplug.debug_acpi=Y shpchp.shpchp_debug=Y apic=debug show_lapic=all hpet=verbose lmb=debug pause_on_oops=5 panic=10 sysrq_always_enabled
When I try to start GDM using either the ATI or Intel drivers, booted without KMS, Xorg fails with a message about not finding a suitable driver (expected, since the Intel/AMD drivers need KMS). I've also tried using the xf86-video-vesa package, but that fails with a message about having a suitable driver but not having a suitable configuration - something about the BIOS not being right.
I've tried using PRIME, but since I can't get Xorg to come up even without acceleration or anything fancy, xrandr doesn't work and I can't even get past the first step.
I've thought about using vgaswitcheroo or something related, but I don't think that will do anything, because the underlying issue is, I believe, that KMS is hanging.
The final thing I've tried is using the proprietary Catalyst driver, due to the fact that it has its own KMS implementation, but I couldn't get it to install due to an Xorg server version mismatch. And honestly, I have less than zero desire to use a proprietary driver if I can help it, so I didn't try very hard.
I've sent the Linux Kernel Mailing List an email about this, and hopefully someone will get back to me.
Is it possible that I've run into a kernel bug or an Xorg bug worth reporting?
I've Googled, but nothing helpful's come up.
strugee
(15371 rep)
Apr 5, 2014, 01:21 AM
• Last activity: Oct 13, 2019, 12:45 AM
2
votes
0
answers
89
views
drm/kms: Limit graphics memory usage of process
Is there a way to limit the amount of graphics resources allocated by a process in the DRM/KMS subsystem, similar to the way one can limit system memory consumption with rlimits and cgroups? Is there a way to create a render node with such limits?
If not, is there a way to do so in userspace, perhaps by virtualizing the graphics device?
I apologize if my question involves any fundamental misconceptions, and please let me know if it does.
novice
(429 rep)
Sep 11, 2019, 02:09 AM
• Last activity: Sep 11, 2019, 02:16 AM
0
votes
0
answers
1564
views
How to set a large stty size? (default 80x24)
I'm really not sure where the problem is, so I'm putting all the messages I think are helpful here.
dmesg | grep drm
[ 0.316388] fb0: switching to cirrusdrmfb from EFI VGA
[ 0.316769] [drm] fb mappable at 0xC0000000
[ 0.316769] [drm] vram aper at 0xC0000000
[ 0.316769] [drm] size 33554432
[ 0.316770] [drm] fb depth is 16
[ 0.316770] [drm] pitch is 2048
[ 0.316791] fbcon: cirrusdrmfb (fb0) is primary device
[ 0.322191] cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device
[ 0.322195] [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0
[ 0.322279] [drm] Initialized vgem 1.0.0 20120112 for vgem on minor 1
[ 0.322293] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.322294] [drm] Driver supports precise vblank timestamp query.
[ 0.322340] [drm] Initialized vkms 1.0.0 20180514 for vkms on minor 2
dmesg | grep modeset
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.2.1-lfs-20190714-systemd root=/dev/sda2 ro ro root=/dev/sda2 console=tty0 console=ttyS0 loglevel=6 modeset
[ 0.140663] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.2.1-lfs-20190714-systemd root=/dev/sda2 ro ro root=/dev/sda2 console=tty0 console=ttyS0 loglevel=6 modeset
stty size
24 80
kernel parameters
GRUB_CMDLINE_LINUX_DEFAULT="ro root=/dev/sda2 console=tty0 console=ttyS0 loglevel=6 modeset"
GRUB_GFXMODE=1600x900
GRUB_GFXPAYLOAD_LINUX=keep
tty
/dev/ttyS0
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
# CONFIG_VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=132
CONFIG_DUMMY_CONSOLE_ROWS=60
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
I use qemu via ssh, so I want the tty to show more columns and rows to fit my ssh client.
I use -vga cirrus and have already configured the virtual GPU driver for this; the dependency modules are also configured, as shown in dmesg above.
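Since /dev/ttyS0 is a serial console, the kernel cannot learn the remote client's window size on its own; at runtime the geometry can at least be set with stty, a sketch with example numbers that should match whatever the ssh client actually uses:
stty rows 50 columns 160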
If you think more information is needed, please comment.
Hope you can help me out >_<
Obsessive
(11 rep)
Jul 19, 2019, 06:02 PM
2
votes
1
answers
4103
views
How to load a modified EDID into RAM after boot to fix defective monitor's EDID report?
I've acquired a couple of HP L1750 monitors, which have VGA & DVI inputs. The VGA inputs work without issue. However, the DVI input only works until Kernel Mode Setting (KMS) occurs, after which it declares it isn't receiving a signal and enters sleep mode. I've tested two different HP L1750 monitors, with different DVI cables and different DVI source providers (i.e. different video cards) and the same behaviour obtains.
I've also tried manually specifying the appropriate resolution via a kernel boot option, e.g.:
video=DVI-D-0:1280x1024@60e
As well as manually configuring xorg.conf (relying on the output of hwinfo --monitor) to:
Section "Device"
Identifier "DefaultDevice"
EndSection
Section "Monitor"
Identifier "DefaultMonitor"
HorizSync 24-83
VertRefresh 50-77
Option "TargetRefreshRate" "60"
Option "DDC" "off"
Option "DPMS" "off"
Option "DefaultModes" "on"
Option "PreferredMode" "1280x1024"
EndSection
Section "Screen"
Identifier "DefaultScreen"
Device "DefaultDevice"
Monitor "DefaultMonitor"
EndSection
The issue seems to be that the DVI of this monitor is defective without a special Windows driver to fix it.
How do I determine what the appropriate EDID should be? And how do I go about loading it into RAM after boot?
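One mechanism worth sketching here is the kernel's EDID-firmware override, which substitutes a known-good EDID blob for whatever the monitor reports; the file name below is illustrative, and this takes effect when the driver (re)loads rather than at an arbitrary point after boot:
# place the corrected blob where the firmware loader can find it
cp hp_l1750_fixed.bin /usr/lib/firmware/edid/hp_l1750_fixed.bin
# then add to the kernel command line
drm.edid_firmware=DVI-D-0:edid/hp_l1750_fixed.bin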
EDIT:
Information about graphics card, kernel driver, X driver etc.:
$ inxi -Gxxxxx
Graphics: Device-1: Advanced Micro Devices [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
vendor: Micro-Star MSI driver: amdgpu v: kernel bus ID: 01:00.0 chip ID: 1002:67df
Display: x11 server: X.Org 1.20.4 driver: amdgpu unloaded: modesetting alternate: ati,fbdev,vesa
compositor: kwin_x11 resolution: 1920x1080~60Hz
OpenGL: renderer: AMD Radeon RX 470 Graphics (POLARIS10 DRM 3.30.0 5.1.4-arch1-1-ARCH LLVM 8.0.0)
v: 4.5 Mesa 19.0.5 direct render: Yes
In terms of xrandr, it claims the DVI-D-0 is disconnected (even though it is connected and even though the monitor shows output via DVI-D-0 pre-KMS); I include here the modes it lists via VGA (note it's connected to VGA via an HDMI->VGA converter, so what shows up as HDMI-A-1 is actually the VGA connection):
HDMI-A-1 connected 1280x1024+0+696 (normal left inverted right x axis y axis) 340mm x 270mm
1280x1024 60.02 + 75.02*
1920x1080 60.00 59.94
1280x800 60.02
1152x864 75.00
1280x720 60.00 59.94
1024x768 75.03 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32
720x480 60.00 59.94
640x480 75.00 72.81 60.00 59.94
720x400 70.08
DVI-D-0 disconnected (normal left inverted right x axis y axis)
Trying to set the display manually with xrandr doesn't seem to work:
$ xrandr --output DVI-D-0 --mode 1280x1024
xrandr: cannot find mode 1280x1024
Doing xrandr --output DVI-D-0 doesn't result in a "cannot find mode" message, but it doesn't turn the display on via DVI either. I tried xrandr --output DVI-D-0 --mode [...] for all of the resolutions listed above for the VGA connection, from 1280x1024 to 720x400, and all result in the "cannot find mode" message.
EDIT2: Xorg.0.log information relevant to EDID:
[ 45.594] (II) AMDGPU(0): EDID for output HDMI-A-1
[ 45.594] (II) AMDGPU(0): Manufacturer: HWP Model: 26e9 Serial#: 16843009
[ 45.594] (II) AMDGPU(0): Year: 2008 Week: 2
[ 45.594] (II) AMDGPU(0): EDID Version: 1.3
[ 45.594] (II) AMDGPU(0): Digital Display Input
[ 45.594] (II) AMDGPU(0): Max Image Size [cm]: horiz.: 34 vert.: 27
[ 45.594] (II) AMDGPU(0): Gamma: 2.40
[ 45.594] (II) AMDGPU(0): DPMS capabilities: StandBy Suspend Off
[ 45.594] (II) AMDGPU(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4
[ 45.594] (II) AMDGPU(0): Default color space is primary color space
[ 45.594] (II) AMDGPU(0): First detailed timing is preferred mode
[ 45.594] (II) AMDGPU(0): redX: 0.640 redY: 0.349 greenX: 0.284 greenY: 0.617
[ 45.594] (II) AMDGPU(0): blueX: 0.142 blueY: 0.067 whiteX: 0.313 whiteY: 0.329
[ 45.594] (II) AMDGPU(0): Supported established timings:
[ 45.594] (II) AMDGPU(0): 720x400@70Hz
[ 45.594] (II) AMDGPU(0): 640x480@60Hz
[ 45.594] (II) AMDGPU(0): 640x480@72Hz
[ 45.594] (II) AMDGPU(0): 640x480@75Hz
[ 45.594] (II) AMDGPU(0): 800x600@60Hz
[ 45.594] (II) AMDGPU(0): 800x600@72Hz
[ 45.594] (II) AMDGPU(0): 800x600@75Hz
[ 45.594] (II) AMDGPU(0): 832x624@75Hz
[ 45.594] (II) AMDGPU(0): 1024x768@60Hz
[ 45.594] (II) AMDGPU(0): 1024x768@70Hz
[ 45.594] (II) AMDGPU(0): 1024x768@75Hz
[ 45.594] (II) AMDGPU(0): 1280x1024@75Hz
[ 45.594] (II) AMDGPU(0): 1152x864@75Hz
[ 45.594] (II) AMDGPU(0): Manufacturer's mask: 0
[ 45.594] (II) AMDGPU(0): Supported standard timings:
[ 45.594] (II) AMDGPU(0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897
[ 45.594] (II) AMDGPU(0): Supported detailed timing:
[ 45.594] (II) AMDGPU(0): clock: 108.0 MHz Image Size: 340 x 270 mm
[ 45.594] (II) AMDGPU(0): h_active: 1280 h_sync: 1328 h_sync_end 1440 h_blank_end 1688 h_border: 0
[ 45.594] (II) AMDGPU(0): v_active: 1024 v_sync: 1025 v_sync_end 1028 v_blanking: 1066 v_border: 0
[ 45.594] (II) AMDGPU(0): Ranges: V min: 50 V max: 77 Hz, H min: 24 H max: 83 kHz, PixClock max 145 MHz
[ 45.594] (II) AMDGPU(0): Monitor name: HP L1750
[ 45.594] (II) AMDGPU(0): Serial No: CND8020JJG
[ 45.594] (II) AMDGPU(0): Supported detailed timing:
[ 45.594] (II) AMDGPU(0): clock: 27.0 MHz Image Size: 160 x 90 mm
[ 45.594] (II) AMDGPU(0): h_active: 720 h_sync: 736 h_sync_end 798 h_blank_end 858 h_border: 0
[ 45.594] (II) AMDGPU(0): v_active: 480 v_sync: 489 v_sync_end 495 v_blanking: 525 v_border: 0
[ 45.594] (II) AMDGPU(0): Number of EDID sections to follow: 1
[ 45.594] (II) AMDGPU(0): EDID (in hex):
[ 45.594] (II) AMDGPU(0): 00ffffffffffff0022f0e92601010101
[ 45.594] (II) AMDGPU(0): 0212010380221b8ceedc55a359489e24
[ 45.594] (II) AMDGPU(0): 115054adef8081800101010101010101
[ 45.594] (II) AMDGPU(0): 010101010101302a009851002a403070
[ 45.594] (II) AMDGPU(0): 1300540e1100001e000000fd00324d18
[ 45.594] (II) AMDGPU(0): 530e000a202020202020000000fc0048
[ 45.594] (II) AMDGPU(0): 50204c313735300a20202020000000ff
[ 45.594] (II) AMDGPU(0): 00434e44383032304a4a470a202001b0
[ 45.594] (II) AMDGPU(0): 02031b61230907078301000067030c00
[ 45.594] (II) AMDGPU(0): 2000802d43908402e2000f8c0ad08a20
[ 45.594] (II) AMDGPU(0): e02d10103e9600a05a00000000000000
[ 45.594] (II) AMDGPU(0): 00000000000000000000000000000000
[ 45.594] (II) AMDGPU(0): 00000000000000000000000000000000
[ 45.594] (II) AMDGPU(0): 00000000000000000000000000000000
[ 45.594] (II) AMDGPU(0): 00000000000000000000000000000000
[ 45.594] (II) AMDGPU(0): 00000000000000000000000000000029
[ 45.594] (--) AMDGPU(0): HDMI max TMDS frequency 225000KHz
[ 45.594] (II) AMDGPU(0): Printing probed modes for output HDMI-A-1
[ 45.594] (II) AMDGPU(0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz eP)
[ 45.594] (II) AMDGPU(0): Modeline "1920x1080"x60.0 148.50 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync (67.5 kHz e)
[ 45.594] (II) AMDGPU(0): Modeline "1920x1080"x59.9 148.35 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync (67.4 kHz e)
[ 45.594] (II) AMDGPU(0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz e)
[ 45.594] (II) AMDGPU(0): Modeline "1280x800"x60.0 108.00 1280 1328 1440 1688 800 1025 1028 1066 +hsync +vsync (64.0 kHz e)
[ 45.594] (II) AMDGPU(0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e)
[ 45.594] (II) AMDGPU(0): Modeline "1280x720"x60.0 74.25 1280 1390 1430 1650 720 725 730 750 +hsync +vsync (45.0 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "1280x720"x59.9 74.18 1280 1390 1430 1650 720 725 730 750 +hsync +vsync (45.0 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "1024x768"x75.0 78.75 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.0 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "832x624"x74.6 57.28 832 864 928 1152 624 625 628 667 -hsync -vsync (49.7 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "720x480"x60.0 27.03 720 736 798 858 480 489 495 525 -hsync -vsync (31.5 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "720x480"x59.9 27.00 720 736 798 858 480 489 495 525 -hsync -vsync (31.5 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 492 520 -hsync -vsync (37.9 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "640x480"x59.9 25.18 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e)
[ 45.595] (II) AMDGPU(0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e)
[ 45.595] (II) AMDGPU(0): EDID for output DVI-D-0
[ 45.595] (II) AMDGPU(0): Output DisplayPort-0 connected
[ 45.595] (II) AMDGPU(0): Output DisplayPort-1 disconnected
[ 45.595] (II) AMDGPU(0): Output HDMI-A-0 connected
[ 45.595] (II) AMDGPU(0): Output HDMI-A-1 connected
[ 45.595] (II) AMDGPU(0): Output DVI-D-0 disconnected
[ 45.595] (II) AMDGPU(0): Using user preference for initial modes
[ 45.595] (II) AMDGPU(0): Output DisplayPort-0 using initial mode 1280x1024 +0+0
[ 45.595] (II) AMDGPU(0): Output HDMI-A-0 using initial mode 1280x1024 +0+0
[ 45.595] (II) AMDGPU(0): Output HDMI-A-1 using initial mode 1280x1024 +0+0
[ 45.595] (II) AMDGPU(0): mem size init: gart size :ff973000 vram size: s:ff2e8000 visible:f2e8000
[ 45.595] (==) AMDGPU(0): DPI set to (96, 96)
[ 45.595] (==) AMDGPU(0): Using gamma correction (1.0, 1.0, 1.0)
emacsomancer
(509 rep)
May 25, 2019, 10:03 PM
• Last activity: May 26, 2019, 05:02 PM
0
votes
0
answers
283
views
How do I find the video memory region(s) representing what's on my screen, from within the Linux kernel?
About 5-20 times a day I am presented with brief visual glitches and heisenbugs caused by race conditions that only occur under high I/O load. These disappear off the screen far too quickly for me to grab a camera in time, so I am looking to find/build a screenshotting/screen-recording tool that acts/responds with the lowest possible delay after I press a hotkey/shortcut.
Critically, this tool's high-responsiveness needs to be negligibly (ideally not at all) impacted by high I/O activity, like 10-second load averages of 20-40.
A fair argument could be made about loading PREEMPT_RT and running Xorg and a homemade screenshot dæmon as realtime. This would work... except for the bit about running X realtime; I actually do want to get work done on my computer. :)
Thing is, I can confidently run any code I like in realtime on my computer, just by putting it inside the Linux kernel. So, kernel module time!! To reiterate my question,
### **How do I find, access and grok the memory region(s) representing the pixels on my screen, all from within a Linux kernel module?**
I've found that trying to read /dev/fb0 while X is running just produces a black image, so that apparently won't work.
Unfortunately https://dri.freedesktop.org/docs/drm/gpu/index.html doesn't show anything obviously related to framebuffer read-back, but I have no experience with this API, so I don't really know what I'm looking for.
I accept that driver-specific code will likely be needed (since there's unlikely to be a driver-agnostic canonical spot in memory representing what's actually on screen), and this is fine. I'm using an Intel-GPU-based machine at the moment, and I am happy to start specifically coding for that.
FWIW, [I asked a differently-worded version](https://stackoverflow.com/questions/42748390/directly-accessing-video-memory-within-the-linux-kernel-in-a-driver-agnostic-man) of this question just under two years ago. That question only attracted one comment about HDCP and was never answered, but as I am still dealing with this problem up to 20 times a day even two years later, I'm having another go.
(My current approach (scrot launched by i3's hotkey binder) very often takes up to 20 (!!) seconds to take a single screenshot on systems experiencing high I/O load.)
i336_
(1077 rep)
Feb 12, 2019, 12:53 PM
• Last activity: Feb 12, 2019, 01:01 PM