
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

5 votes
2 answers
3361 views
Multiple M.2 NVMe SSDs on single slot using PCIe bifurcation: Set this up from Linux when BIOS does not?
I'd like to use a multi-M.2 NVMe PCIe carrier card like an [Asus Hyper M.2 x16 Card v2](https://www.asus.com/Motherboard-Accessories/HYPER-M-2-X16-CARD-V2/) in an [HP Z240 tower workstation](https://web.archive.org/web/20220414145018/https://support.hp.com/us-en/document/c04887696). Its C236 PCH and Skylake E3-1200V5 CPU support [PCIe bifurcation](https://web.archive.org/web/20170628175109/https://www.intel.com/content/www/us/en/intelligent-systems/maho-bay/core-i7-pcie-slot-bifurcation-demo.html)[1] of the x16 PCIe slot driven by the CPU. (Ref: [Intel 100 Series / C230 Series Chipset datasheet Vol 1](https://web.archive.org/web/20210304165912/https://www.intel.com/content/www/us/en/products/docs/chipsets/100-series-chipset-datasheet-vol-1.html), p. 22; [Intel Xeon E3-1200V5 datasheet Vol 1](https://web.archive.org/web/20181030044706/https://www.intel.com/content/www/us/en/processors/xeon/xeon-e3-1200v5-vol-1-datasheet.html), p. 24.)

Configuring the CPU PCIe for bifurcated 1x8 + 2x4 mode would allow using 3 M.2 NVMe drives in the x16 slot. (The E3 CPU is incapable of 4x4 bifurcation, so one of the M.2 slots in a quad-carrier card like the aforementioned Asus must remain empty.)

Unfortunately, the Z240's BIOS Setup does not include options to configure PCIe bifurcation. Worse, it appears that [HP's *Sure Start*](https://web.archive.org/web/20241228141437/http://h10032.www1.hp.com/ctg/Manual/c05163901) dual BIOS [prevents BIOS modifications](https://web.archive.org/web/20211120042316/https://www.win-raid.com/t4593f36-HP-Pavillion-g-wm-AMI-Aptio-V-modding-trials.html#msg75578) which [could enable PCIe bifurcation](https://blog.donbowman.ca/2017/10/06/pci-e-bifurcation-explained/).
This [Intel video introduction to PCIe bifurcation](https://www.intel.com/content/www/us/en/intelligent-systems/maho-bay/core-i7-pcie-slot-bifurcation-demo.html)[1] states (at 02:34) that

> The configuration of the CPU PCI Express bus is statically determined by the BIOS prior to initialization. The BIOS determines the configuration by looking at the presence detect pins on the CPU, called CFG and CFG.

Presumably this works even if BIOS Setup doesn't include any options to configure bifurcation. But this approach would require accessing and modifying connections to the physical CPU pads, which I'd prefer to avoid. Other BIOSes that do include options to configure bifurcation apparently override CFG and CFG. I have found no documentation as to how they do this, and would be grateful for any links you may be aware of.

At this point I wonder: Is there a way to override CFG and CFG after the machine has booted to Linux? (I realize I probably won't be able to boot from one of the M.2 drives in this case, but that's not a requirement for this system.) I'd expect such a procedure may involve steps like [hot-resetting](https://unix.stackexchange.com/questions/73908/how-to-reset-cycle-power-to-a-pcie-device/474378#474378) the x16, x8 and x4 PCIe controllers and/or function-level-resetting PEG Root Ports 10, 11 & 12. (Ref: [Intel Xeon E3-1200V5 datasheet Vol 2](https://web.archive.org/web/20201020235947/https://www.intel.com/content/www/us/en/processors/xeon/xeon-e3-1200v5-vol-2-datasheet.html).) Maybe followed by [kexec](https://en.wikipedia.org/wiki/Kexec)?

Many thanks for any hints, tips or pointers you can provide!

--------------

Editor's note: [1] As of August 2025 the original _Configure PCI-Express\* Lanes for Simultaneous Applications_ Intel link is dead and the archived versions I found no longer play the embedded video. The 1st link has been changed to an archived version.
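The lane arithmetic above can be sketched out. The mapping below is my own illustration (not from Intel or Asus documentation) and assumes, as such passive carriers are commonly wired, that M.2 slot n connects straight to lanes 4n..4n+3:

```python
# Sketch: which M.2 slots of a passive quad carrier can train a link for
# a given bifurcation of the CPU x16 slot. Assumes (my assumption, the
# usual wiring for these cards) that M.2 slot n sits on lanes 4n..4n+3,
# and that a slot only trains if a link *starts* on its first lane.

def active_m2_slots(mode):
    """mode: list of link widths in lanes, summing to 16, e.g. [8, 4, 4]."""
    assert sum(mode) == 16
    slots, lane = [], 0
    for width in mode:
        slots.append(lane // 4)   # slot whose first lane starts this link
        lane += width
    return slots

# E3-1200 v5 supports 1x16, 2x8 and 1x8+2x4, but not 4x4:
print(active_m2_slots([8, 4, 4]))   # slots 0, 2 and 3 train; slot 1 stays dark
```

With `[8, 4, 4]` three drives fit and one slot stays empty, matching the observation above that one M.2 slot of the quad carrier must remain unpopulated.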
fmyhr (311 rep)
Dec 9, 2019, 01:43 PM • Last activity: Aug 5, 2025, 12:42 PM
9 votes
1 answer
5224 views
Linux 3.x fails assigning PCI BAR memory
I have an IBM x3850 type 8864 machine. I can successfully boot using a 2.6.32 kernel, but when I try to use a 3.10 kernel or newer, the kernel fails to initialize all PCI slots (I can fix this manually, see below):

```
pci 0000:19:00.0: BAR 14: can't assign mem (size 0x1a00000)
pci 0000:19:00.0: BAR 13: can't assign io (size 0x3000)
pci 0000:19:00.0: BAR 14: can't assign mem (size 0x1600000)
pci 0000:19:00.0: BAR 13: can't assign io (size 0x3000)
pci 0000:1a:00.0: BAR 14: can't assign mem (size 0x1600000)
pci 0000:1a:00.0: BAR 13: assigned [io 0x7000-0x8fff]
pci 0000:1b:02.0: BAR 14: can't assign mem (size 0xa00000)
pci 0000:1b:04.0: BAR 14: can't assign mem (size 0xa00000)
pci 0000:1b:02.0: BAR 13: assigned [io 0x7000-0x7fff]
pci 0000:1b:04.0: BAR 13: assigned [io 0x8000-0x8fff]
...
```

This causes my network card not to be loaded successfully, as the PCI bus is obviously not correctly instantiated. lspci yields the following:

```
00:00.0 Host bridge: IBM Calgary PCI-X Host Bridge (rev 04)
00:01.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV100 [Radeon 7000 / Radeon VE]
00:03.0 USB controller: NEC Corporation OHCI USB Controller (rev 43)
00:03.1 USB controller: NEC Corporation OHCI USB Controller (rev 43)
00:03.2 USB controller: NEC Corporation uPD72010x USB 2.0 Controller (rev 04)
00:0f.0 Host bridge: Broadcom CSB6 South Bridge (rev a0)
00:0f.1 IDE interface: Broadcom CSB6 RAID/IDE Controller (rev a0)
00:0f.3 ISA bridge: Broadcom GCLE-2 Host Bridge
01:00.0 Host bridge: IBM Calgary PCI-X Host Bridge (rev 04)
01:01.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
01:01.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
01:02.0 RAID bus controller: Adaptec AAC-RAID (rev 02)
02:00.0 Host bridge: IBM Calgary PCI-X Host Bridge (rev 04)
06:00.0 Host bridge: IBM Calgary PCI-X Host Bridge (rev 04)
0a:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)
0f:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)
14:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)
19:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)
1a:00.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0c)
1b:02.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0c)
1b:04.0 PCI bridge: Integrated Device Technology, Inc. [IDT] PES12N3A PCI Express Switch (rev 0c)
1c:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
1c:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
1d:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
1d:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
```

# FIX #

I can actually fix it by removing the root PCI bus `19:00.0 PCI bridge: IBM CalIOC2 PCI-E Root Port (rev 01)` via

```
echo 1 > /sys/bus/pci/devices/0000\:19\:00.0/remove
```

and a rescan afterwards:

```
echo 1 > /sys/bus/pci/rescan
```

which causes the following output:

```
pci_bus 0000:1c: busn_res: [bus 1c] is released
pci_bus 0000:1d: busn_res: [bus 1d] is released
pci_bus 0000:1b: busn_res: [bus 1b-1d] is released
pci_bus 0000:1a: busn_res: [bus 1a-1d] is released
pci 0000:19:00.0: [1014:0308] type 01 class 0x060401
pci 0000:19:00.0: supports D1 D2
pci 0000:19:00.0: PME# supported from D0 D1 D2 D3hot D3cold
pci 0000:1a:00.0: [111d:8018] type 01 class 0x060400
pci 0000:1a:00.0: PME# supported from D0 D3hot D3cold
pci 0000:19:00.0: pci bridge to [bus 1a-1d] (subtractive decode)
pci 0000:19:00.0: bridge window [mem 0xea800000-0xea9fffff 64bit pref]
pci 0000:19:00.0: bridge window [mem 0xea800000-0xebcfffff] (subtractive decode)
pci 0000:19:00.0: bridge window [io 0x7000-0x8fff] (subtractive decode)
pci 0000:1b:02.0: [111d:8018] type 01 class 0x060400
pci 0000:1b:02.0: PME# supported from D0 D3hot D3cold
pci 0000:1b:04.0: [111d:8018] type 01 class 0x060400
pci 0000:1b:04.0: PME# supported from D0 D3hot D3cold
pci 0000:1a:00.0: pci bridge to [bus 1b-1d]
pci 0000:1a:00.0: bridge window [io 0x7000-0x8fff]
pci 0000:1a:00.0: bridge window [mem 0xea800000-0xea9fffff 64bit pref]
....
pci 0000:1b:04.0: bridge window [mem 0xea900000-0xea9fffff 64bit pref]
pci 0000:1b:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 1c] add_size 100000
pci 0000:1b:04.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 1d] add_size 100000
pci 0000:1b:02.0: res=[mem 0x00100000-0x001fffff 64bit pref] get_res_add_size add_size 100000
pci 0000:1b:04.0: res=[mem 0x00100000-0x001fffff 64bit pref] get_res_add_size add_size 100000
pci 0000:1a:00.0: bridge window [mem 0x00100000-0x002fffff 64bit pref] to [bus 1b-1d] add_size 200000
pci 0000:19:00.0: bridge window [io 0x1000-0x2fff] to [bus 1a-1d] add_size 1000
pci 0000:1a:00.0: res=[mem 0x00100000-0x002fffff 64bit pref] get_res_add_size add_size 200000
pci 0000:19:00.0: bridge window [mem 0x00100000-0x002fffff 64bit pref] to [bus 1a-1d] add_size 200000
pci 0000:19:00.0: bridge window [mem 0x00200000-0x015fffff] to [bus 1a-1d] add_size 200000
pci 0000:19:00.0: res=[mem 0x00200000-0x015fffff] get_res_add_size add_size 200000
pci 0000:19:00.0: res=[mem 0x00100000-0x002fffff 64bit pref] get_res_add_size add_size 200000
pci 0000:19:00.0: res=[io 0x1000-0x2fff] get_res_add_size add_size 1000
pci 0000:19:00.0: BAR 14: can't assign mem (size 0x1600000)
pci 0000:19:00.0: BAR 15: assigned [mem 0xea800000-0xeabfffff 64bit pref]
pci 0000:19:00.0: BAR 13: can't assign io (size 0x3000)
pci 0000:19:00.0: BAR 14: assigned [mem 0xea800000-0xebbfffff]
pci 0000:19:00.0: BAR 15: can't assign mem pref (size 0x200000)
pci 0000:19:00.0: BAR 13: assigned [io 0x7000-0x8fff]
pci 0000:19:00.0: BAR 14: can't assign mem (size 0x1400000)
pci 0000:19:00.0: failed to add 200000 res=[mem 0xea800000-0xebbfffff]
pci 0000:19:00.0: BAR 13: can't assign io (size 0x2000)
pci 0000:19:00.0: failed to add 1000 res=[io 0x7000-0x8fff]
pci 0000:1a:00.0: res=[mem 0x00100000-0x002fffff 64bit pref] get_res_add_size add_size 200000
pci 0000:1a:00.0: BAR 14: assigned [mem 0xea800000-0xebbfffff]
pci 0000:1a:00.0: BAR 15: can't assign mem pref (size 0x400000)
pci 0000:1a:00.0: BAR 13: assigned [io 0x7000-0x8fff]
pci 0000:1a:00.0: BAR 14: assigned [mem 0xea800000-0xebbfffff]
pci 0000:1a:00.0: BAR 15: can't assign mem pref (size 0x200000)
pci 0000:1b:02.0: res=[mem 0x00100000-0x001fffff 64bit pref] get_res_add_size add_size 100000
pci 0000:1b:04.0: res=[mem 0x00100000-0x001fffff 64bit pref] get_res_add_size add_size 100000
pci 0000:1b:02.0: BAR 14: assigned [mem 0xea800000-0xeb1fffff]
pci 0000:1b:04.0: BAR 14: assigned [mem 0xeb200000-0xebbfffff]
pci 0000:1b:02.0: BAR 15: can't assign mem pref (size 0x200000)
pci 0000:1b:04.0: BAR 15: can't assign mem pref (size 0x200000)
pci 0000:1b:02.0: BAR 13: assigned [io 0x7000-0x7fff]
pci 0000:1b:04.0: BAR 13: assigned [io 0x8000-0x8fff]
pci 0000:1b:02.0: BAR 14: assigned [mem 0xea800000-0xeb1fffff]
pci 0000:1b:04.0: BAR 14: assigned [mem 0xeb200000-0xebbfffff]
pci 0000:1b:02.0: BAR 15: can't assign mem pref (size 0x100000)
pci 0000:1b:04.0: BAR 15: can't assign mem pref (size 0x100000)
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: reg 190: [mem 0xea820000-0xea823fff 64bit pref]
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: reg 190: [mem 0xea820000-0xea823fff 64bit pref]
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: reg 190: [mem 0xea820000-0xea823fff 64bit pref]
pci 0000:1c:00.1: reg 184: [mem 0xea840000-0xea843fff 64bit pref]
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: reg 190: [mem 0xea820000-0xea823fff 64bit pref]
pci 0000:1c:00.1: reg 190: [mem 0xea860000-0xea863fff 64bit pref]
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: reg 190: [mem 0xea820000-0xea823fff 64bit pref]
pci 0000:1c:00.1: reg 184: [mem 0xea840000-0xea843fff 64bit pref]
pci 0000:1c:00.0: res=[mem 0xea800000-0xea7fffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1c:00.0: res=[mem 0xea820000-0xea81ffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1c:00.1: res=[mem 0xea840000-0xea83ffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1c:00.1: res=[mem 0xea860000-0xea85ffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1c:00.0: BAR 1: assigned [mem 0xea800000-0xeabfffff]
pci 0000:1c:00.1: BAR 1: assigned [mem 0xeac00000-0xeaffffff]
pci 0000:1c:00.0: BAR 0: assigned [mem 0xeb000000-0xeb01ffff]
pci 0000:1c:00.1: BAR 0: assigned [mem 0xeb020000-0xeb03ffff]
pci 0000:1c:00.0: BAR 3: assigned [mem 0xeb040000-0xeb043fff]
pci 0000:1c:00.0: reg 184: [mem 0xea800000-0xea803fff 64bit pref]
pci 0000:1c:00.0: BAR 7: assigned [mem 0xeb044000-0xeb063fff 64bit pref]
pci 0000:1c:00.0: reg 190: [mem 0xea820000-0xea823fff 64bit pref]
pci 0000:1c:00.0: BAR 10: assigned [mem 0xeb064000-0xeb083fff 64bit pref]
pci 0000:1c:00.1: BAR 3: assigned [mem 0xeb084000-0xeb087fff]
pci 0000:1c:00.1: reg 184: [mem 0xea840000-0xea843fff 64bit pref]
pci 0000:1c:00.1: BAR 7: assigned [mem 0xeb088000-0xeb0a7fff 64bit pref]
pci 0000:1c:00.1: reg 190: [mem 0xea860000-0xea863fff 64bit pref]
pci 0000:1c:00.1: BAR 10: assigned [mem 0xeb0a8000-0xeb0c7fff 64bit pref]
pci 0000:1c:00.0: BAR 2: assigned [io 0x7000-0x701f]
pci 0000:1c:00.1: BAR 2: assigned [io 0x7020-0x703f]
pci 0000:1b:02.0: pci bridge to [bus 1c]
pci 0000:1b:02.0: bridge window [io 0x7000-0x7fff]
pci 0000:1b:02.0: bridge window [mem 0xea800000-0xeb1fffff]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: reg 190: [mem 0xea920000-0xea923fff 64bit pref]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: reg 190: [mem 0xea920000-0xea923fff 64bit pref]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: reg 190: [mem 0xea920000-0xea923fff 64bit pref]
pci 0000:1d:00.1: reg 184: [mem 0xea940000-0xea943fff 64bit pref]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: reg 190: [mem 0xea920000-0xea923fff 64bit pref]
pci 0000:1d:00.1: reg 190: [mem 0xea960000-0xea963fff 64bit pref]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: reg 190: [mem 0xea920000-0xea923fff 64bit pref]
pci 0000:1d:00.1: reg 184: [mem 0xea940000-0xea943fff 64bit pref]
pci 0000:1d:00.0: res=[mem 0xea900000-0xea8fffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1d:00.0: res=[mem 0xea920000-0xea91ffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1d:00.1: res=[mem 0xea940000-0xea93ffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1d:00.1: res=[mem 0xea960000-0xea95ffff 64bit pref] get_res_add_size add_size 20000
pci 0000:1d:00.0: BAR 1: assigned [mem 0xeb400000-0xeb7fffff]
pci 0000:1d:00.1: BAR 1: assigned [mem 0xeb800000-0xebbfffff]
pci 0000:1d:00.0: BAR 0: assigned [mem 0xeb200000-0xeb21ffff]
pci 0000:1d:00.1: BAR 0: assigned [mem 0xeb220000-0xeb23ffff]
pci 0000:1d:00.0: BAR 3: assigned [mem 0xeb240000-0xeb243fff]
pci 0000:1d:00.0: reg 184: [mem 0xea900000-0xea903fff 64bit pref]
pci 0000:1d:00.0: BAR 7: assigned [mem 0xeb244000-0xeb263fff 64bit pref]
pci 0000:1d:00.0: reg 190: [mem 0xea920000-0xea923fff 64bit pref]
pci 0000:1d:00.0: BAR 10: assigned [mem 0xeb264000-0xeb283fff 64bit pref]
pci 0000:1d:00.1: BAR 3: assigned [mem 0xeb284000-0xeb287fff]
pci 0000:1d:00.1: reg 184: [mem 0xea940000-0xea943fff 64bit pref]
pci 0000:1d:00.1: BAR 7: assigned [mem 0xeb288000-0xeb2a7fff 64bit pref]
pci 0000:1d:00.1: reg 190: [mem 0xea960000-0xea963fff 64bit pref]
pci 0000:1d:00.1: BAR 10: assigned [mem 0xeb2a8000-0xeb2c7fff 64bit pref]
pci 0000:1d:00.0: BAR 2: assigned [io 0x8000-0x801f]
pci 0000:1d:00.1: BAR 2: assigned [io 0x8020-0x803f]
pci 0000:1b:04.0: pci bridge to [bus 1d]
pci 0000:1b:04.0: bridge window [io 0x8000-0x8fff]
pci 0000:1b:04.0: bridge window [mem 0xeb200000-0xebbfffff]
pci 0000:1a:00.0: pci bridge to [bus 1b-1d]
pci 0000:1a:00.0: bridge window [io 0x7000-0x8fff]
pci 0000:1a:00.0: bridge window [mem 0xea800000-0xebbfffff]
pci 0000:19:00.0: pci bridge to [bus 1a-1d]
pci 0000:19:00.0: bridge window [io 0x7000-0x8fff]
pci 0000:19:00.0: bridge window [mem 0xea800000-0xebbfffff]
```

# QUESTION #

Is it somehow possible to tell the kernel (e.g. via a parameter) to automatically do this? What is causing this issue in the first place? Thank you in advance!

## Update ##

As the described fix fails on a 4.x system (actually starting with 3.12, I suppose), I had a look at the kernel and found that if I disable PCI ASPM (which was already disabled by ACPI; it can also be forced by `pcie_aspm=off` in the kernel boot parameters), the following small fix (on a 4.4.0) resolves a kernel null pointer dereference:

```
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -552,11 +552,12 @@ static struct pcie_link_state *alloc_pcie_link_state(struct pci_dev *pdev)
 void pcie_aspm_init_link_state(struct pci_dev *pdev)
 {
 	struct pcie_link_state *link;
-	int blacklist = !!pcie_aspm_sanity_check(pdev);
-
+	int blacklist;
 	if (!aspm_support_enabled)
 		return;
+	blacklist = !!pcie_aspm_sanity_check(pdev);
+
 	if (pdev->link_state)
 		return;
```

It is kind of odd that a sanity check is performed if the feature itself is deactivated. The actual null pointer dereference happened in `pcie_aspm_sanity_check`, in the line `list_for_each_entry(child, &pdev->subordinate->devices, bus_list) {`. Is this a kernel bug?
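To make the failure pattern above easier to see at a glance, here is a small, hypothetical log-parsing sketch (my own aid, not part of the original report) that totals the window sizes the kernel could not place, per bridge and resource type; the sample lines are taken from the log above:

```python
import re
from collections import defaultdict

# Sample "can't assign" lines quoted from the boot log above.
LOG = """\
pci 0000:19:00.0: BAR 14: can't assign mem (size 0x1a00000)
pci 0000:19:00.0: BAR 13: can't assign io (size 0x3000)
pci 0000:19:00.0: BAR 14: can't assign mem (size 0x1600000)
pci 0000:19:00.0: BAR 13: can't assign io (size 0x3000)
pci 0000:1a:00.0: BAR 14: can't assign mem (size 0x1600000)
"""

def unassigned_totals(log):
    """Sum the sizes of unassignable bridge windows per (device, kind)."""
    pat = re.compile(
        r"pci (\S+): BAR \d+: can't assign (mem|io) \(size (0x[0-9a-f]+)\)")
    totals = defaultdict(int)
    for dev, kind, size in pat.findall(log):
        totals[(dev, kind)] += int(size, 16)
    return dict(totals)

for (dev, kind), total in sorted(unassigned_totals(LOG).items()):
    print(f"{dev} {kind}: {total:#x} unplaced")
```

This kind of tally makes it obvious which bridge's memory and I/O windows the kernel's reassignment pass failed to fit.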
Jan (91 rep)
Nov 10, 2016, 02:45 PM • Last activity: Jul 28, 2025, 01:05 PM
6 votes
1 answer
3138 views
How to get power consumption of a PCIe device?
I am familiar with `powertop` and `powerstat`, but they appear to measure CPU power consumption. Is there any way to measure the power consumption of a specific PCIe slot?
Mark (747 rep)
Sep 26, 2018, 09:09 AM • Last activity: Jul 1, 2025, 06:02 PM
1 vote
1 answer
4300 views
How to make a Realtek NIC use the r8168 driver
I've been trying (unsuccessfully) for the last few days to make my Realtek Ethernet card work. I have no problems with my wireless connection: only the Ethernet connection doesn't work. I have Ubuntu 16.10 on a Dell Inspiron, with a RTL8101/2/6E PCI Express card. The card used the r8169 driver, which seems to be buggy and unreliable (as in [here](https://unix.stackexchange.com/questions/134878/make-linux-load-specific-driver-for-given-device-realtek-nic)). Since the solution seems to be to use the r8168 driver, I:

* installed the package r8168-dkms via apt-get,
* blacklisted the r8169 module in /etc/modprobe.d/
* rebooted.

It didn't work, as lsmod still listed the module as in use, and lspci -v still told me that the card was using the r8169 driver and module. I finally managed to blacklist the module by passing the option to GRUB, adding modprobe.blacklist=r8169 to the default command line in /etc/default/grub.

The problem is that the r8168 module loads fine (I see it in lsmod), but it's not associated with the card, so it doesn't show up in ifconfig (exactly as happened to [lumi](https://unix.stackexchange.com/users/27995/lumi) in https://unix.stackexchange.com/questions/134878/make-linux-load-specific-driver-for-given-device-realtek-nic). This is the relevant portion of my lshw -C network:

```
*-network UNCLAIMED
     description: Ethernet controller
     product: RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
     vendor: Realtek Semiconductor Co., Ltd.
     physical id: 0
     bus info: pci@0000:01:00.0
     version: 07
     width: 64 bits
     clock: 33MHz
     capabilities: pm msi pciexpress msix vpd bus_master cap_list
     configuration: latency=0
     resources: ioport:3000(size=256) memory:b0600000-b0600fff memory:b0400000-b0403fff
```

My device:

```
> lspci -v -s 01:00
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller (rev 07)
        Subsystem: Dell RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
        Flags: bus master, fast devsel, latency 0, IRQ 11
        I/O ports at 3000 [size=256]
        Memory at b0600000 (64-bit, non-prefetchable) [size=4K]
        Memory at b0400000 (64-bit, prefetchable) [size=16K]
        Capabilities:
```

Please note that in the output above lspci does not show any drivers or kernel modules in use. Finally, I tried to make my NIC use the r8168 driver (as explained in this [answer](https://unix.stackexchange.com/a/141414)), to no avail:

```
% sudo echo 10ec 8168 > /sys/bus/pci/drivers/r8168/new_id
/sys/bus/pci/drivers/r8168/new_id: File exists.
% sudo echo "0000:01:00.0" > /sys/bus/pci/drivers/r8168/bind
/sys/bus/pci/drivers/r8168/bind: File exists.
```

What am I missing? Is there another way to tell a device to use a driver? Any links, clues or indications about what to read next would be helpful and very much appreciated.
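One detail worth noting about the failing commands: with `sudo echo … > file`, the redirection is performed by the unprivileged shell rather than by `sudo`, so the write may never reach sysfs. A small sketch of the same `new_id`/`bind` sequence done from a single privileged process — the sysfs path is a parameter here purely so the logic can be exercised without real hardware, and the IDs are the ones used in the question:

```python
import os

def bind_to_driver(driver_sysfs, vendor, device, slot):
    """Add a vendor:device pair to a driver's ID table, then bind a slot.

    On a real system driver_sysfs would be "/sys/bus/pci/drivers/r8168"
    and this must run as root; it is a parameter here only so the write
    logic can be exercised safely against a scratch directory.
    """
    with open(os.path.join(driver_sysfs, "new_id"), "w") as f:
        f.write(f"{vendor:04x} {device:04x}\n")   # e.g. "10ec 8168"
    with open(os.path.join(driver_sysfs, "bind"), "w") as f:
        f.write(slot + "\n")                      # e.g. "0000:01:00.0"

# On the machine from the question this would be run as root:
# bind_to_driver("/sys/bus/pci/drivers/r8168", 0x10ec, 0x8168, "0000:01:00.0")
```

The shell equivalent is to pipe through `tee` (so the privileged process does the write) or to run the whole redirection inside a root shell.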
cristianbravolillo (11 rep)
May 29, 2017, 05:12 PM • Last activity: Jun 1, 2025, 09:04 PM
2 votes
1 answer
2603 views
PCIe Bus error when booting Archiso and when using wifi-menu
I'm trying to install Arch Linux on an Acer Spin 5 laptop. I'm booting the latest archiso from a USB stick in UEFI mode, and even before the system has fully started these errors appear during the boot sequence:

```
[...] pcieport 0000:00:1c.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e0(Receiver ID)
[...] pcieport 0000:00:1c.0: device [8086:9d16] error status/mask=00002001/00002000
[...] pcieport 0000:00:1c.0: Receiver Error
[...] pcieport 0000:00:1c.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e0(Receiver ID)
[...] pcieport 0000:00:1c.0: device [8086:9d16] error status/mask=00002001/00002000
[...] pcieport 0000:00:1c.0: Receiver Error (First)
```

And lspci tells me that 0000:00:1c.0 belongs to:

```
PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #7 (rev f1)
```

These errors also appear (sometimes) when using wifi-menu to connect to my Wi-Fi. Sometimes this error does not occur at all, and sometimes it's spamming my shell. Sometimes the error code is also Replay Timer Timeout, and sometimes Bad TLP, but I don't know what it depends on. Does someone know what might cause this error and how to fix it? It's very annoying and is hindering me from installing Arch.
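For decoding lines like `error status/mask=00002001/00002000`: the first value is the AER Correctable Error Status register of the port. A small decoder sketch, with bit names taken from the PCIe Advanced Error Reporting capability:

```python
# Decode the PCIe AER Correctable Error Status register, the value the
# kernel prints as "error status/mask=00002001/00002000".
CORRECTABLE_BITS = {
    0: "Receiver Error",
    6: "Bad TLP",
    7: "Bad DLLP",
    8: "REPLAY_NUM Rollover",
    12: "Replay Timer Timeout",
    13: "Advisory Non-Fatal Error",
}

def decode_correctable(status):
    return [name for bit, name in sorted(CORRECTABLE_BITS.items())
            if status & (1 << bit)]

# 0x2001 = Receiver Error + Advisory Non-Fatal Error.
print(decode_correctable(0x2001))
```

Note that the mask value `00002000` has bit 13 set, which is consistent with the kernel reporting only the `Receiver Error` line; the other codes seen occasionally (`Replay Timer Timeout`, `Bad TLP`) are simply other bits of this same register.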
herhuf (255 rep)
Dec 13, 2016, 03:25 PM • Last activity: Jun 1, 2025, 05:03 AM
0 votes
1 answer
99 views
How does the host allocate an address for a 32-bit BAR in a PCIe device?
I wonder how a 64-bit host BIOS can allocate a physical address for a 32-bit BAR in a PCIe device. A 32-bit address can only reach 4 GB of space, and the host needs to write the base address at enumeration time. If there are too many PCIe devices, or the devices require too much space, the 32-bit address space cannot provide enough room. Will the host allocate a 64-bit base address and tell the PCIe device through some mapping mechanism, or will the host simply not allocate an address if there is not enough space in the low 4 GB address space?
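The usual behaviour can be illustrated with a toy allocator (my own sketch, not firmware code): a 32-bit BAR must receive a size-aligned address below 4 GiB, a 64-bit prefetchable BAR may be placed above 4 GiB, and when the low window is exhausted a 32-bit BAR is simply left unassigned rather than remapped:

```python
# Toy sketch of MMIO BAR placement. 32-bit BARs must land below 4 GiB at
# a size-aligned address; 64-bit BARs may go above 4 GiB. Returns None
# for a BAR when the 32-bit window is exhausted -- the device is then
# left without an address (the base addresses chosen are illustrative).
FOUR_GIB = 1 << 32

def place_bars(bars, low_base=0xE000_0000):
    """bars: list of (size, is_64bit); sizes are powers of two."""
    low, high, out = low_base, FOUR_GIB, []
    for size, is_64bit in bars:
        if is_64bit:
            high = (high + size - 1) & ~(size - 1)   # align up
            out.append(high)
            high += size
        else:
            low = (low + size - 1) & ~(size - 1)
            if low + size > FOUR_GIB:
                out.append(None)                     # no low space left
            else:
                out.append(low)
                low += size
    return out

# Three 256 MiB 32-bit BARs: the third no longer fits below 4 GiB.
print(place_bars([(0x1000_0000, False)] * 3))
```

This mirrors what real firmware and the kernel do: a 32-bit BAR holds a 32-bit address by definition, so it can never be "told" about a high address; if the low window runs out, the resource stays unassigned (kernel logs then show `can't assign mem`), while 64-bit prefetchable BARs are the ones that can be moved above 4 GiB.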
Sun Caelus (11 rep)
May 26, 2025, 07:38 AM • Last activity: May 26, 2025, 12:08 PM
1 vote
1 answer
6859 views
PCI passthrough: vfio-pci ignores ids of devices
I have 3 GPUs in my dual Xeon server. I followed instructions on the Arch wiki and set up vfio-pci with ids=10de:100c,10de:0e1a:

```
$ modprobe -c | grep vfio
options vfio_iommu_type1 allow_unsafe_interrupts=1
options vfio_pci ids=10de:100c,10de:0e1a
...
```

But according to dmesg, vfio ignores that option:

```
[    1.278976] VFIO - User Level meta-driver version: 0.3
[    1.306193] vfio_pci: add [1002:7142[ffff:ffff]] class 0x000000/00000000
[    1.326139] vfio_pci: add [1002:7162[ffff:ffff]] class 0x000000/00000000
```

Moreover, when I unplugged the card with the 1002:7142 and 1002:7162 devices on board and rebooted, I still had exactly those entries in the dmesg output and no more! I upgraded the Linux kernel version and vfio_pci started to add another card, but still independently of the ids option! I don't know what to do to resolve that problem. I want a specific GPU to be added as a vfio_pci device, and I don't even know where to look.

List of GPUs:

```
#IOMMU group 17
#    02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]
#    02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]
#IOMMU group 18
#    03:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX TITAN Black] [10de:100c] (rev a1)
#    03:00.1 Audio device: NVIDIA Corporation GK110 HDMI Audio [10de:0e1a] (rev a1)
#IOMMU group 30
#    83:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV515 PRO [Radeon X1300/X1550 Series] [1002:7142]
#    83:00.1 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] RV515 PRO [Radeon X1300/X1550 Series] (Secondary) [1002:7162]
```

Modprobe settings:

```
$ cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:100c,10de:0e1a
```

Linux version:

```
$ uname -a
Linux localhost 4.4.21-1-lts #1 SMP Thu Sep 15 20:38:36 CEST 2016 x86_64 GNU/Linux
```
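Besides the `ids=` module option, kernels since 3.16 expose a per-device `driver_override` sysfs attribute that pins one slot to a driver regardless of ID tables, which side-steps the ID-matching problem entirely. A sketch of that write — the sysfs base path is a parameter here only so the logic can be tried without hardware; the slot in the comment is the TITAN Black from the listing above:

```python
import os

def set_driver_override(dev_sysfs, driver="vfio-pci"):
    """Pin a single PCI device to a driver via its driver_override file.

    On a real system dev_sysfs would be e.g.
    "/sys/bus/pci/devices/0000:03:00.0" and this must run as root.
    After writing, unbind the device from its current driver and
    re-probe it (drivers_probe) so that vfio-pci picks it up.
    """
    with open(os.path.join(dev_sysfs, "driver_override"), "w") as f:
        f.write(driver + "\n")

# set_driver_override("/sys/bus/pci/devices/0000:03:00.0")
```

This targets one exact slot, which is useful here where multiple cards of interest exist but only one should go to vfio-pci.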
petRUShka (1342 rep)
Sep 21, 2016, 08:40 PM • Last activity: May 23, 2025, 08:06 AM
0 votes
0 answers
24 views
System hangs with G2/G3 NIC card after resuming from S3 suspend
The system hangs with a G2/G3 NIC card after resuming from S3 suspend, but it works with a G1 NIC and Ethernet. What is the culprit here?

```
xhci_hcd 0000:03:00.3: Controller not ready at resume -19
xhci_hcd 0000:03:00.3: PCI post-resume error -19!
xhci_hcd 0000:03:00.3: HC died; cleaning up
xhci_hcd 0000:03:00.4: Controller not ready at resume -19
xhci_hcd 0000:03:00.4: PCI post-resume error -19!
xhci_hcd 0000:03:00.4: HC died; cleaning up
xhci_hcd 0000:03:00.3: PM: dpm_run_callback(): pci_pm_resume returns -19
xhci_hcd 0000:03:00.4: PM: dpm_run_callback(): pci_pm_resume returns -19
xhci_hcd 0000:03:00.3: PM: failed to resume async: error -19
xhci_hcd 0000:03:00.4: PM: failed to resume async: error -19
```
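One small decoding aid for these messages: the `-19` is a negated kernel errno, and the standard `errno` module confirms it is `ENODEV` ("No such device") — i.e. the xHCI controllers behind the NIC have effectively vanished from the bus during resume:

```python
import errno
import os

# The "-19" in "PCI post-resume error -19!" is a negated kernel errno.
code = 19
name = errno.errorcode[code]   # symbolic name, ENODEV
text = os.strerror(code)       # human-readable message
print(f"-{code} = -{name}: {text}")
```

That points the investigation toward PCI power-state handling of the card across S3 (the device not coming back to D0), rather than toward the xhci_hcd driver itself.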
Malin Shaik (11 rep)
May 8, 2025, 02:38 AM • Last activity: May 8, 2025, 03:29 PM
2 votes
2 answers
6089 views
Make the NVIDIA GPU the default GPU
I am using **Manjaro 20** GNOME. When Linux was installed on my machine, an **NVIDIA** driver was installed with **mhwd**, but the `lspci` command does not show any NVIDIA GPU. **command:**
```
lspci | grep VGA
```
**output:**
```
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
```
Any other commands, like
```
sudo mhwd -i pci nvidia-linux
```
or
```
sudo pacman -S nvidia
```
result in a blank screen. Also, the **nvidia X server** settings tool does not show any **OpenGL** or **x-screen** menu. A driver manually downloaded from NVIDIA did not work either. The machine is using the Intel GPU.
```
mhwd --listinstalled
> Installed PCI configs:
--------------------------------------------------------------------------------
                  NAME               VERSION          FREEDRIVER           TYPE
--------------------------------------------------------------------------------
     video-modesetting            2020.01.13                true            PCI
video-hybrid-intel-nvidia-prime            2020.11.30               false            PCI


Warning: No installed USB configs!
```
```
nvidia-smi
Tue Mar 16 22:39:35 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.56       Driver Version: 460.56       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce 930MX       Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   41C    P0    N/A /  N/A |      0MiB /  2004MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
```
pacman --query | grep nvidia
lib32-nvidia-utils 460.56-1
linux59-nvidia 460.56-1
mhwd-nvidia 460.56-1
mhwd-nvidia-390xx 390.141-1
nvidia-prime 1.0-4
nvidia-utils 460.56-1
```
```
neofetch
██████████████████  ████████   me_sajied@manjaro
██████████████████  ████████   -----------------
██████████████████  ████████   OS: Manjaro Linux x86_64
██████████████████  ████████   Host: HP ProBook 450 G4
████████            ████████   Kernel: 5.9.16-1-MANJARO
████████  ████████  ████████   Uptime: 3 hours, 6 mins
████████  ████████  ████████   Packages: 1225 (pacman)
████████  ████████  ████████   Shell: zsh 5.8
████████  ████████  ████████   Resolution: 1366x768
████████  ████████  ████████   DE: GNOME 3.38.3
████████  ████████  ████████   WM: Mutter
████████  ████████  ████████   WM Theme: Yaru
████████  ████████  ████████   Theme: Arc [GTK2/3]
████████  ████████  ████████   Icons: Yaru [GTK2/3]
                               Terminal: gnome-terminal
                               CPU: Intel i5-7200U (4) @ 3.100GHz
                               GPU: NVIDIA GeForce 930MX
                               GPU: Intel HD Graphics 620
                               Memory: 1864MiB / 3819MiB
```
Sajied Shah Yousuf (41 rep)
Mar 16, 2021, 05:01 PM • Last activity: May 6, 2025, 06:04 PM
0 votes
0 answers
12 views
Jetson TK1 PCIe Link Training Fails on x4 Lane Endpoint (Kernel 3.10.40, R21.5)
I’m working with a Jetson TK1 where it’s configured as a **PCIe Root Complex**, connected to two endpoints: - An FPGA on a x1 lane - A PowerPC processor on a x4 lane During boot-up, the Jetson consistently establishes a link with the x1 device (FPGA). However, it **intermittently fails** to establis...
I’m working with a Jetson TK1 configured as a **PCIe Root Complex**, connected to two endpoints:
- An FPGA on a x1 lane
- A PowerPC processor on a x4 lane

During boot-up, the Jetson consistently establishes a link with the x1 device (FPGA). However, it **intermittently fails** to establish a connection with the x4 endpoint (PowerPC). After investigating in kernel space (L4T R21.5, kernel 3.10.40), I noticed that the **Data Link Up** flag (RP_VEND_XP_DL_UP) is not set for the x4 lane, so the device is never enumerated. I’m looking to **debug this issue further** and would really appreciate any suggestions or insights from the community. Also, at which stage of the Jetson TK1 boot process does PCIe **link training** occur? I want to dive into that area of debugging as well.
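For reference, even while the endpoint never enumerates, the root port's negotiated link state can still be read with setpci. A sketch (the root-port bus address is illustrative; the offset follows the standard PCIe capability layout, where Link Status sits at CAP_EXP+0x12):

```
# Read Link Status of the root port (find its address with lspci -tv)
setpci -s 00:01.0 CAP_EXP+0x12.w
# bits 3:0 = current link speed, bits 9:4 = negotiated width,
# bit 13 = Data Link Layer Link Active (when the port supports reporting it)
```

Polling this during boot can show whether training reaches L0 at all or the link flaps before the DL_UP flag is latched.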
Mohammad Arij Kamran (3 rep)
Apr 30, 2025, 08:52 AM
1 votes
1 answers
3019 views
Disable Ethernet Hardware Devices at start-up
To startup vm called "sys-net" in Qubes on my laptop need to write "1" in file echo -n "1" > /sys/bus/pci/devices/0000\:04\:00.0/remove also 0000:04:00.0 and 0000:04:00.1 are conflicts and need to be removed first after every startup laptop. then network start and work fine. there is some input for...
To start the VM called "sys-net" in Qubes on my laptop, I need to write "1" to a file:

echo -n "1" > /sys/bus/pci/devices/0000\:04\:00.0/remove

The devices 0000:04:00.0 and 0000:04:00.1 conflict and need to be removed first after every startup of the laptop; then the network starts and works fine. Here is some information:

$ lspci | grep -i eth
04:00.1 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 12)
$ find /sys -name *04:00.0
/sys/bus/pci/devices/0000:04:00.0
/sys/bus/pci/drivers/rtsx_pci/0000:04:00.0
/sys/devices/pci0000:00/0000:00:1d.3/0000:04:00.0
$ find /sys -name *04:00.1
/sys/bus/pci/devices/0000:04:00.1
/sys/bus/pci/drivers/pciback/0000:04:00.1
/sys/devices/pci0000:00/0000:00:1d.3/0000:04:00.1

How can I convert this to a systemd script that runs at start-up? It only works temporarily: after a reboot the network device is there again.
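A minimal sketch of a oneshot systemd unit that could run the removal at boot (the unit name and ordering targets are illustrative, and this is untested under Qubes dom0, where service ordering may differ):

```ini
# /etc/systemd/system/remove-conflicting-pci.service  (hypothetical name)
[Unit]
Description=Remove conflicting PCI functions before sys-net starts
# Run after sysfs is populated but before networking comes up
DefaultDependencies=no
After=sysinit.target
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 1 > /sys/bus/pci/devices/0000:04:00.0/remove'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable remove-conflicting-pci.service; both functions could be removed by adding a second ExecStart line for 0000:04:00.1.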
nickaz (11 rep)
Mar 25, 2017, 06:55 PM • Last activity: Apr 25, 2025, 04:03 PM
1 votes
1 answers
3245 views
Pop OS doesn't boot up because of pci-e bus error
So, today I downloaded the iso file of the new Pop OS 18.04 in order to try it out. However, after I created a bootable usb, I opened my laptop, and chose to boot from usb, and suddenly I got a kind of loop that says "PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e5(Receiver ID)" I g...
So, today I downloaded the ISO file of the new Pop!_OS 18.04 in order to try it out. However, after I created a bootable USB, I opened my laptop and chose to boot from USB, and suddenly I got a loop that says "PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e5(Receiver ID)". I got this error in the past, and I still get it every time I boot Ubuntu or Ubuntu-based distros, but there it stops after a moment. In this case, it doesn't. In theory, I could fix this problem by adding "pci=noaer" to the GRUB file, but I can't do that here, because I can't install the OS, so I don't have a GRUB configuration or a way to access the terminal. Is there any way I can fix this problem? Or maybe find the GRUB file on the USB? EDIT: I am using a 6th-generation Intel CPU with HD graphics, 8 GB of RAM, and 1 TB of storage. I have already asked a question about this error, and I have seen many others ask the same thing, so adding the line above *will* fix the problem. Also, my laptop uses BIOS and not UEFI. Thanks in advance 😄
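For reference, on most live images a kernel parameter can be added without any installed GRUB: at the USB stick's boot menu, press e on the boot entry, append the option to the line that loads the kernel, and boot with F10 or Ctrl-X. A sketch only (the kernel path and existing options vary by image; /casper/vmlinuz is typical of Ubuntu-based ISOs):

```
linux /casper/vmlinuz ... quiet splash pci=noaer
```

The change applies to that boot only; after installation the same option would go into /etc/default/grub (or kernelstub on Pop!_OS) to make it permanent.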
user6516763 (111 rep)
Apr 29, 2018, 06:01 PM • Last activity: Apr 9, 2025, 10:08 AM
13 votes
1 answers
8459 views
NVMe PCIe disk power cycling
I want to test an NVMe SSD that is connected to a PCIe slot of my motherboard. The test procedure is a specific algorithm that writes workloads to the SSD, while the SSD is exposed to radiation (e.g., neutrons). I am running Fedora 22, with kernel 4.4.6. My current software successfully works with S...
I want to test an NVMe SSD that is connected to a PCIe slot of my motherboard. The test procedure is a specific algorithm that writes workloads to the SSD, while the SSD is exposed to radiation (e.g., neutrons). I am running Fedora 22, with kernel 4.4.6. My current software successfully works with SATA SSD. Since the SSD can become unresponsive due to radiation, it's sometimes mandatory to power cycle it in order to resume operations. It is made possible with an externally controlled power supply. Now, I would like to port my software to test NVMe SSD PCIe. I have modified a PCIe extender to externally apply voltage to the SSD; the derived power lines (+12V and 3.3V) are isolated from the PCIe connector power lines. With this setup, the SSD is well recognized – and works – when booting with the external power supply on. Removing the device and re-scanning the PCI bus works as long as the NVMe SSD is powered on, namely:
echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
followed by:
echo 1 > /sys/bus/pci/rescan
works. However, if I power off and then power on the device after removing it, the PCI bus rescan does not work (and no message appears in dmesg). If I "brutally" power off the SSD (with my controlled power supply) without removing the SSD under sysfs, I get the following:

[  192.688934] nvme 0000:01:00.0: Failed status: ffffffff, reset controller
[  192.689274] Trying to free nonexistent resource
[  192.699900] nvme 0000:01:00.0: Refused to change power state, currently in D3
[  192.699946] Trying to free nonexistent resource
[  192.699953] nvme 0000:01:00.0: Device failed to resume

And obviously, rescanning the PCI bus does nothing. Question: what would be necessary to achieve power-cycling of the SSD without rebooting my test station? From similar threads, I understand that this problem is not trivial, so I would be content with a wide range of solutions (or hints), including:
- Adding kernel boot parameters
- Use of setpci commands (hints?)
- Use of extra logic, e.g., wire modifications on the PCIe extender to "fool" the PCIe bus
- Modifications in the kernel sources (hints?)
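Combining two of those hints, one approach sometimes suggested (a sketch only, not verified on this hardware; it assumes the SSD sits behind root port 00:01.0 and relies on setpci's standard BRIDGE_CONTROL register, whose bit 6 is Secondary Bus Reset) is to hold the upstream port in reset across the power cycle so the dead link state is discarded:

```
# Remove the unresponsive device from sysfs first
echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
# Assert Secondary Bus Reset (bit 6 of Bridge Control) on the upstream port
setpci -s 00:01.0 BRIDGE_CONTROL=0x40:0x40
# ... power-cycle the SSD with the external supply here ...
# Release the reset, let the link retrain, then rescan
setpci -s 00:01.0 BRIDGE_CONTROL=0x00:0x40
sleep 1
echo 1 > /sys/bus/pci/rescan
```

Whether retraining succeeds depends on the root port; platforms with pciehp can instead toggle /sys/bus/pci/slots/*/power, which performs the equivalent sequence in the kernel.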
mamahuhu (243 rep)
Apr 14, 2016, 12:52 PM • Last activity: Mar 1, 2025, 02:05 AM
0 votes
0 answers
69 views
PopOS in emergency mode, "journalctl -xb" log returns "PCIe Bus Correctable Error", what to do next?
I am new to linux, my laptop booted into emergency mode with the following message: > You are in the emergency mode. After logging in, type "journalctl -xb" to view system logs, etc. It returned the logs with the following lines (it keeps repeating them for several minutes): ``` pcieport 0000:00:1d....
I am new to linux, my laptop booted into emergency mode with the following message: > You are in the emergency mode. After logging in, type "journalctl -xb" to view system logs, etc. It returned the logs with the following lines (it keeps repeating them for several minutes):
pcieport 0000:00:1d.0: AER: Multiple Correctable error message received from 0000:01:00.0
PCIe Bus Error: severity=Correctable, type=Physical Layer, (Receiver ID)
 device [10ec:8136] error status/mask=00000001/00006000
[ 0] RxErr        (First)
What does it mean, and what should I do now?
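For context, the status/mask word in the log encodes which AER correctable-error bits are set; bit 0 is Receiver Error, which matches the RxErr line (device 10ec:8136 is a Realtek Ethernet controller). A small sketch that decodes the pasted value using the standard AER correctable-error bit layout (run against the logged word, not live hardware):

```shell
# Status word from the log line "error status/mask=00000001/00006000"
status=0x00000001
# Standard AER correctable-error bit names (empty slots are reserved bits)
names=(RxErr "" "" "" "" "" BadTLP BadDLLP Rollover "" "" "" Timeout NonFatalErr)
decoded=""
for bit in "${!names[@]}"; do
  if (( (status >> bit) & 1 )) && [ -n "${names[bit]}" ]; then
    decoded+="[ $bit] ${names[bit]}"$'\n'
  fi
done
printf '%s' "$decoded"
```

Correctable means the hardware already recovered; occasional RxErr entries are usually harmless, but a flood can point at a marginal link or cabling.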
Tatiana Drossos (1 rep)
Jan 27, 2025, 01:54 AM • Last activity: Jan 27, 2025, 06:50 AM
11 votes
5 answers
25789 views
Is there a command that can show if a Thunderbolt port is present on the hardware?
I'd guess lspci would be the tool to do this but I can't seem to find any identifying output. Is there a way to know from the command line whether a Thunderbolt port is present on a machine? I have one computer that I know has a Thunderbolt port and lspci shows this: 00:00.0 Host bridge: Intel Corpo...
I'd guess lspci would be the tool to do this but I can't seem to find any identifying output. Is there a way to know from the command line whether a Thunderbolt port is present on a machine? I have one computer that I know has a Thunderbolt port and lspci shows this:

00:00.0 Host bridge: Intel Corporation Device 3ec2 (rev 07)
00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Device 3e92
00:08.0 System peripheral: Intel Corporation Skylake Gaussian Mixture Model
00:12.0 Signal processing controller: Intel Corporation Device a379 (rev 10)
00:14.0 USB controller: Intel Corporation Device a36d (rev 10)
00:14.2 RAM memory: Intel Corporation Device a36f (rev 10)
00:15.0 Serial bus controller [0c80]: Intel Corporation Device a368 (rev 10)
00:16.0 Communication controller: Intel Corporation Device a360 (rev 10)
00:17.0 SATA controller: Intel Corporation Device a352 (rev 10)
00:1d.0 PCI bridge: Intel Corporation Device a330 (rev f0)
00:1f.0 ISA bridge: Intel Corporation Device a306 (rev 10)
00:1f.3 Audio device: Intel Corporation Device a348 (rev 10)
00:1f.4 SMBus: Intel Corporation Device a323 (rev 10)
00:1f.5 Serial bus controller [0c80]: Intel Corporation Device a324 (rev 10)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
01:00.0 Multimedia video controller: Blackmagic Design DeckLink Mini Recorder
02:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

I have remotely logged into another machine, and would like to know if it has a Thunderbolt port; lspci shows the following:

00:00.0 Host bridge: Intel Corporation Device 191f (rev 07)
00:01.0 PCI bridge: Intel Corporation Device 1901 (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Device 1912 (rev 06)
00:14.0 USB controller: Intel Corporation Device a12f (rev 31)
00:14.2 Signal processing controller: Intel Corporation Device a131 (rev 31)
00:16.0 Communication controller: Intel Corporation Device a13a (rev 31)
00:16.3 Serial controller: Intel Corporation Device a13d (rev 31)
00:17.0 RAID bus controller: Intel Corporation 82801 SATA Controller [RAID mode] (rev 31)
00:1d.0 PCI bridge: Intel Corporation Device a118 (rev f1)
00:1f.0 ISA bridge: Intel Corporation Device a146 (rev 31)
00:1f.2 Memory controller: Intel Corporation Device a121 (rev 31)
00:1f.3 Audio device: Intel Corporation Device a170 (rev 31)
00:1f.4 SMBus: Intel Corporation Device a123 (rev 31)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
01:00.0 Multimedia video controller: Blackmagic Design DeckLink Mini Recorder
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
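Besides lspci, newer kernels register a dedicated Thunderbolt bus in sysfs when the thunderbolt driver binds to a controller; a quick hedged check (absence of the bus does not strictly prove absence of a port, since the controller may be powered down until something is plugged in):

```shell
# Look for a registered Thunderbolt bus; fall back to a clear message
if [ -d /sys/bus/thunderbolt/devices ] && [ -n "$(ls -A /sys/bus/thunderbolt/devices 2>/dev/null)" ]; then
  tb_msg="Thunderbolt bus present: $(ls /sys/bus/thunderbolt/devices)"
else
  tb_msg="No Thunderbolt bus registered (controller may still exist but be unpowered or lack a driver)"
fi
echo "$tb_msg"
```

On distributions that ship the bolt daemon, boltctl list gives similar information in a friendlier form.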
mpr (1194 rep)
Sep 11, 2018, 04:46 PM • Last activity: Jan 26, 2025, 08:04 AM
0 votes
0 answers
49 views
Xenomai 3 and Xenomai 4 do not boot on Raspi CM4 due to kernel panic caused by pcie_brcmstb
I have a Raspberry Pi Compute Module 4 with an IO Board. I want to run Xenomai 3 and 4 on it. PCIe should work, so it recognizes my network adapter. I have built some Xenomai Kernels activated the Broadcom Brcmstb PCIe controller, but due to that I get a Kernel Panic during boot If I blacklist `pcie...
I have a Raspberry Pi Compute Module 4 with an IO Board and want to run Xenomai 3 and 4 on it. PCIe should work, so that it recognizes my network adapter. I have built some Xenomai kernels with the Broadcom brcmstb PCIe controller enabled, but because of that I get a kernel panic during boot. If I blacklist pcie_brcmstb, Xenomai starts normally, but lspci then produces no output. I am not sure if I am missing something. This is the output during boot:
[    5.948202] lr : pci_generic_config_read+0x24/0xe0
[    5.948205] sp : ffff800010c134c0
[    5.948208] x29: ffff800010c135f0 x28: ffff3cb4429ec880
[    5.948218] x27: 0000000000000001 x26: 0000000000000000
[    5.948224] x25: 0000000000000001 x24: ffffa446dc4d5540
[    5.948230] x23: 0000000020000005 x22: ffffa446dab93c9c
[    5.948236] x21: ffff800010c13610 x20: 0000ffffffffffff
[    5.948242] x19: 0000000000000004 x18: ffffffffffffffff
[    5.948247] x17: 0000000000000005 x16: 00008a71ea304cb0
[    5.948253] x15: ffff3cb44305da1c x14: ffffa446dc4605a0
[    5.948259] x13: 0000000000000040 x12: 0000000000000000
[    5.948265] x11: 0000000000000000 x10: 0000000000000000
[    5.948270] x9 : 0000000000000228 x8 : 0000000000000000
[    5.948276] x7 : ffff3cb442b25540 x6 : ffffa446dc4633c0
[    5.948282] x5 : ffff800010b30000 x4 : ffff3cb4457fa380
[    5.948288] x3 : 0000000000000000 x2 : 0000000000008000
[    5.948294] x1 : 00000000deaddead x0 : ffff800010b38000
[    5.948300] Kernel panic - not syncing: Asynchronous SError Interrupt
[    5.948304] CPU: 0 PID: 242 Comm: (udev-worker) Not tainted 5.10.209xeno3-00256-ge2e46a0e4e4b-dirty #4
[    5.948307] Hardware name: Raspberry Pi Compute Module 4 Rev 1.1 (DT)
[    5.948309] IRQ stage: Linux
[    5.948311] Call trace:
[    5.948314]  dump_backtrace+0x0/0x1b0
[    5.948316]  show_stack+0x18/0x40
[    5.948318]  dump_stack+0xf4/0x124
[    5.948321]  panic+0x19c/0x36c
[    5.948323]  add_taint+0x0/0xc0
[    5.948325]  arm64_serror_panic+0x78/0x84
[    5.948328]  do_serror+0x38/0xac
[    5.948330]  el1_error+0x90/0x110
[    5.948332]  el1_irq+0x84/0x1c0
[    5.948335]  pci_generic_config_read+0x3c/0xe0
[    5.948338]  pci_bus_read_config_dword+0x7c/0xd0
[    5.948341]  pci_bus_generic_read_dev_vendor_id+0x34/0x1b0
[    5.948343]  pci_scan_single_device+0xa0/0x150
[    5.948346]  pci_scan_slot+0x40/0x120
[    5.948348]  pci_scan_child_bus_extend+0x54/0x2a0
[    5.948351]  pci_scan_bridge_extend+0x148/0x5c4
[    5.948353]  pci_scan_child_bus_extend+0x138/0x2a0
[    5.948356]  pci_scan_root_bus_bridge+0x64/0xdc
[    5.948358]  pci_host_probe+0x18/0xc4
[    5.948361]  brcm_pcie_probe+0x1dc/0x4e4 [pcie_brcmstb]
[    5.948364]  platform_drv_probe+0x54/0xac
[    5.948366]  really_probe+0xec/0x4e0
[    5.948369]  driver_probe_device+0x58/0xec
[    5.948371]  device_driver_attach+0xc0/0xd0
[    5.948373]  __driver_attach+0x68/0x130
[    5.948376]  bus_for_each_dev+0x70/0xd0
[    5.948378]  driver_attach+0x24/0x30
[    5.948381]  bus_add_driver+0x108/0x1fc
[    5.948383]  driver_register+0x78/0x130
[    5.948386]  __platform_driver_register+0x48/0x54
[    5.948389]  brcm_pcie_driver_init+0x24/0x1000 [pcie_brcmstb]
[    5.948391]  do_one_initcall+0x50/0x1c0
[    5.948393]  do_init_module+0x44/0x230
[    5.948396]  load_module+0x1f98/0x26f0
[    5.948398]  __do_sys_finit_module+0xa4/0xf0
[    5.948401]  __arm64_sys_finit_module+0x20/0x30
[    5.948404]  el0_svc_common.constprop.0+0xfc/0x214
[    5.948406]  do_el0_svc+0x28/0xac
[    5.948408]  el0_svc+0x1c/0x2c
[    5.948411]  el0_sync_handler+0xa4/0x12c
[    5.948413]  el0_sync+0x180/0x1c0
[    5.948449] SMP: stopping secondary CPUs
[    5.948452] Kernel Offset: 0x2446ca600000 from 0xffff800010000000
[    5.948455] PHYS_OFFSET: 0xffffc34c00000000
[    5.948457] CPU features: 0x28240022,61806000
[    5.948459] Memory Limit: none
If you need further information, feel free to ask. Thanks in advance.
PikiTv (1 rep)
Jan 15, 2025, 09:25 AM • Last activity: Jan 21, 2025, 12:38 PM
1 votes
2 answers
1749 views
How to fix or get rid of BadDLLP warnings (Correctable PCIe Bus Error) flooding my logs?
First off, this question is not a duplicate of https://unix.stackexchange.com/q/543219/126755 because instead of asking what is causing this kernel warning, I directly ask how to solve it, or do some work-around. In something about a straight hour of writing and then reading from/to my **newly conne...
First off, this question is not a duplicate of https://unix.stackexchange.com/q/543219/126755 : instead of asking what is causing this kernel warning, I directly ask how to solve it or work around it. After roughly an hour of writing to and then reading from my **newly connected USB disk** (a 4 TB Crucial P3 PCIe 3.0 x4 NVMe M.2 2280 SSD, model CT4000P3SSD8, inside an AXAGON EEM2-SG2 SuperSpeed+ USB-C M.2 disk enclosure, connected to a **Thunderbolt 3** USB-C port on my oldish Dell Inspiron 15 Gaming 7577 laptop), I noticed **BadDLLP warnings (Correctable PCIe Bus Error)** like this one (timestamps removed for brevity):
kernel: pcieport 0000:00:1c.0: AER: Correctable error message received from 0000:02:00.0
kernel: pcieport 0000:02:00.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Receiver ID)
kernel: pcieport 0000:02:00.0:   device [8086:15da] error status/mask=00000080/00002000
kernel: pcieport 0000:02:00.0:    [ 7] BadDLLP
In just about an hour, kernel generated almost 300,000 of these warnings/correctable errors:
# journalctl --boot -1 --no-pager --no-hostname | grep BadDLLP | wc --lines

292727
Is there anything I can do *with relative safety* to mitigate these warnings/correctable errors? *** OS: Linux Mint 22 (wilma) with kernel version 6.8.0-51-generic.
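Two mitigations that seem relatively safe to try (hedged sketches, untested on this exact machine): silencing AER reporting entirely with the pci=noaer kernel parameter, or masking only the Bad DLLP bit, which is bit 7 of the AER Correctable Error Mask register at offset 0x14 of the AER extended capability, on the port that reports it:

```
# Option 1: kernel command line (silences all AER reporting)
#   pci=noaer
# Option 2: mask only Bad DLLP on the reporting port [8086:15da]
setpci -s 02:00.0 ECAP_AER+0x14.l=0x00000080:0x00000080
```

The setpci mask does not survive a reboot or hot-reset, and the ECAP_AER register name requires a reasonably recent pciutils; neither option fixes the underlying link, it only stops the log flood.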
Vlastimil Burián (30505 rep)
Dec 30, 2024, 08:28 PM • Last activity: Dec 31, 2024, 06:10 PM
1 votes
1 answers
538 views
Possible to map an entire NVMe SSD into PCIe BAR for MMIO?
Assumed with a 1 TiB NVMe SSD, I am wondering if it's possible to map its entire capacity (1 TiB) into PCIe BAR for memory-mapped I/O (MMIO). My understanding is that typically only device registers and doorbell registers of an NVMe SSD are mapped to PCIe BAR space, allowing MMIO access. Once the do...
Assume a 1 TiB NVMe SSD. I am wondering if it's possible to map its entire capacity (1 TiB) into a PCIe BAR for memory-mapped I/O (MMIO). My understanding is that typically only the device registers and doorbell registers of an NVMe SSD are mapped into PCIe BAR space, allowing MMIO access. Once a doorbell is rung, data transfers occur via DMA between system memory and the NVMe SSD. This makes me wonder whether it is possible to open up the limited device register window for large-range MMIO. Also, for this post, the NVMe SSD's CMB (Controller Memory Buffer) is excluded. Given the disparity between the small size of the NVMe SSD's PCIe BAR space and its overall storage capacity, I'm unsure whether the entire SSD can be exposed through a PCIe BAR into physical address space. I'm seeking guidance or clarification on my understanding of PCIe, BARs, and NVMe. --- Here is an example of a 1 TiB Samsung 980 Pro SSD with only 16K in its PCIe BAR:
# lspci -s 3b:00.0 -v
3b:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
	Flags: bus master, fast devsel, latency 0, IRQ 116, NUMA node 0, IOMMU group 11
	Memory at b8600000 (64-bit, non-prefetchable) [size=16K]
	Capabilities:  Power Management version 3
	Capabilities:  MSI: Enable- Count=1/32 Maskable- 64bit+
	Capabilities:  Express Endpoint, MSI 00
	Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
	Capabilities:  Advanced Error Reporting
	Capabilities:  Alternative Routing-ID Interpretation (ARI)
	Capabilities:  Secondary PCI Express
	Capabilities:  Physical Layer 16.0 GT/s 
	Capabilities: [1bc] Lane Margining at the Receiver 
	Capabilities:  Latency Tolerance Reporting
	Capabilities: [21c] L1 PM Substates
	Capabilities: [3a0] Data Link Feature 
	Kernel driver in use: nvme
	Kernel modules: nvme
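As a cross-check on the 16K figure, BAR sizes can also be computed from the device's sysfs resource file, whose lines are hex "start end flags" triples, one per BAR. A small sketch, run here against a pasted sample line shaped like BAR0 above rather than live sysfs (the flags value is illustrative):

```shell
# Format of /sys/bus/pci/devices/0000:3b:00.0/resource: "start end flags" per BAR
resource_data="0x00000000b8600000 0x00000000b8603fff 0x0000000000140204"
bar=0
while read -r start end flags; do
  # Skip unimplemented BARs, which read as all zeros
  if [ "$start" != "0x0000000000000000" ] || [ "$end" != "0x0000000000000000" ]; then
    size=$(( end - start + 1 ))   # bash parses the 0x-prefixed hex directly
    echo "BAR$bar: $size bytes"
  fi
  bar=$((bar + 1))
done <<< "$resource_data"
```

On real hardware the heredoc would be replaced by redirecting from /sys/bus/pci/devices/0000:3b:00.0/resource; either way, only the small register window, never the 1 TiB of media, appears as a BAR.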
JGL (161 rep)
May 2, 2024, 08:58 PM • Last activity: Dec 31, 2024, 04:36 PM
1 votes
0 answers
126 views
QEMU on FreeBSD : how to passthrough a PCIe Wireless Network Adapter to the guest OS (Android 7)
I would like to passthru a PCI device to qemu for FreeBSD (14.2) without using virt-manager and vfio (because FreeBSD does not support it),but only the "raw" parameters. This is the device that I want to assign to qemu : marietto# lspci 05:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL...
I would like to pass a PCI device through to QEMU on FreeBSD (14.2) without using virt-manager and VFIO (because FreeBSD does not support it), using only the "raw" parameters. This is the device that I want to assign to QEMU:

marietto# lspci
05:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8192EE PCIe Wireless Network Adapter

According to this post: https://unix.stackexchange.com/questions/96606/qemu-arm-how-to-passthrough-a-pci-card I've added the parameter "-device pci-assign,host=05:00.0", like this:

/usr/local/bin/qemu-system-x86_64 -machine pc-q35-9.1 -cpu max -m size=4292608k \
 -vga std \
 -drive file=/mnt/zroot2/zroot2/bhyve/img/Android/Android-qemu.img,format=raw \
 -smp 4,sockets=4,cores=1,threads=1 -no-user-config -nodefaults \
 -rtc base=utc,driftfix=slew \
 -device pcie-root-port,port=16,chassis=1,id=pci.1,bus=pcie.0,multifunction=true,addr=0x2 \
 -device pcie-pci-bridge,id=pci.2,bus=pci.1,addr=0x0 \
 -device pcie-root-port,port=17,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x1 \
 -device pcie-root-port,port=18,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x2 \
 -device pcie-root-port,port=19,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x3 \
 -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x1d.0x7 \
 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=true,addr=0x1d \
 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x1d.0x1 \
 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x1d.0x2 \
 -device ich9-ahci,id=sata \
 -netdev tap,id=hostnet0,ifname=tap13,script=no,downscript=no \
 -device e1000,netdev=hostnet0,mac=52:54:00:a3:e1:52 \
 -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
 -device pci-assign,host=05:00.0 \
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial,index=0 \
 -drive if=pflash,format=raw,readonly=on,file=/usr/local/share/edk2-qemu/QEMU_UEFI_CODE-x86_64.fd

qemu-system-x86_64: -device pci-assign,host=05:00.0: 'pci-assign' is not a valid device model name

Is pci-assign perhaps no longer a valid parameter for the version of QEMU that I'm using?

marietto# qemu-system-x86_64 --version
QEMU emulator version 9.1.0
Copyright (c) 2003-2024 Fabrice Bellard and the QEMU Project developers

I have to say that if I boot the VM using bhyve instead of QEMU, with these parameters, it is able to connect to the internet, so the PCIe device is recognized:

bhyve-lin -S -c sockets=2,cores=1,threads=1 -m 4G -w -H -A \
 -s 0,hostbridge \
 -s 1,ahci-hd,/mnt/zroot-133/bhyve/img/Android/Android-qemu.img,bootindex=1 \
 -s 8:0,passthru,5/0/0 \
 -s 11,hda,play=/dev/dsp,rec=/dev/dsp \
 -s 13,virtio-net,tap13 \
 -s 29,fbuf,tcp=0.0.0.0:5913,w=1600,h=950,wait \
 -s 30,xhci,tablet \
 -s 31,lpc \
 -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CODE.fd,/usr/local/share/uefi-firmware/BHYVE_UEFI_VARS.fd \
 vm0:13 < /dev/null & sleep 5 && vncviewer 0:13 && echo vncviewer 0:13 &

I think it's only a matter of finding the correct syntax. Please help me, thanks.
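For what it's worth, pci-assign was the old KVM-only device-assignment model and has been removed from modern QEMU. On Linux hosts the replacement syntax is:

```
-device vfio-pci,host=05:00.0
```

Since FreeBSD provides no VFIO, QEMU on FreeBSD has no equivalent raw passthrough parameter, which is consistent with bhyve's passthru path working while QEMU's does not; the line above is only valid on a Linux host with the device bound to vfio-pci.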
Marietto (579 rep)
Dec 29, 2024, 10:16 PM • Last activity: Dec 29, 2024, 10:31 PM
0 votes
1 answers
37 views
How to Interpret "PCIE: Target abort, signature" Kernel Messages on Tegra TK1 Board
I’m working on a Tegra TK1 board using the Xillybus driver to communicate with an FPGA over PCIe. The board also has a second PCIe connection to communicate with another device. During boot-up, I load the Xillybus drivers and run my PCIe applications. However, one of my applications enters a standst...
I’m working on a Tegra TK1 board using the Xillybus driver to communicate with an FPGA over PCIe. The board also has a second PCIe connection to communicate with another device. During boot-up, I load the Xillybus drivers and run my PCIe applications. However, one of my applications enters a standstill condition, and I suspect it might be related to these kernel messages I’m seeing:
[   20.929042] PCIE: Target abort, signature: 00101f01  
[   20.933937] PCIE: Target abort, signature: 00100001  
[   20.938827] PCIE: Target abort, signature: 00100b01  
[   20.943715] PCIE: Target abort, signature: 00101c01
I’m trying to understand what these "Target abort" messages mean and how to decipher the signature values (e.g., 00101f01). I think this might help me debug the issue with the application. Here’s what I’ve looked into so far:
- I’ve checked the BAR (Base Address Register) mappings using lspci, and they appear correct.
- I’ve consulted the Xillybus documentation, but it doesn’t provide specifics about these kernel logs.
- I suspect the messages are related to the PCIe host controller or hardware diagnostics on the Tegra TK1, but I’m unsure how to decode these "signature" values.

Can anyone explain what these "Target abort" messages indicate, how to interpret the signature values, or provide guidance on debugging this issue?
Mohammad Arij Kamran (3 rep)
Dec 2, 2024, 06:36 AM • Last activity: Dec 2, 2024, 11:49 AM
Showing page 1 of 20 total questions