Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3
votes
2
answers
873
views
How can I ascertain which kernel (not version; kernel) is in use?
I want to be able to ascertain what kernel is in use across modern, UNIX-derivative OSes.
This appears feasible, considering that utilities like uname exist on multiple OSes with different kernels. However, if we consider the venerable uname to be a usable example, it soon fails at this: it prints “Darwin” on macOS, which isn't the name of the *kernel*. That's XNU. (macOS is, in effect, the desktop environment running atop the Darwin OS.) In contrast, uname on cpe:/o:fedoraproject:fedora:42 prints “Linux”, which is the name of the kernel rather than of the base of the OS (GNU's CoreUtils). That kind of inconsistency renders it unreliable for this.
My goal might seem niche, because it is. However, if I've got a system running macOS, another running Hurd-based Debian, and another running Linux-based Fedora, what tool can I utilise to ascertain the kernel in use? (I suppose I could just look in /boot/efi?)
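A minimal sketch of one possible approach (an assumption, not an established answer): since the set of kernels behind modern uname implementations is small, the value of uname -s can be mapped to the kernel it actually denotes.
#!/bin/sh
# Hypothetical mapping from `uname -s` output to the underlying kernel's name.
case "$(uname -s)" in
    Linux)   echo "Linux" ;;
    Darwin)  echo "XNU" ;;                        # macOS/Darwin runs the XNU kernel
    GNU)     echo "GNU Mach (Hurd)" ;;            # Debian GNU/Hurd reports "GNU"
    FreeBSD|NetBSD|OpenBSD|DragonFly) uname -s ;; # the BSDs share OS and kernel name
    *)       echo "unknown: $(uname -s)" ;;
esac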
RokeJulianLockhart
(541 rep)
Aug 3, 2025, 02:57 PM
• Last activity: Aug 4, 2025, 01:21 AM
1
votes
1
answers
2062
views
Compiling the Linux kernel and booting with UEFI
I recently compiled and installed a Linux kernel on my Kubuntu computer. The way I did this was: I downloaded the source .tar.gz from kernel.org, extracted it, and used the following commands (running in the top directory of the source package) to compile and install it:
make oldconfig
make -j4
sudo make modules_install
sudo make install
When I rebooted, however, I got a message saying "Error: out of memory" and when I pressed a key to continue it gave a kernel panic screen saying "not syncing: VFS: Unable to mount root fs on unknown-block(0,0)".
My other kernels work fine, so I can still boot up normally. But I'm curious to know why that kernel doesn't work and what I can do to get it working.
I tried this with a few versions (5.9.12, 5.9.14 and 5.10.2) and got the same result, so the exact version doesn't seem to be the issue here. But I know that I used to compile kernels like this all the time and they ran without issues. So I tried a bunch of stuff and eventually figured out that UEFI appears to be the culprit. This same kernel will work if I install it on a legacy system. Secure boot is disabled on the (UEFI) PC in question, so I figure it can't have to do with secure boot keys. It seems to be something about UEFI, but not secure boot, that breaks it.
However, upon searching the internet I couldn't find anything on getting a compiled Linux kernel to boot with UEFI. So is there really some additional thing I must do? Or is the problem something else?
**Edit:** I don't understand why my question was closed. But in case it requires more clarification, I am asking as follows: if I download the Linux kernel source code from kernel.org, or the code from which the Ubuntu kernels are built (https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.10.4/), and compile it using the commands above, I find that it will boot fine on BIOS but not UEFI. My question is why.
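As a hedged aside (an assumption, not something established in the question): that particular panic often means the new kernel has no usable initramfs, or lacks built-in drivers for the root disk and filesystem, so one quick check on an Ubuntu-based system is whether make install actually left an initrd behind:
# adjust the version string to whatever `make kernelrelease` prints
ls -l /boot/vmlinuz-5.10.2 /boot/initrd.img-5.10.2
# if the initrd is missing, generate it and refresh the GRUB menu
sudo update-initramfs -c -k 5.10.2
sudo update-grub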
Isaac D. Cohen
(243 rep)
Dec 28, 2020, 02:32 AM
• Last activity: Aug 2, 2025, 03:10 AM
5
votes
2
answers
2096
views
Kernel parameters are not changed permanently for RHEL7
Trying to permanently change some vm kernel parameters, I created the /etc/sysctl.d/01-custom.conf config file as described in a Red Hat knowledgebase article. Here is its content:
# cat /etc/sysctl.d/01-custom.conf
vm.swappiness=10
vm.dirty_ratio=20
vm.vfs_cache_pressure=200
But after reboot only vm.vfs_cache_pressure is changed; vm.swappiness and vm.dirty_ratio keep their previous values.
# sysctl vm.swappiness
vm.swappiness = 30
# sysctl vm.dirty_ratio
vm.dirty_ratio = 30
In the /etc/sysctl.conf file there are no entries for the vm.dirty_ratio parameter, and vm.swappiness is set to 10 there as well. Does that mean the system takes these values from somewhere else?
There are no config files under /etc/sysctl.d besides mine and a link to /etc/sysctl.conf:
# ll /etc/sysctl.d/
total 4
-rw-r--r-- 1 root root 147 May 30 04:40 01-custom.conf
lrwxrwxrwx. 1 root root 14 Apr 3 15:00 99-sysctl.conf -> ../sysctl.conf
**Update:**
sysctl --system shows that the values from my config file were applied; nothing in its output sets vm.swappiness or vm.dirty_ratio to 30.
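A generic way to hunt down where a conflicting value can come from (a sketch only; the tuned hint is an assumption): list every file that sysctl --system applies, grep all of its source directories, and check the tuned profile, which on RHEL 7 applies its own vm settings after boot.
# every file sysctl --system will apply, in order
sysctl --system 2>&1 | grep Applying
# every location that sets either key
grep -rn 'vm\.\(swappiness\|dirty_ratio\)' /etc/sysctl.conf /etc/sysctl.d /run/sysctl.d /usr/lib/sysctl.d 2>/dev/null
# tuned can override sysctl values; inspect the active profile
tuned-adm active
grep -rn 'swappiness\|dirty_ratio' /usr/lib/tuned /etc/tuned 2>/dev/null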
sys463
(355 rep)
May 30, 2018, 09:13 AM
• Last activity: Aug 1, 2025, 03:01 AM
1
votes
0
answers
18
views
Process Maps in s390x linux systems
I am working on a debugger for Linux on s390x and have the whole disassembler etc. set up for reading the ELF file. The debugger runs against the process, using the base address taken from the process maps.
When running under the debugger, the process map doesn't have a read-only mapping containing only the ELF headers, and the first mapping does not start with the ELF magic bytes, unlike on Linux x86_64 and arm64. This affects my debugger, since the addresses are derived from this mapping.
Also, to set a breakpoint, ptrace provides the
#define S390_BREAKPOINT_U16 ((__u16)0x0001)
When I write this at the opcode location, the breakpoint is hit correctly, but when I restore the original opcode, the opcode 4 bytes ahead gets placed at this position for some reason.
I think the missing ELF header magic bytes most probably mess things up; even if I set the breakpoint at the start of a function like main, SIGILL is hit some
well-mannered-goat
(31 rep)
Jul 31, 2025, 03:35 PM
0
votes
2
answers
3348
views
Gentoo: cannot install sys-kernel/gentoo-sources-4.4.6, no error message
I am trying to install Gentoo on an old laptop following the online handbook. I got as far as installing the kernel sources (see here): the installation with
emerge --ask sys-kernel/gentoo-sources
seems to run fine, until it fails with no precise error message. The log file
/var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/temp/build.log
contains no error message. The last lines in this file read
* Final size of build directory: 1 KiB
* Final size of installed tree: 623669 KiB
ecompressdir: bzip2 -9 /usr/share/doc
I also looked at
/var/log/emerge.log
which also does not contain any error message:
1473188561: Started emerge on: set 06, 2016 21:02:40
1473188561: *** emerge --ask sys-kernel/gentoo-sources
1473188622: >>> emerge (1 of 1) sys-kernel/gentoo-sources-4.4.6 to /
1473188622: === (1 of 1) Cleaning (sys-kernel/gentoo-sources-4.4.6::/usr/portage/sys-kernel/gentoo-sources/gentoo-sources-4.4.6.ebuild)
1473188698: === (1 of 1) Compiling/Merging (sys-kernel/gentoo-sources-4.4.6::/usr/portage/sys-kernel/gentoo-sources/gentoo-sources-4.4.6.ebuild)
1473189553: === (1 of 1) Merging (sys-kernel/gentoo-sources-4.4.6::/usr/portage/sys-kernel/gentoo-sources/gentoo-sources-4.4.6.ebuild)
1473190741: *** Finished. Cleaning up...
1473190743: *** exiting unsuccessfully with status '1'.
1473190751: *** terminating.
I do not know what I should check next. Any idea?
**EDIT**
Here is the content of /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/temp/build.log
(I have removed some non-printable characters that appeared at the beginning of each line):
Package: sys-kernel/gentoo-sources-4.4.6
Repository: gentoo
Maintainer: kernel@gentoo.org
USE: abi_x86_32 elibc_glibc kernel_linux userland_GNU x86
FEATURES: preserve-libs sandbox userpriv usersandbox
>>> Preparing to unpack ...
>>> Unpacking source...
>>> Unpacking linux-4.4.tar.xz to /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work
>>> Unpacking genpatches-4.4-8.base.tar.xz to /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work/patches
>>> Unpacking genpatches-4.4-8.extras.tar.xz to /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work/patches
Excluding Patch #5000_enable-additional-cpu-optimizations-for-gcc.patch ...
Excluding Patch #5015_kdbus*.patch ...
Applying 1000_linux-4.4.1.patch (-p1) ...
Applying 1001_linux-4.4.2.patch (-p1) ...
Applying 1002_linux-4.4.3.patch (-p1) ...
Applying 1003_linux-4.4.4.patch (-p1) ...
Applying 1004_linux-4.4.5.patch (-p1) ...
Applying 1005_linux-4.4.6.patch (-p1) ...
Applying 1500_XATTR_USER_PREFIX.patch (-p1) ...
Applying 1510_fs-enable-link-security-restrictions-by-default.patch (-p1) ...
Applying 2700_ThinkPad-30-brightness-control-fix.patch (-p1) ...
Applying 2900_dev-root-proc-mount-fix.patch (-p1) ...
Applying 4200_fbcondecor-3.19.patch (-p1) ...
Applying 4567_distro-Gentoo-Kconfig.patch (-p1) ...
>>> Source unpacked in /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work
>>> Preparing source in /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work/linux-4.4.6-gentoo ...
>>> Source prepared.
>>> Configuring source in /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work/linux-4.4.6-gentoo ...
>>> Source configured.
>>> Compiling source in /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/work/linux-4.4.6-gentoo ...
>>> Source compiled.
>>> Test phase [not enabled]: sys-kernel/gentoo-sources-4.4.6
>>> Install gentoo-sources-4.4.6 into /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/image/ category sys-kernel
>>> Copying sources ...
>>> Completed installing gentoo-sources-4.4.6 into /var/tmp/portage/sys-kernel/gentoo-sources-4.4.6/image/
Final size of build directory: 1 KiB
Final size of installed tree: 623669 KiB
ecompressdir: bzip2 -9 /usr/share/doc
Giorgio
(847 rep)
Sep 6, 2016, 08:00 PM
• Last activity: Jul 29, 2025, 05:08 AM
6
votes
3
answers
24380
views
Cannot remove module nvidia nvidia-uvm in order to install drivers
I was trying to update the drivers of a system with a couple of Nvidia GTX 980 cards but somehow I messed up and now I encounter this error when I run the installer with Nvidia:
ERROR: An NVIDIA kernel module 'nvidia-uvm' appears to already be loaded in your kernel. This may be because it is in use (for example, by the X server), but may also happen if your kernel was configured
without support for module unloading. Please be sure you have exited X before attempting to upgrade your driver. If you have exited X, know that your kernel supports module unloading, and still
receive this message, then an error may have occured that has corrupted the NVIDIA kernel module's usage count; the simplest remedy is to reboot your computer.
lsmod | grep -i nvidia
gives:
nvidia_uvm 77824 0
nvidia 8540160 77 nvidia_uvm
drm 344064 4 nvidia
So the suggestion that an error may have occurred that corrupted the kernel module usage count makes sense; however, the remedy does not help and rebooting does not do a thing. I have tried blacklisting both modules in different ways and no matter what I do they are always back. Doing rmmod or modprobe -r does not help either. In fact, with the latter I get:
modprobe: FATAL: Module nvidia-uvm not found.
I have tried everything I found on the net, nothing changed that 77 there.
Any ideas? Thanks!
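A commonly suggested unload sequence, stated here as an assumption rather than a verified fix (the exact module set depends on the driver version, and the module name is spelled with an underscore):
sudo systemctl isolate multi-user.target   # leave the graphical session so X no longer holds the module
sudo rmmod nvidia_uvm
sudo rmmod nvidia_drm nvidia_modeset 2>/dev/null   # only present on newer driver stacks
sudo rmmod nvidia
lsmod | grep -i nvidia                     # should print nothing before re-running the installer
If the reference count stays non-zero, something like nvidia-persistenced or a running CUDA job may still be holding the module.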
Miquel Martí
(61 rep)
Dec 12, 2016, 07:01 PM
• Last activity: Jul 28, 2025, 01:23 PM
1
votes
1
answers
1922
views
Qubes OS - Update a Template Kernel
I'm trying to update the kernel in the Debian Template of Qubes OS, following the official documentation, but it seems I'm missing something or doing something wrong. I'm using gcc 6.3.0.
**Qubes Docs:**
Installing kernel in Debian VM In Debian based VM, you need to install qubes-kernel-vm-support package. This package include required additional kernel module and initramfs addition required to start Qubes VM (for details see template implementation). Additionally you need some GRUB tools to create it’s configuration. Note: you don’t need actual grub bootloader as it is provided by dom0. But having one also shouldn’t harm. sudo apt-get update sudo apt-get install qubes-kernel-vm-support grub2-common Then install whatever kernel you want. If you are using distribution kernel package (linux-image-amd64 package), initramfs and kernel module should be handled automatically. If not, or you are building kernel manually, do this on using dkms and initramfs-tools: sudo dkms autoinstall -k # replace this with actual kernel version sudo update-initramfs -u When kernel is installed, you need to create GRUB configuration. You may want to adjust some settings in /etc/default/grub, for example lower GRUB_TIMEOUT to speed up VM startup. Then you need to generate actual configuration: In Fedora it can be done using update-grub2 tool: sudo mkdir /boot/grub sudo update-grub2 Then shutdown the VM. From now you can set pvgrub2 as VM kernel and it will start kernel configured within VM.
**Debian Docs:**
Don't be afraid to try compiling the kernel. It's fun and profitable. To compile a kernel the Debian way, you need some packages: fakeroot, kernel-package, linux-source-version. Hereafter, we'll assume you have free rein over your machine and will extract your kernel source to somewhere in your home directory. Make sure you are in the directory to where you want to unpack the kernel sources, extract them using tar xf /usr/src/linux-source-version.tar.xz and change to the directory linux-source-version that will have been created. Now, you can configure your kernel. Run make xconfig if X11 is installed, configured and being run; run make menuconfig otherwise (you'll need libncurses5-dev installed). Take the time to read the online help and choose carefully. When in doubt, it is typically better to include the device driver (the software which manages hardware peripherals, such as Ethernet cards, SCSI controllers, and so on) you are unsure about. Be careful: other options, not related to a specific hardware, should be left at the default value if you do not understand them. Do not forget to select “Kernel module loader” in “Loadable module support” (it is not selected by default). If not included, your Debian installation will experience problems. Clean the source tree and reset the kernel-package parameters: make-kpkg clean Now, compile the kernel: fakeroot make-kpkg --initrd Once the compilation is complete, you can install your custom kernel like any package. As root, do dpkg -i ../linux-image-version-subarchitecture.deb. For instance, the System.map will be properly installed and /boot/config-3.16 will be installed, containing your current configuration set. Your new kernel package is also clever enough to automatically update your boot loader to use the new kernel. If you have created a modules package, you'll need to install that package as well.
**The Debian Way Output:**
...
...
...
This is kernel package version 13.014+nmu1.
install -p -d -o root -g root -m 755 /usr/src/linux-source-4.8/debian/linux-image-4.8.15-rt10-11.pvops.qubes.x86_64/DEBIAN
sed -e 's/=V/4.8.15-rt10-11.pvops.qubes.x86_64/g' -e 's/=IB//g' \
-e 's/=ST/linux/g' -e 's/=R//g' \
-e 's/=KPV/13.014+nmu1/g' \
-e 's/=K/vmlinuz/g' \
-e 's/=I/YES/g' -e 's,=D,/boot,g' \
-e 's@=A@amd64@g' \
-e 's@=B@x86_64@g' \
...
dpkg-gencontrol: error: illegal package name 'linux-image-4.8.15-rt10-11.pvops.qubes.x86_64': character '_' not allowed
debian/ruleset/targets/image.mk:230: recipe for target 'debian/stamp/binary/linux-image-4.8.15-rt10-11.pvops.qubes.x86_64' failed
make: *** [debian/stamp/binary/linux-image-4.8.15-rt10-11.pvops.qubes.x86_64] Error 255
**Manual compiling:**
I've downloaded linux-source-4.8 from Debian and extracted it into /usr/src. Then:
make defconf
make menuconf # custom settings
make
Same error as above:
dpkg-gencontrol: error: illegal package name 'linux-image-4.8.15-rt10-11.pvops.qubes.x86_64':
character '_' not allowed
I think this can be solved easily, but if I can compile the kernel manually, how should I proceed? Are make install and make modules_install required, or do I have to use dkms autoinstall directly? This isn't specified...
**UPDATE:** Installing the Debian package linux-image-amd64 directly made the console disappear and the VM work improperly; I tried to reboot it, but I could only use it by attaching to the serial console. I noticed that dpkg crashed during installation, so I ran dpkg --configure -a and it finished the installation, but it showed a warning that with that initramfs the machine would never boot. Indeed, I updated GRUB and rebooted, but initramfs couldn't mount root.
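A hedged guess at the illegal-package-name error (not taken from the Qubes or Debian docs): the underscore comes from the x86_64 suffix in the kernel's local version string, so overriding CONFIG_LOCALVERSION with an underscore-free value before packaging might let make-kpkg produce a legal name, e.g.:
# hypothetical workaround: drop the underscore from the local version string
scripts/config --file .config --set-str LOCALVERSION "-pvops.qubes.amd64"
make-kpkg clean
fakeroot make-kpkg --initrd --revision=1 kernel_image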
JumpAlways
(123 rep)
Jan 31, 2017, 05:43 AM
• Last activity: Jul 27, 2025, 02:07 PM
24
votes
3
answers
42891
views
What exactly happens when I enable net.ipv4.ip_forward=1?
Suppose I have this situation where I wrote a program to poison the ARP cache of 2 devices (let's say A and B), both on the local network, so that I can MITM them from device M. The program runs on device M. When I enable IP forwarding with the command
sysctl net.ipv4.ip_forward=1
on device M, HTTP connection from device A to B can be established without any issues, and I am able to see the traffic on device M.
But, the same situation where ARP caches are poisoned after I disable the IP forwarding with the command sysctl net.ipv4.ip_forward=0
on device M, HTTP connection can't be established from device A to B. I can see the TCP SYN packet from device A on device M. In my program, after receiving the SYN packet on device M, I replace the src MAC address in the packet (A's) with M's MAC address and the dst MAC address (M's) with B's MAC address, and inject it into the network. I don't modify anything from the network layer onwards. I can see the packet at B with the new src and dst MACs with tcpdump, which means the packet gets to B. But B doesn't respond to that packet, and I can't comprehend why.
So, the question is: what does ip_forward=1 do specially that makes this kind of MITM situation work? To clarify, all the machines are Linux. With forwarding enabled on device M, I don't need to modify the MAC addresses in the packets; I just poison the caches and things work fine from there.
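For reference, a small sketch of how forwarding is usually toggled and persisted, plus one related knob often mentioned for ARP-spoofing setups (an aside under assumptions, not the explanation asked for):
# enable forwarding on the running kernel
sudo sysctl -w net.ipv4.ip_forward=1
# keep it across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forward.conf
# optional: stop M from advertising the "better" direct route to A and B
sudo sysctl -w net.ipv4.conf.all.send_redirects=0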
InvisibleWolf
(341 rep)
Oct 17, 2021, 01:17 PM
• Last activity: Jul 26, 2025, 04:52 PM
1
votes
1
answers
1906
views
Linux Traffic Control (tc) not working without reboot
When trying to run
tc qdisc add dev $INTERFACE root handle 1: prio priomap 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
tc qdisc add dev $INTERFACE parent 1:1 handle 10: netem loss "${LOSS}"%
I get the error of
Specified qdisc not found
yum -y install kernel-modules-extra
works as a fix, but requires a reboot.
In my case, a reboot is not an option. Is there a way to get the qdisc working without a reboot?
RHEL 8.6
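One thing that may be worth trying before a reboot (an assumption, since it only works if the installed modules match the running kernel): netem is provided by the sch_netem module, which can be loaded on the fly.
sudo modprobe sch_netem        # the netem queueing discipline
lsmod | grep sch_netem         # confirm it is loaded, then re-run the tc commands
# if modprobe reports the module is not found, kernel-modules-extra installed modules
# for a different kernel than the one running, and a reboot is hard to avoid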
dsuma
(111 rep)
Mar 16, 2023, 11:09 PM
• Last activity: Jul 22, 2025, 04:02 AM
11
votes
3
answers
10597
views
Appending files to initramfs image - reliable?
I'm modifying a bunch of
initramfs
archives from different Linux distros in which normally only one file is being changed.
I would like to automate the process without switching to root user to extract all files inside the initramfs
image and packing them again.
First I've tried to generate a list of files for gen_init_cpio
*without* extracting all contents on the initramfs
archive, i.e. parsing the output of cpio -tvn initrd.img
(like ls -l
output) through a script which changes all permissions to octal and arranges the output to the format gen_init_cpio
wants, like:
dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
slink /bin/sh busybox 777 0 0
file /bin/busybox initramfs/busybox 755 0 0
This involves some replacements and the script may be hard for me to write, so I've found a better way, and I'm asking how safe and portable it is:
In some distros we have an initramfs
file with concatenated parts, and apparently the kernel parses the whole file, extracting all parts packed at a 1-byte boundary, so there is no need to pad each part to a multiple of 512 bytes. I thought this 'feature' could be useful for avoiding recreating the archive when modifying files inside it. Indeed it works, at least for Debian and CloneZilla.
For example if we have modified the /init
file on initrd.gz
of Debian 8.2.0, we can append it to initrd.gz
image with:
$ echo ./init | cpio -H newc -o | gzip >> initrd.gz
so initrd.gz
has two concatenated archives, the original and its modifications. Let's see the result of binwalk
:
DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 gzip compressed data, maximum compression, has original file name: "initrd", from Unix, last modified: Tue Sep 1 09:33:08 2015
6299939 0x602123 gzip compressed data, from Unix, last modified: Tue Nov 17 16:06:13 2015
It works perfectly. But is it reliable? What restrictions do we have when appending data to initramfs files? Is it safe to append without padding the original archive to a multiple of 512 bytes? From which kernel version is this feature supported?
Emilio Lazo
(253 rep)
Nov 17, 2015, 06:05 PM
• Last activity: Jul 21, 2025, 04:09 AM
7
votes
2
answers
9597
views
How to compile a third party driver into the kernel?
I am using Linux Mint 17.2 on a Toshiba C640. As my LAN driver is no longer functional, I am using a USB-to-LAN converter which was provided with some driver installation files. Every time I want to use the device I have to install the drivers manually by running the given commands, so I would like help making them load automatically after every reboot. The manufacturer has given some instructions for that purpose, but since I am not a pro techie I couldn't do it myself. I am providing the details of the files. Any help is appreciated. Thank you.
These are the files; their contents follow.
Readme.txt:
Note:
1. Please run as root
2. Supported linux kernel range from 2.6.x to 3.8.x
3. CH9x00 module depends on mii and usbnet modules
4. If you want complied this module in kernel, refer to followed
a. # cp ch9x00.c ~/2.6.25/driver/net/usb/
b. # cd ~/2.6.25/driver/net/usb/
c. modified Makefile and Kconfig for ch9x00.c
Install:
# make
# make load
Uninstall:
# make unload
Makefile:
# This makefile for CH9X00 network adaptor
# Makefile for linux 2.6.x - 3.8.x
ifneq ($(KERNELRELEASE), )
#call from kernel build system
obj-m := ch9x00.o
else
KERNELDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
modules:
$(MAKE) -C $(KERNELDIR) M=$(PWD)
load:
modprobe mii
modprobe usbnet
insmod ch9x00.ko
unload:
rmmod ch9x00
clean:
rm -rf *.o *~ core .depend .*.cmd *.mod.c .tmp_versions modules.* Module*
endif
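A hedged sketch of how the manual `make load` step is usually made permanent, assuming the built module is named ch9x00.ko as in the Makefile above: install it into the running kernel's module tree, refresh the dependency database, then list it for loading at boot.
sudo mkdir -p /lib/modules/$(uname -r)/extra
sudo cp ch9x00.ko /lib/modules/$(uname -r)/extra/
sudo depmod -a
# Mint/Ubuntu load everything listed in /etc/modules at boot
printf 'mii\nusbnet\nch9x00\n' | sudo tee -a /etc/modules
Note that an out-of-tree module built this way has to be rebuilt and reinstalled after every kernel update; DKMS is the usual way to automate that part.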
Maddyrdm
(71 rep)
Aug 11, 2015, 08:16 AM
• Last activity: Jul 19, 2025, 03:06 AM
2
votes
2
answers
2113
views
ubuntu Kernel panic while installing CentOS 7 in VirtualBox
I have a laptop with Xubuntu desktop 14.10, and I installed VirtualBox and updated it to the latest version.
I'm trying to install CentOS 7 (source downloaded from the official site) as a VM, but each time I begin the installation the laptop itself suffers from a kernel panic.
I wonder how a VM will kernel panic the laptop, and I want to know how to solve this issue.
- The iso I use is CentOS-7.0-1406-x86_64-Everything.iso
- The version of VirtualBox is 4.3.18.r96516
- Laptop model: Dell E6430
- VM Settings
- Crash photo
uname -a
Linux Laptop 3.16.0-25-generic #33-Ubuntu SMP Tue Nov 4 12:06:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux


Fat Mind
(121 rep)
Jan 16, 2015, 08:47 AM
• Last activity: Jul 17, 2025, 01:03 AM
0
votes
1
answers
5241
views
Server not using the latest kernel installed
I am running RHEL 7.3
I updated the
/boot
configuration based on one of the answers from here (Relocate /boot to the root partition).
After that, I upgraded the kernel from kernel-3.10.0-514.el7.x86_64
to kernel-3.10.0-693.el7.x86_64
.
It is installed as per below:
[root@qradar-hardened user]# rpm -qa | grep kernel
kernel-3.10.0-693.el7.x86_64
kernel-3.10.0-514.el7.x86_64
kernel-tools-libs-3.10.0-514.el7.x86_64
kernel-tools-3.10.0-514.el7.x86_64
[root@qradar-hardened user]#
However, after reboot, this change is not reflected.
[root@qradar-hardened user]# uname -r
3.10.0-514.el7.x86_64
[root@qradar-hardened user]#
I did the same without making changes to /boot
and it works. So I think it has something to do with changing that.
I followed this guide and changed the boot order to use this new kernel, but it still does not work.
I just need to figure out how to make the system use the latest kernel.
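A sketch of the usual checks on RHEL 7 (the paths are assumptions; they differ between BIOS and UEFI installs): confirm which kernel grubby considers the default, set the default entry, and regenerate grub.cfg where the firmware actually reads it.
grubby --default-kernel                         # what GRUB will boot next
sudo grub2-set-default 0                        # entry 0 is normally the newest kernel
sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # BIOS layout
# on UEFI installs the generated file is typically /boot/efi/EFI/redhat/grub.cfg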
screenslaver
(31 rep)
May 3, 2018, 06:46 AM
• Last activity: Jul 15, 2025, 04:45 AM
1
votes
1
answers
3551
views
How to automatically run mkinitramfs on Debian after apt update for kernel packages?
What I am trying to achieve is to have an encrypted root file system on a Raspberry Pi (running Raspian Buster) that gets unlocked at boot via ssh. I got quite far by adapting a tutorial for Kali linux and got it working at least once, but it does not survive kernel updates yet.
One of the problems is that this setup uses an initramfs that is referenced in
/boot/config.txt
by
initramfs initramfs.gz followkernel
and that needs to be updated after a kernel update by manually calling e.g.
mkinitramfs -o /boot/initramfs.gz 4.19.118-v7+
where 4.19.118-v7+
depends on the current kernel version and the kind of Raspberry Pi hardware that is used. Of course, I want to have this automatically done whenever apt upgrade
installs a new kernel.
This is where I got stuck with 2 problems:
- A) Where and how do I plug in that update process in a proper way?
- B) How do I determine the correct kernel version to use?
Regarding A) I came as far as learning that raspberrypi-kernel.postinst
executes /etc/kernel/postinst.d/
. This again calls /usr/sbin/update-initramfs
which in the end will call mkinitramfs
. Where I got confused was this code in /usr/sbin/update-initramfs
:
set_initramfs()
{
initramfs="${BOOTDIR}/initrd.img-${version}"
}
It determines the filename for the initramfs. No such file was ever generated during the update, and I'm not sure if I am on the right track, as Wikipedia says that the initrd scheme was superseded by the initramfs scheme. However, I was not able to find good documentation describing how things are supposed to happen after a kernel module upgrade. (Good links appreciated.)
So my question is:
Where is a good place to plug in a script that runs the mkinitramfs
command? Should I modify /etc/kernel/postinst.d/
? Will this solution be stable over the next few Debian versions?
Regarding B), it is easy to get available kernel versions with
> ls -l /lib/modules/ | awk -F" " '{print $9}'
5.4.51+
5.4.51-v7+
5.4.51-v7l+
5.4.51-v8+
But how do I automatically select the right one for the current hardware? For a Pi3B+ this would be 5.4.51-v7+
. Is there a way to determine this automatically?
Thank you very much for your help!
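Regarding A), a minimal sketch of a hook (the filename is hypothetical): Debian kernel maintainer scripts run everything in /etc/kernel/postinst.d/ and pass the new kernel's version string as the first argument, which also answers part of B) for the kernel that was just installed.
#!/bin/sh
# /etc/kernel/postinst.d/zz-custom-initramfs  (hypothetical name; must be executable)
version="$1"
[ -n "$version" ] || exit 0
mkinitramfs -o /boot/initramfs.gz "$version"
Choosing among the several flavours Raspberry Pi OS installs at once (v7, v7l, v8) still needs a hardware check, for example against /proc/device-tree/model.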
Alexander Lorz
(11 rep)
Aug 14, 2020, 08:29 PM
• Last activity: Jul 12, 2025, 05:03 PM
0
votes
2
answers
373
views
How to use vm.overcommit_memory=1 without getting system hung?
I am using vm.overcommit_memory=1 on my Linux system, which has been helpful for starting multiple applications that otherwise wouldn't even start with the default value of 0. However, sometimes my system just freezes, and it seems the OOM killer is unable to do anything to prevent this situation. I have some swap, which also got consumed. I've also noticed some instances when the system is unresponsive and even the magic SysRq keys don't work. Sorry, no logs are available at this time to include here.
In general, is there any configuration or tunable that can get the OOM killer to kill the highest memory-consuming process(es) immediately, without ever letting the system go unresponsive, when using vm.overcommit_memory=1?
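Not a complete answer, but two knobs that are often suggested alongside overcommit, stated here as assumptions: make the kernel kill the allocating task immediately instead of scanning for the "best" victim, and keep the SysRq OOM trigger available as a last resort.
sudo sysctl -w vm.oom_kill_allocating_task=1   # kill the task that triggered the OOM condition
sudo sysctl -w kernel.sysrq=1                  # allow Alt+SysRq+F to invoke the OOM killer by hand
Userspace daemons such as earlyoom or systemd-oomd act before the kernel OOM killer does and are another option worth evaluating.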
eagle007
(3 rep)
Nov 27, 2024, 10:10 PM
• Last activity: Jul 10, 2025, 10:16 AM
0
votes
1
answers
2184
views
How to verify if node reboot was triggered by the watchdog?
I have created software watchdog device using command:
$ sudo modprobe softdog soft_margin=60
In OS log it is visible that software watchdog was initialized:
Jul 12 09:49:00 patroni4 kernel: softdog: Software Watchdog Timer: 0.08 initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
To trigger node reboot by the watchdog I have executed echo a | sudo tee /dev/watchdog
.
The node rebooted, but in the logs there is no info that the reboot was triggered by the watchdog.
If I set the option soft_noboot=1, the OS log contains the message softdog: Triggered - Reboot ignored.
Based on the watchdog implementation, there should be a log message when a node reboot is triggered by the watchdog.
https://github.com/spacex/kernel-centos7/blob/master/drivers/watchdog/softdog.c
static void watchdog_fire(unsigned long data)
{
if (soft_noboot)
pr_crit("Triggered - Reboot ignored\n");
else if (soft_panic) {
pr_crit("Initiating panic\n");
panic("Software Watchdog Timer expired");
} else {
pr_crit("Initiating system reboot\n");
emergency_restart();
pr_crit("Reboot didn't ?????\n");
}
}
OS: CentOS Linux release 7.9.2009 (Core)
Linux lin1 3.10.0-1160.62.1.el7.x86_64
How can I verify if reboot was triggered by the watchdog?
Why is this information not logged? Can I enable logging somehow?
Thank you for the info
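A hedged note on where the evidence usually ends up: emergency_restart() reboots before the message can reach the disk, so the pr_crit line is normally only visible on a console, and inspecting the previous boot only works if logging survives the reset.
# previous boot's kernel messages (needs persistent journald storage, which is off by default on CentOS 7)
journalctl -k -b -1 | grep -i softdog
Otherwise a serial console or netconsole is the usual way to capture messages emitted immediately before an emergency restart.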
msutic
(1 rep)
Jul 12, 2022, 09:07 PM
• Last activity: Jul 9, 2025, 12:04 PM
0
votes
1
answers
96
views
Is there a specialized OS for container orchestration?
Containers are intended to solve the "it worked on my machine" problem. Thus, the blueprint of containers has two compatibility requirements: the OS and the architecture. We often see a container image like linux/amd64, windows/amd64, linux/aarch64, etc.
But, as container orchestration joins the picture, we all agree that the worker node or master node shouldn't use an OS other than Linux—like Windows—due to the overhead they introduce.
Moreover, why did we introduce Virtual Machine (VM) technology?
Can't we just directly use a single physical machine as a node? I mean, isn't it redundant and needless overhead if VMs run inside a single large physical machine where the hypervisor is installed? Specifically, a bare-metal hypervisor where the hypervisor itself is treated as a specialized OS.
I mean, when that physical machine goes down, all corresponding VMs would go down too, right?
So, instead of a hypervisor of the bare-metal type, can we just replace it directly with a container orchestrator?
Thus, the container orchestrator itself can be viewed as a master node or worker node (it's switchable). I mean, that physical machine can switch roles as a master or a worker.
Muhammad Ikhwan Perwira
(319 rep)
Jul 8, 2025, 11:53 AM
• Last activity: Jul 8, 2025, 01:51 PM
0
votes
1
answers
3623
views
How to fix "No rule to make target 'arch/arm64/boot/dts/kona-rumi.dtb', needed by '__build'. Stop"
Previously, I had the same error related to the file:
arch/arm64/boot/dts/qcom/apq8016-sbc.dtb
, but I solved it by deleting the line subdir-y += qcom
from the arch/arm64/boot/dts/Makefile
.
Now a new same error appears and I have no idea how to fix it.
Internet advice didn't help.
Error:
make: *** No rule to make target 'arch/arm64/boot/dts/kona-rumi.dtb', needed by '__build'. Stop.
make: *** Waiting for unfinished jobs....
make: *** [../scripts/Makefile.build:642: arch/arm64/boot/dts] Error 2
make[1] : *** [arch/arm64/Makefile:172: dtbs] Error 2
make[1] : *** Waiting for unfinished jobs....
Error with make V=1
:
make -f ../scripts/Makefile.build obj=arch/arm64/boot/dts/ti need-builtin=
(cat /dev/null; ) > arch/arm64/boot/dts/ti/modules.order
make -f ../scripts/Makefile.build obj=arch/arm64/boot/dts/vendor need-builtin=
make -f ../scripts/Makefile.build obj=arch/arm64/boot/dts/vendor/qcom need-builtin=
(cat /dev/null; ) > arch/arm64/boot/dts/vendor/qcom/modules.order
make: *** No rule to make target 'arch/arm64/boot/dts/vendor/qcom/kona-rumi.dtb', needed by '__build'. Stop.
make: *** Waiting for unfinished jobs....
make: *** [../scripts/Makefile.build:642: arch/arm64/boot/dts/vendor/qcom] Error 2
make: *** [../scripts/Makefile.build:642: arch/arm64/boot/dts/vendor] Error 2
make[1] : *** [arch/arm64/Makefile:172: dtbs] Error 2
make[1] : *** Waiting for unfinished jobs....
MyKernel:
kernel
My build code:
export ARCH=arm64
export SUBARCH=arm64
export HEADER_ARCH=arm64
export DTC_EXT=dtc
PATH="/home/hehe/Downloads/clang/bin:/home/hehe/Downloads/aarch64-linux-android-4.9/bin:/home/hehe/Downloads/arm-linux-androideabi-4.9/bin:${PATH}"
rm -rf out
make O=out clean && make mrproper
make O=out ARCH=arm64 kona_defconfig
make -j$(nproc --all) O=out ARCH=arm64 CC=clang CLANG_TRIPLE=aarch64-linux-gnu- CROSS_COMPILE=aarch64-linux-android- CROSS_COMPILE_ARM32=arm-linux-androideabi-
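A small sanity check worth running first (an assumption about the cause): this error usually means a Makefile lists a dtb whose .dts source is not in the tree, which is common with Qualcomm kernels that keep devicetree sources in a separate repository.
# is the devicetree source actually present?
find arch/arm64/boot/dts -name 'kona-rumi*'
# which Makefile asks for the missing dtb?
grep -rn --include=Makefile 'kona-rumi' arch/arm64/boot/dts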
JohnTit
(11 rep)
Feb 16, 2023, 05:22 PM
• Last activity: Jul 8, 2025, 12:08 AM
0
votes
0
answers
28
views
linux kernel - (virtual) bluetooth device for testing
I want to test and debug linux kernel internals within the bluetooth stack, i.e.
/net/bluetooth
. I have a (rather minimal) kernel, manually built, with debug symbols, and a busybox
at the moment, running in qemu
. Now I want to investigate specific bluetooth functions from the kernel. I thought, a virtual device would be easiest but it seems harder than expected.
I found there is btvirt
from bluez
for dealing with virtual bluetooth devices.
I have tried manually building bluez
statically. Doesn't work, btvirt
is still (at least partly) dynamically linked (and hence doesn't work in my vm):
# in bluez repo
autoreconf -vfi
./configure --enable-static --enable-debug --enable-test --enable-testing --enable-deprecated --enable-experimental --enable-logger CFLAGS=-static LDFLAGS=-static
make
ldd emulator/btvirt # output below
linux-vdso.so.1 (0x00007f7225f23000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7225cf5000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7225f25000)
Also, I tried clang
instead of gcc
, without success. musl-gcc
yielded some error about readline
when trying to ./configure
I have not followed further yet.
What options do I have/which route would be the easiest?
1. Should I abandon my minimal kernel and use a full debian/ubuntu instead? I need debugging symbols and might want to pin to specific versions, so I guess, I would have to manually build the debian/ubuntu kernel (i.e. in accordance with some minimum build flags these distros need/expect), right?
2. How much work is it/should I try to expand my custom small setup with libc, linker etc?
3. Am I on the right track at all? I assume(d) that kernel bluetooth developers might use virtual devices. (Am I correct on that one? If people have experience here, I'd be curious.) Or is this rather hopeless and should I try to pass-through a USB bluetooth device instead?
Thanks in advance, I'll be happy to provide further info if needed.
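On point 3: the kernel ships a virtual Bluetooth controller, hci_vhci, which exposes /dev/vhci, and btvirt attaches through it; enabling it in the test kernel may be enough (a sketch, assuming the kernel is configured from its source tree):
# enable the Bluetooth core and the virtual HCI driver in the kernel config
scripts/config -e BT -e BT_HCIVHCI
make olddefconfig
# inside the guest, the virtual controller is created by opening /dev/vhci
ls -l /dev/vhci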
nox
(161 rep)
Jul 7, 2025, 05:11 PM
• Last activity: Jul 7, 2025, 08:29 PM
-1
votes
1
answers
2182
views
How does an open(at) syscall result in a file being written to disk?
I'm trying to learn as much as I can about the interplay between syscalls, the VFS, device driver handling and, ultimately, having the end device do some operation. I thought I would look at a fairly trivial example - creating a file - and try to understand the underlying process in as much detail as possible.
I created a bit of C, to do little more than open a (non-existing) file for writing, compiled this (without optimization), and took a peek at it with strace when I ran it. In particular, I wanted to focus on the
openat
syscall, and why and how this call was ultimately able to not only create the file object / file description, but also actually do the writing to disk (for reference, EXT4 file system, SATA HDD).
Broadly speaking, excluding some of the checks and ancilliary bits and pieces, my understanding of the process is as follows (and please correct me if I'm way off!):
- ELF is mapped into memory
- libc is mapped into memory
- fopen
is called
- libc does its open
- openat
syscall is called, with the O_CREAT
flag among others
- Syscall number is put into RAX register
- Syscall args (e.g. file path, etc.) are put into RDI register (and RSI, RDX, etc. as appropriate)
- Syscall instruction issued, and CPU transitions to ring 0
- System_call code pointed to by MSR_LSTAR register invoked
- registers pushed to kernel stack
- Function pointer from RAX called at offset into sys_call_table
- asmlinkage
wrapper for the actual openat
syscall code is invoked
And at that point my understanding is lacking, but ultimately, I know that:
1. The open call returns a file descriptor, which is unique to the process, and maintained globally within the kernel's file descriptor table
2. The FD maps to a file description file object
3. The file object is populated with, among other structures, inode structure, inode_operations, file_operations, etc.
4. The file operations table should map generic syscalls to the respective device drivers to handle the respective calls (such that, for example, when a write
syscall is called, the respective driver write call is called instead for the device on which the file resides, e.g. a SCSI driver)
5. This mapping is based on the major/minor numbers for that file/device
6. Somewhere along the line, code is invoked which causes instructions to be sent to the device driver for the hard drive, which get sent to the disk controller, which causes a file to be written to the hard disk, though whether this is via interrupts or DMA, or some other method of I/O, I'm not sure
7. Ultimately, the disk controller sends a message back to the kernel to say it's done, and the kernel returns control back to user space.
I'm not too good at following the kernel source, though I've tried a little, but feel there's a lot I'm missing. My questions are as follows:
I've found some functions which return, and destroy FDs in the kernel source, but can't find where the code is which actually populates the file object / file description for the file.
A) On an open
or openat
syscall, when a new file is created, how is the file structure populated? Where does the data come from? Specifically, how are the file_operations and inode_operations, etc. for that file populated? how does the kernel know, when populating that structure, that the file operations for this particular file need to be that of the SCSI driver, for instance?
B) Where in the process - and particularly with reference to the source - does the switch to the device driver happen? For example, if an ioctl or similar was called, I would expect to see some reference to the instruction to be called for the respective device, and some memory address for the data to be passed on, but I can't find where that happens.
From looking at the kernel source, all I can really find is code that assigns a new FD, but nothing that populates the file structure, nor anything that calls the respective file operations to transfer control to a device driver.
Apologies that this is a really long-winded description, but I'm really trying to learn as much as possible, and although I have a basic grasp of C, I really struggle to understand others' code.
Hoping someone with greater knowledge than I can help clarify some of these things for me, as I seem to have hit a figurative brick wall. Any advice would be greatly appreciated.
**Edit:**
Hopefully the following points will clarify what technical detail I'm after.
- The open
or openat
syscalls take a file path, and flags (with the latter also being passed an FD pointing to a directory)
- When the O_CREAT
flag is also passed, the file is 'created' if it doesn't exist
- Based on the file path, the kernel is able to identify the device type this file should be
- The device type is identified from the major/minor numbers ordinarily - for a file that already exists, these are stored in the inode structure for the file (as member i_rdev
) and the stat structure for the file (as members st_dev
for the device type of the file system on which the file resides, and st_rdev
for the device type of the file itself)
So really, my questions are:
1. When a file is created with either of the open syscalls, the respective inode and stat structures must also be created and populated - how do the open syscalls do this (when all they have to go on at this point is the file path and flags)? Do they look at the inode or stat structure of the parent directory, and copy the relevant structure members from this?
2. At which point (i.e. where in the source) does this happen?
3. It's my understanding that when these open syscalls are invoked, it needs to know the device type, in order for the VFS to know what device driver code to invoke. On creating a new file, where the device type has yet to be set in the file object structures, what happens? What code is called?
4. Is the sequence more like:
user process tries to open new file -> open('/tmp/foo', O_CREAT)
open -> look up structure for '/tmp', get its device type -> get unused FD -> populate inode/stat structure, including setting device type to that of parent -> based on device type, map file operations / inode operations to device driver code -> call device driver code for open
syscall -> send appropriate instruction to disk controller to write new file to disk -> tidy up things, do checks, etc. -> return FD to user calling process?
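Not a direct answer to A) or B), but a practical way to watch the path from the syscall down into the filesystem and block layer on a live kernel (a sketch; the entry function name varies by kernel version, e.g. do_sys_open on older kernels, do_sys_openat2 on 5.6 and later):
# requires root and a mounted tracefs/debugfs
cd /sys/kernel/debug/tracing
echo do_sys_open > set_graph_function      # or do_sys_openat2 on newer kernels
echo function_graph > current_tracer
echo 1 > tracing_on
touch /tmp/foo                             # perform the traced operation
echo 0 > tracing_on
less trace                                 # shows the nested calls: VFS -> ext4 -> block layer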
genericuser99
(119 rep)
Aug 25, 2020, 10:19 PM
• Last activity: Jul 6, 2025, 03:04 PM
Showing page 1 of 20 total questions