Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1 vote · 1 answer · 164 views
isolcpus kernel option appears to break taskset
I have a laptop with an Intel Core i7-12700H CPU, running Ubuntu 24.04 LTS. This CPU has 6 “performance” cores, each one running 2 threads, and 8 “efficient” cores.
Most of the time, the “efficient” cores can provide much more power than I need, so I’d rather keep the “performance” cores idle, to save power, and only use the “efficient” ones. Occasionally, I have to run and benchmark some intensive (multi-threaded) computation code. Then I want to use the “performance” cores, without being disturbed by other processes. I may want either to run a single thread per core, to get the maximum per-thread performance, or to use both threads of each core.
At first, I considered using cgroup’s cpusets, with the cset tool. I wrote the following script to create the sets:
cset set -c 12-19 --cpu_exclusive -s efficient
cset set -c 0-11 --cpu_exclusive -s performance
cset set -c 0,2,4,6,8,10 -s performance/no-ht
cset proc -m -f root -t efficient
It works as expected, but I find it troublesome since I need root to escape the efficient cgroup, with ugly commands like
sudo cset proc --exec performance/no-ht -- sudo -u $(whoami) ~/bin/my_code
Moreover, because it switches to root, I can’t wrap that whole command in time or a profiler. I can still run time or the profiler within the performance cgroup, but then they share the same cores as the code under test…
Hence I considered taskset, with the following command:
/usr/bin/time taskset -c 0-11:2 ~/bin/my_code
It works as expected and does not need root, but other processes may use the “performance” cores.
One solution is to use both systems, with an awful command like:
sudo cset proc --exec root -- sudo -u $(whoami) taskset -c 12-19 /usr/bin/time taskset -c 0-11:2 ~/bin/my_code
It works fine but looks over-complicated and still needs root…
Then I read about the isolcpus kernel parameter and, despite being deprecated, it looked great to me. So I added isolcpus=0-11 to the GRUB_CMDLINE_LINUX_DEFAULT parameter in /etc/default/grub, ran sudo update-grub and rebooted.
At first sight it looked fine: only a few kernel threads were running on CPUs 0 to 11, and all userspace processes were running on CPUs 12 to 19. So I tried:
/usr/bin/time taskset -c 0-11:2 ~/bin/my_code
As expected, OpenMP correctly discovered it could use 6 cores and launched 6 threads, but I noticed that all of them were running on CPU 0… I checked the Cpus_allowed_list line in /proc/<pid>/status and it was correct (0,2,4,6,8,10). Running taskset to pin each thread to a single core works, but it’s not very convenient and does not allow sharing the load if there are more threads than cores…
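As a sketch of that per-thread pinning workaround, assuming the program is an OpenMP binary built against GCC's libgomp, the threads can be placed explicitly through environment variables instead of running taskset on each thread by hand (the place list below matches the 0-11:2 mask used above):
OMP_NUM_THREADS=6 OMP_PROC_BIND=true OMP_PLACES='{0},{2},{4},{6},{8},{10}' /usr/bin/time ~/bin/my_code
# libgomp-specific alternative:
GOMP_CPU_AFFINITY="0 2 4 6 8 10" OMP_NUM_THREADS=6 /usr/bin/time ~/bin/my_code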
Any idea why the threads seem to be stuck on the first allowed CPU when using the isolcpus kernel option?
Is there a better way to keep everything running on CPUs 12 to 19 by default, as if everything were run through taskset -c 12-19, without using cgroup jails that need root to escape?
user2233709
(1709 rep)
Apr 9, 2025, 08:46 AM
• Last activity: Apr 24, 2025, 07:47 AM
1 vote · 1 answer · 1255 views
How to disable CPU hotplug feature (and kernel thread) in Linux-5.10.24
I am working on an embedded Linux system, which is using kernel-5.10.24.
As system resources are limited, I want to minimize CPU/memory/storage usage.
From ps -ax I found 2 kernel threads as follows:
14 root 0:00 [cpuhp/0]
15 root 0:00 [cpuhp/1]
I think they are used for CPU hotplugging, and there is NO CPU hotplugging use case in this system, so I want to disable the feature and not create these 2 kernel threads.
I tried to disable this configuration by force (removing select SYS_SUPPORTS_HOTPLUG_CPU from the arch/ARM/Kconfig and others). But after deploying the new kernel, these 2 kernel threads are still there.
By checking the code, it seems that these 2 threads are created regardless of CONFIG_HOTPLUG_CPU and CONFIG_SYS_SUPPORTS_HOTPLUG_CPU, which means that when SMP is configured, these 2 threads are ALWAYS there!
So I am not sure if there is a way to disable the creation of these 2 kernel threads. If not, I will have to live with them, assuming they do NOT take too much CPU and memory while running.
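A quick way to double-check that reading of the source, assuming a mainline 5.10 tree (paths may differ in a vendor kernel): the cpuhp/N threads are registered through the smpboot per-CPU thread helper in kernel/cpu.c, and that registration is tied to SMP rather than to CONFIG_HOTPLUG_CPU, which matches the observation above.
# run from the top of the kernel source tree
grep -n "cpuhp_threads" kernel/cpu.c
grep -n "smpboot_register_percpu_thread" kernel/cpu.c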
## Updated with kernel menuconfig based on dhanushka's comment
Symbol: HOTPLUG_CPU [=y]
Type : bool
Defined at arch/mips/Kconfig:2942
Prompt: Support for hot-pluggable CPUs
Depends on: SMP [=y] && SYS_SUPPORTS_HOTPLUG_CPU [=y]
Location:
-> Kernel type
(2) -> Multi-Processing support (SMP [=y])
Selected by [y]:
- PM_SLEEP_SMP [=y] && SMP [=y] && (ARCH_SUSPEND_POSSIBLE [=y] || ARCH_HIBERNATION_POSSIBLE [=y]) && PM_SLEEP [=y]
The same as dhanushka's comment.
I will try to disable it and update this question.
And as I said, the cpuhp/0 and cpuhp/1 threads do not seem to be something that can be disabled.
wangt13
(631 rep)
Apr 25, 2023, 01:19 AM
• Last activity: Nov 3, 2024, 07:22 PM
2 votes · 0 answers · 201 views
Linux reboots with no panic when booting SMP configuration from kexec
I'm working on a project involving kexec. I have it working on some of our hardware platforms. On one platform, I'm getting sudden reboots with no panic dump during SMP setup:
[ 25.219028] smpboot: CPU0: AMD EPYC 7402 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
[ 25.228083] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[ 25.237997] ... version: 0
[ 25.247996] ... bit width: 48
[ 25.257996] ... generic registers: 6
[ 25.267996] ... value mask: 0000ffffffffffff
[ 25.277996] ... max period: 00007fffffffffff
[ 25.287996] ... fixed-purpose events: 0
[ 25.297996] ... event mask: 000000000000003f
[ 25.308059] rcu: Hierarchical SRCU implementation.
[ 25.318046] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[ 25.328283] smp: Bringing up secondary CPUs ...
[ 25.335543] x86: Booting SMP configuration:
��
This platform boots fine under normal (i.e., non-kexec) circumstances. The primary and kexec kernels are built from the same codebase but linked differently - this is unlikely to be related to the issue because I've already tested this on an Intel platform.
Kexec command line:
[ 0.000000] Command line: elfcorehdr=0x86000000 ro panic=5 console=ttyS0,9600 loglevel=8 numifbs=0 nf_conntrack.acct=1 nmi_watchdog=1 profile=0 root=/dev/ram0 initrd=/crashfs.gz libata.force=disable
BIOS version - may be relevant, since the normal boot goes through the BIOS and the kexec boot does not:
Version 2.20.1275. Copyright (C) 2022 American Megatrends, Inc.
BIOS V1.05(08/26/2022)
I've tracked it down to wakeup_secondary_cpu_via_init in arch/x86/kernel/smpboot.c. The last output I get is just before the first apic_icr_write.
I don't know where to even begin debugging this. Could it possibly be the NMI watchdog forcing a reboot because the only available core is hanging for some reason? Seems unlikely since that hung core wouldn't be able to perform NMI checks.
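One low-effort way to test that hypothesis is to add nmi_watchdog=0 to the kexec kernel's command line and see whether the reset still happens. A sketch only: the kernel/initrd paths below are placeholders, and the real setup may load the crash kernel with -p instead of -l.
kexec -l /path/to/crash-vmlinuz --initrd=/path/to/crashfs.gz \
      --append="ro panic=5 console=ttyS0,9600 loglevel=8 root=/dev/ram0 nmi_watchdog=0"
kexec -e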
Sarvadi
(121 rep)
Sep 27, 2023, 05:00 PM
10 votes · 1 answer · 5715 views
"Remote function call interrupts" (CAL in /proc interrupts). What is it?
I'm running a test program which generates a large number of threads and asynchronous I/O. I'm seeing very high counts of these interrupts in /proc/interrupts, and the program cannot scale beyond a certain point because the CPUs are 100% saturated with softirq processing.
According to http://www.kernel.org/doc/man-pages/online/pages/man5/proc.5.html CAL stands for 'remote function call interrupt', but that's all the info I can find on Google. So... what does that mean? I have smp_affinity assigned for my I/O adapter cards, and those are not the CPUs taking a huge number of CAL interrupts. Is it a sign that my program is running with incorrect SMP affinity?
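For reference, a minimal way to watch where the function-call IPIs land while the test program runs (nothing here is specific to the setup above; the CPU header row is included so the columns line up with CPU numbers):
watch -n1 'grep -E "CPU|CAL" /proc/interrupts'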
twblamer
(939 rep)
Apr 30, 2012, 08:18 PM
• Last activity: May 30, 2023, 09:02 PM
0 votes · 2 answers · 416 views
Get CPU core id executing a process which suddenly exits
In Linux, on a multi-core processor, ps, top and similar tools can show the CPU logical core id running a specific process. If the process runs for a certain amount of time, it's easy to identify it in the process list.
I have instead a stand-alone program which prints "hello world" and the logical core number detected from CPU assembly (the RDPID instruction):
$ ./hello_world
hello world
1
$
It ends immediately. I would like to compare this number with the one provided by ps, top or similar. So, how can I obtain the same information (the CPU logical core id) in this case? How can I get the process information while the process is still executing?
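A sketch of one way to catch it, assuming only procps is available: start the program in the background and immediately ask ps for its PSR column (the CPU the task is currently assigned to). With a program this short the sample can race with process exit, so it may take several runs, or a short sleep added to the program.
./hello_world & pid=$!
ps -o pid,psr,comm -p "$pid"   # PSR = processor the task is assigned to
wait "$pid"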
BowPark
(5155 rep)
May 9, 2023, 07:39 AM
• Last activity: May 9, 2023, 09:03 AM
1 vote · 0 answers · 805 views
Why am I not able to bind the interrupts with codes LOC, IWI, RES when irqbalance is disabled?
On Ubuntu 14.04, I am trying to bind all the interrupts to cores 0 and 1 out of 4 cores. I have disabled the irqbalance daemon via the file /etc/init/irqbalance.override. Then I went to every interrupt in /proc/irq and changed the files /proc/irq/<IRQ>/smp_affinity_list. But what I see is that the interrupts LOC (local timer interrupts), IWI (IRQ work interrupts) and RES (rescheduling interrupts) are still being processed on every core, while all other interrupts are bound correctly to the expected core. Why am I not able to bind these LOC, IWI and RES interrupts? Or how can I bind them permanently to a particular core when irqbalance is disabled? I have even modified the file /proc/irq/default_smp_affinity to point to cores 0 and 1, but it has no effect.
One more observation: I am not able to set the CPU list for interrupts 0 and 2. Interrupt 0 seems to occur only on CPU 0, and interrupt 2 does not appear in /proc/interrupts and seems to have fired 0 times.
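For context, a condensed sketch of the kind of per-IRQ loop described above (my own wording of it, not the exact commands used). Note that the LOC, IWI and RES rows are per-CPU interrupts that do not appear under /proc/irq at all, so there is no smp_affinity file to write for them.
# run as root: steer every numbered IRQ to cores 0-1
for d in /proc/irq/[0-9]*; do
    echo 0-1 > "$d/smp_affinity_list" 2>/dev/null
done
echo 3 > /proc/irq/default_smp_affinity   # hex bitmask 0x3 = CPUs 0 and 1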
rahul.deshmukhpatil
(495 rep)
Jan 17, 2016, 12:41 PM
• Last activity: Mar 22, 2019, 12:34 PM
1 vote · 0 answers · 60 views
Does the lack of a kernel feature (XENFEAT_hvm_pirqs) cause the RedHat EC2 interrupt issue?
I have RedHat 6.5 on AWS EC2 running kernel 2.6.32.431. I have installed the ixgbevf driver with the minimum version the doc recommends. After configuration, the system now has 2 queues (IRQs):
grep eth0-TxRx /proc/interrupts
48: 7986 0 0 0 0 0 0 0 PCI-MSI-edge eth0-TxRx-0
49: 7026 0 0 0 0 0 0 0 PCI-MSI-edge eth0-TxRx-1
However, even after /proc/irq/48/smp_affinity or /proc/irq/49/smp_affinity is changed to 4, there seems to be no change at all. The per-CPU distribution in grep eth0-TxRx /proc/interrupts remains the same:
grep eth0-TxRx /proc/interrupts
48: 8025 0 0 0 0 0 0 0 PCI-MSI-edge eth0-TxRx-0
49: 7096 0 0 0 0 0 0 0 PCI-MSI-edge eth0-TxRx-1
The queues are still fixed to CPU core 0.
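For reference, a minimal sketch of the change being attempted and a way to watch whether the counters ever move off CPU 0 (IRQ numbers as in the output above; the mask value is just an example):
echo 4 > /proc/irq/48/smp_affinity        # hex mask 0x4 = CPU 2
echo 4 > /proc/irq/49/smp_affinity
watch -n1 'grep -E "CPU|eth0-TxRx" /proc/interrupts'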
I have been seeking a solution for a while; some answers suggested that RedHat 6.5 running kernel 2.6.32.431 lacks the XENFEAT_hvm_pirqs kernel feature. But it can somehow be seen working on RedHat 6.9:
grep Tx /proc/interrupts
48: 16 0 0 0 2810 0 0 0 PCI-MSI-edge eth0-TxRx-0
49: 22 2326 0 0 0 0 0 0 PCI-MSI-edge eth0-TxRx-1
If XENFEAT_hvm_pirqs were in use, the output should show xen-pirq-msi, but here RedHat 6.9 and RedHat 6.5 both display PCI-MSI-edge. I suppose neither of them is using the XENFEAT_hvm_pirqs flag here, are they?
Could anyone help figure out what exactly this kernel flag is? What is the purpose of XENFEAT_hvm_pirqs? Does this flag have anything to do with this problem? Is there any backport I could use to get this resolved?
By the way, the RedHat 6.5 image was imported from VMware, where it worked pretty well and the smp_affinity parameters worked as expected. Thanks very much in advance for any answers.
Jepsenwan
(45 rep)
Jun 7, 2017, 06:28 AM
• Last activity: Jun 7, 2017, 07:19 AM
1 vote · 1 answer · 524 views
Can the 0th physical core be used asymmetrically on Linux?
On an SMP machine with a fair scheduling algorithm, I'd expect all physical cores to be used evenly by Linux. In theory I believe this is the case, but in practice I suspect not.
Does anyone have a good explanation of why an average Linux setup might favour core 0 for certain processes? Is that realistically possible? You may assume that the processor affinity for all user-space processes is bitmasked to 0xFFFFFFFF, and that no custom changes have been made to the kernel.
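A rough way to check the premise empirically, using only /proc/stat (busy time below is user+nice+system+irq+softirq jiffies; which columns to count is a judgement call):
grep '^cpu[0-9]' /proc/stat | awk '{print $1, $2+$3+$4+$7+$8}'
If core 0 consistently shows a much larger busy count than its siblings on an otherwise idle system, something really is steering work toward it.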
Andrew Parker
(133 rep)
Aug 5, 2016, 09:48 AM
• Last activity: Aug 14, 2016, 09:17 PM
0 votes · 1 answer · 852 views
linux / SMP -- suspend immediately after wakeup from suspend
(Please note -- I read this post and it is not a duplicate.)
So for years, my linux laptop has allowed me to both suspend to disk and (with a little effort) suspend to RAM with one of these two commands
echo -n mem > /sys/power/state
echo "disk" > /sys/power/state
and wake up successfully every time.
[Edit -- I'm using ACPI to intercept the power button and run a short script that turns off wifi, issues the above command, and (after wake-up) turns wifi back on.]
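For concreteness, the handler script described above is roughly of this shape (a sketch only; the rfkill calls and the trigger wiring are placeholders, not the actual script):
#!/bin/sh
# run by acpid when the power button is pressed
rfkill block wifi                  # wifi off before suspending
echo -n mem > /sys/power/state     # suspend; execution continues here on wake-up
rfkill unblock wifi                # wifi back on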
One day recently, I discovered that my laptop is dual-core, and I was not using an SMP kernel. So I enabled SMP. As far as I can tell, that is the only change I made.
Now, my laptop can suspend to disk successfully, but as soon as it wakes up, it immediately goes into a second suspend-to-disk process. After the second suspend, the laptop wakes up and resumes normally. Almost as if the suspend command is applied to each CPU consecutively.
Suspending to RAM appears to work, but the laptop is completely unable to wake up afterwards (the CAPS LOCK light flashes), so I'm not certain whether it is working or not.
Is there something special I need to do to suspend/resume a dual-core Linux laptop?
hymie
(1828 rep)
Jan 10, 2016, 01:57 PM
• Last activity: Jan 12, 2016, 02:14 AM
6 votes · 3 answers · 9414 views
Making an IRQ SMP affinity change permanent
I have to change the smp_affinity of an interrupt permanently. The following code needs to be executed when the server reboots:
echo "1" > /proc/irq/152/smp_affinity_list
echo "2" > /proc/irq/151/smp_affinity_list
echo "3" > /proc/irq/150/smp_affinity_list
echo "4" > /proc/irq/149/smp_affinity_list
echo "5" > /proc/irq/148/smp_affinity_list
echo "6" > /proc/irq/147/smp_affinity_list
echo "7" > /proc/irq/146/smp_affinity_list
echo "8" > /proc/irq/145/smp_affinity_list
echo "9" > /proc/irq/144/smp_affinity_list
echo "10" > /proc/irq/143/smp_affinity_list
echo "11" > /proc/irq/142/smp_affinity_list
echo "12" > /proc/irq/141/smp_affinity_list
echo "13" > /proc/irq/140/smp_affinity_list
echo "14" > /proc/irq/139/smp_affinity_list
echo "15" > /proc/irq/138/smp_affinity_list
echo "16" > /proc/irq/137/smp_affinity_list
I've added these lines to the /etc/rc.local file, but the changes are not applied to the system. I've also added echo "test" > /root/test, which gets executed properly, so the rc.local file does get executed. The system is running Debian 6.0.
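As an aside, the sixteen echo lines above can be collapsed into a loop; a sketch, with a writability check added because IRQ numbers are not guaranteed to stay the same across reboots:
cpu=1
for irq in $(seq 152 -1 137); do
    if [ -w /proc/irq/$irq/smp_affinity_list ]; then
        echo $cpu > /proc/irq/$irq/smp_affinity_list
    fi
    cpu=$((cpu + 1))
done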
Philip
(61 rep)
Mar 22, 2013, 04:15 PM
• Last activity: Dec 22, 2015, 04:47 PM
3 votes · 0 answers · 1239 views
How to decide how much memory to allocate per core for MPI app GENE?
This is required information for an MPI-using app that I am working with...
From its Makefile *template*:
#insert memory per core and uncomment the following line
#PREPROC= -D'MB_PER_CORE=750'
Note that said scientific app also runs on NUMA machines, like Cray, where each core has its own memory. I am asking here what to put in the above line on ***Linux***, on e.g. a 12 GB machine with 16 cores.
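For that concrete example, the naive starting point is simply total RAM divided by core count, minus some headroom for the OS and other processes (how much to subtract is a judgement call):
echo $(( 12 * 1024 / 16 ))   # = 768 MB per core before subtracting OS headroom
which suggests an MB_PER_CORE value somewhat below 768.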
Jakub Narębski
(1288 rep)
May 27, 2011, 08:59 PM
• Last activity: Dec 17, 2015, 08:52 PM
1 vote · 2 answers · 4776 views
Kernel not detecting multicore cpu
Kernel Version
3.3.4-5.fc17.x86_64
CPU info:
sashan@dhcp-au-122 ~ $ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
stepping : 7
microcode : 0x28
cpu MHz : 2793.577
cache size : 4096 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips : 5587.15
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
Notice that it says 1 core, even though this is an i7-2640M, which has 2 cores (http://ark.intel.com/products/53464/Intel-Core-i7-2640M-Processor-4M-Cache-up-to-3_50-GHz ).
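Also, the up flag in the flags line above generally indicates the kernel is currently running in uniprocessor mode. A couple of quick, generic checks on whether the running kernel is SMP-capable and what it decided at boot:
uname -v                         # an SMP kernel normally reports "SMP" in its version banner
dmesg | grep -i -E 'smp|cpus'    # boot-time CPU topology messages
nproc                            # how many CPUs are usable right now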
sashang
(736 rep)
Nov 26, 2012, 05:03 AM
• Last activity: Dec 26, 2012, 11:32 PM
7 votes · 2 answers · 5053 views
VirtualBox guest: 16 CPUs detected but only 1 online
I am running VirtualBox (using the Qiime image http://qiime.org/install/virtual_box.html)
The physical hardware is a 32 core machine. The virtual machine in VirtualBox has been given 16 cores.
When booting I get:
Ubuntu 10.04.1 LTS
Linux 2.6.38-15-server
# grep . /sys/devices/system/cpu/*
/sys/devices/system/cpu/kernel_max:255
/sys/devices/system/cpu/offline:1-15
/sys/devices/system/cpu/online:0
/sys/devices/system/cpu/possible:0-15
/sys/devices/system/cpu/present:0
/sys/devices/system/cpu/sched_mc_power_savings:0
# ls /sys/kernel/debug/tracing/per_cpu/
cpu0 cpu1 cpu10 cpu11 cpu12 cpu13 cpu14 cpu15 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7 cpu8 cpu9
# ls /sys/devices/system/cpu/
cpu0 cpufreq cpuidle kernel_max offline online possible present probe release sched_mc_power_savings
# echo 1 > /sys/devices/system/cpu/cpu6/online
-su: /sys/devices/system/cpu/cpu6/online: No such file or directory
So it seems it detects the resources for 16 CPUs, but it only sets one online.
I have tested with another image that the VirtualBox host can run a guest with 16 cores; that works. So the problem is to troubleshoot the Qiime image and figure out why this guest image only detects 1 CPU.
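Some generic things to look at inside the guest when only CPU 0 comes up (a sketch; none of this is specific to the Qiime image): whether the kernel command line caps the CPU count, and what the kernel logged about the other CPUs at boot.
grep -E 'maxcpus|nosmp|nr_cpus' /proc/cmdline
dmesg | grep -i -E 'smp|processor|cpu[0-9]+'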
Ole Tange
(37348 rep)
May 11, 2012, 02:17 PM
• Last activity: Jun 18, 2012, 12:35 AM
7 votes · 2 answers · 5107 views
OpenBSD SMP support
The [OpenBSD 4.9 release announcement](http://distrowatch.com/?newsid=06655) says
> "SMP kernels can now boot on machines with up to 64 cores;"
So OpenBSD does support several CPUs/cores? If I have a Core2Duo CPU in my laptop (T7100), would it bring greater performance if I use an "SMP" kernel? If so, how can I install/use an SMP kernel under OpenBSD 4.9?
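For what it's worth, a sketch of the usual OpenBSD convention (worth verifying against the 4.9 documentation): the release ships both a uniprocessor kernel (bsd) and a multiprocessor one (bsd.mp), and the installer is supposed to pick bsd.mp on multiprocessor hardware. Booting the MP kernel by hand looks like:
boot> boot bsd.mp          # one-off, at the boot prompt
# or make it the default from a running system:
cp /bsd /bsd.sp && cp /bsd.mp /bsd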
LanceBaynes
(41465 rep)
May 2, 2011, 04:37 AM
• Last activity: May 24, 2012, 08:13 PM