
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
76 views
cpulimit: detect failure / exit status in Linux
I'm using `cpulimit` in a Bash script to run a certain command (ffmpeg) with limited CPU usage, but I want to know if the command fails. When the command (ffmpeg) fails with any error, cpulimit still exits with status 0. What should I do?

My cpulimit command: `cpulimit -l 300 -f -- ffmpeg ...`

cpulimit version 3.0. OS: Ubuntu with Linux 6.8.0.

Note: cpulimit does not work on forks of the given command unless I pass the `--monitor-forks` flag, which the manual says is a bad idea, especially in scripts:

> -m, --monitor-forks
> watch and throttle child processes of the target process. Warning: It is usually a bad idea to use this flag, especially on a shell script. The commands in the script will each spawn a process which will, in turn, spawn more copies of this program to throttle them, bogging down the system. Also, it is possible for a child process to die and for its PID to be assigned to another program. When this happens quickly it can cause cpulimit to target the new, unintended process before the old information has had a chance to be flushed out. Only use the monitor-forks option in specific cases, ideally on machines without a lot of new processes being spawned.
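
A minimal sketch of one possible workaround, assuming it is acceptable to start ffmpeg directly and attach cpulimit to its already-running PID with `-p`; the ffmpeg arguments are placeholders:

    # Start ffmpeg in the background so its exit status stays observable, then
    # attach cpulimit to the running PID (input/output names are placeholders).
    ffmpeg -i input.mp4 output.mkv &
    ffmpeg_pid=$!
    cpulimit -l 300 -p "$ffmpeg_pid" &
    limiter_pid=$!
    wait "$ffmpeg_pid"                  # returns ffmpeg's own exit status
    status=$?
    kill "$limiter_pid" 2>/dev/null     # the limiter is no longer needed
    echo "ffmpeg exited with status $status"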
saeedgnu (153 rep)
Feb 9, 2025, 07:04 AM • Last activity: Feb 9, 2025, 08:48 AM
0 votes
0 answers
29 views
Linux kernel cgroup v2 CFS - cpu throttled_usec accounting?
In the Linux kernel cgroup v2 CFS scheduler, how is `throttled_usec` in `cpu.stat` accounted when a cgroup with multiple threads gets throttled during a single quota period? Specifically, is `throttled_usec` tracked as the total wall-clock time that the cgroup was throttled as a whole, or is it a sum of the throttled times of all individual threads?

Kernel version: 5.14.0-284.11.1.el9_2.x86_64 #1 SMP PREEMPT_DYNAMIC Tue May 9 11:41:53 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux
Distro: Oracle Linux 9.x
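This does not settle the accounting semantics, but the counter can at least be observed directly while a known workload runs; a sketch, with an illustrative cgroup path:

    # Sample cpu.stat twice and report how much throttled_usec grew while the
    # workload ran (the cgroup name "demo" is a placeholder).
    CG=/sys/fs/cgroup/demo
    t0=$(awk '$1 == "throttled_usec" {print $2}' "$CG/cpu.stat")
    sleep 10
    t1=$(awk '$1 == "throttled_usec" {print $2}' "$CG/cpu.stat")
    echo "throttled_usec grew by $((t1 - t0)) over a 10 s window"

Comparing that delta with the wall-clock window (and with the number of runnable threads) can hint at which of the two interpretations matches the running kernel.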
ALZ (961 rep)
Jan 11, 2025, 12:35 PM
0 votes
1 answer
578 views
How to *actually* limit CPU usage for Hashcat
I'd rather not fry the living crap out of my CPU. The hashcat program uses a lot of CPU and GPU. I know about the simple solutions people recommend online, such as using tools like `cpulimit` to limit CPU usage, but all that does is cut Hashcat off. Limiting the threads or cores doesn't help either; if anything, it makes things worse. My CPU constantly sits at about 94-98%, at over 90 degrees, while Hashcat is running. The `-w` option only cools it by a few degrees at most! So *is* there any solution to this, or do I have to either kiss my CPU goodbye or run Hashcat for short periods at a time and pause to cool down?
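One alternative to slicing Hashcat's timeline with cpulimit is a hard cgroup quota on the whole process tree, sketched below via a transient systemd scope; the 50% value, hash mode, attack mode and file names are placeholders, and note this constrains only CPU load, not the GPU:

    # Run hashcat inside a transient scope with a CPU quota; 50% means half of
    # one core in total for hashcat and anything it spawns.
    systemd-run --scope -p CPUQuota=50% \
        hashcat -m 0 -a 0 hashes.txt wordlist.txt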
security_paranoid (103 rep)
Apr 27, 2024, 08:53 AM • Last activity: Apr 27, 2024, 12:04 PM
0 votes
1 answer
273 views
Limit CPU usage of background process if system is busy
At the moment I do the following: when I see that the system load of my Ubuntu server is high, I get the PIDs of specific currently running background processes and start cpulimit with those PIDs. When the system load gets lower, I kill cpulimit so that the background processes can run faster. When the load is still too high, I kill cpulimit and start it again with a lower limit for the process. I repeat this until the background processes are done after a few hours. It gets very annoying to do this manually every day, so I would like to know if there is a way to automate this.
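A rough sketch of automating that routine for a single process, assuming it can be found with `pgrep`; the process name, load thresholds, limit and polling interval are placeholders:

    #!/bin/bash
    # Attach or detach cpulimit depending on the 1-minute load average.
    target_pid=$(pgrep -f my_background_job | head -n1)   # placeholder name
    limiter_pid=""
    while kill -0 "$target_pid" 2>/dev/null; do
        load=$(awk '{printf "%d", $1}' /proc/loadavg)     # integer 1-min load
        if [ "$load" -ge 4 ] && [ -z "$limiter_pid" ]; then
            cpulimit -l 30 -p "$target_pid" &             # throttle while busy
            limiter_pid=$!
        elif [ "$load" -lt 2 ] && [ -n "$limiter_pid" ]; then
            kill "$limiter_pid" && limiter_pid=""         # run freely again
        fi
        sleep 30
    done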
Jomo (1 rep)
Jul 23, 2023, 11:12 PM • Last activity: Jul 24, 2023, 05:47 AM
0 votes
0 answers
310 views
Limit the total CPU usage for the entire system?
I'm running a Chrome instance on a NAT VPS. Being a NAT VPS, it has some restrictions, such as not allowing a 15-minute average load greater than 1. However, just starting Chrome and going to a few webpages pushes the load above 2, killing the instance. I've read about limiting CPU usage online, but most approaches are limited to a single process, whereas Chrome spawns multiple processes. I've tried using cpulimit, but it doesn't help since it can only be used on a single PID. Is there a way I can limit the CPU usage of the entire system as a whole? Or at least of any process started by the chrome executable? Thanks. PS: The Chrome instance is being started and controlled using Selenium on Python.
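A sketch of one way to cap the whole browser tree at once rather than a single PID, assuming systemd is available on the VPS; the script name and the 80% quota are placeholders:

    # Starting the Selenium script inside one transient scope means chromedriver
    # and every Chrome child it spawns share the same CPU quota.
    systemd-run --user --scope -p CPUQuota=80% python3 my_selenium_script.py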
DentFuse (35 rep)
Apr 11, 2023, 06:58 AM
0 votes
1 answer
476 views
clamscan and cpulimit together run multiple clamscan processes on Ubuntu 18 and 20
I have installed clamav and cpulimit. I want to clamscan all directories in /home that are not owned by root, one by one, with a CPU limit of 70%. I use the command below to do that on CentOS and AlmaLinux:

    find /home/ -mindepth 1 -maxdepth 1 -type d ! -user root -exec cpulimit -l 70 -- /usr/bin/clamscan -i -r {} \; > /root/scan_results.txt

The above command works fine on CentOS. But on Ubuntu 18 and 20 it creates multiple clamscan processes, one for each directory in /home, and all the processes consume 70% CPU each, thereby overloading my server. I checked that using the `top` command; `ps aux | grep clamscan` also shows multiple clamscan processes running simultaneously.

    find /home/ -mindepth 1 -maxdepth 1 -type d ! -user root -exec /usr/bin/clamscan -i -r {} \; > /root/scan_results.txt

When I remove cpulimit from the command, as shown above, it scans one directory at a time, but the clamscan process consumes 100% CPU, which I don't want. I tried some other commands which didn't work either:

1) `find /home/ -mindepth 1 -maxdepth 1 -type d ! -user root | xargs -I {} cpulimit -l 70 -- /usr/bin/clamscan -i -r {} > /root/scan_results.txt`

2) `find /home/ -mindepth 1 -maxdepth 1 -type d ! -user root | xargs -P 1 -I {} cpulimit -l 70 -- /usr/bin/clamscan -i -r {} > /root/scan_results.txt`

I want a command which scans all the /home directories not owned by root one at a time, with a cpulimit of 50%, and not simultaneously.
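A sketch of one way to force strictly serial scanning by driving the loop from the shell instead of from find or xargs, assuming the installed cpulimit supports the foreground `-f` flag seen in other questions on this page (the 50% value follows the last paragraph):

    # One clamscan (and one cpulimit) at a time; the loop only advances when
    # the previous scan has finished.
    find /home/ -mindepth 1 -maxdepth 1 -type d ! -user root -print0 |
    while IFS= read -r -d '' dir; do
        cpulimit -l 50 -f -- /usr/bin/clamscan -i -r "$dir"
    done > /root/scan_results.txt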
jay (41 rep)
Jan 30, 2023, 05:58 AM • Last activity: Jan 30, 2023, 12:48 PM
1 vote
2 answers
5523 views
Prevent a process from using too much CPU even if available
I would like to avoid having a process use too much CPU. In fact, I want to prevent my CPU from heating up, since I have some really long, CPU-demanding tasks (video converting) to run on a Raspberry Pi with Debian on it: the temperature rises over 80 °C. I have seen that there's a `cpulimit` command, but I don't know how to run my command with it, since it seems to take either a PID (process ID) as argument or an executable file with the code to run, not the bash command itself. I would like to directly see what my task returns and be able to Ctrl-C it if needed.

Note: if I put my task command into a file and run `cpulimit -l 20 --path=/path/to/my/file.sh`, it returns `Warning: no target process found. Waiting for it...` So it looks like I am unable to understand 1) what the `--path` argument actually does, and 2) how to properly use cpulimit on any terminal command.

I would **prefer not to use** workarounds like `nohup my-command --my-args &`, even if that secondarily returns the PID and lets me write the cpulimit command for it. Thanks in advance!
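For what it's worth, some cpulimit builds also accept the target command directly after `--` and keep it in the foreground, which preserves the output and Ctrl-C behaviour; a sketch with a placeholder conversion command:

    # Launch the command under cpulimit itself; -f keeps it in the foreground
    # so its output stays visible and Ctrl-C reaches it (arguments are placeholders).
    cpulimit -l 20 -f -- ffmpeg -i input.mp4 -c:v libx264 output.mkv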
Johannes Lemonde (155 rep)
Feb 27, 2018, 08:18 PM • Last activity: Aug 3, 2022, 10:59 AM
2 votes
1 answer
327 views
How can I apply cpu and memory limit to "Web Content"?
I tried to put a memory limit on programs by editing their desktop shortcuts, so that the same limit applies to their child processes too. I found that "Web Content" is a separate process (see "what is web content"), and it uses a lot of memory, and not only on my computer. For Firefox and its child processes, for instance, I am using a .desktop shortcut with the execution line:
Exec=sh -c "ulimit -m 131072;nice -u username 19; cpulimit -l 25 -- ../firefox/firefox/firefox"
Although Firefox sometimes goes a little above 25% (i.e. 26-27%), it seemed to work, and GeckoMain is limited in terms of CPU as well. However, I noticed that the process named "Web Content" continues to use much more CPU when Firefox is open. How can I apply a CPU and memory limit to "Web Content"?
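A sketch of an alternative that also covers the content processes, assuming a reasonably recent systemd with cgroup v2; the 25% quota and 2G memory cap mirror the intent above but are placeholders:

    # Launch Firefox in a transient scope: GeckoMain, "Web Content" and any
    # other children share the same cgroup and therefore the same limits.
    systemd-run --user --scope -p CPUQuota=25% -p MemoryMax=2G firefox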
user458762
Mar 11, 2022, 02:41 PM • Last activity: Mar 28, 2022, 08:06 AM
0 votes
0 answers
643 views
Automatically Kill high-resource utilization programs
In our company we use a headless Linux machine as a development machine. However, sometimes users use up all the resources (CPU, RAM) on that machine, which affects the work of others. Hence, we want to cap the amount of resources a user can take up for a single process. Is there a utility on Linux that allows limiting the total amount of resources a user can use? Or one that automatically kills processes that use up too many resources?
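A sketch of the per-user cgroup route, assuming a systemd-based distribution; the UID and both limits are placeholders:

    # Cap everything a given user runs rather than individual processes
    # (user-1000.slice corresponds to UID 1000).
    sudo systemctl set-property user-1000.slice CPUQuota=200% MemoryMax=8G

With `MemoryMax`, processes in that slice are reclaimed or OOM-killed when the cap is exceeded, which also covers the "automatically kill" part of the question.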
Green 绿色 (331 rep)
Mar 4, 2022, 03:17 AM
1 vote
0 answers
181 views
How do I make a Linux process believe it is using 100% of CPU while limiting it with a cgroup?
Cgroups allow artificially limiting the CPU time available to a process using the `cpu.cfs_quota_us` and `cpu.cfs_period_us` parameters. This, however, results in a discrepancy when the program monitors its own CPU usage (e.g. by comparing wall time and CPU time), and graceful-degradation algorithms (such as decreasing the quality of something realtime) may not kick in. How do I make the program think it consumes 100% of the CPU while limiting it with a cgroup policy?
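For reference, a sketch of the quota knobs the question refers to (cgroup v1 paths; the group name, the 50% ratio and the use of the current shell as target are placeholders):

    # Allow 50 ms of CPU per 100 ms period for everything in the "demo" group.
    mkdir -p /sys/fs/cgroup/cpu/demo
    echo 100000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us
    echo 50000  > /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
    echo $$     > /sys/fs/cgroup/cpu/demo/cgroup.procs   # move this shell in

Anything started from that shell is then throttled, but the kernel still reports the real CPU time consumed, which is exactly the discrepancy described above.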
Vi. (5985 rep)
Aug 11, 2021, 11:46 AM
16 votes
2 answers
4228 views
Why can't "cpulimit" limit chromium browser?
Due to its high CPU usage, I want to limit the Chromium web browser with `cpulimit`, so from a terminal I run: `cpulimit -l 30 -- chromium --incognito` but it does not limit CPU usage as expected (i.e. to a maximum of 30%). It still uses 100%. Why? What am I doing wrong?
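Likely part of the picture is that cpulimit throttles only the one process it launched, while Chromium runs as many processes; a quick way to see how many of them escape the limit:

    # Count the processes behind a single Chromium session; cpulimit is only
    # attached to the one it started.
    pgrep -c -f chromium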
user458762
Jul 24, 2021, 12:15 PM • Last activity: Jul 24, 2021, 03:42 PM
-1 votes
1 answer
837 views
Error while trying to install cpulimit on centos 8
    [root@XYZ ~]# sudo dnf install cpulimit
    Last metadata expiration check: 0:17:21 ago on Tue 13 Oct 2020 11:24:25 AM PDT.
    No match for argument: cpulimit
    Error: Unable to find a match: cpulimit
    [root@XYZ ~]#

Unable to find a resolution. Any help is appreciated.
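A sketch of the usual fix, under the assumption that cpulimit is packaged in EPEL for this release (the error above only shows it is not in the repositories currently enabled):

    # Enable the EPEL repository first, then retry the install.
    sudo dnf install epel-release
    sudo dnf install cpulimit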
Ravi (1 rep)
Oct 13, 2020, 06:46 PM • Last activity: Oct 13, 2020, 07:13 PM
3 votes
2 answers
430 views
Stop specific processes from heating up my system's CPU
I have a process doing some computation the whole time. It causes my system's CPU to heat up and the fan to spin faster. I want this process to run, but with a low priority; I don't want my system to heat up and my fan to spin because of it. Is it possible to achieve this? If it matters, my CPU is an AMD Ryzen 5 3500U and the process is a browser tab (I'm running it in Chromium, but am happy to change browser if it helps). I'm willing to even run the whole browser inside a VM if it helps.

---

What I tried and why it failed:

1. The best solution found so far is limiting the maximum CPU frequency:

    echo 0 > /sys/devices/system/cpu/cpufreq/boost
    cpupower frequency-set -u 1200MHz
   It works, but it means that other processes are affected too: compiling, (un)compressing, etc. take way longer. Moreover, the CPU is still warmer than if the process weren't running, although within acceptable temperatures.

2. [cpulimit](https://github.com/opsengine/cpulimit) cripples the performance of the target process. If I set the percentage of CPU allowed to 50% (`cpulimit -l 50 -p ...`), the computation becomes a lot slower than normal (like 10x as slow; I don't have a good way to measure the accurate slowdown).

3. I played around with cgroups for a bit, but couldn't get any effect (a cgroup v2 sketch follows below).

4. `nice` has no effect, of course.

Is there anything else I should try?
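On the cgroups point, a minimal sketch of the cgroup v2 interface, assuming a unified hierarchy mounted at /sys/fs/cgroup; the group name, the 50% ratio and `$PID` are placeholders:

    # Enable the cpu controller for child groups, then cap the new group at
    # 50 ms of CPU per 100 ms period (half a core in total).
    echo +cpu | sudo tee /sys/fs/cgroup/cgroup.subtree_control
    sudo mkdir -p /sys/fs/cgroup/cool
    echo "50000 100000" | sudo tee /sys/fs/cgroup/cool/cpu.max
    echo "$PID"         | sudo tee /sys/fs/cgroup/cool/cgroup.procs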
peoro (3938 rep)
Aug 19, 2020, 06:27 AM • Last activity: Aug 19, 2020, 09:29 PM
3 votes
1 answer
4064 views
cpulimit is not actually limiting CPU usage
I'm calling `cpulimit` from `cron`:

    00 16 * * * /usr/bin/cpulimit --limit 20 /bin/sh /media/storage/sqlbackup/backups.sh

When the job kicks off, the CPU spikes and alerts as it always has, with no identifiable limiting having taken place. The job itself iterates over a directory of many subdirectories and performs an rsync each time, which I believe spawns rsync child processes (running `top` shows a PID for the called rsync, which after a few minutes is replaced by a different rsync PID). I'm unsure how to properly use cpulimit to effectively limit the CPU this process consumes. It might be important to keep in mind this is a VM with 2 GB RAM and 1 vCPU.
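One reading of this is that the limit is attached to the /bin/sh wrapper while the rsync children do the actual work; a sketch of throttling each rsync inside backups.sh instead, assuming a cpulimit build that can launch the target itself (`-f --`, as in other questions here); the paths are placeholders:

    # Inside backups.sh: limit the workers rather than the wrapping shell.
    for dir in /media/storage/sqlbackup/*/; do
        /usr/bin/cpulimit --limit 20 -f -- rsync -a "$dir" /backup/target/
    done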
Kahn (1827 rep)
Nov 27, 2019, 04:30 PM • Last activity: Jun 23, 2020, 02:31 PM
5 votes
1 answer
4157 views
How to limit CPU usage with systemd-run
I have a buggy program which uses 100% CPU even when it's idle. Since fixing it isn't practical at the moment, I'd like to just limit it so that it can use no more than 10% CPU. However, no matter what I do, the process always chews up 100% of one CPU.

I found instructions on the [Arch Wiki](https://wiki.archlinux.org/index.php/Cgroups) that tell me to create a file containing this:

    # cpulimit.slice
    [Slice]
    CPUQuota=10%

Apparently I can then launch a shell using these limits, like this:

    systemd-run --slice=cpulimit.slice --uid=myuser --shell

This seems to work, and after entering my sudo password I get a shell, so I run a simple test that will use 100% CPU and that I can stop with Ctrl+C:

    while true; do true; done

I expect this to use no more than 10% CPU since it's running inside the slice; however, it always uses 100% CPU! What am I doing wrong?
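One variation worth comparing, sketched here: set the property directly on the transient unit rather than relying on the slice file being picked up (other flags as in the question):

    # If this shell is throttled but the slice-based one is not, the problem
    # lies with how cpulimit.slice was installed rather than with CPUQuota.
    systemd-run -p CPUQuota=10% --uid=myuser --shell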
Malvineous (7395 rep)
May 26, 2020, 11:22 AM • Last activity: May 26, 2020, 11:41 AM
1 vote
0 answers
132 views
Linux shell wrapper to run program with low system resources?
There's `nice` and `renice` to lower the priority of a process, `cpulimit` to cap it at, let's say, 30% maximum, `taskset` to limit it to 1 core, and `ionice`. Each of these tools has a different syntax. Specifically, `cpulimit` seems harder to master; its syntax isn't trivial. Writing this out for multiple tasks (on a server) would be a lot of work.

`nice` alone does not solve it. If I run, for example, `nice -n19 stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s` on my desktop system, it helps, but the system is still less responsive until that process finishes.

This would be useful for tasks (such as backups) that require a lot of CPU / IO, where it does not matter whether they finish in 5 seconds, 5 minutes or 30 minutes. More important is not to take away CPU shares from more important processes.

Before re-inventing all of that: is there a Linux shell wrapper script to run programs with low system resources that covers all or most of the above?
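A sketch of such a wrapper, simply chaining the tools listed above; the core number and the 30% cap are placeholders, and the last stage assumes a cpulimit build that accepts `-f --`:

    #!/bin/bash
    # run-low: run COMMAND [ARGS...] with the lowest CPU priority, idle I/O
    # priority, pinned to one core, and capped at 30% of that core.
    exec nice -n 19 ionice -c 3 taskset -c 0 cpulimit -l 30 -f -- "$@"

Usage would then look like `run-low tar czf /backup/home.tar.gz /home`, with each stage handing off to the next.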
adrelanos (1956 rep)
May 13, 2020, 02:45 PM • Last activity: May 13, 2020, 03:38 PM
1 vote
1 answer
1000 views
Can I run BOINC using only a little computing power?
I found that I can help produce scientific results using [BOINC](https://boinc.berkeley.edu/). When I tried it, it used so much CPU or memory that my desktop hung. Is there a way to tell Ubuntu to run BOINC but use at most, say, 20% of CPU power or memory?
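A sketch using BOINC's own throttling, under the assumption that the Ubuntu package keeps its state in /var/lib/boinc-client and honours the usual override tags:

    # Ask the client to use at most 20% of the CPUs and 20% of CPU time, then
    # have it re-read the override file.
    printf '%s\n' \
        '<global_preferences>' \
        '  <max_ncpus_pct>20</max_ncpus_pct>' \
        '  <cpu_usage_limit>20</cpu_usage_limit>' \
        '</global_preferences>' |
        sudo tee /var/lib/boinc-client/global_prefs_override.xml
    boinccmd --read_global_prefs_override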
boincuser (11 rep)
Feb 26, 2020, 11:08 AM • Last activity: Mar 11, 2020, 11:11 PM
18 votes
3 answers
2539 views
Is it possible to limit how much CPU power a process can take?
I'm wondering, is there a way to tell a process how much processor power it can take? The problem is I'm converting video with *Arista* (a video converter) and I'm annoyed by the fan running like crazy; when I look at the task monitor, it's taking over 92% of the CPU. Can I (somehow) tell it that it can take just 20%? Thanks.
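For reference, cpulimit can attach to an already-running process by executable name; a sketch, where `arista-transcode` stands in for whatever the converter's process is actually called:

    # Throttle the running converter, found by executable name, to 20%.
    cpulimit -l 20 -e arista-transcode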
equivalent8 (303 rep)
Jun 18, 2012, 09:59 AM • Last activity: Feb 26, 2020, 11:30 AM
1 vote
0 answers
49 views
Is there a way to make my computer's fans turn on instead of the CPU dropping frequency? I'm using Linux Mint 19.3
My computer, for whatever reason, will lower CPU usage when it gets hot instead of turning on its fan. I don't know if it's a driver issue or something else. Thanks! (Screenshot: the readings shown are at 80 degrees Celsius.)
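A sketch of a first diagnostic step, assuming a Debian/Ubuntu-based install and that the fan is software-controllable at all:

    # Check whether the kernel exposes the fan before concluding the
    # frequency drop is deliberate firmware policy.
    sudo apt install lm-sensors fancontrol
    sudo sensors-detect          # probe for hwmon chips (answer the prompts)
    sensors                      # look for fan RPM and temperature readings
    sudo pwmconfig               # interactively map PWM outputs to fans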
T1 L. (11 rep)
Feb 19, 2020, 03:07 PM
0 votes
0 answers
332 views
WSL Ubuntu 18.04 shows wrong Cpus_allowed mask
Ubuntu 18.04 running under WSL detects my CPU wrongly:

    $ cat /proc/self/status | grep Cpus_allowed
    Cpus_allowed:   00000001

The CPU is an i7-4510U (2 cores / 4 threads), so I expected `Cpus_allowed: f` or similar (ff, ffffffff, 0000000f). The peculiar thing is that when I run 4 processes, each process gets a CPU thread, and thus they run at 400% CPU utilization. So it is as if the CPU mask is not respected. `taskset` is not respected either; this should use a single thread (100%), but it uses all 4 (400%):

    taskset 2 parallel -j4 'bzip2 ' ::: {1..10}

Is this a bug, possibly in WSL? And if so, where do I report it?

**Background**

The problem described on https://arstechnica.com/civis/viewtopic.php?f=15&t=1442563 is explained by the above: GNU Parallel detects the wrong number of CPU threads because it looks at the CPU mask in /proc/*/status to determine how many CPU threads it is allowed to use.
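A few quick checks that separate what the mask claims from what the scheduler actually enforces (nothing here is WSL-specific):

    nproc                                     # CPUs usable by this process
    grep Cpus_allowed_list /proc/self/status  # list form of the same mask
    taskset -cp $$                            # affinity of the current shell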
Ole Tange (37348 rep)
Mar 29, 2019, 10:20 AM • Last activity: Mar 30, 2019, 09:11 AM