Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
1
answers
5782
views
JAVA_OPTS Xms Xmx MetaspaceSize MaxMetaspaceSize relationship with server resources
I have just started working with JBoss application servers. Recently we had a problem when trying to deploy an application on a new test server (RHEL 7): when the JBoss service (JBoss EAP 7.1) started with the application in the deployment area, the server began to freeze, that is, it responded extremely slowly and had to be shut down. We solved the problem simply by adding more CPU and RAM. The configuration (`standalone.conf`) contains these parameters:
JAVA_OPTS="-Xms4096m -Xmx4096m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m"
Could you give me a brief explanation of the meaning of each one and its relationship with the memory and cpu of the server? Is there any rule or recommendation to take into account to configure these parameters and server resources?
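For reference, a commented restatement of that line; these are the standard HotSpot meanings of the flags, and the general sizing rule is that -Xmx plus -XX:MaxMetaspaceSize plus JVM overhead must fit comfortably within the server's RAM:
JAVA_OPTS="-Xms4096m"                             # initial heap size: 4 GiB committed at JVM startup
JAVA_OPTS="$JAVA_OPTS -Xmx4096m"                  # maximum heap size: the heap never grows past 4 GiB
JAVA_OPTS="$JAVA_OPTS -XX:MetaspaceSize=256m"     # initial metaspace size / first GC threshold for class metadata
JAVA_OPTS="$JAVA_OPTS -XX:MaxMetaspaceSize=512m"  # hard cap on metaspace (native memory, outside the Java heap)
Setting -Xms equal to -Xmx, as here, pre-commits the whole heap and avoids resize pauses. None of these flags limit CPU directly, though garbage collection does consume CPU.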
Thanks in advance.
miguel ramires
(9 rep)
Jul 11, 2022, 11:01 PM
• Last activity: Jun 11, 2025, 04:05 PM
0
votes
1
answers
89
views
Is there an official (re)source that lists all the categories of commands with their respective sets of commands?
Just being curious:
**Question**
* Is there an **official (re)source** that lists all the categories of commands with their respective sets of commands?
Something like [Linux Foundation](https://www.linuxfoundation.org) for [Filesystem Hierarchy Standard **FHS**](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html)
I searched Google with the `linux types of commands` search term, but little information appears. At the top, however, the AI Overview shares the following:
...
Navigation:
cd: Changes the current working directory.
pwd: Displays the full path of the current working directory.
...
Networking:
ping: Sends ICMP echo requests to check network connectivity.
ssh: Provides secure shell access to remote machines.
wget: Downloads files from the web.
curl: Transfers data using URLs.
...
Manuel Jordan
(2108 rep)
May 1, 2025, 04:25 PM
• Last activity: May 2, 2025, 10:23 AM
2
votes
1
answers
736
views
Get memory/cpu usage by application
**What I need**
I want to monitor system resources (namely memory and CPU usage) by application, not just by process. Just as the Windows Task Manager groups resources by the 'calling mother process,' I would like to see it that way as well.
Nowadays, applications like Firefox and VSCode spawn many child processes, and I want to get a quick and complete overview of their usage.
The solution can be a GUI or TUI, a bash script or a big one-liner. I do not really care. For it to work, I imagine I could feed it with the pid of the mother process or the name of an executable as a means of filtering.
**What I Tried**
* I tried `htop`, but it only shows me a tree where the calling process has its own memory listed, not that of the ones it called.
* I tried gnome-system-monitor, but it's the same.
* I tried a bit with `ps` and `free`, but have not found the correct set of arguments / pipes to make them do what I want.
It stumped me that I could not google a solution for that. Maybe there is a reason for it?
Does anybody have an idea?
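A minimal sketch of the script route mentioned above, assuming you feed it the mother process's PID (the name appusage.sh is hypothetical); it walks the full process table once with `ps` and sums CPU and RSS over the PID's whole descendant tree in awk:
#!/bin/sh
# Sketch: sum CPU% and resident memory for a "mother" process and every
# descendant, the way a per-application view would. Usage: ./appusage.sh <pid>
ps -e -o pid=,ppid=,pcpu=,rss= | awk -v root="$1" '
  { ppid[$1] = $2; cpu[$1] = $3; rss[$1] = $4 }
  END {
    for (p in ppid)                                  # for every process...
      for (q = p; q != "" && q != 0; q = ppid[q])    # ...climb its ancestry
        if (q == root) { c += cpu[p]; r += rss[p]; break }
    printf "CPU: %.1f%%  RSS: %d KiB\n", c, r
  }'
Summed RSS over-counts pages shared between the processes, so treat the memory total as an upper bound.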
Y. Shallow
(23 rep)
Oct 25, 2021, 12:07 PM
• Last activity: Mar 17, 2025, 01:32 PM
1
votes
1
answers
212
views
How to specify cleanup commands for `systemctl disable --now`?
I just learned (the hard way) that in a systemd service file, the `ExecStopPost` commands are not executed on
systemctl disable --now myservice.service
They are only executed on
systemctl stop myservice.service
(which might then be followed by `systemctl disable myservice.service`).
I have two questions on this:
1. What is the rationale behind that behaviour?
2. How can I specify cleanup commands that are also executed on `systemctl disable --now`?
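For reference, a minimal sketch of a unit with such a cleanup hook (paths hypothetical), illustrating the ExecStopPost= line whose behaviour the two questions are about:
[Unit]
Description=Example service with cleanup hook

[Service]
Type=simple
ExecStart=/usr/local/bin/myservice
# Cleanup command; per the question, not run by "systemctl disable --now":
ExecStopPost=/usr/local/bin/myservice-cleanup

[Install]
WantedBy=multi-user.target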
vog
(429 rep)
Nov 1, 2024, 10:46 AM
• Last activity: Nov 1, 2024, 06:28 PM
2
votes
5
answers
1378
views
Continuously monitor a process and all its children with `top`?
I want to run a process that spawns children, e.g.,
for i in {1..4}; do
sh -c 'echo $$; for j in {1..3}; do
sh -c "echo ...\$\$; sleep 1"
done'
done
and I would like to monitor the CPU and memory usage every 2 seconds with `top`.
- I can monitor its resource usage with `top -p <pid>`, but this doesn't account for the children.
- I can monitor _all_ running processes just with `top`, but this is too much information.
- I can precompute a list of PIDs, then pass all of them to `top`, but the process could then spawn new children, which would not be accounted for.
How can I get a `top` snapshot every 2 seconds of just the process I'm running and any processes it spawns?
Someone asked a similar question [here](https://unix.stackexchange.com/questions/178795/how-can-i-find-the-cpu-resource-utilization-of-a-process-and-all-its-children) , but it was about summarizing this information into a *single* number, after the process finishes. My question is about *continuously* monitoring the processes while they’re still going.
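Not an answer, but a sketch of one workaround shape: a loop that re-resolves the descendant list on every tick and feeds it to one batch-mode `top` iteration (the script name is hypothetical, and `top -p` accepts at most 20 PIDs):
#!/bin/sh
# Sketch: every 2 seconds, run one batch iteration of top over the root
# PID plus whatever descendants exist at that moment. Usage: ./topsnap.sh <pid>
root=$1
while kill -0 "$root" 2>/dev/null; do
    pids=$root frontier=$root
    while :; do                            # walk the tree level by level
        kids=$(ps -o pid= --ppid "$(echo $frontier | tr ' ' ',')")
        [ -z "$kids" ] && break
        pids="$pids $kids"
        frontier=$kids
    done
    top -b -n 1 -p "$(echo $pids | tr ' ' ',')"
    sleep 2
done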
wobtax
(1135 rep)
Jun 25, 2024, 06:54 PM
• Last activity: Jun 27, 2024, 06:01 PM
0
votes
2
answers
146
views
why is all this swap space being used?
I have a Debian box where I am doing some data recovery using ddrescue on a SATA SSD. The process has been running for 24 hours and has at least 24 more to go.
In any event, the PC has 16GB RAM and 10GB swap. For some reason, there is 8GB of swap in use and only 2GB of RAM in use. This seems like an inefficient use of resources, and I'd like to avoid this behavior in the future. Why is memory being utilized in this way?
And what can be done to avoid this sort of operation in the future?
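For the "what can be done" part, one knob that is commonly tuned in exactly this situation is vm.swappiness, which biases the kernel away from swapping out idle anonymous pages while a long sequential job like ddrescue churns the page cache; a sketch, not a guaranteed fix:
# Show the current value (the default is typically 60):
sysctl vm.swappiness
# Lower it for the running system:
sudo sysctl vm.swappiness=10
# Make it persistent across reboots:
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf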
j0h
(3949 rep)
May 5, 2024, 05:44 PM
• Last activity: May 6, 2024, 04:45 AM
17
votes
1
answers
2831
views
Is it wrong to think of "memfd"s as accounted "to the process that owns the file"?
https://dvdhrm.wordpress.com/2014/06/10/memfd_create2/
> Theoretically, you could achieve [`memfd_create()`] behavior without introducing new syscalls, like this:
>
> int fd = open("/tmp", O_RDWR | O_TMPFILE | O_EXCL, S_IRWXU);
(Note: to more portably guarantee a tmpfs here, we can use `/dev/shm` instead of `/tmp`.)
> Therefore, the most important question is why the hell do we need a third way?
>
> [...]
>
> * The backing-memory is accounted to the process that owns the file and is not subject to mount-quotas.
^ Am I right in thinking the first part of this sentence cannot be relied on?
The [memfd_create() code](https://elixir.bootlin.com/linux/latest/source/mm/shmem.c#L3679) is literally implemented as an "[unlinked file living in [a] tmpfs which must be kernel internal](https://elixir.bootlin.com/linux/latest/source/mm/shmem.c#L4254)". Tracing the code, I understand it differs in not implementing LSM checks; memfds are also created to support "seals", as the blog post goes on to explain. However, I'm extremely sceptical that memfds are _accounted_ differently to a tmpfile in principle.
Specifically, when the [OOM-killer](https://www.kernel.org/doc/gorman/html/understand/understand016.html) comes knocking, I don't think it will account for memory held by memfds. This could total up to 50% of RAM, the value of the [size= option for tmpfs](https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt). The kernel doesn't set a different value for the internal tmpfs, so it would use the default size of 50%.
So I think we can generally expect processes which hold a large memfd, but no other significant memory allocations, will not be OOM-killed. Is that correct?
sourcejedi
(53222 rep)
May 2, 2018, 07:22 PM
• Last activity: Apr 12, 2024, 04:06 AM
1
votes
0
answers
153
views
mysqld thread count exceeds max_connections
As the question title suggests: I have set `max_connections = 2` in my `my.cnf` file, but when I activate my MySQL daemon, the thread count sits at 37. I am searching online but cannot find any indication that my expectations are wrong. Am I understanding the `max_connections` directive correctly? Can anyone suggest a reason why this may not be limiting the thread count?
Attempts at a solution
---------------------
1) I query mysql global variables via the mysql CLI client:
| Variable_name | Value |
+----------------------------+----------------------+
| max_allowed_packet | 67108864 |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| max_connect_errors | 100 |
| max_connections | 2 |
| max_delayed_threads | 20 |
| max_digest_length | 1024 |
| max_error_count | 1024 |
| max_execution_time | 0 |
| max_heap_table_size | 16777216 |
| max_insert_delayed_threads | 20 |
| max_join_size | 18446744073709551615 |
| max_length_for_sort_data | 4096 |
| max_points_in_geometry | 65536 |
| max_prepared_stmt_count | 16382 |
| max_relay_log_size | 0 |
| max_seeks_for_key | 18446744073709551615 |
| max_sort_length | 1024 |
| max_sp_recursion_depth | 0 |
| max_user_connections | 2 |
| max_write_lock_count | 18446744073709551615 |
This confirms the `my.cnf` file is being correctly loaded, as the `max_connections` variable is indeed set to 2.
2) As is visible from the output in (1), I have also attempted limiting the `max_user_connections` variable, but again, no luck.
3) I killed other server processes that I suspected could be querying the mysql daemon - httpd, php-fpm, but this has not reduced the number of threads.
System Specs
------------
My MySQL version is 8.3.0, and is a minimal install:
mysql-8.3.0-linux-glibc2.17-x86_64-minimal
Update
------
Running the query `SHOW PROCESSLIST` returns:
*************************** 1. row ***************************
Id: 5
User: event_scheduler
Host: localhost
db: NULL
Command: Daemon
Time: 146457
State: Waiting on empty queue
Info: NULL
*************************** 2. row ***************************
Id: 11
User: root
Host: localhost
db: NULL
Command: Query
Time: 0
State: init
Info: SHOW PROCESSLIST
2 rows in set, 1 warning (0.00 sec)
I found it more helpful to run `SELECT * FROM performance_schema.threads\G`, which returned information about the many tasks mysqld is executing. I had a suspicion this list of tasks would correspond to the number of threads, and running:
SELECT COUNT(*) FROM performance_schema.threads\G
returns:
38
which nicely corresponds to the number of threads; I suspect the extra task corresponds to the master process. In the terminal:
pstree | grep sql
|-mysqld---37*[{mysqld}]
Reading through the task names returned by `performance_schema.threads`, I gather these threads are necessary for MySQL to function, as their names are `io_read_thread`, `io_write_thread`, `log_writer_thread` and similar.
Setting `max_user_connections` works as expected: setting `max_user_connections=2` and attempting to open a third client as `root` returns:
ERROR 1203 (42000): User root already has more than 'max_user_connections' active connections
However, if `max_connections` is set to 2 while `max_user_connections` is at 6, I can have 3 MySQL clients open at once as user root. When I try to open a fourth client as root, I receive:
ERROR 1040 (HY000): Too many connections
How does a limit of three client sessions under one user arise from these `my.cnf` limits?
max_connections = 2
max_user_connections = 6
thread_cache_size = 2
I am continuing to read the MySQL documentation but have yet to find an explanation. If someone more experienced can offer clarity, I would be greatly appreciative. How do these settings affect the number of connections (and so threads)?
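A hedged way to see the distinction the asker is circling (client connection threads, which max_connections caps, versus the server's internal background threads, which exist regardless of that setting):
# Client connection threads only (what max_connections limits):
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'"
# Every server thread, including InnoDB I/O and log-writer threads:
mysql -u root -p -e "SELECT COUNT(*) FROM performance_schema.threads"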
user10709800
(73 rep)
Apr 5, 2024, 09:59 AM
• Last activity: Apr 6, 2024, 02:42 PM
74
votes
2
answers
134053
views
Running dd: why is the resource busy?
I just formatted a microSD card, and would like to run a `dd` command. Unfortunately the `dd` command fails:
$ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2
dd: /dev/rdisk2: Resource busy
$
Everyone on the internet says I need to unmount the disk first. Sure, can do that and move on. But **I want to understand why / what exactly in OS X is making the device busy**? How do I diagnose this?
So far I tried:
1. Listing open files:
$ lsof /dev/disk2
$ lsof /dev/disk2s1
$
Also:
$ lsof /Volumes/UNTITLED
$
2. Listing users working on the file:
$ fuser -u /dev/disk2
/dev/disk2:
$ fuser -u /dev/disk2s1
/dev/disk2s1:
$
Also:
$ fuser -u /Volumes/UNTITLED
$
3. Check for system messages:
$ sudo dmesg | grep disk
$
Also:
$ sudo dmesg | grep /Volumes/UNTITLED
$
My environment
1. Operating system:
Darwin Eugenes-MacBook-Pro-2.local 15.3.0 Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64 x86_64
2. Information about my microSD:
diskutil list disk2
/dev/disk2 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: FDisk_partition_scheme *31.9 GB disk2
1: DOS_FAT_32 UNTITLED 31.9 GB disk2s1
P.S. I'm using OS X 10.11.
**Update 22/3/2016**. Figured it out. I re-ran the `lsof` and `fuser` commands from above using `sudo`, and finally got to the bottom of the issue:
$ sudo fuser /Volumes/UNTITLED/
/Volumes/UNTITLED/: 62 282
$
And:
$ sudo lsof /Volumes/UNTITLED/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mds 62 root 8r DIR 1,6 32768 2 /Volumes/UNTITLED
mds 62 root 22r DIR 1,6 32768 2 /Volumes/UNTITLED
mds 62 root 23r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD
mds 62 root 25u REG 1,6 0 999999999 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/journalExclusion
mds_store 282 root txt REG 1,6 3277 17 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexGroups
mds_store 282 root txt REG 1,6 8 23 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexCompactDirectory
mds_store 282 root txt REG 1,6 312 19 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexTermIds
mds_store 282 root txt REG 1,6 3277 29 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexGroups
mds_store 282 root txt REG 1,6 1024 35 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexCompactDirectory
mds_store 282 root txt REG 1,6 312 21 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexPositionTable
mds_store 282 root txt REG 1,6 8192 31 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexTermIds
mds_store 282 root txt REG 1,6 2056 22 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexDirectory
mds_store 282 root txt REG 1,6 8192 33 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexPositionTable
mds_store 282 root txt REG 1,6 8224 34 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexDirectory
mds_store 282 root txt REG 1,6 16 16 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexIds
mds_store 282 root txt REG 1,6 65536 48 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/reverseDirectoryStore
mds_store 282 root txt REG 1,6 704 24 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.indexArrays
mds_store 282 root txt REG 1,6 65536 26 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/0.directoryStoreFile
mds_store 282 root txt REG 1,6 32768 28 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexIds
mds_store 282 root txt REG 1,6 65536 36 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.indexArrays
mds_store 282 root txt REG 1,6 65536 38 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/live.0.directoryStoreFile
mds_store 282 root 5r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD
mds_store 282 root 17u REG 1,6 8192 12 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/psid.db
mds_store 282 root 32r DIR 1,6 32768 10 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD
mds_store 282 root 41u REG 1,6 28 15 /Volumes/UNTITLED/.Spotlight-V100/Store-V2/A2D41CCB-48CC-45F3-B8D6-F3B383D91AAD/indexState
$
From the above it's easy to see that processes called `mds` and `mds_store` have created, and are _holding_, lots of files on the volume.
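For completeness, the standard way on OS X to make the device not busy before running `dd`: unmount the filesystem (which also stops Spotlight's `mds`/`mds_store` from holding files open on it) while keeping the device node available:
$ diskutil unmountDisk /dev/disk2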
oldhomemovie
(1737 rep)
Mar 22, 2016, 01:56 PM
• Last activity: Feb 12, 2024, 10:12 AM
6
votes
3
answers
3581
views
What is the historical reason for limits on file descriptors (ulimit -n)
When I first borrowed an account on a UNIX system in 1990, the file limit was an astonishing 1024, so I never really saw that as a problem.
Today 30 years later the (soft) limit is a measly 1024.
I imagine the historical reason for 1024 was that it was a scarce resource - though I cannot really find evidence for that.
The limit on my laptop is (2^63-1):
$ cat /proc/sys/fs/file-max
9223372036854775807
which today I find as astonishing as 1024 was in 1990. The hard limit (`ulimit -Hn`) on my system limits this further to 1048576.
But why have a limit at all? Why not just let RAM be the limiting resource?
I ran this on Ubuntu 20.04 (from year 2020) and HPUX B.11.11 (from year 2000):
ulimit -n `ulimit -Hn`
On Ubuntu this increases the limit from 1024 to 1048576. On HPUX it increases from 60 to 1024. *In neither case is there any difference in the memory usage as per `ps -edalf`.* If the scarce resource is not RAM, what *is* the scarce resource?
I have never experienced the 1024 limit helping me or my users; on the contrary, it is the root cause of errors that my users cannot explain and thus cannot solve themselves: given the often mysterious crashes, they do not immediately think of `ulimit -n 1048576` before running their job.
I can see it is useful to limit the *total* memory size of a process, so if it runs amok, it will not take down the whole system. But I do not see how that applies to the file limit.
What is the situation where the limit of 1024 (and not just a general memory limit) would help back in 1990? And is there a similar situation today?
Ole Tange
(37348 rep)
Dec 21, 2020, 11:27 PM
• Last activity: Oct 24, 2023, 02:23 AM
0
votes
1
answers
49
views
QNX resource manager full duplex or half duplex
I am using a resource manager to read data from the communication channel and send it to the application. My doubt: is the QNX resource manager half duplex or full duplex?
Can simultaneous read/write work with a QNX resource manager?
sagar
(3 rep)
May 24, 2023, 10:56 AM
• Last activity: May 24, 2023, 12:07 PM
1
votes
1
answers
735
views
HP Pavilion x360 Convertible, installing Linux, endless IRQ and logging loop
When installing a Linux OS (e.g. Mint) from a live CD or USB stick, editing /etc/default/grub does not solve the problem. You can add or edit a line like
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nomsi"
in /etc/default/grub; running update-grub (which internally uses grub-mkconfig) will then transfer it to /boot/grub/grub.cfg.
But that becomes effective only after a reboot, and when you reboot a Linux live session (i.e. without having installed anything to the hard disk yet), the live stick reinstates its own GRUB settings, so you hit the interrupt-looping problem yet again.
How to solve this?
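One workaround sketch for the chicken-and-egg problem: pass the parameter for a single boot from the live stick's own boot menu, so nothing needs to persist anywhere. At the live medium's GRUB menu (entry names, kernel path, and existing options vary by distro):
# Highlight the live boot entry, press 'e' to edit it, and append
# the parameter to the line beginning with "linux":
linux /boot/vmlinuz ... quiet splash pci=nomsi
# Boot the edited entry with Ctrl-x or F10. Once the system is
# installed, make it permanent via /etc/default/grub and update-grub.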
Ruud Harmsen
(21 rep)
Apr 4, 2023, 05:24 AM
• Last activity: Apr 4, 2023, 05:27 AM
3
votes
1
answers
492
views
Is the kilobyte used by time and ulimit commands either 1000 (SI) or 1024 (old school) bytes?
From `man time`:
M Maximum resident set size of the process during its lifetime, in Kilobytes.
From `ulimit -a`:
max memory size (kbytes, -m) unlimited
But a "kilobyte" may mean either 1000 or 1024 bytes. I guess here it is 1024, but I want to be sure. An authoritative reference would be appreciated.
abukaj
(165 rep)
Sep 7, 2022, 10:20 AM
• Last activity: Sep 7, 2022, 01:08 PM
0
votes
2
answers
1039
views
Will `rm -rf` continue deleting if it can't delete something in the middle?
Consider a dir `garbage` containing many files and directories.
If I run `rm -rf garbage`, but some files or directories within `garbage` are busy (held by the OS/NFS/etc.), `rm -rf` will fail for them. Will it delete the rest? Or will it stop deleting upon the first failure?
The current OS is Ubuntu 20.04, but it is also of interest whether this behavior is standard or depends on the OS (and its version).
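As a data point, GNU rm reports each failure and keeps going rather than stopping; a quick hedged demo that provokes a failure with a read-only directory (run as a non-root user, in a scratch location):
mkdir -p garbage/busy garbage/normal
touch garbage/busy/f garbage/normal/f
chmod 555 garbage/busy                     # entries inside can no longer be unlinked
rm -rf garbage; echo "rm exit status: $?"  # prints an error for busy/f but continues
ls -R garbage                              # only garbage/busy and its file survive
chmod 755 garbage/busy && rm -rf garbage   # clean up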
Serge Rogatch
(167 rep)
Jul 30, 2022, 06:19 AM
• Last activity: Jul 30, 2022, 09:50 AM
0
votes
1
answers
1127
views
sleep in Bash if/else sleep vs <condition> && sleep clamining cpu
I wrote a script to check on the battery and notify the user if it is below 5%.
The script is put to sleep for 1 minute after every check.
If I write the script in the following way, the CPU workload rises to a great extent:
#!/bin/sh
while true; do
    [[ $(
        upower -i /org/freedesktop/UPower/devices/battery_BAT0 |
        grep percentage |
        grep -Poe "\d*"
    ) -lt 5 ]] &&
        notify-send "Battery below 5%" &&
        sleep 60
done
You can see in the image that a lot of CPU resources are claimed. They are not released.
If I write the script in a more conventional way, with simple if/else blocks, everything works as expected and the CPU only goes up for a short time at the beginning and after each sleep period is over:
while true; do
    if [[ $(
        upower -i /org/freedesktop/UPower/devices/battery_BAT0 |
        grep percentage |
        grep -Poe "\d*"
    ) -lt 5 ]]; then
        notify-send "Battery below 5%"
    else
        sleep 60
    fi
done
Do you know why the CPU usage went up in the first version?
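For what it's worth, a sketch of why the `&&` version can spin: whenever the battery is at or above 5% the test fails, the chain short-circuits, and `sleep 60` never runs, so the loop busy-polls upower as fast as it can. Hoisting the sleep out of the chain restores the pause on every iteration:
#!/bin/sh
while true; do
    [ "$(upower -i /org/freedesktop/UPower/devices/battery_BAT0 |
         grep percentage | grep -Poe '\d*')" -lt 5 ] &&
        notify-send "Battery below 5%"
    sleep 60    # runs every iteration, whether or not we notified
done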
jeykey
(33 rep)
Jul 13, 2022, 06:22 PM
• Last activity: Jul 13, 2022, 07:13 PM
0
votes
0
answers
559
views
Memory runs out and PC freezes
I have been having issues playing video games on my PC, as I keep running out of RAM (8GB).
When I run out of RAM, my displays and everything else freezes, except playing audio keeps looping. I have to force a shutdown and I often get banned from the game before I can rejoin.
I want to solve this issue in some way. I'll write down my options in the order I prefer them:
1. Prevent my PC from running out of RAM by reducing RAM usage safely somehow. Windows apparently already does this, scaling its RAM usage when running out.
2. Another option would be to crash only the game, not the whole OS, when running out of memory. Some kind of script that kills the process using the most memory would do (see the sketch after this list). This would at least prevent me from getting banned every time I run out of memory.
3. How viable is it to run a swap file on an HDD with a 130 MB/s max speed and 26 ms latency? Would I be able to keep playing, or at least kill the game from htop running on my second display, while running on swap memory?
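As a sketch of option 2 (a crude stand-in for a purpose-built daemon such as earlyoom; the 300 MiB threshold and 5-second poll are arbitrary assumptions):
#!/bin/sh
# Poll available memory; when it drops below ~300 MiB, kill the process
# with the largest resident set. Crude: no whitelist, no warning.
while sleep 5; do
    avail=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)   # in KiB
    if [ "$avail" -lt 307200 ]; then
        victim=$(ps -e -o pid= --sort=-rss | head -n 1)
        kill -9 $victim
    fi
done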
tempacc
(357 rep)
Mar 4, 2022, 09:21 PM
0
votes
1
answers
31
views
Changing rights and owner in one command to safe resources
I have a backup script with the following function:
function change_rights()
{
chown -R ${OWNER}:${GROUP} ${DIR}
find ${DIR} -type f -exec chmod 0640 {} \;
find ${DIR} -type d -exec chmod 0770 {} \;
}
Now the problem is that `${DIR}` is very large, and in order to change the owner and permissions I have to traverse the directory at least twice, which is extremely resource-intensive.
Is there any way / any command to change the owner **and** permissions **at the same time**? That is to say, changing what's in the inode in one go?
I'm working on an ext4 file system.
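There is no single syscall that sets owner and mode together, but the traversal can be paid for once instead of three times; a hedged sketch using GNU find's comma operator and `+` batching:
find "${DIR}" \
    \( -type f -exec chmod 0640 {} + \) , \
    \( -type d -exec chmod 0770 {} + \) , \
    -exec chown "${OWNER}:${GROUP}" {} +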
manifestor
(2563 rep)
Jan 10, 2022, 04:09 PM
• Last activity: Jan 10, 2022, 06:43 PM
4
votes
1
answers
2504
views
Why isn't this systemd service resource limited when using CPUShares property?
OK, to get my hands dirty with cgroups and systemd, I wrote the most moronic C program I could think of (just a timer and a spinlocking while loop) and named it `idiot`, which I accompanied with the following `idiot.service` file in /etc/systemd/system/:
[Unit]
Description=Idiot - pretty idiotic imo
[Service]
Type=simple
ExecStart=/path/to/idiot
User=bruno
CPUShares=100
[Install]
WantedBy=default.target
Then I did `sudo systemctl start idiot.service; top | grep idiot`, which predictably told me `idiot` used 100% of the CPU. Now, according to the link, we should be able to limit the resources of this service as follows:
sudo systemctl set-property idiot.service CPUShares=100
sudo systemctl daemon-reload
sudo systemctl restart idiot.service
which I did, followed by `top`. But this still tells me that `idiot` is using 100% of the CPU! What am I doing wrong?
Note: I also tried adding `CPUShares=100` to the unit file, to no avail.
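A hedged note on the distinction that usually explains this: CPUShares= is a relative weight that only bites when another cgroup competes for CPU time, so a lone spinner still gets 100%; a hard ceiling needs CPUQuota= instead:
# Relative weight: only throttles "idiot" when something else wants the CPU.
sudo systemctl set-property idiot.service CPUShares=100
# Hard ceiling: at most 20% of one CPU, with or without contention.
sudo systemctl set-property idiot.service CPUQuota=20%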
embedded_crysis
(337 rep)
Feb 28, 2017, 02:27 PM
• Last activity: Jan 5, 2022, 09:14 AM
2
votes
0
answers
577
views
Is there a program that will show me the average CPU usage over the last minute for every process?
One feature I miss from Windows is that the Resource Monitor program would show, among other statistics, the average CPU usage over the last minute for each program. On Linux I use either GNOME System Monitor or `top` to monitor processes, but neither program offers this feature from what I can tell.
A common use case for these programs for me is: "my laptop fan just flared up for no apparent reason; what program is eating up my CPU?" Often, however, by the time my fan turns on and I switch windows into `top`, the process has stopped, so I can't see which process it was. This is where an average over the last minute is useful.
So does anyone know how I can get `top` to show this or something similar, or another program that can do this?
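One tool that does roughly this: `pidstat` from the sysstat package reports per-process CPU averaged over each sampling interval, so a one-minute window is simply:
# One 60-second sample: average CPU per process over that minute.
pidstat 60 1
# Or keep reporting, one line per active process per minute:
pidstat 60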
A. Kriegman
(131 rep)
Nov 14, 2021, 12:56 AM
5
votes
1
answers
1867
views
systemd ignores drop-in configuration files - what am I doing wrong?
On one of my machines with Debian Buster (which ships **systemd 241**), I wanted to watch resource usage via `systemd-cgtop`. When I started this utility, I was seeing memory usage, but neither CPU usage nor I/O usage. Obviously, CPU accounting was turned off.
Following the manpage for system.conf, I put this line into /etc/systemd/system.conf (all other lines were already commented out):
DefaultCPUAccounting=yes
This worked as expected (of course, after having reloaded `systemd` itself via `systemctl daemon-reexec`). [Note: in fact, I was still seeing CPU usage only for some slices, not for all, but that is another story / question.]
However, that man page does not recommend changing /etc/systemd/system.conf. Rather, we should create a drop-in configuration file with the required lines. I followed that advice: I created the directory /etc/systemd/system.conf.d and a file /etc/systemd/system.conf.d/10-pp.conf, removed the line shown above from /etc/systemd/system.conf, put it into /etc/systemd/system.conf.d/10-pp.conf, and issued `systemctl daemon-reexec`.
This took me back to the beginning: `systemd-cgtop` didn't show CPU usage at all.
I can reproduce the situation at any time. Regardless of the drop-in configuration file, I must alter the main configuration file to enable CPU accounting.
What am I doing wrong?
P.S.
- I have verified that there is no other drop-in configuration file which could conflict with mine. That is, /usr/lib/systemd/system.conf.d/ does not exist, nor does /usr/local/lib/systemd/system.conf.d/, nor does /run/systemd/system.conf.d/.
- I have verified the access permissions of the directory and the file I created. They match the permissions of the other (installed-by-default) .d directories and the files in them, respectively.
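One frequently cited gotcha with system.conf drop-ins, which may or may not apply here since the file's exact contents aren't shown: the drop-in must repeat the [Manager] section header from the main file, otherwise its settings are silently ignored. A sketch of /etc/systemd/system.conf.d/10-pp.conf:
[Manager]
DefaultCPUAccounting=yes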
Binarus
(3891 rep)
Oct 15, 2021, 12:20 PM
• Last activity: Oct 19, 2021, 12:01 PM
Showing page 1 of 20 total questions