
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
2095 views
How to make sure nproc values are taking effect for all the users?
In the */etc/security/limit.conf* file I have added the following values:

user1 - nproc unlimited
user2 - nproc unlimited

Both *user1* and *user2* have **sudo** privileges, and I used *user1* to make this change. I then logged out of the server and logged back in. When I check as user1, the `ulimit -u` command gives me the output 'unlimited'. **However, when I check as user2, the ulimit -u command gives me a value of 10000.** Where can this value be coming from?
proutray (111 rep)
Jul 16, 2018, 08:19 PM • Last activity: Jul 22, 2025, 10:05 PM
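A likely place for user2's 10000 to come from is a drop-in under /etc/security/limits.d/, which pam_limits reads in addition to limits.conf; note also that the canonical path is /etc/security/limits.conf (with an 's'), so a file literally named limit.conf would be ignored. A minimal diagnostic sketch:

```
# Search every file pam_limits consults for nproc entries:
grep -H nproc /etc/security/limits.conf /etc/security/limits.d/*.conf

# Compare what each user actually gets in a fresh session:
sudo -i -u user1 sh -c 'ulimit -u'
sudo -i -u user2 sh -c 'ulimit -u'
```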
2 votes
1 answer
2394 views
How to verify current open files on a specific service
On our RHEL server, version 7.6, we have the following systemd service: /etc/systemd/system/test-infra.service. The value of LimitNOFILE is:

systemctl show test-infra.service | grep LimitNOFILE
LimitNOFILE=65535

so I assume the number of open files is at most 65535 for this service. Is it possible to print the current number of open files used by this service, i.e. to show how many files this service is using?
yael (13936 rep)
Jan 5, 2021, 11:58 AM • Last activity: Jul 6, 2025, 04:06 AM
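The LimitNOFILE= value is only the cap; the current count has to be read from /proc. A minimal sketch that counts the open descriptors of the service's main process (for a service that forks many children, repeat for every PID in the service's cgroup):

```
# Resolve the service's main PID, then count its open file descriptors:
pid=$(systemctl show -p MainPID test-infra.service | cut -d= -f2)
sudo ls "/proc/$pid/fd" | wc -l
```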
50 votes
3 answers
130932 views
How to check ulimit usage
Is there any way to check the usage of the ulimits for a given user? I know that you can change ulimits for a single process when you start it up, or for a single shell when running, but I want to be able to "monitor" how close a user is to hitting their limits. I am planning on writing a `bash` script that will report the current usage percentage back to statsd. Specifically, I want to track:

1. open files (ulimit -n)
2. max user processes (ulimit -u)
3. pending signals (ulimit -i)

What I want out is the percentage of usage (0-100).
hazmat (609 rep)
Sep 17, 2015, 05:26 PM • Last activity: Jul 4, 2025, 06:39 PM
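A sketch of the per-process part, assuming the open-files percentage is computed against each process's own soft limit as read from /proc/PID/limits (the output format is an invention here):

```
#!/bin/bash
# Report open-file usage as a percentage of the soft limit for every
# process owned by a user.
user=${1:-$USER}
for pid in $(pgrep -u "$user"); do
    # The soft limit is the 4th field of the 'Max open files' line.
    soft=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits" 2>/dev/null)
    [ -n "$soft" ] || continue              # process may have exited
    [ "$soft" = "unlimited" ] && continue
    used=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
    echo "$pid: $used/$soft ($((100 * used / soft))%)"
done
```

The same pattern covers 'Max processes' and 'Max pending signals' by changing the awk pattern, though those two are accounted per user rather than per process, so the usage side is closer to `ps --no-headers -u "$user" | wc -l`.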
0 votes
2 answers
2141 views
ulimit: what is the maximum core file size value
I want to set an unlimited core file size with ulimit inside Docker. I can't do this with

```
ulimit -c unlimited
```

because the parameters for Docker are set by a framework that doesn't accept the `unlimited` parameter, so I need to set it by passing a direct value. I found that 9223372036854775807 is the max value, but when I set it, I get:

```
ulimit: 9223372036854775807: limit out of range
```

How can I find the max value which I can pass to `ulimit -c`?
Biedrona (1 rep)
Jun 28, 2021, 09:34 AM • Last activity: Jun 25, 2025, 12:05 PM
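The shell scales the -c argument into blocks before handing it to setrlimit(2), which is presumably why the full 64-bit maximum is rejected: it overflows after scaling. Rather than hard-coding a divisor, you can probe for the largest accepted value (a sketch; without privileges this converges on the current hard limit instead, since the soft limit cannot be raised past it):

```
#!/bin/bash
# Binary-search the largest value 'ulimit -c' accepts.
# Each attempt runs in a subshell so failures never touch this shell.
lo=0 hi=9223372036854775807
while [ "$lo" -lt "$hi" ]; do
    mid=$(( hi - (hi - lo) / 2 ))    # upper midpoint, avoids overflow
    if ( ulimit -c "$mid" ) 2>/dev/null; then
        lo=$mid
    else
        hi=$(( mid - 1 ))
    fi
done
echo "largest accepted ulimit -c value: $lo"
```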
1 vote
1 answer
266 views
Why is ulimit -l (max locked memory) value 64?
I am trying to prepare a server running Oracle Linux 8.8 for an Oracle Database 19c installation. I installed `oracle-database-preinstall-19c.rpm` and noticed that `ulimit -l` is showing a value of 64, which is unexpected to me. It's my first time configuring a server, so I need some help understanding the `ulimit -l` value. I expect it to be the value set in /etc/security/limits.conf for all users, and the value from /etc/security/limits.d/oracle-database-preinstall-19c.conf for the oracle user, but I see 64 for all users using both an xrdp connection and local login. However, when I open a console, switch user using `su`, and run `ulimit -l`, I get the expected value that was set in the .conf files.

/etc/security/limits.d/oracle-database-preinstall-19c.conf contains the following lines for setting the memlock value:

oracle soft memlock 134217728
oracle hard memlock 134217728

/etc/security/limits.conf contains the following lines for setting the memlock value:

* soft memlock 134217728
* hard memlock 134217728

No other .conf file under /etc/security sets a memlock value. The server has 96GB of memory and x86_64 architecture.

sudo grep pam_limits /etc/pam.d/* shows:

/etc/pam.d/fingerprint-auth: session required pam_limits.so
/etc/pam.d/password-auth: session required pam_limits.so
/etc/pam.d/runuser: session required pam_limits.so
/etc/pam.d/system-auth: session required pam_limits.so

grep Huge /proc/meminfo shows:

AnonHugePages: 0 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB

Transparent HugePages are set to 'never'. I haven't found any memlock values in `systemctl show user-$(id -u oracle).slice`. I have also set the vm.hugetlb_shm_group value in /etc/sysctl.d/99-hugetlb-shm-group.conf, in case that matters. I haven't installed the Oracle database yet.

So can anyone help me? Why is the `ulimit -l` value set to 64? Is it okay? I can provide more environment settings and values if needed.
Анатолий (41 rep)
May 6, 2025, 05:14 PM • Last activity: Jun 8, 2025, 01:02 PM
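One pattern that fits these symptoms: xrdp and graphical logins get their limits from the systemd user manager rather than from a pam_limits-processed login, and the kernel's default RLIMIT_MEMLOCK is 64 KiB, while `su` opens a fresh PAM session that does apply limits.conf. A diagnostic-plus-fix sketch (the drop-in file name is an assumption):

```
# What does the user manager hand out?
systemctl show "user@$(id -u oracle).service" -p LimitMEMLOCK

# Raise it for systemd-managed sessions via a drop-in:
sudo mkdir -p /etc/systemd/system/user@.service.d
printf '[Service]\nLimitMEMLOCK=134217728\n' |
    sudo tee /etc/systemd/system/user@.service.d/memlock.conf
sudo systemctl daemon-reload    # takes effect for new sessions
```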
3 votes
1 answers
2176 views
Process specific ulimit still low after changes to soft and hard ulimits
I'm having trouble increasing the open-files ulimit (`ulimit -n`) for a particular process on a Debian 6 server. AFAIK I've done everything to change the server's hard and soft limits (`ulimit -n` shows 200000), but when I check the /proc/<PID>/limits file it's still showing the old limits:

Limit             Soft Limit    Hard Limit    Units
Max open files    1024          4096          files

The steps that I have already taken to permanently increase the ulimits are:

**Added to /etc/profile:**

# set ulimit -n permanently
ulimit -n 200000

**Added to /etc/security/limits.conf:**

* soft nofile 200000
* hard nofile 200000

**Uncommented this line in /etc/pam.d/su:**

session required pam_limits.so

**What am I missing?** Thank you!

Other (relevant?) info:
- The process is started in an init.d script with start-stop-daemon
- The /etc/security/limits.d/ directory is empty
UpTheCreek (898 rep)
Jun 9, 2014, 11:45 AM • Last activity: May 24, 2025, 02:00 PM
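Processes launched from init.d scripts never pass through a PAM login, so neither /etc/profile nor limits.conf applies to them; the usual fix is to raise the limit inside the init script itself, right before start-stop-daemon. A sketch against a generic script (the daemon name is a placeholder):

```
# Excerpt from /etc/init.d/mydaemon -- children inherit the raised limit.
case "$1" in
  start)
    ulimit -n 200000
    start-stop-daemon --start --quiet --exec /usr/sbin/mydaemon
    ;;
esac
```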
1 vote
1 answer
8957 views
RHEL 7: setting stack size to unlimited
I have some old code that needs the stack to not be limited to 8192 kB in order to run. I am used to doing this in /etc/security/limits.conf:

* stack hard unlimited
* stack soft unlimited

However, in RHEL 7.9, with a local account and a bash shell, when I do a `ulimit -s` it still responds with 8192, so my modification of limits.conf seems to have no effect. In my terminal window, if I do a `ulimit -s unlimited` first and then run my code, the code runs fine. What is the best way to set the stack size to unlimited, globally for all users, in RHEL 7.9? Am I missing something? Are ulimit and /etc/security/limits.conf not the same thing?
ron (8647 rep)
Dec 7, 2020, 06:55 PM • Last activity: May 3, 2025, 09:09 PM
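They are related but not the same thing: limits.conf is applied by pam_limits during a PAM login, while services and some session types inherit systemd's defaults instead. On RHEL 7 a global default can also be set in systemd's own configuration (a sketch; new sessions only, so re-login afterwards):

```
# /etc/systemd/system.conf, in the [Manager] section:
#   DefaultLimitSTACK=infinity
sudo systemctl daemon-reexec    # or reboot
```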
2 votes
3 answers
3909 views
Debian increase ulimit for Asterisk
I've been facing an issue with Asterisk 13.11.2 on Debian 8, where it crashes after reaching the limit of open files:

bridge_channel.c: Can't create pipe! Try increasing max file descriptors with ulimit -n

I have managed to increase the limit from 65536 to 150000 using /etc/security/limits.conf, where I added the following:

root soft nofile 150000
root hard nofile 150000
* soft nofile 150000
* hard nofile 150000

The result of `ulimit -n` is now 150000. But when I check the limits for the Asterisk process with `cat /proc/xxx/limits`, I still get the old limit:

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             31945                31945                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       31945                31945                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

How can I solve this?
TareKhoury (121 rep)
Oct 4, 2016, 08:54 AM • Last activity: Apr 9, 2025, 01:09 PM
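limits.conf entries only take effect for sessions opened after the change, and a daemon started at boot never sees them; the running process can be adjusted in place with prlimit from util-linux (a sketch, assuming a single Asterisk PID):

```
# Raise the open-files limit on the live process, then verify:
pid=$(pidof asterisk)
sudo prlimit --nofile=150000:150000 --pid "$pid"
grep 'Max open files' "/proc/$pid/limits"
```

For a permanent fix, the limit has to be raised in whatever starts Asterisk at boot (its init script), since the new value is lost on restart.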
0 votes
1 answer
61 views
The shell script run by crond keeps stopping after five minutes
As titled. I am using CentOS 6.10. Here is part of my shell script; it runs fine manually.

#!/bin/sh
backup_dir="/mnt/backup/website"
all_web="$(</mnt/backup/list.txt)"
for web in $all_web
do
IFS=',' read -ra arraydata <<< "$web"
tar zcvf $backup_dir/${arraydata[0]}_$(date +%Y%m%d).tar.gz ${arra...
The settings in /etc/crontab
0 22 * * * root sh /mnt/backup/backup.sh
I have tried to look at the script's logs, but nothing was found; the log just shows that it stopped:

/mnt/website/corey/public/files/111.pdf
/mnt/website/corey/public/files/222.pdf
/mnt/website/corey/public/files/333.pdf
/mnt/website/corey/pub

I looked through the limits, but I don't know which one would cause the issue.
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 65536
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
I inspected the system using these commands; all of them showed nothing.
dmesg | grep -i "killed process"
dmesg | grep -i "out of memory"
dmesg | grep -i "cron"
dmesg | grep -i "backup"
grep -i "sigkill" /var/log/messages
grep -i "limit" /var/log/messages
I didn't find any error message in /var/log/cron either.

**2025/03/31 Update:** I tried to execute the tar command line directly in crond, and it still got stopped after five minutes. Below is the relevant part of my /etc/crontab file:
50 3 * * * root /bin/tar zcvf /mnt/backup/website/corey_20250331.tar.gz /mnt/webdisk/corey/
ls -lh
-rw-r--r-- 1 root root 6.1G Mar 31 03:55 corey_20250331.tar.gz
**Workaround?** I moved the script from /etc/crontab to the folder /etc/cron.daily, and the script doesn't get killed. I have no idea what the difference is.

**2025/04/07 Update:** As mentioned above, the workaround of moving the script to /etc/cron.daily worked well, but when I tried to specify the execution time in /etc/crontab like this:
30 21 * * * root run-parts /etc/cron.daily
the script **GETS KILLED AFTER FIVE MINUTES AGAIN!** Any ideas would be really appreciated.
Corey (101 rep)
Mar 28, 2025, 09:35 AM • Last activity: Apr 7, 2025, 05:53 AM
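To narrow down what is killing the job, it can help to record the job's timing and exit status straight from the crontab entry (a diagnostic sketch; the log path is an assumption):

```
# /etc/crontab -- wrap the backup so start/end times and exit code are logged:
30 21 * * * root /bin/sh -c 'date; /mnt/backup/backup.sh; echo exit=$?; date' >>/var/log/backup-debug.log 2>&1
```

A 128+N exit code would identify the signal (for example, 143 = SIGTERM, 137 = SIGKILL).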
0 votes
1 answer
179 views
Is "ulimit -c 0" the same as "ulimit -c unlimited"?
Am I correct in assuming that `ulimit -c 0` sets the maximum size of core files created to an unlimited (unbounded, no-limit) value? I have seen references to `ulimit -c unlimited`, but `help ulimit` mentions neither 0 nor 'unlimited' in Bash on the Linux distributions I tried (Rocky Linux 8.10, 9.5, Ubuntu), or even in Bash 5.2.37 running on macOS. Under zsh 5.9 on macOS, `run-help ulimit` mentions unlimited and hard, but not 0.
PRouleau (273 rep)
Mar 25, 2025, 03:49 PM • Last activity: Mar 25, 2025, 07:37 PM
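The value is a size cap, so the two are opposites: 0 forbids core files entirely, while `unlimited` removes the cap. A quick demonstration in throwaway subshells (assumes kernel.core_pattern writes a file into the current directory; on systemd machines check `coredumpctl` instead):

```
( ulimit -c 0;         sh -c 'kill -SEGV $$' )   # child crashes, no core
ls core* 2>/dev/null
( ulimit -c unlimited; sh -c 'kill -SEGV $$' )   # child crashes, dumps core
ls core* 2>/dev/null
```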
262 votes
12 answers
426379 views
Limit memory usage for a single Linux process
I'm running pdftoppm to convert a user-provided PDF into a 300 DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300 DPI image of that size in memory, which for a 100-inch square page is 100*300 * 100*300 * 4 bytes per pixel = 3.5GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems. So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run: just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible? I don't think ulimit can be used for this, but is there a one-process equivalent?
Ben Dilts (2723 rep)
Feb 13, 2011, 08:00 AM • Last activity: Jan 28, 2025, 07:11 AM
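ulimit can in fact do this for a single child if it is set in a subshell, since limits are per-process and inherited only downward; a cgroup-backed cap via a transient systemd scope is an alternative. A sketch (the 500 MB figure mirrors the question; the pdftoppm arguments are illustrative):

```
# Cap only the child: restrict the address space, then exec the tool.
# 512000 KiB is roughly 500 MB; allocations beyond it will fail.
( ulimit -v 512000; exec pdftoppm -r 300 input.pdf output )

# Alternative on systemd systems: a transient scope with a memory cap.
systemd-run --scope -p MemoryMax=500M pdftoppm -r 300 input.pdf output
```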
1 vote
0 answers
59 views
Why are ulimit soft limits ignored?
I am doing some programming, and I want core dumps disabled by default and enabled only when the soft limit is raised using `ulimit -c unlimited`. I am trying to disable core dumps by default by having the following settings in the /etc/security/limits.conf file:
*               hard    core            unlimited
*               soft    core            0
After restarting and logging in, core dumps are still generated, despite the fact that running `ulimit -c` gives 0. Setting the hard limit using `ulimit -Hc 0` disables the core dumps, but I would like to use soft limits to disable them. I would like to know why the soft limit is being ignored in this case, and how to enforce it.
Geeoon Chung (11 rep)
Dec 24, 2024, 10:07 PM • Last activity: Dec 25, 2024, 02:37 AM
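One thing worth ruling out is a piped core handler: when /proc/sys/kernel/core_pattern starts with '|', the dump is handed to a helper such as systemd-coredump or apport, which applies its own policy on top of the process's soft limit. A diagnostic sketch:

```
# A leading '|' means a helper program handles the dump:
cat /proc/sys/kernel/core_pattern

# What limits does a fresh login actually carry?
grep 'Max core' /proc/$$/limits

# If systemd-coredump is the handler, its own config matters too:
cat /etc/systemd/coredump.conf 2>/dev/null
```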
0 votes
1 answer
1275 views
`ulimit` a shell, not a user
We're multiple people working on the same user account. I want to limit my activities so that I don't hang the computer or bother the other people; I want to cap my resource usage so I don't use all of it (because if a process can, it will). There are two approaches: priorities and strict limits. The problem is that Linux doesn't manage priorities very well, meaning you'll slow down other activities even with the worst priority, so that leaves strict limits. But I want to limit only my shell, not the account that many people use. Do you have suggestions regarding priorities or strict limits?
aac (145 rep)
Apr 15, 2022, 08:27 AM • Last activity: Dec 2, 2024, 10:56 AM
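For the strict-limit side, ulimit already behaves this way: run in your shell, it constrains only that shell and its descendants, not other logins of the same account. A cgroup-backed variant via a transient scope gives real CPU and memory caps (a sketch; the figures are placeholders):

```
# Cap only this shell and everything started from it:
ulimit -v 8388608    # ~8 GiB address-space limit, inherited by children

# Or start a new shell inside a transient scope with hard caps:
systemd-run --user --scope -p MemoryMax=8G -p CPUQuota=50% bash
```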
0 votes
0 answers
51 views
Ulimits ignored on openSUSE
I'm on **openSUSE Leap 15.6 x86_64, kernel 6.4.0-150600.23.25-default**. My open-files soft ulimit is set to a very low 1024. While this was never a big issue on Debian-based systems, I am having a hard time figuring out the openSUSE way to increase it permanently. `ulimit -Sn 10240` changes it temporarily, and I could probably put that somewhere in **bashrc**, but I would like a cleaner way. I'm wondering why this doesn't work:

**/etc/security/limits.conf:**

* soft nofile 65535
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20

**/etc/sysctl.conf** as well as everything in **/etc/sysctl.d/** is all commented out. **/etc/pam.d/login** contains:

session include common-session

and **/etc/pam.d/common-session** contains:

session required pam_limits.so

Despite that, the limits from my configuration are not applied, or are overridden somewhere else. The hard limit is 524288, so bigger than the 65535 above. Also:

cat /proc/1/cmdline
/usr/lib/systemd/systemd --switched-root --system --deserialize=32
nusch (61 rep)
Nov 10, 2024, 07:26 PM
25 votes
2 answers
50072 views
/etc/security/limits.conf not applied
I have an /etc/security/limits.conf that seems not to be applied:

a soft nofile 1048576 # default: 1024
a hard nofile 2097152
a soft noproc 262144 # default 128039
a hard noproc 524288

where `a` is my username. When I run `ulimit -Hn` and `ulimit -Sn`, they show:

4096
1024

There's only one other file in /etc/security/limits.d, and its content is:

scylla - core unlimited
scylla - memlock unlimited
scylla - nofile 200000
scylla - as unlimited
scylla - nproc 8096

I also tried appending those values to /etc/security/limits.conf and restarting, and doing this:

echo 'session required pam_limits.so' | sudo tee -a /etc/pam.d/common-session

but it didn't work. My OS is Ubuntu 17.04.
Kokizzu (10481 rep)
May 21, 2017, 08:25 AM • Last activity: Oct 20, 2024, 01:30 PM
87 votes
2 answers
136742 views
How to set ulimits on service with systemd?
How would you set a ulimit on a systemd service unit? This Stack Overflow question explains that systemd ignores system ulimits. What would the syntax look like to set the following ulimits?

ulimit -c
ulimit -v
ulimit -m

[Unit]
Description=Apache Solr
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
SOLR_INSTALL_DIR=/opt/solr
SOLR_ENV=/etc/default/solr.in.sh
RUNAS=solr
SOLR_PID_DIR="/var/solr"
SOLR_HOME="/opt/solr/server/solr"
LOG4J_PROPS="/var/solr/log4j.properties"
SOLR_LOGS_DIR="/opt/solr/server/logs"
SOLR_PORT="8389"
PIDFile=/var/solr/solr-8389.pid
ExecStart=/opt/solr/bin/solr start
ExecStatus=/opt/solr/bin/solr status
ExecStop=/opt/solr/bin/solr stop
Restart=on-failure
User=solr
SuccessExitStatus=143 0

[Install]
WantedBy=multi-user.target
spuder (18573 rep)
Feb 16, 2017, 10:41 PM • Last activity: Oct 7, 2024, 07:10 AM
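Per systemd.exec(5), each ulimit flag maps to a Limit* directive in the [Service] section: ulimit -c is LimitCORE, ulimit -v is LimitAS, and ulimit -m is LimitRSS. A sketch of the relevant lines (the values are examples):

```
[Service]
# ulimit -c  ->  LimitCORE
LimitCORE=infinity
# ulimit -v  ->  LimitAS (virtual address space)
LimitAS=16G
# ulimit -m  ->  LimitRSS (not enforced by modern kernels)
LimitRSS=8G
```

After editing the unit, run `systemctl daemon-reload` and restart the service.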
1 vote
1 answer
112 views
Difference between ulimit -a and /proc/$PID/limits
In Linux, there are user limits for accessing system resources. Shell built-in command `ulimit` can be used to see user limits for the current user. ```sh ulimit -a # soft limits ulimit -a -H # hard limits ``` Then I also can see per process soft/hard limit by looking at `/proc/$PID/limits`. ```sh #...
In Linux, there are per-user limits on access to system resources. The shell built-in command ulimit can be used to see the limits for the current user.
ulimit -a       # soft limits
ulimit -a -H    # hard limits
I can also see the per-process soft/hard limits by looking at /proc/$PID/limits.
# For example, the limits on firefox process:
PID=$(ps -A | grep firefox | awk '{print $1;}' | head -n1)
cat /proc/$PID/limits

# OR in short:
cat /proc/$(ps -A | grep firefox | awk '{print $1;}' | head -n1)/limits
I am wondering: what is the difference between these two outputs? I see /proc/$PID/limits having some limits larger than the `ulimit -a -H` (hard limits) output for the same resource. Can a process spawned by a user have limits that exceed the user's limits (ulimit)?

I tried to include my question in a question with similar goals: https://unix.stackexchange.com/review/suggested-edits/470961 . The edit was rejected.
Amith (313 rep)
Aug 26, 2024, 02:26 PM • Last activity: Sep 4, 2024, 03:00 PM
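In short: `ulimit` reports the limits of the shell it runs in, while /proc/$PID/limits reports that process's own values, which were inherited from whatever started it (often the session manager or systemd, not your login shell) and may since have been changed with prlimit(2) by a privileged parent. A comparison sketch:

```
pid=$(pgrep -o firefox)                        # oldest matching PID
grep 'Max open files' /proc/$$/limits          # this shell
grep 'Max open files' "/proc/$pid/limits"      # the target process

# prlimit can show (or, with privilege, change) another process's limits:
prlimit --nofile --pid "$pid"
```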
1 vote
1 answer
3349 views
systemd MemoryMax by percentage not working?
I am trying to configure my .service file to limit how much memory a given service can use up before being terminated, by percentage of system memory (10% as an upper limit in this case): [Unit] Description=MQTT Loop After=radioLoop.service [Service] Type=simple Environment=PYTHONIOENCODING=UTF-8 Ex...
I am trying to configure my .service file to limit how much memory a given service can use before being terminated, by percentage of system memory (10% as an upper limit in this case):

[Unit]
Description=MQTT Loop
After=radioLoop.service

[Service]
Type=simple
Environment=PYTHONIOENCODING=UTF-8
ExecStart=/usr/bin/python3 -u /opt/pilot/mqttLoop.py
WorkingDirectory=/opt/pilot
StandardOutput=journal
Restart=on-failure
User=pilot
MemoryMax=10%

[Install]
WantedBy=multi-user.target

The line of interest is the MemoryMax line, which I've configured based on my understanding of the systemd docs. My version of systemd is:

systemd 241 (241)
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid

But it does not work:

# ps -m -o lwp,rss,pmem,pcpu,unit -u pilot
  LWP   RSS %MEM %CPU UNIT
    - 76244 30.3  8.5 mqttLoop.service
 1232     -    -  7.0 mqttLoop.service
 1249     -    -  1.7 mqttLoop.service
 1254     -    -  0.2 mqttLoop.service

I'm getting well above 10% (30% there), and then it does not restart the process. I've tried exchanging MemoryMax for MemoryLimit (the older variant of the same setting), but it has no effect. What am I missing?

**UPDATE:** I have determined that the systemd settings for memory accounting are correctly turned on:

# grep -i "memory" system.conf
#DefaultMemoryAccounting=yes

But I note the following in my kernel configuration: [screenshot: kernel configuration menu with the cgroup Memory controller option unselected]. Will it be enough if I rebuild my kernel with the Memory Controller option selected?
Travis Griggs (1681 rep)
Aug 27, 2019, 06:09 PM • Last activity: Aug 27, 2024, 06:57 AM
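A way to check whether the memory controller is actually available and accounting is to ask both systemd and the kernel (a diagnostic sketch; MemoryCurrent reporting '[not set]' or a huge sentinel suggests the controller is missing or accounting is off, which matches a kernel built without the cgroup memory controller):

```
systemctl show mqttLoop.service \
    -p MemoryAccounting -p MemoryCurrent -p MemoryMax

# Is the memory controller present and enabled in this kernel?
grep memory /proc/cgroups
```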
1 vote
1 answer
520 views
Setting `ulimit -l unlimited` for non-root user
I am playing with llama.cpp on OpenBSD-current. The program has a --mlock flag to force loading the model into physical memory (i.e. disable mmap). However, running as a non-root user gives:

warning: failed to mlock 730202112-byte buffer (after previously locking 0 bytes): Resource temporarily unavailable

despite the system having 64GiB of memory installed. I suspect this is a ulimit problem, since "Maximum size that may be locked into memory" is only 87381 kB, and I cannot increase it as a non-root user.
$ ulimit -a
Maximum size of core files created                         (kB, -c) unlimited
Maximum size of a process’s data segment                   (kB, -d) 4194304
Maximum size of files created by the shell                 (kB, -f) unlimited
Maximum size that may be locked into memory                (kB, -l) 87381
Maximum resident set size                                  (kB, -m) 64901696
Maximum number of open file descriptors                        (-n) 128
Maximum stack size                                         (kB, -s) 8192
Maximum amount of CPU time in seconds                 (seconds, -t) unlimited
Maximum number of processes available to current user          (-u) 1310

$ ulimit -l unlimited
ulimit: Permission denied when changing resource of type 'Maximum size that may be locked into memory'
The obvious approach is to `su -` and `ulimit -l unlimited`, but then I would have to run llama.cpp as root, which is probably not a good idea. How can I increase `ulimit -l` for a non-root user?
nalzok (431 rep)
Aug 21, 2024, 05:19 PM • Last activity: Aug 22, 2024, 12:05 AM
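On OpenBSD, per-user limits come from login classes in /etc/login.conf rather than from PAM, so the non-root route is a class with a higher memorylocked cap. A sketch (the class name is a placeholder; re-login afterwards):

```
# /etc/login.conf -- add a class such as:
#   llama:\
#       :memorylocked-cur=infinity:\
#       :memorylocked-max=infinity:\
#       :tc=default:

doas usermod -L llama youruser    # assign the login class
doas cap_mkdb /etc/login.conf     # only if /etc/login.conf.db exists
```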
1 vote
0 answers
119 views
"ulimit -c unlimited" fails when done "sudo user -c shell-some-script"
I have on a Linux server with SuSE Linux Enterprise 15 SP5 the following situation: I have two unpriv users "sisis" and "jenkins" which are both allowed (based on entries in /etc/security/limits.conf) to run the shell build-in command "ulimit -c unlimited": ``` linux:~ # su - jenkins jenkins@linux:~...
I have the following situation on a Linux server with SuSE Linux Enterprise 15 SP5: there are two unprivileged users, "sisis" and "jenkins", which are both allowed (based on entries in /etc/security/limits.conf) to run the shell built-in command "ulimit -c unlimited":
linux:~ # su - jenkins

jenkins@linux:~> id
uid=1003(jenkins) gid=100(users) Gruppen=100(users)
jenkins@linux:~> ulimit -c unlimited ; echo $?
0
exit

linux:~ # su - sisis
sisis@linux:~> id
uid=900118(sisis) gid=900118(sisis) Gruppen=900118(sisis),403(userdevl)
sisis@linux:~> ulimit -c unlimited ; echo $?
0
exit
But when the user "jenkins" wants to do this in a shell script run as user "sisis", it gives an error:
linux:~ # su - jenkins
jenkins@linux:~> cat /tmp/ulimit.sh
#!/bin/sh
export LANG=C
ulimit -c unlimited
jenkins@linux:~> sudo -u sisis -g sisis /tmp/ulimit.sh
/tmp/ulimit.sh: line 3: ulimit: core file size: cannot modify limit: Operation not permitted
The background of this silly question is that, for test automation, jobs are started remotely from another server (a Jenkins Continuous Integration server) as:
ssh jenkins@linux bash CATserver_start.sh
+ sudo -u sisis /opt/lib/sisis/catserver/etc/S99catserver.testdb start
i.e. the "ulimit -c unlimited" command is issued in the above shell script /opt/lib/sisis/catserver/etc/S99catserver.testdb, which in production is started as user "sisis", but in test automation is started from the Jenkins CI server via SSH as user "jenkins". Any ideas?

Additional information to answer the question in the comment: the real command is as given:
ssh jenkins@linux bash CATserver_start.sh
+ sudo -u sisis /opt/lib/sisis/catserver/etc/S99catserver.testdb start
The examples given with the script /tmp/ulimit.sh were only to simplify the problem. The script /opt/lib/sisis/catserver/etc/S99catserver.testdb uses #!/bin/bash as its shebang, and after `su - sisis` the user also gets a bash:
linux:~ # su - sisis
sisis@linux:~> ps
  PID TTY          TIME CMD
21146 pts/1    00:00:00 bash

sisis@linux:~> ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14448
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Matthias Apitz (31 rep)
Aug 13, 2024, 05:43 AM • Last activity: Aug 13, 2024, 10:55 AM
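A process can raise its soft limit only up to its hard limit, and sudo inherits the hard limits of the calling session unless its PAM stack resets them; comparing the hard limit seen through each path usually pinpoints where it collapses (a diagnostic sketch; whether /etc/pam.d/sudo includes pam_limits varies by distribution):

```
# What hard core-file limit does each path start with?
ssh jenkins@linux 'ulimit -Hc'                        # plain SSH session
ssh jenkins@linux "sudo -u sisis sh -c 'ulimit -Hc'"  # through sudo

# Does sudo's PAM stack apply limits.conf at all?
grep pam_limits /etc/pam.d/sudo
```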