Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
1
answers
2394
views
How to verify the current open files of a specific service
On our RHEL 7.6 server we have the following systemd service:
/etc/systemd/system/test-infra.service
The value of LimitNOFILE is:
systemctl show test-infra.service | grep LimitNOFILE
LimitNOFILE=65535
So I assume the maximum number of open files for this service is 65535.
Is it possible to print the current number of open files used by this service?
In other words, how can I show how many files this service is using?
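One way to sketch this (assuming the service has a single main process — a fuller answer would walk every PID in the service's cgroup): each process's open descriptors are visible under /proc/&lt;pid&gt;/fd, so counting the entries there gives the current usage.

```shell
# Sketch: count the open file descriptors of a process, e.g. the main
# PID reported by `systemctl show -p MainPID test-infra.service`.
count_open_files() {
    ls /proc/"$1"/fd 2>/dev/null | wc -l
}

# For the service from the question, something like:
#   pid=$(systemctl show -p MainPID test-infra.service | cut -d= -f2)
#   count_open_files "$pid"
```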
yael
(13936 rep)
Jan 5, 2021, 11:58 AM
• Last activity: Jul 6, 2025, 04:06 AM
4
votes
2
answers
436
views
While working on a file in a GUI application, how to quickly open the parent directory of a file in the file browser?
I am coming from macOS, and on that system I am addicted to one specific feature: while working with (almost) any file, in (almost) any application, I can click on the title bar of the window showing the file and open the parent folder of that file in the file browser.
On macOS this works for almost any application, because it is based on an interplay between the operating system and (most of) the applications written for it.
I would like to have something similar on a Linux system (I am using Mint, with the Cinnamon desktop environment).
I am aware that I may need to write a script, and part of the solution may be described in this U&L question/answer. What is still missing is a way to get the path to the file.
So, the minimal functionality I need is a (quick) way to get the file path of the document I am viewing, while I am in a GUI application with a document window open.
Thanks for any insight.
Fabio
(535 rep)
Sep 24, 2021, 11:22 AM
• Last activity: May 12, 2025, 12:26 AM
35
votes
6
answers
52478
views
Read "/proc" to know if a process has opened a port
I need to know if a process with a given PID has opened a port, without using external commands. I must therefore use the /proc filesystem.
I can read the /proc/$PID/net/tcp file, for example, and get information about TCP ports opened by the process. However, for a multithreaded process, each /proc/$PID/task/$TID directory will also contain a net/tcp file. My question is: do I need to go over all the threads' net/tcp files, or will ports opened by threads also appear in the process's net/tcp file?
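As a cross-check, the sockets a process actually holds can be enumerated from its fd table (which is shared by all threads of a process): each socket fd's /proc symlink reads socket:[inode], and those inodes can be matched against the inode column of net/tcp. A sketch:

```shell
# List the socket inodes held by a PID, for matching against the
# "inode" column of /proc/$PID/net/tcp. The fd table is shared by
# all threads, so one pass over /proc/$PID/fd covers them all.
socket_inodes() {
    for fd in /proc/"$1"/fd/*; do
        link=$(readlink "$fd" 2>/dev/null) || continue
        case $link in
            'socket:['*']')
                inode=${link#socket:[}
                echo "${inode%]}"
                ;;
        esac
    done
}
```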
rmonjo
(453 rep)
Aug 29, 2015, 01:11 PM
• Last activity: May 5, 2025, 04:35 PM
72
votes
1
answers
74516
views
How do I monitor opened files of a process in realtime?
I know I can view the open files of a process using lsof *at that moment in time* on my Linux machine. However, a process can open, alter and close a file so quickly that I won't be able to see it when monitoring with standard shell scripting (e.g. watch), as explained in ["monitor open process files on linux (real-time)"](https://serverfault.com/questions/219323/monitor-open-process-files-on-linux-real-time).
So I think I'm looking for a simple way of auditing a process, to see what it has done over time. It would be great if it were also possible to see what network connections it made (or tried to make), and to have the audit start before the process has had a chance to run unaudited.
Ideally, I would like to do this:
$ audit-lsof /path/to/executable
4530.848254 OPEN  read  /etc/myconfig
4530.848260 OPEN  write /var/log/mylog.log
4540.345986 OPEN  read  /home/gert/.ssh/id_rsa 1.2.3.4:80
[...]
4541.023485 CLOSE       /home/gert/.ssh/id_rsa 1.2.3.4:80  <- would have missed this when polling
Would this be possible using strace and some flags, so as not to see every system call?
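On the strace idea: yes, in spirit. A sketch of such a wrapper (the name audit-lsof is the question's own invention, and the syscall list here is a minimal guess that would need tuning):

```shell
# Hypothetical "audit-lsof": launch the program under strace so the
# audit starts before the program runs, follow children and threads
# (-f), print epoch timestamps with microseconds (-ttt), and log only
# file- and network-related syscalls to audit.log.
audit_lsof() {
    prog=$1; shift
    strace -f -ttt \
        -e trace=open,openat,close,connect,accept4,bind \
        -o audit.log \
        "$prog" "$@"
}
```

Usage would be `audit_lsof /path/to/executable`, then inspect audit.log.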
gertvdijk
(14517 rep)
Dec 19, 2012, 01:16 PM
• Last activity: Feb 20, 2025, 10:40 AM
22
votes
5
answers
43205
views
Largest allowed maximum number of open files in Linux
Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)?
I'm thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat memory and be slow, but I'm interested in adverse effects if the limit is configured much larger than necessary (e.g. memory consumed by just the configuration).
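One observable data point for this: the kernel reports how many file handles are currently allocated, versus the configured ceiling, in /proc/sys/fs/file-nr, so the real usage under an oversized limit can be watched directly:

```shell
# /proc/sys/fs/file-nr has three fields: allocated handles,
# allocated-but-unused handles (0 on modern kernels), and the
# current fs.file-max ceiling.
read allocated unused max < /proc/sys/fs/file-nr
echo "allocated=$allocated max=$max"
```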
Sampo
(397 rep)
Jan 1, 2017, 10:19 PM
• Last activity: Aug 11, 2024, 02:29 AM
5
votes
3
answers
2183
views
What is the referent of a file descriptor?
My understanding is that a _file descriptor_ is an integer which is a key in the kernel's per-process mapping to objects such as open()ed files, pipes, sockets, etc.
Is there a proper, short, and specific name for “open files/sockets/pipes/...”, the referents of file descriptors?
Calling them “files” leads to confusion with unopened files stored in the file system. Simply referring to file descriptors does not adequately describe the semantics (e.g. copying *the integer* between processes is useless).
Consulting [The Open Group Base Specifications](http://pubs.opengroup.org/onlinepubs/009695399/) and my own system's manpages leads me to the conclusion that the referent of a file descriptor is an _object_ and when it is specifically an open file it is, well, an _open file_. Is there a more specific term than _object_?
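For what it's worth, POSIX's own name for the referent is the _open file description_ — it is what dup(2) duplicates a reference to. A shell sketch of the consequence: two descriptors sharing one description share a single file offset.

```shell
# fd 3 and fd 4 are two file descriptors for ONE open file
# description, so they share the file offset: the second write
# lands after the first instead of overwriting it.
exec 3> /tmp/ofd-demo.txt
exec 4>&3               # dup: same open file description
echo first  >&3
echo second >&4
cat /tmp/ofd-demo.txt   # -> first / second on separate lines
exec 3>&- 4>&-
```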
Kevin Reid
(633 rep)
Apr 15, 2011, 11:52 AM
• Last activity: Apr 29, 2024, 07:47 PM
2
votes
2
answers
836
views
Are there "non-standard" streams in Linux/Unix?
The so-called "standard streams" in Linux are stdin, stdout, and stderr. They must be called "standard" for a reason. Are there non-standard streams? Are those non-standard streams fundamentally treated differently by the kernel?
user56834
(137 rep)
Oct 25, 2021, 10:57 AM
• Last activity: Nov 29, 2023, 03:03 AM
172
votes
4
answers
402968
views
Find and remove large files that are open but have been deleted
How does one find large files that have been deleted but are still open in an application? How can one remove such a file, even though a process has it open?
The situation is that we are running a process that is filling up a log file at a terrific rate. I know the reason, and I can fix it. Until then, I would like to rm or empty the log file without shutting down the process.
Simply doing rm output.log removes only the reference to the file, but it continues to occupy space on disk until the process is terminated. Worse: after rm-ing it, I now have no way to find where the file is or how big it is! Is there any way to find the file, and possibly empty it, even though it is still open in another process?
I specifically refer to Linux-based operating systems such as Debian or RHEL.
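A sketch of the usual trick (assuming you can find the writing process's PID and fd number, e.g. via ls -l /proc/&lt;pid&gt;/fd): the still-open file remains reachable through its proc fd symlink, and truncating through that symlink frees the space without touching the process.

```shell
# Demo in a single shell: open a log, delete it, then reclaim its
# space through /proc while it is still open.
exec 3> /tmp/output.log
echo "lots of data" >&3
rm /tmp/output.log        # name gone, space still in use
ls -l /proc/$$/fd/3       # symlink shows "/tmp/output.log (deleted)"
: > /proc/$$/fd/3         # truncate the still-open file, freeing the space
exec 3>&-
```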
dotancohen
(16493 rep)
Mar 20, 2013, 07:31 AM
• Last activity: Nov 12, 2023, 03:33 PM
30
votes
4
answers
7677
views
How can we know who's at the other end of a pseudo-terminal device?
If I do a:
echo foo > /dev/pts/12
Some process will read that foo\n from its file descriptor to the master side.
Is there a way to find out what that process is (or what those processes are)?
Or in other words, how could I find out which xterm/sshd/script/screen/tmux/expect/socat... is at the other end of /dev/pts/12?
lsof /dev/ptmx will tell me the processes that have file descriptors on the master side of any pty. A process itself can use ptsname() (the TIOCGPTN ioctl) to find out the slave device based on its own fd to the master side, so I could use:
gdb --batch --pid "$the_pid" -ex "print ptsname($the_fd)"
for each of the pid/fd pairs returned by lsof to build up that mapping, but is there a more direct, reliable and less intrusive way to get that information?
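On sufficiently recent kernels there may be: the fdinfo entry for a ptmx descriptor exposes a tty-index field (an assumption worth verifying on the target kernel), which is exactly the pts number — no gdb needed. A sketch:

```shell
# Map every master fd open on /dev/ptmx to its pts number, reading
# only /proc (requires a kernel that exposes "tty-index" in fdinfo).
for fdinfo in /proc/[0-9]*/fdinfo/*; do
    fd=${fdinfo##*/}
    pid=${fdinfo#/proc/}; pid=${pid%%/*}
    [ "$(readlink "/proc/$pid/fd/$fd" 2>/dev/null)" = /dev/ptmx ] || continue
    idx=$(awk '/^tty-index:/ {print $2}' "$fdinfo" 2>/dev/null)
    if [ -n "$idx" ]; then
        echo "pid $pid fd $fd -> /dev/pts/$idx"
    fi
done
```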
Stéphane Chazelas
(579282 rep)
Jun 11, 2014, 08:12 PM
• Last activity: Oct 19, 2023, 07:30 AM
9
votes
2
answers
13844
views
How to increase the maximum number of open files on Fedora?
I want to increase the maximum number of open files in Fedora 27,
since the default settings are too low:
$ ulimit -Sn
1024
$ ulimit -Hn
4096
First, I ensure that the system-wide settings are high enough, by adding the following lines to /etc/sysctl.conf:
fs.inotify.max_user_watches=524288
fs.file-max=100000
Then, I set the user-specific settings by adding the following lines to /etc/security/limits.conf (root must be added separately, since the wildcard matches all users _except_ root):
* soft nofile 100000
* hard nofile 100000
root soft nofile 100000
root hard nofile 100000
To ensure that the above settings are actually loaded, I have added the following line to /etc/pam.d/login:
session required pam_limits.so
After rebooting my computer and logging in, I still get the same results for ulimit -Sn and ulimit -Hn. Only the system-wide setting has been set:
$ cat /proc/sys/fs/file-max
100000
I'm a bit at a loss as to what to do... Anybody have any ideas how I might diagnose/solve this?
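When diagnosing this kind of mismatch, it helps to check which PAM stack the session actually went through (graphical and systemd-managed sessions may never touch /etc/pam.d/login, and systemd applies its own defaults) and what the live limit really is. A diagnostic sketch, assuming the usual Fedora paths:

```shell
# Which PAM stacks actually load pam_limits?
grep -rl pam_limits /etc/pam.d/ 2>/dev/null || true
# Does systemd impose its own default nofile limit on sessions?
grep -i nofile /etc/systemd/system.conf /etc/systemd/user.conf 2>/dev/null || true
# What does the current session really have?
grep 'Max open files' /proc/self/limits
```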
Wouter Beek
(241 rep)
Mar 4, 2018, 05:22 PM
• Last activity: Jul 27, 2023, 07:54 PM
11
votes
1
answers
13313
views
ulimit vs file-max
Could someone explain the limit on open files in Linux? The problem is that one of my applications is reporting "Too many open files".
I have:
$ ulimit -n
1024
but:
$ cat /proc/sys/fs/file-max
6578523
and:
$ cat /proc/sys/fs/file-nr
1536
So I already have 1536 > 1024. What is ulimit -n then? This is very confusing.
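The two numbers answer different questions: fs.file-max is a system-wide ceiling on file handles across all processes, while ulimit -n is a per-process cap — so 1536 handles system-wide is perfectly consistent with a 1024-per-process limit. "Too many open files" (EMFILE) refers to the per-process cap, which can be inspected directly:

```shell
# The per-process cap that EMFILE ("Too many open files") refers to:
ulimit -n
grep 'Max open files' /proc/self/limits
# Count this process's own descriptors against that cap:
ls /proc/$$/fd | wc -l
```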
xaxa
(249 rep)
Jun 3, 2018, 08:20 AM
• Last activity: Jul 7, 2023, 03:02 AM
39
votes
7
answers
199936
views
Best way to free disk space from deleted files that are held open
Hi, I have many files that have been deleted, but for some reason the disk space associated with them cannot be reused until I explicitly kill the process holding them open:
$ lsof /tmp/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cron 1623 root 5u REG 0,21 0 395919638 /tmp/tmpfPagTZ4 (deleted)
The disk space taken up by the deleted file above causes problems, such as this error when trying to use the tab key to autocomplete a file path:
bash: cannot create temp file for here-document: No space left on device
But after I run kill -9 1623, the space for that PID is freed and I no longer get the error.
My questions are:
- Why is this space not immediately freed when the file is first deleted?
- What is the best way to get back the file space associated with the deleted files?
Please also let me know of any incorrect terminology I have used, or any other relevant and pertinent information regarding this situation.
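On the first question: the space belongs to the inode, and the inode survives until the last open descriptor on it is closed — unlink only removes a name. The still-open deleted files can be located without lsof by walking /proc; a sketch:

```shell
# List every still-open-but-deleted file, tagged with the /proc fd
# path that keeps it alive (fds of other users' processes are
# silently skipped).
find /proc/[0-9]*/fd -type l 2>/dev/null | while read -r fd; do
    case $(readlink "$fd" 2>/dev/null) in
        *' (deleted)') echo "$fd -> $(readlink "$fd")" ;;
    esac
done
```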
BryanK
(803 rep)
Jan 30, 2015, 06:56 PM
• Last activity: May 22, 2023, 07:12 AM
144
votes
8
answers
242240
views
How do I find out which processes are preventing unmounting of a device?
Sometimes, I would like to unmount a **usb device** with umount /run/media/theDrive, but I get a drive is busy error.
How do I find out which processes or programs are accessing the device?
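The usual tools here are fuser -vm /run/media/theDrive and lsof +f -- /run/media/theDrive; when neither is installed, the same information can be scraped from /proc. A sketch covering open fds only (a fuller version would also check each process's cwd, exe and memory maps):

```shell
# Print the PIDs holding open files under a given mount point.
pids_using() {
    mnt=$1
    for fd in /proc/[0-9]*/fd/*; do
        case $(readlink "$fd" 2>/dev/null) in
            "$mnt"/*) pid=${fd#/proc/}; echo "${pid%%/*}" ;;
        esac
    done | sort -u
}

pids_using /run/media/theDrive
```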
Stefan
(26020 rep)
Oct 14, 2010, 04:23 PM
• Last activity: May 8, 2023, 08:30 PM
10
votes
4
answers
6607
views
when clicking open folder the system launches VSCode
Hi everybody, I want to start by saying thank you for your time!
I have a problem and don't really know how to solve it. When I download something and click on the arrow in Firefox to see my downloads, and then click on the folder icon next to the application name, it should open the folder where the file is saved (I think something like moz/.tmp). Instead, when I click on the folder it opens VSCode. What did I do wrong?
Even after "extraction completed successfully", when I click "Show the Files" it opens VSCode.
Running Linux Lite 4.8 x86_64
DanyCode
(281 rep)
Apr 6, 2020, 12:51 PM
• Last activity: Apr 14, 2023, 01:56 PM
46
votes
5
answers
58361
views
How do I tell a script to wait for a process to start accepting requests on a port?
I need a command that will wait for a process to start accepting requests on a specific port.
Is there something in Linux that does that?
while (checkAlive -host localhost -port 13000 == false)
do some waiting
...
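There is no single standard command for this, but a small polling loop is easy to sketch. This version uses bash's /dev/tcp redirection (where netcat is installed, `nc -z host port` would do the same probe):

```shell
# Poll until something accepts connections on host:port, giving up
# after a timeout in seconds. Returns 0 on success, 1 on timeout.
wait_for_port() {
    host=$1 port=$2 timeout=${3:-30}
    while [ "$timeout" -gt 0 ]; do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0      # connection succeeded; subshell closed it
        fi
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

# e.g.: wait_for_port localhost 13000 60 && echo "up"
```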
Will
(603 rep)
Dec 31, 2010, 03:53 PM
• Last activity: Jan 18, 2023, 12:30 AM
6
votes
3
answers
757
views
Moving an open file to a different device
I have an application running that is generating a large (~200GB) output file, and takes about 35 hours to run (currently I'm about 12 hours in). The application just opens the file once then keeps it open as it is writing until it is complete; the application also does a lot of random access writes to the file (i.e. not sequential writes).
Right now the file is being saved to my local hard drive but I just decided that when it's done, I'm going to move it to a different device instead (a network drive, NTFS mounted via SMB).
To save time instead of moving the file later, is there some way I can suspend the program and somehow move the current partially complete file to the other device, do some tricks, and resume the program so it is now using the new location?
I'm pretty much positive that the answer is no but I thought I'd ask, sometimes there are surprising tricks out there...
Jason C
(1585 rep)
Jun 10, 2014, 10:11 PM
• Last activity: Aug 20, 2022, 10:29 PM
0
votes
1
answers
646
views
How to check what service is using a particular configuration file?
I'm new to Linux, and I need to know if there is a way to check which service or program is using a particular configuration file.
A different deployment on the server is failing, and the error message is:
file /opt/deployment/dev/deploy.cfg busy
This means that the file deploy.cfg is busy.
1. How may I tell who or what is keeping the file busy?
2. How would I make the file "not busy"?
paty CB
(1 rep)
Aug 18, 2022, 05:14 PM
• Last activity: Aug 18, 2022, 06:17 PM
1
votes
1
answers
1434
views
Number of open file descriptors won't change when I set them in CentOS 6.3?
uname -a
Linux lab.testing.com 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux
ulimit -Hn
1024
ulimit -Sn
1024
ulimit -n
1024
I opened /etc/sysctl.conf in vi and added this at the bottom:
fs.file-max = 65536
I opened /etc/security/limits.conf in vi and added this at the bottom:
* soft open files 8192
* hard open files 10240
testuser soft open files 8192
testuser hard open files 10240
Whether I log in as root or as testuser, and whether I reboot or cold-boot the box, I still get:
ulimit -Hn
1024
ulimit -Sn
1024
ulimit -n
1024
I can set the open file descriptors to 10240 at the command line but that only sets it for the session and then it's lost when I logout/login and/or reboot/cold-boot:
[root@lab ~]# ulimit -n 10240
[root@lab ~]# ulimit -Hn
10240
[root@lab ~]# ulimit -Sn
10240
[root@lab ~]# ulimit -n
10240
What am I missing?
And in case someone wants to see the initial values:
[root@itnm ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 46464
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 46464
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
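(An aside on the limits.conf lines above: the third field of /etc/security/limits.conf is a single item keyword, and the keyword for the open-file limit is nofile — "open files" with a space is unlikely to parse. A corrected version of those lines would presumably read:

```
* soft nofile 8192
* hard nofile 10240
testuser soft nofile 8192
testuser hard nofile 10240
```
)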
SirDino
(11 rep)
Dec 9, 2014, 05:08 AM
• Last activity: Jul 6, 2022, 10:00 PM
88
votes
5
answers
248220
views
How to list the open file descriptors (and the files they refer to) in my current bash session
I am running in an interactive bash session. I have created some file descriptors, using exec, and I would like to list what is the current status of my bash session.
Is there a way to list the currently open file descriptors?
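A minimal sketch of the usual approach: the shell's own descriptors live under /proc/$$/fd, and exec-created ones show up there by number (note that listing the directory itself briefly adds a descriptor for the ls process's pipe):

```shell
exec 7> /tmp/fd-demo.txt   # create a descriptor with exec, as in the question
ls -l /proc/$$/fd          # 0, 1, 2 ... plus 7 -> /tmp/fd-demo.txt
exec 7>&-                  # close it again
```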
blueFast
(1378 rep)
Dec 28, 2016, 04:50 AM
• Last activity: Jun 8, 2022, 04:50 PM
7
votes
2
answers
4124
views
If I open the same file twice in Okular, switch to the existing window
I have always been confused about why the file manager in Linux cannot stop applications from opening a single file twice at the same time.
Specifically, I want to stop the PDF reader Okular from opening the file A.pdf again when I have already opened it. I want to get a warning, or simply be shown the already-open copy of A.pdf.
More generally, I would like this to happen with any application, not just Okular. I want to make the document management behavior in Linux the same as in Windows.
lovelyzlf
(113 rep)
Nov 20, 2013, 04:22 AM
• Last activity: May 23, 2022, 05:38 PM
Showing page 1 of 20 total questions