Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
73
votes
8
answers
39917
views
monitor files (à la tail -f) in an entire directory (even new ones)
I normally watch many logs in a directory by doing `tail -f directory/*`.
The problem is that if a new log is created after that, it will not show on the screen (because `*` was already expanded).
Is there a way to monitor every file in a directory, even those that are created after the process has started?
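A common workaround (a sketch, not taken from the answers below) is to poll the directory and restart `tail` whenever the file set changes; `inotifywait` would avoid the polling loop but needs a separate package:

```shell
# Poll the directory and restart tail whenever the set of files changes.
snapshot() { ls "$1"; }            # the current file set, one name per line

follow_dir() {
    dir=$1
    prev=$(snapshot "$dir")
    tail -F "$dir"/* &
    pid=$!
    while sleep 1; do
        cur=$(snapshot "$dir")
        if [ "$cur" != "$prev" ]; then
            kill "$pid" 2>/dev/null
            tail -n 0 -F "$dir"/* &   # -n 0: don't repeat already-shown content
            pid=$!
            prev=$cur
        fi
    done
}
```

`follow_dir directory` keeps one `tail` running and swaps it out within about a second of a file appearing or disappearing.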
santiagozky
(833 rep)
May 31, 2012, 10:48 AM
• Last activity: Jul 16, 2025, 11:52 PM
8
votes
1
answers
963
views
What is the `-0` flag in `tail`?
The Logstash documentation says about the input file plugin:
> Stream events from files, normally by tailing them in a manner similar to tail -0F but optionally reading them from the beginning.
The tail man page says:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
an absent option argument means 'descriptor'
-F same as --follow=name --retry
But what exactly is the `-0` flag in `tail`?
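For context (not stated in the question): `-0` is the historical "leading count" spelling of `-n 0`, so `tail -0F` is equivalent to `tail -n 0 -F` — print none of the existing content, then follow the file by name. The `-n 0` part is easy to see without following:

```shell
# -0 is the old "leading count" form of -n 0: start at the current end of the
# file, printing none of the existing lines (hence -0F = -n 0 -F).
seq 5 > /tmp/demo.$$
tail -n 2 /tmp/demo.$$     # prints the last two lines: 4 and 5
tail -n 0 /tmp/demo.$$     # prints nothing
rm -f /tmp/demo.$$
```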
glacier
(391 rep)
Jun 24, 2025, 12:06 PM
• Last activity: Jun 24, 2025, 01:05 PM
3
votes
2
answers
3892
views
How to write Docker logs to a file in real time (à la `tail -f`)
I have a Docker container that outputs logs to stdout/stderr, which can be viewed using:
docker logs -f $LOGS_CONTAINER_ID
I also added 'sed', which puts the container id on each line:
docker logs -f $LOGS_CONTAINER_ID | sed "s/^/$LOGS_CONTAINER_ID /"
If I run it, I get something like:
container112 error 10:20:10 problem
container112 info 10:20:09 not problem
container112 error 10:20:01 problem
where "container112" is $LOGS_CONTAINER_ID.
SO FAR SO GOOD. Now I want to send the output of the above command to a file (log.out),
so I wrote the following command:
docker logs -f $LOGS_CONTAINER_ID | sed "s/^/$LOGS_CONTAINER_ID /" >> log.out
What happens is that it writes the logs to log.out, but it doesn't get new logs (if I open a new session and run
tail -f log.out
, I don't get output).
So I also tried:
tail -f $(docker logs -f $LOGS_CONTAINER_ID | sed "s/^/$LOGS_CONTAINER_ID /") >> log.out
But it also didn't work.
What is the problem?
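A likely cause (an assumption, not confirmed in the question): `sed` switches to block buffering when its output is a file, so lines sit in a ~4 KiB buffer instead of reaching log.out. A sketch of the usual fix, using GNU `sed -u`:

```shell
# sed block-buffers when stdout is not a terminal; -u (GNU sed) flushes after
# every line so the output file is updated in real time.
prefix_lines() {
    sed -u "s/^/$1 /"
}

# Intended use (not run here; requires Docker):
#   docker logs -f "$LOGS_CONTAINER_ID" 2>&1 \
#       | prefix_lines "$LOGS_CONTAINER_ID" >> log.out

# Stand-in producer for `docker logs -f`, to show the prefixing:
printf 'error 10:20:10 problem\ninfo 10:20:09 not problem\n' \
    | prefix_lines container112
```

Note the `2>&1`: `docker logs` writes to both stdout and stderr, and without merging them only part of the stream goes through `sed`.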
Yagel
(143 rep)
Mar 3, 2019, 08:09 PM
• Last activity: Jun 21, 2025, 08:03 PM
326
votes
25
answers
281182
views
How to have tail -f show colored output
I'd like to be able to tail the output of a server log file that has messages like INFO, SEVERE, etc., and if it's SEVERE, show the line in red; if it's INFO, in green.
What kind of alias can I set up for a `tail` command that would help me do this?
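Not an alias as such, but a small function in the same spirit works (a sketch; for a single highlight color, `tail -f log | grep --color=always -E 'SEVERE|$'` is a known one-liner alternative):

```shell
# Wrap SEVERE lines in red and INFO lines in green with ANSI escapes; -u keeps
# sed line-buffered so colors appear as lines arrive.
red=$(printf '\033[31m'); green=$(printf '\033[32m'); reset=$(printf '\033[0m')

colorize() {
    sed -u -e "s/.*SEVERE.*/$red&$reset/" -e "s/.*INFO.*/$green&$reset/"
}

# Intended use (put the function in ~/.bashrc to use it like an alias):
#   tail -f server.log | colorize
```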
Amir Afghani
(7373 rep)
Mar 1, 2011, 07:13 PM
• Last activity: May 9, 2025, 02:50 PM
14
votes
1
answers
725
views
Why does `tail -c 4097 /dev/zero` exit immediately instead of blocking?
I observe that, on Ubuntu 24.04.2 with
coreutils
version 9.4-3ubuntu6
, running:
$ tail -c 4097 /dev/zero
$ echo $?
0
exits immediately with a status code of 0. I expected the command to block indefinitely since /dev/zero is an endless stream.
In contrast, the following commands behave as expected (i.e., they block until interrupted):
$ tail -c 4096 /dev/zero
^C
$ echo $?
130
$ cat /dev/zero | tail -c 4097
^C
$ echo $?
130
## Debug attempt
The strace output shows differences between the two invocations:
| strace tail -c 4096 /dev/zero | strace tail -c 4097 /dev/zero |
|---------------------------------------------------------------------|---------------------------------------------------------------------|
| … | … |
| close(3) = 0 | close(3) = 0 |
| openat(AT_FDCWD, "/dev/zero", O_RDONLY) = 3 | openat(AT_FDCWD, "/dev/zero", O_RDONLY) = 3 |
| fstat(3, {st_mode=S_IFCHR\|0666, st_rdev=makedev(0x1, 0x5), …}) = 0 | fstat(3, {st_mode=S_IFCHR\|0666, st_rdev=makedev(0x1, 0x5), …}) = 0 |
| lseek(3, -4096, SEEK_END) = 0 | lseek(3, -4097, SEEK_END) = 0 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | read(3, "\0\0\0\0\0\0\0\0\0\\…, 4097) = 4097 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | fstat(1, {st_mode=S_IFIFO\|0600, st_size=0, …}) = 0 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | write(1, "\0\0\0\0\0\0\0\0\0\\…, 4096 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | close(3) = 0 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | write(1, "\0", 1) = 1 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | close(1) = 0 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | close(2) = 0 |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | exit_group(0) = ? |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | +++ exited with 0 +++ |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | |
| read(3, "\0\0\0\0\0\0\0\0\00"…, 8192) = 8192 | |
| … | |
Isidro Arias
(161 rep)
Apr 16, 2025, 10:02 AM
• Last activity: Apr 17, 2025, 08:25 AM
1
votes
4
answers
2183
views
How to show a real-time count of the number of lines added per second to files?
I'm working with a directory of multiple log files.
I'm trying to show a running count, updated periodically, of the number of lines written to each log file.
I need to see the number of lines written per period (e.g. per second) to each file, indexed by file name.
For bonus points, the lines should be sorted in decreasing order of the number of lines written per second.
This doesn't have to use something like
ncurses
(e.g. like top); it's ok if it simply writes updates every period (e.g. every second).
How do I accomplish this task?
---
I've included my solution as an answer, but I suspect there's a better way to do this... Hoping to learn a better way!
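The asker's own answer is not shown on this page; one portable sketch is to diff two per-file line-count snapshots taken one interval apart:

```shell
# Snapshot per-file line counts, sleep, snapshot again, then print the delta
# per file, sorted descending (assumes log files that only grow).
snapshot_counts() {
    for f in "$1"/*.log; do
        printf '%s %s\n' "$f" "$(wc -l < "$f" | tr -d ' ')"
    done
}

rates() {                          # rates DIR [INTERVAL]
    dir=$1; interval=${2:-1}
    tmp=${TMPDIR:-/tmp}/rates.$$
    snapshot_counts "$dir" > "$tmp.before"
    sleep "$interval"
    snapshot_counts "$dir" > "$tmp.after"
    join "$tmp.before" "$tmp.after" | awk '{print $1, $3 - $2}' | sort -k2,2nr
    rm -f "$tmp.before" "$tmp.after"
}

# Intended use: while true; do rates /var/log/myapp 1; echo ---; done
```

`join` relies on both snapshots listing files in the same (glob-sorted) order.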
rinogo
(303 rep)
Sep 25, 2020, 05:17 PM
• Last activity: Mar 9, 2025, 01:30 PM
15
votes
4
answers
27502
views
What's an alternative to tail -f that has convenient scrolling?
I'm usually inside GNU Screen or `tmux`, and that doesn't give me great scrolling functionality. Is there an alternative to `tail -f` that allows me to quickly scroll up?
A tool that is to `tail -f` what `most` is to `less` and `more`.
*[This question](https://unix.stackexchange.com/questions/21808/tail-f-equivalent) is related but far from specific. I'm really looking for something that lets me scroll.*
the
(920 rep)
Jul 3, 2013, 03:47 PM
• Last activity: Mar 8, 2025, 07:24 AM
108
votes
7
answers
186695
views
Combining tail && journalctl
I'm tailing logs of my own app and Postgres.
tail -f /tmp/myapp.log /var/log/postgresql/postgresql.main.log
I need to include pgpool's logs. It used to be syslog, but now it is in `journalctl`.
Is there a way to tie `tail -f` && `journalctl -f` together?
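One common pattern (a sketch, not from this page): run both followers as background jobs of a single subshell, so they interleave on one terminal and a single Ctrl-C stops the pair:

```shell
# Start each follower in the background and wait; the subshell makes the pair
# one job, so interrupting it terminates both.
follow_both() {
    ( sh -c "$1" & sh -c "$2" & wait )
}

# Intended use (any journalctl filtering for pgpool left to taste):
#   follow_both \
#       'tail -f /tmp/myapp.log /var/log/postgresql/postgresql.main.log' \
#       'journalctl -f'
```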
bikey
(1081 rep)
Oct 7, 2016, 02:25 PM
• Last activity: Jan 13, 2025, 02:40 PM
0
votes
1
answers
87
views
Does "less" have "--retry" option like "tail"?
I'm using **less** to continuously trace Squid log file (as well as UFW log) with this command:
less --follow-name -K +F /var/log/squid/access.log
And at the time of rotation of the Squid log, **less** quits. I guess this happens because when the old file is renamed, the new one is not created immediately but with a delay; in the case of the UFW log file this doesn't happen, and **less** successfully switches to the new file.
So is there a method or option to make **less** wait for a new file to appear?
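`less` has no `--retry`, but a thin wrapper can supply the waiting (a sketch; the viewer is parameterized only so the wait logic is easy to test):

```shell
# Wait until the file exists, then hand it to the viewer (default: less in
# follow mode). Re-run in a loop to survive repeated rotations.
wait_and_view() {               # wait_and_view FILE [VIEWER...]
    file=$1; shift
    until [ -e "$file" ]; do sleep 1; done
    if [ $# -eq 0 ]; then
        less --follow-name -K +F "$file"
    else
        "$@" "$file"
    fi
}

# Intended use:
#   while :; do wait_and_view /var/log/squid/access.log; done
```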
EliiO
(3 rep)
Nov 17, 2024, 01:10 AM
• Last activity: Nov 19, 2024, 09:32 AM
66
votes
6
answers
111917
views
Best way to follow a log and execute a command when some text appears in the log
I have a server log that outputs a specific line of text into its log file when the server is up. I want to execute a command once the server is up, and hence do something like the following:
tail -f /path/to/serverLog | grep "server is up" ...(now, e.g., wget on server)?
What is the best way to do this?
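The usual shape (a sketch): let `grep -q` end the pipeline on the first match, then run the action. Note that `tail -f` itself only exits on its next write after `grep` is gone (SIGPIPE); GNU `tail --pid` can tighten that up:

```shell
# Block until one line of the followed output matches, then return success.
wait_for_line() {               # wait_for_line 'PRODUCER CMD' PATTERN
    sh -c "$1" | grep -q "$2"
}

# Intended use:
#   wait_for_line 'tail -f /path/to/serverLog' 'server is up' \
#       && wget http://server/
```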
jonderry
(2109 rep)
Apr 26, 2011, 09:26 PM
• Last activity: Oct 16, 2024, 10:00 PM
70
votes
5
answers
118000
views
Grep from the end of a file to the beginning
I have a file with about 30.000.000 lines (Radius Accounting) and I need to find the last match of a given pattern.
The command:
tac accounting.log | grep $pattern
gives what I need, but it's too slow because the OS has to first read the whole file and then send to the pipe.
So, I need something fast that can read the file from the last line to the first.
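For the record, `tac` does not read the whole file before writing — it reads backwards in blocks — so pairing it with `grep -m 1` stops as soon as the last match is found (a sketch):

```shell
# tac streams the file from the end; -m 1 makes grep quit at the first match
# it sees, i.e. the last match in the file, so only the file's tail is read.
last_match() { tac "$1" | grep -m 1 "$2"; }

# Demo with a small stand-in for accounting.log:
printf 'Start u1\nStop u1\nStart u2\n' > /tmp/acct.$$
last_match /tmp/acct.$$ '^Start'     # prints the last "Start" line: Start u2
rm -f /tmp/acct.$$
```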
tijuco
(831 rep)
Feb 2, 2014, 03:38 PM
• Last activity: Oct 6, 2024, 10:46 AM
2
votes
2
answers
2648
views
tail not working on mac terminal
Due to my lack of experience with scripting languages (shame for a Mac user), I have referred to several sources:
link seemed resolved with
ls *.extension | xargs -n 1 tail -n +2
This didn't work for me, even after adding `> merged.txt`
at the end,
nor the following:
for f in *.txt
do
tail -n +2 $f >> /path/to/some/dir/with/files/file_name
done
I also tried `sed -e '1d' $FILE`
in place of the tail command. It didn't work.
`tail -n +2 file_name.extension`, `cat LIN_1994-11_0100.txt | tail -n +2`, and
`awk 'FNR != 1' *.extension`
also have no effect on the file.
I am uncertain if this has anything to do with the current issue.
Or whether the link is related to the issue.
If anyone could find the reason for this problem, or a way out of it, I would be majorly grateful. I have transferred this issue from another community to here, to receive more insight.
dia
(133 rep)
Jan 16, 2018, 06:17 PM
• Last activity: Oct 6, 2024, 04:02 AM
2
votes
2
answers
4934
views
How to continuously tail a log, find all files (sed), and display (cat) the found files
How do I continuously `tail -f` a log, extract file paths from it (with `sed`), and display (`cat`) the found files?
## example data in audit logs.
tail -f /var/log/httpd/modsec_audit.log | sed 's/[^\/]*/\./;s/].*$//g'
### output
./apache/20180508/20180508-1428/20180508-142802-WvH6QgoeANwAAMwsFZ4AAAAF
./apache/20180508/20180508-1428/20180508-142803-WvH6QgoeANwAAMwtFfcAAAAG
./apache/20180508/20180508-1428/20180508-142803-WvH6QwoeANwAAMwuFlUAAAAH
./apache/20180508/20180508-1513/20180508-151357-WvIFBQoeANwAAMwnE@4AAAAA
./apache/20180508/20180508-1513/20180508-151357-WvIFBQoeANwAAMwoFD8AAAAB
./apache/20180508/20180508-1516/20180508-151608-WvIFiAoeANwAAMz1FSwAAAAA
./apache/20180508/20180508-1516/20180508-151609-WvIFiQoeANwAAMz2FYIAAAAB
./apache/20180508/20180508-1516/20180508-151611-WvIFiwoeANwAAMz3FeEAAAAC
./apache/20180508/20180508-1516/20180508-151611-WvIFiwoeANwAAMz4Fj4AAAAD
./apache/20180508/20180508-2112/20180508-211205-WvJY9QoeANwAAM1MFCoAAAAA
### works with echo
echo "./apache/20180508/20180508-1428/20180508-142802-WvH6QgoeANwAAMwsFZ4AAAAF" | sed 's/[^\/]*/\./;s/].*$//g' | awk '{print $0}' | xargs cat
### works with cat
cat /var/log/httpd/modsec_audit.log | sed 's/[^\/]*/\./;s/].*$//g' | awk '{print $0}' | xargs cat
### does not work with tail...
tail -f /var/log/httpd/modsec_audit.log | sed 's/[^\/]*/\./;s/].*$//g' | awk '{print $0}' | xargs cat
I assume the tailing does not work because the script never terminates and `sed` is still buffering its output until the script terminates.
***Is there a way to make this work, continuously?***
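That assumption is essentially right (a sketch of the fix): `sed` block-buffers into the pipe, so `xargs` sees nothing until about 4 KiB accumulate. GNU `sed -u` flushes per line, and `xargs -n 1` then runs `cat` as each path arrives:

```shell
# -u keeps sed line-buffered so each extracted path reaches xargs immediately;
# the sed expression is taken unchanged from the question.
extract_paths() {
    sed -u 's/[^\/]*/\./;s/].*$//g'
}

# Intended use:
#   tail -f /var/log/httpd/modsec_audit.log | extract_paths | xargs -n 1 cat
```

(The `awk '{print $0}'` stage in the question is a no-op and can be dropped.)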
Artistan
(463 rep)
May 9, 2018, 03:28 AM
• Last activity: Sep 12, 2024, 12:35 PM
14
votes
1
answers
656
views
"Tail -f" on symlink that points to a file on another drive has interval stops, but not when tailing the original file
I'm encountering a strange behaviour with
tail -f
, when tailing a symlink on an Ubuntu machine.
Apparently, tail
uses the default update interval of 1 second. If tailing the original file, it seems to update immediately.
lrwxrwxrwx 1 root root 49 Sep 9 20:28 firewall_log-01.log -> /var/data/log/firewall_log-01.log
Doing
$ tail -f firewall_log-01.log
"pauses" every second (roughly, haven't measured exactly), which fits the defaults for tail:
-s, --sleep-interval=N
with -f, sleep for approximately N seconds (default 1.0)
When doing
$ tail -f /var/data/log/firewall_log-01.log
the output gets updated immediately, apparently not concerned with the default setting for interval.
When I run tail -s 0.01
:
$ tail -f -s 0.01 firewall_log-01.log
I basically have what I need, but I'm still confused by why this even happens.
I can do a workaround by adding an alias to ~/.bashrc
, but this seems like a kind of dirty workaround.
Is there any logical reason for tail to behave like this?
Whiskeyjack1101
(141 rep)
Sep 11, 2024, 02:05 PM
• Last activity: Sep 12, 2024, 08:02 AM
16
votes
5
answers
11430
views
How do I read the last lines of a huge log file?
I have a log of 55GB in size.
I tried:
cat logfile.log | tail
But this approach takes a lot of time. Is there any way to read huge files faster or any other approach?
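The `cat` is the bottleneck: it forces a full sequential read of all 55 GB, while `tail FILE` can `lseek()` to the end of a regular file and read only the last block. A quick sketch:

```shell
# Given a file name (not a pipe), tail seeks to the end and reads a few KiB,
# regardless of how large the file is.
seq 100000 > /tmp/big.$$
tail -n 3 /tmp/big.$$        # effectively instant, even for a 55 GB file
rm -f /tmp/big.$$
```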
Yi Qiang Ji
(162 rep)
Feb 20, 2024, 03:52 PM
• Last activity: Jul 6, 2024, 01:22 PM
24
votes
6
answers
26640
views
Show filename at the beginning of each line when tailing multiple files at once?
When tailing multiple files at once as shown below, is there any way to show the file name at the start of each line?
tail -f one.log two.log
current output
==> one.log two.log <==
contents of one.log here...
contents of two.log here..
Looking for something like
one.log: contents of one.log here...
one.log: contents of one.log here...
two.log: contents of two.log here...
two.log: contents of two.log here...
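`tail` already prints `==> name <==` headers whenever it switches files; an `awk` stage can fold the current header into a per-line prefix (a sketch; the blank separator lines `tail` emits get prefixed too):

```shell
# Remember the file name from each "==> name <==" header and prepend it to
# every following line.
prefix_by_file() {
    awk '/^==> .* <==$/ { f = substr($0, 5, length($0) - 8); next }
         { print f ": " $0 }'
}

# Intended use:
#   tail -f one.log two.log | prefix_by_file
```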
mtk
(28468 rep)
Apr 13, 2015, 09:44 AM
• Last activity: Jun 24, 2024, 08:07 PM
0
votes
1
answers
40
views
Run a few identical processes and then kill one of them
Problem:
Need to run several processes; specifically, `tail -f log.log >> wrile_log.log` processes to collect logs.
The commands can be run at different times for the same log file. This is not a problem, but:
How do I kill one of these processes without interrupting the others?
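One approach (a sketch; file names are placeholders): save each follower's PID as you start it, then signal exactly that process:

```shell
# Each tail gets its own PID file, so any one copy can be stopped later
# without touching the others.
touch /tmp/demo.log
tail -f /tmp/demo.log >> /tmp/collected.log &
echo $! > /tmp/tail1.pid           # remember which process this one is

# ... later, stop only that follower:
kill "$(cat /tmp/tail1.pid)"
rm -f /tmp/demo.log /tmp/collected.log /tmp/tail1.pid
```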
Alex Alex
(3 rep)
Jun 18, 2024, 05:44 PM
• Last activity: Jun 18, 2024, 06:37 PM
1
votes
1
answers
206
views
How to know when a following tail moves from old file to the new one
So, I am working with tail -F (i.e. tail --follow=name --retry). Now, it works as advertised, and when a rollover occurs it will move to the new file.
This is great and helps me keep track of my logs. The issue is I want to know when tail moves from old file to the new one.
**The situation is as follows:**
I have a huge log file that takes 15 minutes to process. Now let's say that at minute 5, a rollover occurs. Tail has the file descriptor open, uses it to complete the tailing process, and then moves to the new one.
Now I keep a history of which file and which line I have last logged. I do this by increasing the number by the lines processed (it's a multiprocess program, but what else can I do?).
The issue is that after the rollover, the new file starts from line 0, but my line_number has already been increased to 5 million. So, for this new file which has, say, 100 logs, I'll store the line number as 5,000,100.
I used watchdog to find when the file rolls over and reset the line number to 0, but if the rollover happens during the initial run, say at the 5-minute mark of the 15-minute run, then I still end up with a 3-million+ number.
Since line_number is used to continue from where I last left in case the program died accidentally, this can mean loss of data.
Just to note:
I am running this command from python (CPython)
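One shell-side sketch (not from the question's Python setup): GNU `tail -F` announces the switch on stderr with "has appeared; following new file", so merging stderr into the stream gives a marker the consumer can reset its line counter on:

```shell
# Merge tail's stderr notice into the stream and restart per-file numbering
# whenever the rotation marker goes by.
number_with_reset() {
    awk '/has appeared; following new file/ { n = 0; next }
         { n++; print n ": " $0 }'
}

# Intended use:
#   tail -F /path/to/huge.log 2>&1 | number_with_reset
```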
user607688
(13 rep)
Apr 8, 2024, 11:21 AM
• Last activity: Apr 8, 2024, 12:10 PM
1
votes
3
answers
310
views
How to tail continuously a log file that is being deleted and recreated?
I need to extract information from a log file that is deleted and recreated every time a program runs. After detecting that the file exists (again), I would like to
tail
it for a certain regexp.
The regexp will be matched a few times, but the result is always the same and I want to print it just once and after that go back to monitoring when the file is re-created.
I looked at ways of detecting file creation. One way would be via inotifywait
, but that requires installing a separate package.
Perhaps a simpler way is to take advantage that tail prints to stderr
when a file that is being tailed is deleted and created:
tail: '/path/to/debug.log' has become inaccessible: No such file or directory
tail: '/path/to/debug.log' has appeared; following new file
So I applied this solution which is working:
debug_file="/path/to/debug.log"
while true; do
# Monitor the log file until the 'new file' message appears
( tail -F "$debug_file" 2>&1 & ) | grep -q "has appeared; following new file"
# After the new file message appears, switch to monitoring for the regexp
tail -F "$debug_file" | while read -r line; do
id=$(echo "$line" | sed -n 's/.* etc \([0-9]\+\),.*/\1/p')
if [ -n "$id" ]; then
echo "ID: $id"
break # Exit the inner loop after the first match
fi
done
done
But I don't like that this solution starts 2 different tail
processes. Is there a way to achieve the same result, but using just 1 tail
process?
And then switch 'modes', start by looking for file creation, then look for the regexp and once that is found go back to 'standby' mode waiting for the log file to be deleted and created again.
Is inotifywait a more elegant solution? Ideally I would like a solution I could port easily to Windows CMD.
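A single-`tail` variant (a sketch, reusing the question's regexp): keep one `tail -F`, merge its stderr, and put the mode switching in one `awk` state machine — the rotation notice arms the matcher, and the first matching line prints the ID and disarms it until the next recreation:

```shell
# One tail, one awk: the stderr notice sets "armed"; the first line containing
# " etc <number>," then prints the ID and clears "armed" again.
extract_ids() {
    awk '/has appeared; following new file/ { armed = 1; next }
         armed && match($0, / etc [0-9]+,/) {
             print "ID: " substr($0, RSTART + 5, RLENGTH - 6)
             armed = 0
         }'
}

# Intended use:
#   tail -F "$debug_file" 2>&1 | extract_ids
```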
user2066480
(173 rep)
Mar 19, 2024, 05:27 PM
• Last activity: Mar 20, 2024, 12:16 PM
0
votes
0
answers
514
views
How can I save the output of tail to my clipboard, or somewhere, over an SSH session where there is only a unidirectional connection and the server is limited?
I have to report some logs from some of the servers in our infrastructure, on which I have limited permissions. Every time, I copy 1000 lines of logs and paste them into Slack; I can't scp from the server to my local computer. I can only scp from my local machine to grab a file and put it in a directory on my laptop. I cannot install any packages on this server, as I don't have that kind of permission.
The command I'm using is this:
tail -n 1000 ./mylogfile.log
My process is this:
1. SSH into the server
2. Run that tail command
3. Copy 1000 lines of logs
4. Paste and report it in Slack
What I want to achieve:
1. Run that command
2. Somehow get that log file on my local system as a file or in my clipboard (preferred), which is hard because of limitations I have
My limitations are:
1. I can only see this server through VPN, the server can't see me, and I can't do SCP from the server.
2. I can't install any new packages
In summary, I want to get that 1000 lines of log through an easy clean way without copying despite the limitations I have.
Thank you everyone.
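Since the laptop can reach the server, the simplest route (a sketch; host and path are placeholders) is to run `tail` remotely and capture its stdout locally — no scp, no server-side tooling:

```shell
# Run a command through a transport and keep its stdout on the local side;
# the transport is a parameter only so the plumbing is testable without a
# real server (intentional word-splitting of $transport).
capture_remote() {                 # capture_remote "TRANSPORT" "REMOTE CMD"
    transport=$1; shift
    $transport "$@"
}

# Real use (placeholders):
#   capture_remote "ssh user@server" "tail -n 1000 ./mylogfile.log" > local.log
# Or straight into the macOS clipboard:
#   capture_remote "ssh user@server" "tail -n 1000 ./mylogfile.log" | pbcopy
```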
Ilgar
(3 rep)
Mar 4, 2024, 11:33 AM
• Last activity: Mar 15, 2024, 10:29 AM