
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

12 votes
4 answers
20270 views
How to redirect stderr in a variable but keep stdout in the console
My goal is to call a command, get stderr in a variable, but keep stdout (and only stdout) on the screen. Yep, that's the opposite of what most people do :) For the moment, the best I have is: #!/bin/bash pull=$(sudo ./pull "${TAG}" 2>&1) pull_code=$? if [[ ! "${pull_code}" -eq 0 ]]; then error "[${pull_code}] ${pull}" exit "${E_PULL_FAILED}" fi echo "${pull}" But this can only show stdout in case of success, and only after the command finishes. I want stdout live, is this possible? **EDIT** Thanks to @sebasth, and with the help of https://unix.stackexchange.com/questions/430161/redirect-stderr-and-stdout-to-different-variables-without-temporary-files , I wrote this: #!/bin/bash { sudo ./pull "${TAG}" 2> /dev/fd/3 pull_code=$? if [[ ! "${pull_code}" -eq 0 ]]; then echo "[${pull_code}] $(cat<&3)" exit "${E_PULL_FAILED}" fi } 3<
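A common fd-juggling sketch for this (the `mycmd` helper below is a hypothetical stand-in for `sudo ./pull`): park the real stdout on fd 3 so the command substitution captures only stderr, while stdout keeps streaming to the terminal live.

```shell
#!/bin/sh
# Hypothetical stand-in for the real command; writes to both streams.
mycmd() { echo "progress"; echo "oops" >&2; }

# 3>&1 saves the real stdout on fd 3 for the duration of the group;
# inside $(...), 2>&1 sends stderr into the capture and 1>&3 restores
# stdout, so only stderr ends up in the variable.
{ err=$(mycmd 2>&1 1>&3); } 3>&1
status=$?

printf 'captured stderr: %s\n' "$err"
```

`$?` right after the assignment is the command's own exit status, matching the `pull_code=$?` pattern in the question.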
Doubidou (223 rep)
Oct 9, 2018, 08:46 AM • Last activity: Jun 10, 2025, 06:30 PM
17 votes
1 answer
8354 views
`docker logs foo | less` isn't searchable or scrollable but `docker logs foo 2>&1 | less` is
Using either docker logs foo | less or docker logs foo 2>&1 | less produces readable text, but only with the stderr redirect can one scroll or type /somepattern and obtain matches. Without it, searching gives "Nothing to search (press RETURN)" and a column of ~'s. Given that stderr and stdout aren't the same, why does less show them the same until I start doing something in less? This may be some weird multi-window vim thing that I just don't understand. Thoughts?
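The likely mechanism (hedging, since it depends on how `docker logs` multiplexes the container's streams): stderr bypasses the pipe entirely and goes straight to the terminal, so `less` only ever receives the stdout half. A self-contained sketch with a stand-in command:

```shell
# gen writes one line to each stream, like a chatty logger:
gen() { echo "from stdout"; echo "from stderr" >&2; }

gen 2>/dev/null | cat   # only stdout travels through the pipe
gen 2>&1 | cat          # 2>&1 merges stderr into the piped stdout
```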
MagicWindow (311 rep)
Mar 29, 2016, 09:01 PM • Last activity: May 23, 2025, 08:37 AM
4 votes
3 answers
3660 views
Redirect stdout of a Windows command line program under wine
I run a Windows command line program (that I can not make accessible) with wine. It apparently writes something to stdout and I'm trying to capture that output, but I can't redirect it. No matter if I redirect stdout or stderr to a file, the program output is still printed on the console and is not written to the file. When I redirect stderr, the wine output goes away, but the program's output is still printed on screen. wine program.exe > out # File is empty, program output printed on screen wine program.exe 2> out # Wine output gets redirected to file, program output still printed on screen If I redirect both, the output is neither printed on screen nor written to the file. **Edit:** On Windows, behavior is similar, but when redirecting both, everything is still printed on screen. Here are some examples with the complete output. $ wine program.exe fixme:winediag:start_process Wine Staging 1.9.23 is a testing version containing experimental patches. fixme:winediag:start_process Please mention your exact version when filing bug reports on winehq.org. output by program.exe fixme:msvcrt:__clean_type_info_names_internal (0x100aaa54) stub When I try to redirect the output, this happens: $ wine program.exe > out 2>&1 $ cat out fixme:winediag:start_process Wine Staging 1.9.23 is a testing version containing experimental patches. fixme:winediag:start_process Please mention your exact version when filing bug reports on winehq.org. fixme:msvcrt:__clean_type_info_names_internal (0x100aaa54) stub I.e., the program's console output is completely missing. The program still works fine and writes some files it's supposed to write. As a check, I did the same with pngcrush, and I get what I'd expect. Without redirection: $ wine pngcrush_1_8_10_w32.exe test.png out.png fixme:winediag:start_process Wine Staging 1.9.23 is a testing version containing experimental patches. fixme:winediag:start_process Please mention your exact version when filing bug reports on winehq.org.
| pngcrush-1.8.10 | Copyright (C) 1998-2002, 2006-2016 Glenn Randers-Pehrson | Portions Copyright (C) 2005 Greg Roelofs | This is a free, open-source program. Permission is irrevocably | granted to everyone to use this version of pngcrush without | payment of any fee. | Executable name is pngcrush_1_8_10_w32.exe | It was built with bundled libpng-1.6.26 | and is running with bundled libpng-1.6.26 | Copyright (C) 1998-2004, 2006-2016 Glenn Randers-Pehrson, | Copyright (C) 1996, 1997 Andreas Dilger, | Copyright (C) 1995, Guy Eric Schalnat, Group 42 Inc., | and bundled zlib-1.2.8.1-motley, Copyright (C) 1995 (or later), | Jean-loup Gailly and Mark Adler, | and using "clock()". | It was compiled with gcc version 4.8.0 20121015 (experimental). Recompressing IDAT chunks in test.png to out.png Total length of data found in critical chunks = 431830 Critical chunk length, method 1 (ws 15 fm 0 zl 4 zs 0) = 495979 Critical chunk length, method 2 (ws 15 fm 1 zl 4 zs 0) > 495979 Critical chunk length, method 3 (ws 15 fm 5 zl 4 zs 1) = 495354 Critical chunk length, method 6 (ws 15 fm 5 zl 9 zs 0) = 457709 Critical chunk length, method 9 (ws 15 fm 5 zl 2 zs 2) > 457709 Critical chunk length, method 10 (ws 15 fm 5 zl 9 zs 1) = 451813 Best pngcrush method = 10 (ws 15 fm 5 zl 9 zs 1) = 451813 (4.63% critical chunk increase) (4.63% filesize increase) CPU time decode 4.407583, encode 17.094248, other 4294967296.000000, total 17.180143 sec With redirection: $ wine pngcrush_1_8_10_w32.exe test.png out.png > out 2>&1 $ cat out fixme:winediag:start_process Wine Staging 1.9.23 is a testing version containing experimental patches. fixme:winediag:start_process Please mention your exact version when filing bug reports on winehq.org. | pngcrush-1.8.10 | Copyright (C) 1998-2002, 2006-2016 Glenn Randers-Pehrson | Portions Copyright (C) 2005 Greg Roelofs | This is a free, open-source program. 
Permission is irrevocably | granted to everyone to use this version of pngcrush without | payment of any fee. | Executable name is pngcrush_1_8_10_w32.exe | It was built with bundled libpng-1.6.26 | and is running with bundled libpng-1.6.26 | Copyright (C) 1998-2004, 2006-2016 Glenn Randers-Pehrson, | Copyright (C) 1996, 1997 Andreas Dilger, | Copyright (C) 1995, Guy Eric Schalnat, Group 42 Inc., | and bundled zlib-1.2.8.1-motley, Copyright (C) 1995 (or later), | Jean-loup Gailly and Mark Adler, | and using "clock()". | It was compiled with gcc version 4.8.0 20121015 (experimental). Recompressing IDAT chunks in test.png to out.png Total length of data found in critical chunks = 431830 Critical chunk length, method 1 (ws 15 fm 0 zl 4 zs 0) = 495979 Critical chunk length, method 2 (ws 15 fm 1 zl 4 zs 0) > 495979 Critical chunk length, method 3 (ws 15 fm 5 zl 4 zs 1) = 495354 Critical chunk length, method 6 (ws 15 fm 5 zl 9 zs 0) = 457709 Critical chunk length, method 9 (ws 15 fm 5 zl 2 zs 2) > 457709 Critical chunk length, method 10 (ws 15 fm 5 zl 9 zs 1) = 451813 Best pngcrush method = 10 (ws 15 fm 5 zl 9 zs 1) = 451813 (4.63% critical chunk increase) (4.63% filesize increase) CPU time decode 4.339310, encode 17.137527, other 4.294083, total 17.182100 sec What could be the cause of that not working for the other program? wine stdout stderr io-redirection
Thomas W. (323 rep)
Dec 8, 2016, 12:48 PM • Last activity: May 20, 2025, 09:40 AM
13 votes
5 answers
10564 views
Dynamically trim stdout line width in Bash
Lately, I have been experimenting with the ps command, and sometimes long paths wrap to the next line (or two) and make it hard to read. I want to pipe the ps output into another program to limit the output to x number of characters. Here is what I have so far, but it doesn't work quite right: ps aux | cut -c1-$(stty size | cut -d' ' -f2) $(stty size | cut -d' ' -f2) evaluates to 167, but doesn't seem to be valid input for cut. Is there a way to get this type of syntax to work in bash?
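A hedged variant that sidesteps the substitution problem: compute the width once up front, falling back to 80 columns when no terminal is available (pipes, cron):

```shell
# $COLUMNS if the shell exports it, else tput, else a fixed fallback.
width=${COLUMNS:-$(tput cols 2>/dev/null || echo 80)}
ps aux | cut -c1-"$width"
```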
lentils (133 rep)
Apr 18, 2014, 04:16 AM • Last activity: Apr 20, 2025, 05:00 PM
1 vote
0 answers
30 views
reading stdout from /proc
I'm running the following C program
// hello.c

#include <stdio.h>
#include <unistd.h>

int main() {
  while (1) {
    printf("Hello\n");
    fflush(stdout);
    sleep(1);
  }

  return 0;
}
I launched the program using
./hello
and I can see its messages on the terminal. Then I get its pid. However, if I do
tail -f /proc/<pid>/fd/1
I do not get anything. Ditto if I use
cat /proc/<pid>/fd/1
I think that I probably misunderstood some concepts about stdout and /proc. Is it possible to do what I'm trying to do?
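One way to see what's going on: `/proc/<pid>/fd/1` is just a symlink to whatever file or device stdout is open on, not a buffer of everything the process has written. A minimal sketch:

```shell
# Start a process whose stdout is a regular file, then inspect fd 1:
sleep 30 >/tmp/demo.out &
pid=$!

readlink "/proc/$pid/fd/1"   # prints /tmp/demo.out: a link to the open
                             # file, not a replay of past output
kill "$pid"
```

When stdout is a terminal, fd/1 points at the pty device instead, and reading it competes with the terminal for input rather than showing the program's output.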
MaPo (319 rep)
Apr 4, 2025, 10:04 AM
87 votes
8 answers
21877 views
How to trick a command into thinking its output is going to a terminal
Given a command that changes its behaviour when its output is going to a terminal (e.g. produce coloured output), how can that output be redirected in a pipeline while preserving the changed behaviour? There must be a utility for that, which I am not aware of. Some commands, like grep --color=always, have option flags to force the behaviour, but the question is how to work around programs that rely solely on testing their output file descriptor. If it matters, my shell is bash on Linux.
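One widely used workaround (util-linux specific, so treat the flags as an assumption on other systems): run the command under `script`, which gives it a pseudo-terminal, so its isatty() check passes even though the result is piped onward:

```shell
# -q quiet, -e propagate the child's exit code, -c the command to run;
# the typescript file itself is discarded via /dev/null.
script -qec "ls --color=auto" /dev/null | cat
```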
Amir (1891 rep)
Dec 16, 2015, 01:39 PM • Last activity: Mar 25, 2025, 03:01 PM
7 votes
2 answers
983 views
How to find out if a command wrote to stdout or not
I'd like to figure out if an external program ran successfully and wrote something to stdout, or if it did not write anything, which happens in case an error occurred. The program unfortunately always returns with exit status 0 and no stderr output. The check should also not modify or hide whatever is being written to stdout by the program. --- If program output is short, it's possible to use command substitution to capture it, store it in an environment variable and print it again:
output="$(external_program)"
printf %s "$output"
if [ -z "$output" ]; then
    # error: external program didn't write anything
fi
Possible dealbreakers:

- Depends on size of output, see https://unix.stackexchange.com/questions/357843/setting-a-long-environment-variable-breaks-a-lot-of-commands
- Possibly modifies output, notably removes newlines, see https://unix.stackexchange.com/questions/164508/why-do-newline-characters-get-lost-when-using-command-substitution
- If external program output is binary data, it's not a good idea to store that in an environment variable (?)

---

Another possibility would be to write the output to a temporary file or pipe. This could be wrapped into a function which communicates the result via exit status:
output_exist() (
    temporary_file="$(mktemp)" || exit
    trap 'rm "$temporary_file"' EXIT
    tee "$temporary_file"
    test -s "$temporary_file"
)

if ! external_program | output_exist; then
    # error: external program didn't write anything
fi
Possible dealbreakers:

- Temporary file or pipe that has to be taken care of, ensure clean-up etc., convenience vs. portability, e.g. POSIX only has mktemp(3), no mktemp(1)
- Very large output may lead to resource/performance issues

---

What alternatives are there? Is there a more straightforward solution?
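One more alternative, sketched with awk: pass lines through unchanged and signal "no output" via the exit status. Caveat: it is line-oriented, so a final unterminated line or NUL-heavy binary output is not preserved faithfully.

```shell
# Stand-in for the real program; this one writes nothing.
external_program() { :; }

# awk prints every line as-is; at EOF it exits 1 iff no lines arrived.
if ! external_program | awk '{ print } END { exit NR == 0 }'; then
    echo "error: external program didn't write anything" >&2
fi
```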
finefoot (3554 rep)
Mar 9, 2025, 05:55 PM • Last activity: Mar 10, 2025, 09:20 AM
5 votes
2 answers
294 views
Duplicate stdout to pipe into another command with named pipe in POSIX shell script function
mkfifo foo
printf %s\\n bar | tee foo &
tr -s '[:lower:]' '[:upper:]' <foo
wait
rm foo
This is a working POSIX shell script of what I want to do: - printf %s\\n bar is symbolic for an external program producing stdout - tr -s '[:lower:]' '[:upper:]' is symbolic for another command that is supposed to receive the stdout and do something with it - tee duplicates stdout to named pipe foo And the output is as expected:
bar
BAR
Now I'd like to tidy up the code so it becomes external_program | my_function. Something like this:
f() (
  mkfifo foo
  tee foo &
  tr -s '[:lower:]' '[:upper:]' <foo
  wait
  rm foo
)
printf %s\\n bar | f
But now there is no output at all.
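A likely culprit (worth verifying on your shell): in a non-interactive shell without job control, a background command's stdin is redirected from /dev/null, so `tee foo &` inside the function no longer reads the function's stdin. Keeping `tee` in the foreground and backgrounding the reader instead works in my testing:

```shell
f() (
  dir=$(mktemp -d) || exit
  mkfifo "$dir/fifo"
  tr -s '[:lower:]' '[:upper:]' <"$dir/fifo" &  # reader in background
  tee "$dir/fifo"                               # tee keeps f's stdin
  wait
  rm -rf "$dir"
)

printf '%s\n' bar | f
```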
finefoot (3554 rep)
Mar 9, 2025, 01:34 PM • Last activity: Mar 9, 2025, 03:19 PM
117 votes
5 answers
14295 views
Do progress reports/logging information belong on stderr or stdout?
Is there an official POSIX, GNU, or other guideline on where progress reports and logging information (things like "Doing foo; foo done") should be printed? Personally, I tend to write them to stderr so I can redirect stdout and get only the program's actual output. I was recently told that this is not good practice since progress reports aren't actually errors and only error messages should be printed to stderr. Both positions make sense, and of course you can choose one or the other depending on the details of what you are doing, but I would like to know if there's a commonly accepted standard for this. I haven't been able to find any specific rules in POSIX, the GNU coding standards, or any other such widely accepted lists of best practices. We have a few similar questions, but they don't address this exact issue: * https://unix.stackexchange.com/q/79315/22222 : The accepted answer suggests what I tend to do, keep the program's final output on stdout and anything else to stderr. However, this is just presented as a user's opinion, albeit supported by arguments. * https://unix.stackexchange.com/q/8813/22222 : This is specific to help messages but cites the GNU coding standard. This is the sort of thing I'm looking for, just not restricted to help messages only. So, are there any official rules on where progress reports and other informative messages (which aren't part of the program's actual output) should be printed?
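Independent of which convention wins, the stderr route is easy to keep tidy with a tiny helper (`log` is a name I'm inventing here):

```shell
# Progress messages go to stderr; the program's real output stays
# alone on stdout and survives `prog > result.txt` untouched.
log() { printf '%s\n' "$*" >&2; }

log "Doing foo"
echo "actual output"
log "foo done"
```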
terdon (251585 rep)
Dec 20, 2016, 10:13 AM • Last activity: Feb 23, 2025, 02:37 PM
5 votes
2 answers
830 views
What is the most succinct way of terminating the rest of a pipeline if a command fails?
Consider the following: command1 | command2 | command3 As I understand pipelines, every command is run regardless of any errors which may occur. When a command returns stderr, it is not piped to the next command, but the next one is still run (unless you use |&). I want any error which may occur to terminate the rest of the pipeline. I thought set -o pipefail would accomplish this, but it simply terminates anything which may come after the pipeline if anything in the pipeline failed, i.e.: (set -o pipefail; cmd1 | cmd2 && echo "I won't run if any of the previous commands fail") So, what is the most succinct way to terminate the **rest** of the pipeline if any of its commands fail? I also need it to exit with the proper stderr of the command which failed. I'm doing this from a command-line context, not a shell script, hence why I'm looking for brevity. Thoughts?
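One succinct-ish workaround, at the cost of buffering each stage's output in a variable (the `cmd*` names are stand-ins): chain the stages with `&&` so a failure stops everything downstream:

```shell
cmd1() { echo hello; }     # stand-ins for the real pipeline stages
cmd2() { tr a-z A-Z; }
cmd3() { cat; }

out=$(cmd1) && out=$(printf '%s' "$out" | cmd2) && printf '%s\n' "$out" | cmd3
```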
Audun Olsen (217 rep)
May 9, 2019, 11:40 AM • Last activity: Feb 4, 2025, 07:47 AM
6 votes
2 answers
766 views
Why is mawk's output (STDOUT) buffered even though it is the terminal?
I am aware that STDOUT is usually buffered by commands like mawk (but not gawk), grep, sed, and so on, unless used with the appropriate options (i.e. mawk -Winteractive, or grep --line-buffered, or sed --unbuffered). But the buffering doesn't happen when STDOUT is a terminal/tty, in which case it is line buffered. Now, what I don't get is why STDOUT is buffered when the command sits in a pipeline, even though the final destination is the terminal. A basic example:
$ while sleep 3; do echo -n "Current Time is ";date +%T; done | mawk '{print $NF}'
^C
Nothing happens for a long time, because mawk seems to be buffering its output. I wasn't expecting that. **mawk's output is the terminal, so why is its STDOUT buffered?** Indeed, with the -Winteractive option the output is rendered every 3 seconds:
$ while sleep 3; do echo -n "Current Time is ";date +%T; done | mawk -Winteractive '{print $NF}'
10:57:05
10:57:08
10:57:11
^C
Now, this behavior is clearly mawk related, because it isn't reproduced if I use for example grep. Even without its --line-buffered option, grep doesn't buffer its STDOUT, which is the expected behavior given that grep's STDOUT is the terminal :
$ while sleep 3; do echo -n "Current Time is ";date +%T; done | grep Current
Current Time is 11:01:44
Current Time is 11:01:47
Current Time is 11:01:50
^C
ChennyStar (1969 rep)
Jun 25, 2021, 11:46 AM • Last activity: Jan 18, 2025, 03:56 PM
-1 votes
1 answer
92 views
Why is the printf Output Order Not Respected in a Concurrent Scenario with Piped or Redirected stdout?
We are in a concurrent scenario where we have **n** concurrent processes. By using a synchronization policy (e.g., using pipes or signals), each process can print using *printf("string\n")* only if it receives a token from the previous process, ensuring a specific printing order. When stdout is attached to a terminal (e.g., a console), the expected printing order is respected, meaning processes print in the exact order defined by the synchronization policy. However, when stdout is piped or redirected to a file or another process, the resulting printing order may not be respected, even though the processes still follow the synchronization policy correctly. We know that *printf* is buffered by default, meaning it does not immediately write to stdout but instead accumulates output in a buffer before flushing it. In a concurrent environment, it seems the operating system writes each process's buffer in an order independent of the intended synchronization policy, causing out-of-order prints. This is unexpected given that each process follows the policy correctly. However, this problem does not occur if one of the following solutions is implemented: 1. Forcing a flush after each call to printf using *fflush(stdout)*; 2. Disabling buffering for stdout in each child process using setvbuf(stdout, NULL, _IONBF, 0);. 3. Using the system call write(1, "string\n", strlen("string\n")); instead of printf, since write bypasses buffering and directly writes to stdout. Does anyone know why this happens? What is the operating system's policy regarding this?
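Besides the three in-code fixes, GNU coreutils `stdbuf` can often restore line buffering from the outside for dynamically linked stdio programs (a sketch; `worker` is a stand-in name):

```shell
worker() { printf 'alpha\nbeta\n'; }

# stdbuf -oL preloads a shim that switches the filter's stdout to
# line-buffered even though it is writing into a pipe.
worker | stdbuf -oL grep a
```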
Spartacus (1 rep)
Dec 11, 2024, 07:01 PM • Last activity: Dec 12, 2024, 05:48 AM
86 votes
5 answers
21662 views
Can I configure my shell to print STDERR and STDOUT in different colors?
I want to set my terminal up so stderr is printed in a different color than stdout; maybe red. This would make it easier to tell the two apart. Is there a way to configure this in .bashrc? If not, is this even possible? - - - **Note**: This question was merged with [another](https://unix.stackexchange.com/q/53563/22565) that asked for stderr, stdout *and the user input echo* to be output in *3 different colours*. Answers may be addressing either question.
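A bash-flavoured sketch of the underlying trick (not a complete .bashrc solution; `mycmd` is hypothetical): filter stderr through sed, wrapping each line in red escape codes while stdout passes through untouched:

```shell
red=$(printf '\033[31m') reset=$(printf '\033[0m')
mycmd() { echo "normal"; echo "problem" >&2; }   # hypothetical command

# Process substitution re-routes stderr through sed and back to fd 2.
mycmd 2> >(sed "s/.*/$red&$reset/" >&2)
```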
Naftuli Kay (41346 rep)
May 1, 2011, 10:59 PM • Last activity: Oct 13, 2024, 01:22 PM
0 votes
2 answers
665 views
STDOUT + STDERR output ... is there any difference between considering the output to be an empty string vs NULL
I'm writing some application code that is used to execute Linux shell commands, and it then logs the command details into an SQL database. This includes the output of STDOUT + STDERR (separately). After the command has been executed, and assuming the process didn't output anything... could there be any reason to leave the STDOUT/STDERR fields as NULL -vs- setting them to be empty strings? To put the question another way: is there technically any difference between these two things? - A process that doesn't output anything to STDOUT - A process that outputs an empty string to STDOUT (and nothing else) And to put the question another way again... does it make sense to make these columns NOT NULL in SQL?
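On the stream level there is nothing to distinguish: writing an empty string emits zero bytes, which is byte-for-byte identical to never writing at all:

```shell
printf '' | wc -c   # zero bytes written
true      | wc -c   # zero bytes written
```

So the NULL-vs-empty-string choice is purely a database-semantics decision, not something the process itself can express.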
LaVache (423 rep)
Dec 6, 2018, 03:02 AM • Last activity: Oct 6, 2024, 09:27 AM
0 votes
1 answer
863 views
Set StandardOutput=null in the service file which starts a script is equivalent to the redirection of the standard output of the script to /dev/null?
### My current service and shell script ### I have a systemd service file called myservice.service. The service is started at boot. The service starts the shell script /usr/bin/myscript.sh as you can see below in the section [Service]:
...
[Service]
Type=forking
ExecStart=/usr/bin/myscript.sh
PIDFile=/dev/shm/myscript.pid
...
The content of the script is:
#!/bin/sh
/usr/bin/script-python.py > /dev/null &
echo $! > /dev/shm/myscript.pid
The shell script myscript.sh starts the Python script: script-python.py. By the redirection > /dev/null the standard output of script-python.py is sent to /dev/null and so it is lost. ### Changes to use Service->StandardOutput=null ### In the [systemd documentation of StandardOutput](https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#StandardOutput=) I have read the following sentence: > **null** connects standard output to /dev/null, i.e. everything written to it will be lost. So I'm thinking to execute the following changes to my files: 1. myservice.service becomes (add StandardOutput=null):
...
[Service]
Type=forking
ExecStart=/usr/bin/myscript.sh
PIDFile=/dev/shm/myscript.pid
StandardOutput=null
...
2. myscript.sh becomes (remove >/dev/null):
#!/bin/sh
/usr/bin/script-python.py &
echo $! > /dev/shm/myscript.pid
### My question ### My question is: if I make the modifications shown above, is the result **exactly the same** as my current situation, i.e. all output of script-python.py is lost? Or is there any difference?
User051209 (498 rep)
Oct 2, 2024, 10:23 AM • Last activity: Oct 3, 2024, 12:46 PM
0 votes
2 answers
428 views
Redirecting stdout with > and stderr with >> to same file leaves out stderr
I'm redirecting stdout and stderr to the same file by using > and >> respectively:
rsync -a --exclude cache/ src_folder/ target_folder/ 1>out_err.log 2>>out_err.log
However the stderr is not logged in the file. What happens to the stderr when using >> instead of > to redirect both to the same file? My intention was to write stdout to a new file and then append stderr to that same file.
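The effect is easy to reproduce with a stand-in command: `>` and `>>` perform two independent opens of the file, each with its own offset, so one stream can overwrite the other; sharing a single fd via `2>&1` avoids that:

```shell
gen() { echo "to stdout"; echo "to stderr" >&2; }

gen >log.txt 2>>log.txt   # two opens, two offsets: lines can clobber
gen >log.txt 2>&1         # one open shared by both fds: both survive
```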
bit (1176 rep)
Sep 26, 2024, 08:48 PM • Last activity: Sep 27, 2024, 09:38 AM
2 votes
1 answer
120 views
In zsh, annotate each line in a file to which both stdout and stderr have been redirected with the line's source (stdout or stderr)
In zsh, how can I annotate each line in a file to which both stdout and stderr have been redirected with the line's source (stdout or stderr)? I want output with the source name prepended to the line, like:
stdout: 
stderr:
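A sketch that works in zsh (and bash), with `out` as a stand-in command: route each stream through its own sed, both appending to the same file so the two writers don't truncate each other. Relative ordering between the two streams is not guaranteed.

```shell
out() { echo "hello"; echo "oops" >&2; }   # stand-in command

out 1> >(sed 's/^/stdout: /' >>annotated.log) \
    2> >(sed 's/^/stderr: /' >>annotated.log)
```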
XDR (451 rep)
Aug 14, 2024, 08:18 AM • Last activity: Aug 14, 2024, 08:52 AM
24 votes
3 answers
6352 views
tee stdout to stderr?
I'd like to send stdout from one process to the stdin of another process, but also to the console. Sending stdout to stdout+stderr, for instance. For example, I've got git edit aliased to the following: git status --short | cut -b4- | xargs gvim --remote I'd like the list of filenames to be sent to the screen as well as to xargs. So, is there a tee-like utility that'll do this? So that I can do something like: git status --short | \ cut -b4- | almost-but-not-quite-entirely-unlike-tee | \ xargs gvim --remote
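On Linux, plain `tee` can already do this via `/dev/stderr` (or `/dev/tty` to write straight to the terminal), so the middle stage of the alias could be `tee /dev/stderr`. A self-contained sketch:

```shell
# The line count goes down the pipe; the file names are duplicated to
# our own stderr, which is still the console.
printf 'file1\nfile2\n' | tee /dev/stderr | wc -l
```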
Roger Lipscombe (1780 rep)
Apr 30, 2014, 09:10 AM • Last activity: Aug 11, 2024, 09:13 AM
7 votes
1 answer
2093 views
Logging the output of remote commands on multiple ssh servers without delay
I've written a simple bash script that ssh's into 3 hosts (2 remote, 1 my own for testing) and runs a long running gui program that outputs text to the terminal that I'd like to log. #!/bin/bash ssh -f -X user@remote1 '(python -u long_running_program.py &> ~/log1.txt)' ssh -f -X user@remote2 '(python -u long_running_program.py &> ~/log2.txt)' ssh -f -X user@localhost '(python -u long_running_program.py &> ~/log3.txt)' multitail -s 3 ~/log* From the above, ssh is called with the f parameter to run in the background and allow the script to continue execution. On the remote server, the python program is called with the unbuffered switch and the output redirected to a log file (note that the home directories of the remote and local machines are all on a network mounted drive so using ~ gives the same path on all of them). The above technically works, however the 2 logs (log1,log2) from the remote machine are very slow to update. It can be 20+seconds before the file is updated (regardless of if you are using multitail, vi, etc to view the file), while the local log file (log3) update is immediate. To confirm this should not be happening, I can manually ssh into the remotes and run the program without redirecting the output to file. The output is streamed continuously without delay, as expected. I have also tried various things like trying to turn off buffering and using script/tee/etc to no avail. Does anyone know what's causing this and/or how to resolve it?
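Since the logs live on a network-mounted home directory, one suspect is NFS attribute/data caching delaying when remote writes become visible locally. A sketch that sidesteps the shared filesystem entirely: let the output travel over the ssh channel and write the file on the local side (hosts and program names as in the question):

```shell
ssh -X user@remote1 'python -u long_running_program.py 2>&1' > ~/log1.txt &
ssh -X user@remote2 'python -u long_running_program.py 2>&1' > ~/log2.txt &
```

This drops `-f` in favour of plain shell backgrounding so the local redirection owns the file and sees each line as it arrives.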
user67081 (171 rep)
Sep 5, 2017, 11:44 PM • Last activity: Jul 15, 2024, 01:00 PM
0 votes
1 answer
179 views
How to put all function/command call output results to different corresponding vars: for stderr, for stdout, for string result and for return code?
I want to extend question [How to store standard error in a variable](https://stackoverflow.com/q/962255/1455694) and get general solution (bash function or functions) to extract from the called function/command together with standard error also all other possible outcomes. The function should capture and return as vars for the called function/command the following: - stderr - stdout - result string out var - return code API for the function should be convenient to use. --- Example of test function that produces all different output results:
func_to_test_all_outs() {
    local param="${1:?}"; shift
    local -n out_result=${1:?}; shift

    echo "test error output" >&2
    echo "test normal output"

    out_result=""
    if [[ "$param" = "return_string_result" ]]; then
        out_result="result string"
    fi

    return 3
}
A more comprehensive/general example of a test function that produces all the different output results, including spawning background processes to produce stdout and stderr, interleaving stdout/stderr output, and including leading/trailing/mid white space plus potential command injection, variable expansion, and globbing chars in the output streams, and which redirects FD3 in case the calling script relies on that:
func_to_test_all_outs() {
    local param="${1:?}"; shift
    local -n out_result=${1:?}; shift

    ( sleep 2; printf '\n   first  \n$RANDOM *\nerror output\n' >&2; ) &
    exec 3>&1
    printf ' first  \n$(date)\n*\nnormal output\n\n' >&3

    exec 3>&2
    printf '\nsecond $RANDOM *\nerror output\n\n\n' >&3
    ( sleep 2; printf 'second\ntest\n$(date)\n*\nnormal output\n\n' ) &

    ( printf 'third\n$(date)\n*\nnormal output\n\n' ) &

    out_result=""
    if [[ "$param" = "return_string_result" ]]; then
        printf -v out_result '\n\nresult\t\n\nstring\n\n\n'
    fi

    return 127
}
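For comparison, a temp-file sketch of such an API (all names here are mine, not from the question): run the command, then expose its stdout, stderr and return code via variables. Known limitation: `$(cat ...)` strips trailing newlines, so exact trailing whitespace needs the usual sentinel tricks.

```shell
run_capture() {
    _o=$(mktemp) || return
    _e=$(mktemp) || { rm -f "$_o"; return; }
    # Run the command with each stream going to its own temp file;
    # the && / || dance records the status safely even under set -e.
    "$@" >"$_o" 2>"$_e" && CAP_RC=0 || CAP_RC=$?
    CAP_OUT=$(cat "$_o")
    CAP_ERR=$(cat "$_e")
    rm -f "$_o" "$_e"
}

run_capture sh -c 'echo real output; echo oops >&2; exit 3'
printf 'rc=%s out=%s err=%s\n' "$CAP_RC" "$CAP_OUT" "$CAP_ERR"
```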
Anton Samokat (289 rep)
Jun 27, 2024, 01:02 PM • Last activity: Jun 30, 2024, 11:40 AM
Showing page 1 of 20 total questions