
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 votes
1 answers
59 views
coproc redirect to stdout
Suppose I have some process that when ready will output to stdout. Even though I want my script to run asynchronously with respect to that process, I still want the script to block and wait on that first line. Modelled below with sleep and echo is exactly the behaviour I need in working order:
coproc monitor {
  sleep 2
  echo "init"
  sleep 1
  echo "foo"
  echo "bar"
  echo "baz"
}

read -u ${monitor} line
echo started
exec 3<&${monitor}
cat <&3 &

sleep 2
The script starts and creates the coprocess, then it waits on that first line via read -u, and finally it attaches fd 3 to ${monitor} so that we can then use cat in yet another background process to pipe output from monitor to the actual stdout. Thus we get:
# waits 2 seconds
started
# after 1 second:
foo
bar
baz
I am not too happy with these two lines, though:
exec 3<&${monitor}
cat <&3 &
Is there no better way of achieving this? It seems like a rather roundabout way of doing things. But everything I tried hasn't worked. For example, cat <&"${monitor}" works, but then it blocks the script. { cat <&"${monitor}" ; } & gives Bad file descriptor for reasons I don't understand (but even then, I am still using yet another background process, which seems silly).
Mathias Sven (273 rep)
May 27, 2025, 06:11 PM • Last activity: May 28, 2025, 04:11 PM
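A sketch of the same descriptor-duplication idiom the question ends with, but letting bash pick the extra fd number instead of hardcoding 3; this still uses a second background cat, so it is a variant rather than the better way the asker is looking for (mon_fd is an illustrative variable name, not from the question):
```
#!/usr/bin/env bash
# Same pattern as in the question, with an automatically allocated fd.
coproc monitor { sleep 2; echo init; sleep 1; printf '%s\n' foo bar baz; }

read -r line <&"${monitor[0]}"   # block until the coprocess prints its first line
echo started

exec {mon_fd}<&"${monitor[0]}"   # duplicate the coproc's read end onto a fresh fd
cat <&"$mon_fd" &                # stream the remaining lines in the background
sleep 2
```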
0 votes
1 answers
436 views
Using coprocess to write a name referenced variable in BASH
I am currently trying to start a background process with coproc and update a name reference variable. My non-working code:
function updateVariable(){
  local -n myVar="${1}"
  #i=0;
  while :
  do
    sleep 1
    myVar="ok"
    #((++i))
  done
}

capture=""; coproc mycoproc { updateVariable capture; }
This does not work as I expected. $capture is just empty. I would expect it to be "ok". Thanks a lot!
kon (123 rep)
Jul 28, 2020, 12:52 PM • Last activity: Mar 20, 2025, 02:08 PM
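For reference, the coproc body runs in a child process, so assignments made there (even through a nameref) cannot reach the parent shell. A minimal sketch of one alternative, passing the value back over the coprocess pipe instead (the names updater and capture are illustrative, not from the question):
```
#!/usr/bin/env bash
# The coprocess writes its result to stdout; the parent reads it from the pipe.
coproc updater {
  while :; do
    sleep 1
    echo "ok"            # emit the value instead of assigning a variable
  done
}

read -r capture <&"${updater[0]}"   # blocks until the coprocess sends a line
echo "capture=$capture"             # prints: capture=ok
kill "$updater_PID"
```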
1 votes
1 answers
1415 views
Coproc in bash script
I'm trying to write a simple shell script that will make my Raspberry Pi's Bluetooth discoverable, but I'm facing some issues. My Raspberry Pi is running Raspbian. Running this through the command line works perfectly:
coproc bluetoothctl
echo -e 'discoverable on' >&${COPROC[1]}
But when I create a shell script containing the following:
#! /bin/bash
coproc bluetoothctl
echo -e 'discoverable on' >&${COPROC[1]}
and run it with "bash test_script.sh", the script is executed correctly but the state of the Bluetooth remains the same. Can someone give me a hand? Thanks!
Kamigaku (13 rep)
Aug 22, 2019, 12:14 PM • Last activity: Jan 25, 2025, 11:56 PM
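A hedged sketch of one common adjustment, under the assumption that the script exits (and so tears down bluetoothctl) before the command is acted on; the sleep duration and the trailing exit command are assumptions, not a confirmed diagnosis:
```
#!/bin/bash
# Keep the coprocess alive long enough to process the command, then let it quit.
coproc bluetoothctl
echo 'discoverable on' >&"${COPROC[1]}"
sleep 2                          # crude: give bluetoothctl time to apply the setting
echo 'exit' >&"${COPROC[1]}"     # ask bluetoothctl to terminate itself
wait "$COPROC_PID"
```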
1 votes
1 answers
241 views
Bash coproc with python
I am playing with the coproc feature of Bash and I am failing to understand something. I started with the following example:
# Example 1
$ coproc MY_BASH { bash; }
[1] 95244
$ echo 'ls -l; echo EOD' >&"${MY_BASH[1]}"
$ is_done=false; while [[ "$is_done" != "true" ]]; do
>   read var <&"${MY_BASH[0]}"
>   if [[ $var == "EOD" ]]; then
>      is_done="true"
>   else
>      echo $var
>   fi
> done
total 0
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file10.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file1.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file2.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file3.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file4.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file5.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file6.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file7.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file8.txt
-rw-rw-r-- 1 username username 0 Nov 11 13:00 file9.txt
$
Here we can see that the current Bash shell is able to create a coprocess and interact with it.
# Example 2
In this case, I switch from a bash coprocess to a python coprocess:
$ coproc MY_BASH { python; }
[1] 95244
$ echo 'print("hello"); print("EOD");' >&"${MY_BASH[1]}"
$ is_done=false; while [[ "$is_done" != "true" ]]; do
>   read var <&"${MY_BASH[0]}"
>   if [[ $var == "EOD" ]]; then
>      is_done="true"
>   else
>      echo $var
>   fi
> done
In this scenario the program hangs. I have the impression that I am forgetting to send something to the input. Any help to better understand what is going on would be appreciated.
chemacabeza (89 rep)
Nov 11, 2022, 12:13 PM • Last activity: Apr 14, 2023, 10:13 PM
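A hedged sketch of why Example 2 may hang: with both of its standard streams attached to pipes, python normally waits for EOF before executing anything and block-buffers its output. Forcing interactive, unbuffered execution with -i and -u is one way to get output line by line; treat this as an assumption about the cause, not a confirmed diagnosis:
```
coproc MY_PY { python3 -u -i 2>/dev/null; }   # 2>/dev/null hides the banner and >>> prompts
echo 'print("hello"); print("EOD")' >&"${MY_PY[1]}"
while read -r var <&"${MY_PY[0]}"; do
  [[ $var == "EOD" ]] && break
  echo "$var"
done
kill "$MY_PY_PID"
```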
2 votes
1 answers
6338 views
How does bash know about its parent's coprocess in this situation, and why does a shebang line change it?
outer.sh:
ls -l /proc/$$/exe
coproc cat
./inner.sh
kill $!
inner.sh:
ls -l /proc/$$/exe
set | grep COPROC || echo No match found
coproc cat
kill $!
When I run ./outer.sh, this gets printed:
lrwxrwxrwx 1 joe joe 0 Jun 16 22:47 /proc/147876/exe -> /bin/bash
lrwxrwxrwx 1 joe joe 0 Jun 16 22:47 /proc/147879/exe -> /bin/bash
No match found
./inner.sh: line 3: warning: execute_coproc: coproc [147878:COPROC] still exists
Since COPROC and COPROC_PID aren't set in the child, how does it know about the one from the parent to be able to give me that warning? Also, I discovered that if I add #!/bin/bash to the top of inner.sh, or if I call bash ./inner.sh instead of just ./inner.sh from outer.sh, then the warning goes away. Why does this change anything, since it's getting run with a bash subprocess either way?
Joseph Sible-Reinstate Monica (4220 rep)
Jun 17, 2022, 02:49 AM • Last activity: Jun 17, 2022, 07:33 AM
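Not an answer, but a small probe that can make the difference between the two cases visible: add it to inner.sh and compare the no-shebang run with the #!/bin/bash run (no particular output is asserted here; fd numbers vary by system):
```
# List the descriptors this process has open, to see what survived from the parent.
echo "--- fds of PID $$ ---"
ls -l /proc/$$/fd
```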
0 votes
1 answers
661 views
Return value vs return status of a coprocess
I don't understand this sentence about **Coprocesses** in the **bash** man page:
> Since the coprocess is created as an asynchronous command, the **coproc** command always returns success. The return status of a coprocess is the exit status of 'command'.
What is the difference between a "success" *return value* and a "success/error" (0/non-0) *exit status*? How differently are they handled by bash? And how can one catch them to see the difference?
The Quark (402 rep)
Sep 18, 2021, 10:52 PM • Last activity: Oct 16, 2021, 02:45 PM
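A small sketch that makes the distinction concrete: $? right after coproc reports the (always successful) start of the asynchronous command, while wait later reports the coprocess's own exit status. The name failing and the status 3 are arbitrary choices for illustration:
```
coproc failing { sleep 1; exit 3; }
echo "coproc returned: $?"      # 0 - starting the coprocess always "succeeds"
wait "$failing_PID"
echo "coprocess exited: $?"     # 3 - the exit status of the command itself
```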
121 votes
4 answers
44005 views
How do you use the command coproc in various shells?
Can someone provide a couple of examples on how to use coproc (http://wiki.bash-hackers.org/syntax/keywords/coproc)?
slm (378985 rep)
Aug 10, 2013, 04:28 PM • Last activity: Jul 24, 2021, 06:47 PM
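One minimal bash example of the kind being asked for (bc is an arbitrary choice; ksh starts a coprocess with a trailing |& and zsh with its own coproc keyword plus the p descriptor, neither of which is shown here):
```
# Drive bc as a coprocess from bash: write to its stdin, read from its stdout.
coproc BC { bc -l; }
echo '2^10' >&"${BC[1]}"        # send an expression
read -r answer <&"${BC[0]}"     # read the result
echo "answer: $answer"          # prints: answer: 1024
kill "$BC_PID"
```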
0 votes
0 answers
453 views
zsh, zpty: How to read the output of a process after it has exited?
Start some command with zpty:
zpty -d x ; zpty x 'echo hi' ; sleep 1
How do I read its output now that it has exited?
zpty -r x
Returns 2, and this behavior seems expected per the manpage.
HappyFace (1694 rep)
Apr 12, 2021, 06:26 PM
6 votes
2 answers
908 views
coproc and named pipe behaviour under command substitution
I have a requirement to make a function in a zsh shell script, that is called by command substitution, communicate state with subsequent calls to the same command substitution. Something like C's static variables in functions (very crudely speaking). To do this I tried two approaches: one using coprocesses, and one using named pipes. The named pipes approach I can't get to work, which is frustrating because I think it will solve the only problem I have with coprocesses: if I enter into a new zsh shell from the terminal, I don't seem to be able to see the coproc of the parent zsh session.
I've created simplified scripts to illustrate the issue below. If you're curious about what I'm trying to do, it's adding a new stateful component to the bullet-train zsh theme, which will be called by the command-substituted build_prompt() function here: https://github.com/caiogondim/bullet-train.zsh/blob/d60f62c34b3d9253292eb8be81fb46fa65d8f048/bullet-train.zsh-theme#L692
**Script 1 - Coprocesses**
#!/usr/bin/env zsh

coproc cat
disown
print 'Hello World!' >&p

call_me_from_cmd_subst() {
    read get_contents <&p
    print "Retrieved: $get_contents"
    print 'Hello Response!' >&p
    print 'Response Sent!'
}

# Run this first
call_me_from_cmd_subst

# Then comment out the above call
# And run this instead
#print "$(call_me_from_cmd_subst)"

# Hello Response!
read finally <&p
print $finally

**Script 2 - Named Pipes**
#!/usr/bin/env zsh

rm -rf /tmp/foo.bar
mkfifo /tmp/foo.bar
print 'Hello World!' > /tmp/foo.bar &

call_me_from_cmd_subst() {
    get_contents=$(cat /tmp/foo.bar)
    print "Retrieved: $get_contents"
    print 'Hello Response!' > /tmp/foo.bar &!
    print 'Response Sent!'
}

# Run this first
call_me_from_cmd_subst

# Then comment out the above call
# And run this instead
#print "$(call_me_from_cmd_subst)"

# Hello Response!
cat /tmp/foo.bar
In their initial forms they both produce exactly the same output:
$ ./named-pipe.zsh
Retrieved: Hello World!
Response Sent!
Hello Response!

$ ./coproc.zsh
Retrieved: Hello World!
Response Sent!
Hello Response!
Now if I switch the coproc script to call using the command substitution nothing changes:
# Run this first
#call_me_from_cmd_subst

# Then comment out the above call
# And run this instead
print "$(call_me_from_cmd_subst)"
That is, reading and writing to the coprocess from the subprocess created by command substitution causes no issue. I was a little surprised by this - but it's good news! But if I make the same change in the named pipe example, the script blocks - with no output. To try to gauge why, I ran it with -x, giving:
+named-pipe.zsh:3> rm -rf /tmp/foo.bar
+named-pipe.zsh:4> mkfifo /tmp/foo.bar
+named-pipe.zsh:15> call_me_from_cmd_subst
+call_me_from_cmd_subst:1> get_contents=+call_me_from_cmd_subst:1> cat /tmp/foo.bar
+named-pipe.zsh:5> print 'Hello World!'
+call_me_from_cmd_subst:1> get_contents='Hello World!'
+call_me_from_cmd_subst:2> print 'Retrieved: Hello World!'
+call_me_from_cmd_subst:4> print 'Response Sent!'
It looks to me like the subprocess created by the command substitution won't terminate whilst the following line hasn't terminated (I've played with using &, &!, and … here with no change in result):
print 'Hello Response!' > /tmp/foo.bar &!
To demonstrate this I can manually fire in a cat to read the response:
$ cat /tmp/foo.bar
Hello Response!
The script now waits at the final cat command as there is nothing in the pipe to read.
My questions are:
1. Is it possible to construct the named pipe to behave exactly like the coprocess in the presence of a command substitution?
2. Can you explain why a coprocess can demonstrably be read and written to from a subprocess, but if I manually create a subshell (by typing zsh into the console), I can no longer access it (in fact I can create a new coproc that will operate independently of its parent and exit, and continue using the parent's!)?
3. If 1 is possible, I assume named pipes will have no such complications as in 2, because the named pipe is not tied to a particular shell process?
To explain what I mean in 2 and 3:
$ coproc cat
 24516
$ print -p test
$ read -ep
test
$ print -p test_parent
$ zsh
$ print -p test_child
print: -p: no coprocess
$ coproc cat
 28424
$ disown
$ print -p test_child
$ read -ep
test_child
$ exit
$ read -ep
test_parent
I can't see the coprocess from inside the child zsh, yet I can see it from inside a command substitution subprocess? Finally, I'm using Ubuntu 18.04:
$ zsh --version
zsh 5.4.2 (x86_64-ubuntu-linux-gnu)
Phil (175 rep)
Mar 8, 2021, 04:25 PM • Last activity: Mar 9, 2021, 12:25 AM
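One of the moving parts above, shown in isolation (a bash sketch of FIFO semantics, not a fix for the question): opening a named pipe for writing does not complete until some process opens it for reading, so a backgrounded writer stays alive, holding its file descriptors, until a reader turns up:
```
mkfifo /tmp/demo.fifo
echo 'hello' > /tmp/demo.fifo &   # the writer blocks here, in the background
jobs                              # shows the writer still running
cat /tmp/demo.fifo                # a reader arrives; the writer can now finish
wait
rm /tmp/demo.fifo
```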
1 votes
0 answers
46 views
Usage of cvlc with coproc
I'm trying to feed .mp3 files to a file descriptor with the dummy interface of vlc, i.e., cvlc, in order to add to a playlist on the fly or override the entire file descriptor with new data (a new .mp3 file):
coproc cvlc
cvlc filename.mp3 >& OR >>&"${COPROC[1]}"
This will run but will wait for the prompt to come back, which is not expected since it is directing output to the file descriptor.
amosmoses (11 rep)
Feb 24, 2020, 02:14 PM • Last activity: Feb 24, 2020, 02:56 PM
7 votes
2 answers
2352 views
Is it possible to have multiple concurrent coprocesses?
The *intent* of the test script [1] below is to start an "outer" coprocess (running seq 3), read from this coprocess in a while-loop, and for each line read, print a line identifying the current iteration of the outer loop, start an "inner" coprocess (also running seq, with new arguments), read from this inner coprocess in a nested while loop, and then clean up this inner coprocess. The nested while loop prints some output for each line it reads from the inner coprocess.
#!/bin/bash
# filename: coproctest.sh

PATH=/bin:/usr/bin

coproc OUTER { seq 3; }
SAVED_OUTER_PID="${OUTER_PID}"
exec {OUTER_READER}…

[1] I've only recently started to work with coprocesses, and there is still a lot I don't understand. As a result, this script almost certainly contains incorrect, awkward, or unnecessary code. Please feel free to comment on and/or fix these weaknesses in your responses.
kjo (16299 rep)
Jul 20, 2019, 04:16 PM • Last activity: Jul 22, 2019, 03:28 PM
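A sketch of the workaround usually suggested for this situation: since bash only tracks one coprocess at a time, duplicate the outer coprocess's read end onto an ordinary descriptor before starting the inner one. Bash may still print a "still exists" warning, and the short sleeps are only there to keep each coprocess alive long enough for the duplication; everything else is illustrative:
```
#!/bin/bash
coproc OUTER { seq 3; sleep 0.3; }
exec {outer_rd}<&"${OUTER[0]}"        # keep our own copy of the read end

while read -r o <&"$outer_rd"; do
  echo "outer iteration: $o"
  coproc INNER { seq 2; sleep 0.3; }
  exec {inner_rd}<&"${INNER[0]}"
  while read -r i <&"$inner_rd"; do
    echo "  inner: $i"
  done
  exec {inner_rd}<&-                  # close the inner coprocess's fd
done
exec {outer_rd}<&-
```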
3 votes
1 answers
431 views
Why does this gawk coprocess hang?
While having a go at https://unix.stackexchange.com/q/368234/70524 , I tried GNU awk's coprocess feature:
gawk -F, -v cmd='date +"%Y-%m-%d %H:%M:%S" -f-' '{print $5 |& cmd; cmd |& getline d; $5 = d}1' foo
This command hangs. I thought this might be because date is waiting to read the entire input, so I tried to close the sending half of the pipeline:
gawk -F, -v cmd='date +"%Y-%m-%d %H:%M:%S" -f-' '{print $5 |& cmd; close(cmd, "to"); cmd |& getline d; $5 = d}1' foo
This works (yes, I know I should set OFS=,, but for now...). However, date seems to have no problem processing input as it comes in. This gives the first line of output immediately:
d='Thu Apr 27 2017 23:19:47 GMT+0700 (ICT)'
(echo "$d"; sleep 1m; echo "$d") | date +"%Y-%m-%d %H:%M:%S" -f-
What's going on?
muru (77471 rep)
May 31, 2017, 06:32 AM • Last activity: Jun 19, 2019, 03:15 PM
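A hedged sketch of an alternative that avoids closing the "to" end for every record, under the assumption that the hang comes from date block-buffering its output once stdout is a pipe rather than a terminal; stdbuf -oL forces line-buffered output, and fflush(cmd) guards against gawk buffering its own writes to the coprocess:
```
gawk -F, -v OFS=, \
     -v cmd='stdbuf -oL date +"%Y-%m-%d %H:%M:%S" -f-' \
     '{print $5 |& cmd; fflush(cmd); cmd |& getline d; $5 = d}1' foo
```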
10 votes
4 answers
6032 views
Run command in background with foreground terminal access
I am trying to create a function that can run an arbitrary command, interact with the child process (specifics omitted), and then wait for it to exit. If successful, typing run <command> will appear to behave just like a bare <command>. If I weren't interacting with the child process I would simply write:
run() {
    "$@"
}
But because I need to interact with it while it runs, I have this more complicated setup with coproc and wait.
run() {
    exec {in}<&0 {out}>&1 {err}>&2
    { coproc "$@" 0<&$in 1>&$out 2>&$err; } 2>/dev/null
    exec {in}<&- {out}>&- {err}>&-

    # while child running:
    #     status/signal/exchange data with child process

    wait
}
(This is a simplification. While the coproc and all the redirections aren't really doing anything useful here that "$@" & couldn't do, I need them all in my real program.) The "$@" command could be anything. The function I have works with run ls and run make and the like, but it fails when I do run vim. It fails, I presume, because Vim detects that it is a background process and doesn't have terminal access, so instead of popping up an edit window it suspends itself. I want to fix it so Vim behaves normally.
**How can I make coproc "$@" run in the "foreground" and the parent shell become the "background"?**
The "interact with child" part neither reads from nor writes to the terminal, so I don't need it to run in the foreground. I'm happy to hand over control of the tty to the coprocess. It is important for what I'm doing that run() be in the parent process and "$@" be in its child. I can't swap those roles. But I *can* swap the foreground and background. (I just don't know how to.)
Note that I am not looking for a Vim-specific solution. And I would prefer to avoid pseudo-ttys. My ideal solution would work equally well when stdin and stdout are connected to a tty, to pipes, or are redirected from files:
run echo foo                          # should print "foo"
echo foo | run sed 's/foo/bar/' | cat # should print "bar"
run vim                               # should open vim normally
---
> Why using coprocesses? I could have written the question without coproc, with just run() { "$@" & wait; } I get the same behavior with just &. But in my use case I am using the FIFO coproc sets up and I thought it best not to oversimplify the question in case there's a difference between cmd & and coproc cmd.
> Why avoiding ptys? run() could be used in an automated context. If it's used in a pipeline or with redirections then there wouldn't be any terminal to emulate; setting up a pty would be a mistake.
> Why not using expect? I'm not trying to automate vim, send it any input or anything like that.
John Kugelman (2087 rep)
Mar 21, 2019, 05:54 PM • Last activity: Mar 31, 2019, 06:29 PM
1 votes
1 answers
495 views
How does bash coprocess achieve its pipelining?
Note this passage from man bash (emphasis mine):
> Coprocesses
>
> A coprocess is a shell command preceded by the coproc reserved word. A coprocess is executed asynchronously in a subshell, as if the command had been terminated with the & control operator, **with a two-way pipe** established between the executing shell and the coprocess.
Now, as we know, unlike on some other *nix systems, Linux pipes are unidirectional (also ref. man pipe(7), Portability section). So how does the bash coprocess achieve the "two-way pipe" without one existing on Linux?
Sergiy Kolodyazhnyy (16909 rep)
Jan 10, 2019, 04:30 AM • Last activity: Jan 10, 2019, 04:55 AM
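A small sketch that makes the answer observable on Linux: the "two-way pipe" is implemented as two ordinary one-way pipes, one per direction, as the distinct pipe:[inode] targets of the two descriptors show:
```
coproc cat
ls -l /proc/$$/fd/"${COPROC[0]}" /proc/$$/fd/"${COPROC[1]}"   # two different pipe:[...] objects
kill "$COPROC_PID"
```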
5 votes
2 answers
2574 views
Is coproc <command> the same as <command> &?
I have read that $coproc <command> is different from $<command> & in that coproc will execute the command in a sub-shell process. But when I tested it, it worked just like $<command> &. The test is as follows:
First: test the behavior of $<command> &.
1. Run $nano & on *tty1*
2. On another tty, output from $ps -t tty1 --forest indicates the nano process is a child process of the -bash process (login bash shell process -> no sub-shell process was created)
Second: test the behavior of $coproc <command>.
1. Run $coproc nano on *tty1*
2. On another tty, output from $ps -t tty1 --forest is the same as above (no sub-shell process was created)
So is $coproc <command> simply the same as $<command> &? The shell used was a **bash shell**.
Tran Triet (715 rep)
Oct 1, 2018, 12:05 PM • Last activity: Oct 1, 2018, 01:46 PM
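The process tree can indeed look the same (the forked subshell typically just execs a simple command), but a sketch of one difference that is easy to observe: coproc wires up pipes to the child's stdin and stdout and records them, together with the PID, in shell variables, while a plain & job gets none of that (CP is an illustrative name):
```
coproc CP { cat; }
echo "coproc fds: ${CP[0]} ${CP[1]}  pid: $CP_PID"
sleep 30 &
echo "plain & pid: $!  (no fd array is created for it)"
kill "$CP_PID" "$!"
```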
-3 votes
2 answers
505 views
inter process comunication in bash read commands in multiple background functions
I'm trying to get the following code to read from a main input, then read that in one function, then be able to send that into the input of another function, but I am having trouble getting it to work, and it NEEDS to be inputted into the read command to be able to be parsed.
coproc test {
  for i in $(seq 0 9)
  do
    sleep 1
    echo $i
  done
}

input() {
  while read -u 3 gr
  do
    echo sent: $gr # this should send to output function
  done
}

output() {
  while read idk
  do
    echo received: $idk | sed s/r/R/ | sed 's/5/five/' # this should receive from input function
  done
}

exec 3>&${test}
exec 4>&${test}
input 4 & # remove >&4 to see output
export input_PID=$!
output &- 4>&-
echo finished
I've tried every kind of redirection of the fd but nothing seems to work; please help. The idea is that a user will be able to (based on what the output function reads) send commands into the input function without needing to duplicate the code for each function, and vice versa (the input function can send to the output function as well). Sending to fd3 or fd4 doesn't work, as it seems to bypass read itself and send directly to the command that is receiving the final output.
Note that this is just a minimal example, the full use is here:
Basic process layout of the script (each indent represents the source depth or sub-process depth):
Code_Bash
  . ./modules/module_controller
  . ./modules/module_trap
  . ./modules/module_finish
  until [[ $FIN == 1 ]] ; do
    . ./modules/module_loader
    . ./modules/module_colors
    . ./modules/module_tracker
    . ./modules/module_config
    . ./modules/module_kill_all_panes
    . ./modules/module_irc_session
    . ./modules/module_input
    . ./modules/module_irc_read
    . ./modules/module_output
    . ./modules/module_handler
    . ./modules/module_user_input
    . ./modules/module_array
    if [[ -z $IRC_NC_PID && $IRC_FIN == "0" ]] ; then
      . ./modules/module_coproc_nc
      . ./modules/module_rest_of_script
    fi
    . ./modules/module_null
  done
https://github.com/mgood7123/UPM/blob/master/Files/Code/modules/module_loader (this is continuously executed)
https://github.com/mgood7123/UPM/blob/master/Files/Code/modules/module_coproc_nc
https://github.com/mgood7123/UPM/blob/master/Files/Code/modules/module_irc_read (core module run in a separate background process)
https://github.com/mgood7123/UPM/blob/master/Files/Code/modules/module_irc_session (core module run in a separate background process)
https://github.com/mgood7123/UPM/blob/master/Files/Code/modules/module_rest_of_script (THIS RUNS THE BACKGROUND FUNCTIONS FOR THE IRC_READ AND IRC_SESSION)
Clark Kent (11 rep)
Sep 17, 2017, 12:58 PM • Last activity: Sep 17, 2017, 05:40 PM
5 votes
1 answers
184 views
Joined pipelines
Considering a routine such as this one:
alpha() { echo a b c |tr ' ' '\n'; }
which outputs a stream, I would like to take the output stream, transform it, and paste it with the original output stream. If I use upcasing as a sample transformation, I can achieve what I want with:
$ mkfifo p1 p2
$ alpha | tee p1 >( tr a-z A-Z > p2) >/dev/null &
$ paste p1 p2
a A
b B
c C
My question is, is there a better way to do this, preferably one not involving named pipes?
Petr Skocik (29590 rep)
Jul 25, 2015, 12:06 AM • Last activity: May 31, 2017, 10:32 PM
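One hedged alternative without named pipes, under the assumption that running alpha twice is acceptable (here it is cheap and deterministic): process substitution hands paste two anonymous streams directly:
```
alpha() { echo a b c | tr ' ' '\n'; }
paste <(alpha) <(alpha | tr a-z A-Z)   # prints: a A / b B / c C (tab-separated)
```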
Showing page 1 of 17 total questions