Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
-1
votes
1
answers
81
views
Serving a file (e.g. in Apache) from a named pipe (made with mkfifo)
Let's say I use Apache, and that I am in `/var/www/html/`. I do:

```bash
mkfifo test.tar
tar cvf - ~/test/* > test.tar &
```

In a browser, when trying to download `http://localhost/test.tar`, I get:

```
ERR_EMPTY_RESPONSE (didn't send any data)
```

Is there a specific parameter of `mkfifo` that would make the pipe really look like a regular file? The problem here seems to come from the fact that the file is a named pipe.
In my real example, I might use different webservers (not Apache), like [CivetWeb](https://github.com/civetweb/civetweb), but first I want to check whether it works with Apache (which I use the most).
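A quick way to see what the webserver sees (a sketch; `%F` and `%s` are GNU stat's file-type and size fields):

```bash
$ stat -c '%F, %s bytes' test.tar
fifo, 0 bytes    # a named pipe with size 0, not a regular file holding the archive
```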
Basj
(2579 rep)
Jul 13, 2025, 03:26 PM
• Last activity: Jul 21, 2025, 07:11 AM
3
votes
1
answers
1113
views
How to pipe all output streams to another process?
Take the following Bash script `3-output-writer.sh`:

```bash
echo A >&1
echo B >&2
echo C >&3
```

Of course, when run as `. 3-output-writer.sh` it fails with the error `3: Bad file descriptor`, because Bash doesn't know what to do with the 3rd output stream. One can easily run `. 3-output-writer.sh 3>file.txt`, though, and Bash is made happy.
But here's the question: how do I pipe all of these into another process, so that it has all three streams to work with? Is there any way other than creating three named pipes, as in

```bash
mkfifo pipe1 pipe2 pipe3                       # prepare pipes
. 3-output-writer.sh 1>pipe1 2>pipe2 3>pipe3 & # background the writer, awaiting readers
3-input-reader pipe1 pipe2 pipe3               # some sort of reader, careful not to block
rm pipe1 pipe2 pipe3
```

?
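For what it's worth, bash's process substitution can stand in for the three named pipes; each stream gets wired to its own consumer without any mkfifo (a sketch; the sed handlers are placeholders for real readers):

```bash
# each of the script's three streams goes to its own labelled consumer
. 3-output-writer.sh \
    1> >(sed 's/^/fd1: /') \
    2> >(sed 's/^/fd2: /') \
    3> >(sed 's/^/fd3: /')
```

This gives each stream its own process rather than one reader holding all three, but bash creates and cleans up the plumbing itself.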
Sinus tentaclemonster
(139 rep)
Aug 6, 2020, 07:30 PM
• Last activity: Oct 5, 2024, 01:36 PM
5
votes
2
answers
607
views
Can I use named pipes to achieve temporal uncoupling?
I have 2 applications that pipe their data:

```bash
application1 | application2
```

Basically, application1 generates a log with events that application2 processes. The issue is that I frequently update application2, meaning I need to stop it, update the binaries, and restart it. In that small window, application1's data can be missed.
I read about named pipes made with `mkfifo` and thought that could be an ideal solution: keep application1 running and have it write to the named pipe, so that no data is lost while application2 is updated, and once application2 restarts it gets the data.
Testing this with `cat` to emulate the reader/writer, it works until there is no longer a reader. This is unexpected.
An alternative would be to use an actual file, but that has issues:
- It remains on disk and does not act like a FIFO
- It requires some form of rotation to prevent files from growing super large
- AFAIK when the reader is at the end (tail?) it will need to poll on a timer to see whether the file has grown, which increases the processing latency
I'm in control of the reader; its current behavior is already to auto-restart. But I'm not in control of the writer: I can only pipe its output to something else.
1. Can named pipes be configured in a way that makes them durable?
2. I read about "pinning" the pipe by the writer, but I fail to get that to work
3. Can I prevent a pipe from getting closed once the reader exits?
4. Are there alternatives that behave like a pipe?
---
Solution https://unix.stackexchange.com/a/784153/585995 works!

```bash
mkfifo /tmp/mypipe
writerapp 1<> /tmp/mypipe
```

The writer keeps running while I restart the reader:

```bash
readerapp < /tmp/mypipe
```

By the way, you can test this yourself by using `cat`.
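A self-contained way to test the durability with stand-ins for the real programs (a sketch; a `date` loop plays the writer, `cat` the restartable reader):

```bash
mkfifo /tmp/mypipe
# writer: the read-write open means a reader always exists, so the writer
# never blocks on open() and never receives SIGPIPE while the reader is away
( while :; do date; sleep 1; done ) 1<> /tmp/mypipe &
# reader: stop and restart this at will; anything written in the meantime
# waits in the pipe buffer (up to 64 KiB on Linux) rather than being lost
cat /tmp/mypipe
```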
Ramon Smits
(153 rep)
Sep 27, 2024, 08:33 AM
• Last activity: Sep 27, 2024, 11:56 AM
7
votes
1
answers
697
views
FIFO capture using cat not working as intended?
Hi, I am trying to use a Unix FIFO to communicate between a Python script and a shell script. The intention is for the shell script to capture all output of the Python script. So far I have the following:

```bash
#!/bin/bash
# Test IPC using FIFOs.

## Create a named pipe (FIFO).
mkfifo ./myfifo

## Launch the Python script asynchronously and re-direct its output to the FIFO.
python3 helloworld.py > ./myfifo &
PID_PY=$!
echo "Python script (PID=$PID_PY) launched."

## Read from the FIFO using cat asynchronously.
## Note that running with & runs the program (in this case cat) in a child
## shell ("subshell"), so I will collect the output in a file.
echo "Reading FIFO."
>output.log cat ./myfifo &
PID_CAT=$!

## Sleep for 10 seconds.
sleep 10

## Kill the Python script.
kill -15 $PID_PY && echo "Python script (PID=$PID_PY) killed."

## Kill the cat!
kill -15 $PID_CAT

## Remove the pipe when done.
rm -fv ./myfifo

## Check for the existence of the output log file and print it.
[[ -f output.log ]] && cat output.log || echo "No logfile found!." 1>&2
```

However, when I open the log file `output.log`, it is empty, which is why the last command prints nothing. Is there something I am doing wrong? I understand the above might be easily accomplished using an anonymous pipe, like so: `python3 helloworld.py | cat >output.log` (or even `python3 helloworld.py > output.log` for that matter), but my intention is to understand the use of named pipes in Unix/Linux.
The Python script just prints something to stdout every 1 second:

```python
if __name__ == "__main__":
    import time
    try:
        while True:
            print("Hello, World")
            time.sleep(1)
    except KeyboardInterrupt:
        print('Exiting.')
    finally:
        pass
```
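One buffering detail worth ruling out (a sketch): when stdout is a pipe rather than a terminal, Python block-buffers its print output, and SIGTERM terminates the process without flushing, so the FIFO may never receive a single byte. Running the script unbuffered sidesteps that:

```bash
# -u disables Python's stdout/stderr buffering, so every print reaches the FIFO at once
python3 -u helloworld.py > ./myfifo &
```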
First User
(345 rep)
Sep 25, 2024, 08:58 AM
• Last activity: Sep 25, 2024, 09:23 AM
-1
votes
3
answers
203
views
Empty named pipe
I try to create a named pipe; however, when I store data in it, it is still empty.

```shell
$ mkfifo myfifo
```

```shell
$ cat > myfifo
123
123
123
^[[D
^C
```

```shell
$ ls > myfifo
^C
```

```shell
$ cat < myfifo
```

(no output)
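For reference, a FIFO stores nothing at rest: data only passes through it while a reader and a writer are attached at the same time, so the test needs two terminals (a sketch):

```bash
# terminal 1: the reader blocks until data arrives
cat < myfifo

# terminal 2: open() blocks until the reader above exists, then the data flows
echo 123 > myfifo
```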
Irina
(139 rep)
Aug 19, 2024, 03:11 PM
• Last activity: Aug 20, 2024, 11:51 PM
0
votes
1
answers
380
views
Setting large fs.pipe-max-size
When I increase `fs.pipe-max-size` like so:

```bash
echo "fs.pipe-max-size = N" >> /etc/sysctl.conf
sysctl -p
```

(*`N` is ~4-10 Mbytes*)

and use `F_SETPIPE_SZ` to change named pipe sizes to `N`, it sometimes fails with an "operation not permitted" error.
The system has ~20 pipes and I set the same pipe buffer size on all of them.
The question is:
- Is it because I hit some kind of total kernel pipe-buffer memory capacity (btw the system has 30G RAM)?
- Or is it because I use an `N` that isn't divisible by the page size, so `F_SETPIPE_SZ` might set the size above the `fs.pipe-max-size` limit and fail with "operation not permitted"? That makes sense; I think I saw values in the logs larger than what I asked for.
- Or is it something else entirely?
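The second guess is close to what Linux actually does: the kernel rounds the requested capacity up to the next power-of-two number of pages before comparing it against `fs.pipe-max-size`, so a request just under a non-power-of-two limit can still be rejected with EPERM for an unprivileged process. A sketch of the workaround:

```bash
# pick a power-of-two limit so a request of exactly N survives the rounding
sudo sysctl -w fs.pipe-max-size=$((8 * 1024 * 1024))   # 8 MiB
```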
JAre
(125 rep)
Jul 6, 2024, 06:44 AM
• Last activity: Jul 6, 2024, 01:09 PM
0
votes
0
answers
37
views
Strange incongruent output for both nc and fifo
I have this exact code:
```bash
#!/bin/bash

gtimeout(){
    if type -f gtimeout &> /dev/null; then
        command gtimeout "$@"
    else
        timeout "$@"
    fi
}
export -f gtimeout;

on_first_match(){
    local pattern="$1" # The pattern to search for
    while IFS= read -r line; do
        if [[ "$line" == *"$pattern"* ]]; then
            exit 0
        fi
    done
}
export -f on_first_match

write_to_server(){
    echo -e "$?" | nc 0.0.0.0 3333
}
export -f write_to_server

(
    set +e;
    sleep 2;
    (
        gtimeout 15 docker logs -f rabbitmq-node1 | on_first_match 'Starting listener on';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f mongodb-node1 | on_first_match 'waiting for connections';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f elasticsearch-node1 | on_first_match 'started';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f rabbitmq-node2 | on_first_match 'Starting listener on';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f mongodb-node2 | on_first_match 'waiting for connections';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f elasticsearch-node2 | on_first_match 'started';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f rabbitmq-node3 | on_first_match 'Starting listener on';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f mongodb-node3 | on_first_match 'waiting for connections';
        write_to_server
    ) &
    (
        gtimeout 15 docker logs -f elasticsearch-node3 | on_first_match 'started';
        write_to_server
    ) &
) &

gtimeout 30 nc -k -l 3333 | while IFS= read -r line; do
    echo "Received: $line";
done
```
The output I get is like this:

```
Error response from daemon: No such container: mongodb-node1
Error response from daemon: No such container: rabbitmq-node2
Error response from daemon: No such container: rabbitmq-node1
Error response from daemon: No such container: elasticsearch-node2
Error response from daemon: No such container: elasticsearch-node1
Error response from daemon: No such container: mongodb-node2
Error response from daemon: No such container: mongodb-node3
Error response from daemon: No such container: rabbitmq-node3
Error response from daemon: No such container: elasticsearch-node3
Received: 0
Received: 0
Received: 0
Received: 0
Received: 0
```
(with just 5 received lines, sometimes 4, sometimes 6 but never 9 as expected)
but I expect this:
```
Error response from daemon: No such container: mongodb-node1
Error response from daemon: No such container: rabbitmq-node2
Error response from daemon: No such container: rabbitmq-node1
Error response from daemon: No such container: elasticsearch-node2
Error response from daemon: No such container: elasticsearch-node1
Error response from daemon: No such container: mongodb-node2
Error response from daemon: No such container: mongodb-node3
Error response from daemon: No such container: rabbitmq-node3
Error response from daemon: No such container: elasticsearch-node3
Received: 0
Received: 0
Received: 0
Received: 0
Received: 0
Received: 0
Received: 0
Received: 0
Received: 0
```
with 9 received lines.
Anyone know what is going on here? (The same behavior happens when I switch the code out for a mkfifo instead of a socket).
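For the FIFO variant, one pattern that avoids dropped messages is a single long-lived reader on a FIFO held open read-write, relying on the fact that writes of up to PIPE_BUF bytes (at least 512, 4096 on Linux) are atomic. A sketch with hypothetical names:

```bash
mkfifo /tmp/done.fifo
exec 3<>/tmp/done.fifo       # hold both ends open so reads never see EOF
# each watcher reports with one short line; short writes are atomic, so lines
# from concurrent writers cannot interleave or vanish between open/close races
( echo 0 >&3 ) &
( echo 0 >&3 ) &
for i in 1 2; do
    IFS= read -r line <&3
    echo "Received: $line"
done
exec 3<&-
```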
Alexander Mills
(10734 rep)
Nov 22, 2023, 06:07 AM
4
votes
2
answers
4059
views
How to write something to named pipe even if there are no readers
I have this little test script:

```bash
rm fooo | cat
mkfifo fooo
echo 'bar' > fooo # blocks here
echo 'done'
```

I am guessing that because there is nobody reading from the named pipe, the write call will block until there is.
Is there some way to write even if there are no readers, or to check whether there are any readers?
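One common workaround is to open the FIFO read-write from the writing side, so a reader always exists and neither open() nor a short write blocks (a sketch):

```bash
mkfifo fooo
exec 3<> fooo        # read-write: the shell now counts as a reader
echo 'bar' >&3       # returns immediately; data waits in the pipe buffer
echo 'done'
exec 3>&-
```

Writes only start blocking once the pipe buffer (64 KiB by default on Linux) fills up.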
Alexander Mills
(10734 rep)
Jun 4, 2019, 07:00 PM
• Last activity: May 1, 2023, 07:06 PM
0
votes
1
answers
517
views
How to guarantee that that only a specific process reads from a named pipe?
Suppose that, at time (1), I create a named pipe using Python, with the goal that this Python process will eventually write something to that named pipe. Why? Because, at time (2), there is another process that is expected to read from that named pipe.
So, basically, it's IPC via named pipes. Why is this neat? Because the pipe looks like a file, so the other process, which can only read files, can be communicated with via this named-pipe mechanism as a convenient IPC, without needing to rewrite the other process.
**But there is a problem:** suppose that between time (1) and time (2), an evil process starts reading from the named pipe before the intended process. This way, my Python script may end up sending data to an unintended process. I am not concerned about the hijacker writing to the pipe in my specific risk model; I'm only concerned about the hijacker reading from the pipe before the intended process does.
**Question:** is there any mechanism to ensure that no process but the intended one reads from the pipe?
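File permissions are the usual first step, though they only exclude other users, not other processes running as the same user (a sketch; the path is hypothetical):

```bash
# mode 600: only the owning user (and root) may open the FIFO at all
mkfifo -m 600 /tmp/ipc.fifo
```

Distinguishing which same-user process is on the other end is beyond what a FIFO offers; that usually calls for a Unix domain socket, where the server can verify the peer via SO_PEERCRED.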
caveman
(173 rep)
Aug 12, 2020, 05:06 AM
• Last activity: Apr 19, 2023, 01:10 AM
1
votes
0
answers
267
views
Unable to redirect stdout for background terraform process after it received input from named pipe
I have a terraform file:

```terraform
terraform {
  required_version = "1.3.5"
}

locals {
  a = "foo"
  b = "bar"
}
```

In a bash terminal, I can do:

```bash
$ echo "local.a" | terraform console
"foo"
$ echo "local.b" | terraform console
"bar"
```

Now what I'm trying to do is start a process running `terraform console` in the background and feed it commands.
This is what I've tried (following this answer https://serverfault.com/a/815253):

```bash
$ mkfifo /tmp/srv-input
$ tail -f /tmp/srv-input | terraform console >>output.txt 2>&1 &
```

This starts the background process correctly:

```bash
$ ps -ax | grep terraform
   6030 pts/0    Sl     0:01 terraform console
```

If I then run:

```bash
$ echo "local.a" > /tmp/srv-input
```

the output file, `output.txt`, is empty:

```bash
$ cat output.txt
$
```

If I run:

```bash
$ echo "local.c" > /tmp/srv-input   # invalid input
```

the output file, `output.txt`, contains the (expected) error:

```
$ cat output.txt
╷
│ Error: Reference to undeclared local value
│
│   on <console-input> line 1:
│   (source code not available)
│
│ A local value with the name "c" has not been declared. Did you mean "a"?
╵
+ Exit 1  tail -f /tmp/srv-input | terraform console >> output.txt 2>&1
```
----------
Why is only stderr being redirected to the log file, but not stdout?
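A quick way to test whether stdout is simply sitting in a buffer (a sketch): end the pipeline so the console exits and flushes:

```bash
kill %1           # stop the tail|terraform job; terraform sees EOF on stdin and exits
cat output.txt    # if "foo" shows up only now, stdout was buffered until exit
```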
Foo
(242 rep)
Dec 15, 2022, 02:07 PM
1
votes
1
answers
191
views
Speed up grep usage inside bash script
I am currently working on a bash script that processes large log files from one of my programs. When I first started, the script took around 15 seconds to complete, which wasn't bad, but I wanted to improve it. I implemented a queue with `mkfifo` and reduced the parse time to 6 seconds. I wanted to ask: is there any way to improve the parsing speed of the script?
The current version of the script:

```bash
#!/usr/bin/env bash
# $1 is server log file
# $2 is client logs file directory

declare -A orders_array

fifo=$HOME/.fifoDate-$$
mkfifo $fifo

# Queue for time conversion
exec 5> >(exec stdbuf -o0 date -f - +%s%3N >$fifo)
exec 6 >(exec stdbuf -o0 grep -oP '[0-9a-f]*-[0-9a-f]*-[0-9a-f]*-[0-9a-f]*-[0-9a-f]*' >$fifo)
exec 8&5 "${line:1:26}"
read -t 1 -u6 converted_time
orders_array[$order_id]=$converted_time
done &7 "$line"
read -t 1 -u8 id
echo >&5 "${line:1:26}"
read -t 1 -u6 converted_time
time_diff="$(($converted_time - orders_array[$id]))"
echo "$id -> $time_diff ms"
done
```

Server log sample:

```
GatewayCommon::States::Executed]
[2022-12-07 07:36:18.209567] [MarketOrderTransitionsa4ec2abf-059f-4452-b503-ae58da2ce1ff] [info] [log_action] [(lambda at ../subprojects/market_session/private_include/MarketSession/MarketOrderTransitions.hpp:57:25) for event: MarketMessages::OrderExecuted]
[2022-12-07 07:36:18.209574] [MarketOrderTransitionsa4ec2abf-059f-4452-b503-ae58da2ce1ff] [info] [log_process_event] [boost::sml::v1_1_0::back::on_entry]
```

The id is in square brackets after MarketOrderTransitions (a4ec2abf-059f-4452-b503-ae58da2ce1ff).
Client log sample:

```
[2022-12-07 07:38:47.545433] [twap_algohawk] [info] [] [Event received (OrderExecuted): {"MessageType":"MarketMessages::OrderExecuted","averagePrice":"49.900000","counterPartyIds":{"activeId":"dIh5wYd/S4ChqMQSKMxEgQ**","executionId":"2295","inactiveId":"","orderId":"3dOKjIoURqm8JjWERtInkw**"},"cumulativeQuantity":"1200.000000","executedPrice":"49.900000","executedQuantity":"1200.000000","executionStatus":"Executed","instrument":[["Symbol","5"],["Isin","5"],["SecurityIDSource","4"],["Mic","MARS"]],"lastFillMarket":"MARS","leavesQuantity":"0.000000","marketSendTime":"07:38:31.972000000","orderId":"a4ec2abf-059f-4452-b503-ae58da2ce1ff","orderPrice":"49.900000","orderQuantity":"1200.000000","propagationData":[],"reportId":"Qx2k73f7QqCqcT0LTEJIXQ**","side":"Buy","sideDetails":"Unknown","transactionTime":"00:00:00.000000000"}]
```

The id in the client log is inside the orderId tag (there are 2 of them and I use the second one).
The wanted output is:
```
98ddcfca-d838-4e49-8f10-b9f780a27470 -> 854 ms
5a266ca4-67c6-4482-9068-788a3520b2f3 -> 18 ms
2e8d28de-eac0-4776-85ab-c75d9719b7c6 -> 58950 ms
409034eb-4e55-4e39-901a-eba770d497c0 -> 56172 ms
5b1dc7e8-fae0-43d2-86ea-d3df4dbe810b -> 52505 ms
5249ac24-39d2-40f5-8adf-dcf0410aebb5 -> 17446 ms
bef18cb3-8cef-4d8a-b244-47fed82f21ea -> 1691 ms
7c53c950-23fd-497e-a011-c07363d5fe02 -> 18194 ms
```
In particular, I am concerned only with the "order executed" messages in the log files.
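Since the script already batches conversions through one long-lived `date -f -`, the same idea can go further: extract all the timestamps in one pass and convert them with a single `date` invocation, instead of one round trip per line. A sketch, assuming the `[YYYY-mm-dd HH:MM:SS.ffffff]` prefix shown in the samples; `server.times` is a hypothetical output name:

```bash
grep -F 'Executed' "$1" \
  | sed -E 's/^\[([^]]+)\].*/\1/' \
  | date -f - +%s%3N > server.times   # one date process converts every line
```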
Dzamba
(11 rep)
Dec 12, 2022, 09:30 AM
• Last activity: Dec 13, 2022, 03:32 PM
1
votes
1
answers
299
views
Process (mplayer) doesn't read from named pipe when started from webserver (lighttpd)
# tl;dr
```
$ sudo -u www-data mplayer -slave -input file=/srv/mplayer.fifo -playlist /srv/list &
$ lsof /srv/mplayer.fifo | tail +2
mplayer 21059 www-data 4u FIFO 179,2 0t0 2359331 /srv/mplayer.fifo
$ cat /var/www/html/test
#!/usr/bin/bash
mplayer -slave -input file=/srv/mplayer.fifo -playlist /srv/list &
$ curl 'http://localhost/test'   # mplayer starts playback (and keeps playing)
$ lsof /srv/mplayer.fifo
# no output!?
```
# details
On my **Raspberry Pi**, I have a *lighttpd* server running. It's supposed to start and control an *mplayer* process. The webserver starts mplayer with `-slave -input file=/srv/mplayer.fifo`. (So mplayer reads and executes commands from that file.) In order to skip to the next song, one of the webserver scripts writes `pt_skip 1` to `/srv/mplayer.fifo`. This indeed works when mplayer is run from the command line. But when started from lighttpd, mplayer does not read commands from `/srv/mplayer.fifo`. I don't understand why. Here's what I did:
Setup

```
$ mkfifo /srv/mplayer.fifo
$ chmod o+w /srv/mplayer.fifo
$ ls -l /srv/mplayer.fifo
prw-r--rw- 1 root root 0 Aug 7 12:11 /srv/mplayer.fifo
```
Test (run from the command line)

```
$ sudo -u www-data mplayer -ao alsa -slave -input file=/srv/mplayer.fifo -playlist /srv/list -shuffle
$ lsof /srv/mplayer.fifo
COMMAND   PID     USER  FD  TYPE DEVICE SIZE/OFF    NODE NAME
mplayer 21059 www-data   4u FIFO  179,2      0t0 2359331 /srv/mplayer.fifo
$ ps aux | grep mplayer
root     21058  0.0 0.2   4680  2400 pts/0 S+  12:13 0:00 sudo -u www-data mplayer -ao alsa -slave -input file=/srv/mplayer.fifo -playlist /srv/list -shuffle
www-data 21059 11.6 3.1 127928 30008 pts/0 SL+ 12:13 0:01 mplayer -ao alsa -slave -input file=/srv/mplayer.fifo -playlist /srv/list -shuffle
```
That's as expected. But if I run mplayer from lighttpd ...

```
$ cat /var/www/html/play
#!/usr/bin/bash
mplayer -ao alsa -slave -input file=/srv/mplayer.fifo -playlist /srv/list -shuffle &
```

... **it starts mplayer**, but the mplayer instance is not reading from `/srv/mplayer.fifo`. `lsof` doesn't produce any output:

```
$ lsof /srv/mplayer.fifo
$ ps aux | grep mplayer
www-data 21177 15.3 3.1 128212 29744 ? SL 12:30 0:01 mplayer -ao alsa -slave -input file=/srv/mplayer.fifo -playlist /srv/list -shuffle
```

I can also see mplayer is not reading from the pipe, because writing to it blocks. The mplayer logs don't show anything unusual. Do you have an idea why mplayer doesn't read from the named pipe when run from lighttpd?
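A diagnostic sketch to see whether the lighttpd-spawned instance ever opened the FIFO at all (`pgrep -n` picks the newest matching process):

```bash
pid=$(pgrep -n -u www-data -x mplayer)
# if no line mentions mplayer.fifo, the slave input was never opened,
# rather than opened and closed again
ls -l /proc/"$pid"/fd
```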
steffen
(121 rep)
Aug 7, 2022, 12:56 PM
• Last activity: Aug 15, 2022, 01:02 PM
0
votes
1
answers
597
views
Adding 'Progress bar / counter' to a parallelised For Loop
I've been greatly inspired by this question: https://unix.stackexchange.com/questions/103920/parallelize-a-bash-for-loop to parallelise some tools I've written that involve very long while-read loops (i.e. doing the same task / set of tasks across paths given in an input file; the input file would contain around 90,000 lines, and growing).
I've done all the work to 'shoehorn' PSkocik's 'N processes with a FIFO-based semaphore' example into my code...
```bash
# initialize a semaphore with a given number of tokens
open_sem(){
    mkfifo /tmp/pipe-$$
    exec 3<>/tmp/pipe-$$
    rm /tmp/pipe-$$
    local i=$1
    for((;i>0;i--)); do
        printf %s 000 >&3
    done
}

# run the given command asynchronously and pop/push tokens
run_with_lock(){
    local x
    # this read waits until there is something to read
    read -u 3 -n 3 x && ((0==x)) || exit $x
    (
        ( "$@"; )
        # push the return code of the command to the semaphore
        printf '%.3d' $? >&3
    )&
}

N=4
open_sem $N
for thing in {a..g}; do
    run_with_lock task $thing
done
```
However, my 'old' code had a nice progress counter built into the read loop (the code below is abbreviated). And yes, I know there is a weird mix of echo, awk and printf; I tend to reuse code from other scripts I've written, where some of it was based on other online examples, etc. I'm sure I can tidy this up, but it works and I'm the only one using this code!:
```bash
## $temp1 is the file with 90,000 lines to read over
## $YELLOW is a global variable exported from my bashrc with the escape code for yellow text
## $GREEN is a global variable exported from my bashrc with the escape code for green text
## $CL is a global variable exported from my bashrc with the escape code for Clear Line
## $NC is a global variable exported from my bashrc with the escape code to revert text colour back to normal

num_lines="$(cat $temp1 | wc -l)"
percent_per_line="$(awk "BEGIN {print 100/$num_lines}")"
progress_percent='0'
current_line='1'
echo -ne "${CL}${YELLOW}PROGRESS: ${progress_percent}% ${NC}\r"
while read line; do
    ############################################
    ##commands to process $line data as needed##
    ############################################
    progress_percent="$(awk "BEGIN {print $progress_percent + $percent_per_line }")"
    awk -v y=$YELLOW -v nc=$NC -v progress=$progress_percent -v current_line=$current_line -v total_lines=$num_lines 'BEGIN {printf (y"\033[2KPROGRESS: %.3f%% (%d OF %d)\n\033[1A"nc, progress, current_line, total_lines) }'
    # I think I mixed my global-var escape codes with typed-out ones because I forgot / have no need to export \033[2K and \033[1A for anything else
    ((current_line++))
done < "$temp1"
echo -e "${CL}${GREEN}PROGRESS: 100.000% (${num_lines} OF ${num_lines})${NC}"
I'm trying to find a way to, again, shoehorn something with a similar output into the 'new' FIFO-semaphore code.
I just can't work out how! Does it go into the run_with_lock function? If so, where? I would need to pass the percent_per_line and num_lines variables into that function, but it already passes "$@" within it :( I feel I'm not fully understanding how the FIFO semaphore works; if I did, I would use another semaphore/message mechanism of the same kind to pass the data I need around.
Any help would be massively appreciated as it helps me learn and improve!!
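One way to wire a similar counter into the semaphore version is a second FIFO that finished tasks report on, read by a single counter process. A sketch, reusing the held-open-FIFO trick from open_sem; the `echo done >&4` hook inside run_with_lock is the assumed addition:

```bash
# progress FIFO, held open read-write just like the semaphore pipe
mkfifo /tmp/progress-$$
exec 4<>/tmp/progress-$$
rm /tmp/progress-$$

num_lines="$(wc -l < "$temp1")"
(
    current_line=0
    while IFS= read -r -u4 _; do
        ((current_line++))
        awk -v y="$YELLOW" -v nc="$NC" -v cur="$current_line" -v total="$num_lines" \
            'BEGIN { printf (y"\033[2KPROGRESS: %.3f%% (%d OF %d)\r"nc, cur * 100 / total, cur, total) }'
    done
) &

# inside run_with_lock's backgrounded subshell, after ( "$@"; ), add:
#     echo done >&4
```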
user279851
Apr 6, 2021, 06:14 AM
• Last activity: Apr 8, 2021, 02:36 AM
2
votes
2
answers
79
views
Process in Pipe which Processes 256 bytes at a Time
I have a C program on a Cyclone 5 that does an FFT using the connected FPGA. The program currently takes 256 bytes from stdin, processes them, and gives the FFT results on stdout. I run it like this from the Linux bash on the Cyclone 5:

```bash
./fpga_fft < input_s16le_audio.pcm
```

This only evaluates the first 256 bytes. How do I arrange for the program to be called continuously with the stdin stream until everything from the *.pcm file is read?
Ideas:

```bash
cat input_s16le_audio.pcm | xargs ./fpga_fft
```

Somehow xargs needs to be told to process 256 bytes at a time, in chronological, sequential order (not in parallel).
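Two shell-level sketches (the cleaner fix may be looping inside the C program itself). GNU split can hand a command the input one 256-byte chunk at a time, in order:

```bash
# --filter runs the command once per chunk, piping the chunk to its stdin
split -b 256 --filter='./fpga_fft' input_s16le_audio.pcm
```

Or a plain dd loop sharing one file descriptor, so each iteration continues where the previous one stopped:

```bash
exec 3< input_s16le_audio.pcm
while true; do
    tmp=$(mktemp)
    dd bs=256 count=1 <&3 of="$tmp" 2>/dev/null
    [ -s "$tmp" ] || { rm -f "$tmp"; break; }   # stop at end of input
    ./fpga_fft < "$tmp"
    rm -f "$tmp"
done
exec 3<&-
```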
Flying Swissman
(123 rep)
Jun 25, 2020, 11:23 AM
• Last activity: Jul 23, 2020, 06:27 AM
2
votes
1
answers
210
views
Using tee and paste results in a deadlock
I am trying to redirect the stdout of a command into two "branches" using tee for separate processing. Finally, I need to merge the results of both "branches" using paste. I came up with the following code for the producer:

```bash
mkfifo a.fifo b.fifo
python -c 'print(("0\t"+"1"*100+"\n")*10000)' > sample.txt
cat sample.txt | tee >(cut -f 1 > a.fifo) >(cut -f 2 > b.fifo) | awk '{printf "\r%lu", NR}'
# outputs ~200 lines instantly
# and then ~200 more once I read from pipes
```

and then in a separate terminal I start the consumer:

```bash
paste a.fifo b.fifo | awk '{printf "\r%lu", NR}'
# outputs ~200 once producer is stopped with ctrl-C
```
The problem is that it hangs. This behaviour seems to depend on the input length:
1. If input lines are smaller (i.e. if the second column contains 30 characters instead of 100), it works fine.
2. If `a.fifo` and `b.fifo` are fed the same (or similar-length) input, it also seems to work fine.
The problem seemingly arises when I feed short chunks into, say, `a.fifo` and long ones into `b.fifo`. This behaviour does not depend on the order in which I specify the pipes in `paste`.
I am not very familiar with Linux and its piping logic, but it seems that it somehow deadlocks. My question is whether this can be reliably implemented, and if so, how? Maybe there are other ways that don't use `tee` and `paste`?
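If the stall comes from stdio block-buffering in cut (each cut flushes only every ~4 KiB, while paste needs the two streams to advance roughly in step), one candidate fix is forcing line-buffered output in both branches (a sketch):

```bash
# stdbuf -oL makes each cut flush per line, so paste can alternate freely
cat sample.txt \
  | tee >(stdbuf -oL cut -f 1 > a.fifo) >(stdbuf -oL cut -f 2 > b.fifo) \
  | awk '{printf "\r%lu", NR}'
```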
HollyJolf
(21 rep)
May 6, 2020, 11:57 PM
• Last activity: May 9, 2020, 02:34 AM
0
votes
1
answers
60
views
Why don't named pipes respect the order at which readers were attached?
I have this test script:

```bash
#!/usr/bin/env bash
fif="foooz"; rm "$fif" ; mkfifo "$fif"
( cat "$fif" | cat && echo "1") &
sleep 0.1
( cat "$fif" | cat && echo "2") &
sleep 0.1
( cat "$fif" | cat && echo "3") &
echo "first" > "$fif"
wait;
```

The output I get varies; I see these varieties:

```
first
1
----
first
2
----
first
1
2
----
first
3
```

My question is: why isn't the order in which readers attach to the named pipe respected? It seems lame that it's almost random.
Alexander Mills
(10734 rep)
Jun 5, 2019, 12:50 AM
• Last activity: Jun 5, 2019, 03:20 PM
5
votes
1
answers
4600
views
How to check for presence of named pipe on the file system
I tried using the -f flag to test if a named pipe is present:

```bash
if [[ ! -f "$fifo" ]]; then
    echo 'There should be a fifo.lock file in the dir.' > /dev/stderr
    return 0;
fi
```

This check does not seem correct. So perhaps a named pipe is not a file, but something else?
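For reference, bash's test operators distinguish file types: `-f` is true only for regular files, while `-p` tests for a FIFO. A minimal variant of the check:

```bash
# -p is the dedicated named-pipe (FIFO) test
if [[ ! -p "$fifo" ]]; then
    echo 'There should be a fifo.lock file in the dir.' >&2
    return 0
fi
```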
user282164
Jun 4, 2019, 06:38 PM
• Last activity: Jun 4, 2019, 07:16 PM
0
votes
1
answers
204
views
Run program when/instead of writing to FIFO?
I have a program that writes data every second to a FIFO. Now I want to alter some of this data and write it to another FIFO.
What would be the best approach? Can I somehow pipe this directly to my program (I have no control over/source code of the original program)? Or would I have to create a program that runs in the background and constantly polls the FIFO for new input?
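A small relay process is the usual shape for this (a sketch; the FIFO names and the sed transform are placeholders for the real alteration):

```bash
mkfifo new.fifo
# block reading the producer's FIFO, rewrite each line unbuffered, feed the consumer
sed -u 's/foo/bar/' < orig.fifo > new.fifo
```

No polling is needed: the read on orig.fifo simply blocks until the producer writes.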
TJJ
(101 rep)
May 17, 2019, 11:17 AM
• Last activity: May 18, 2019, 05:34 PM
6
votes
1
answers
2040
views
Why doesn't mkfifo with a mode of 1755 grant read permissions and sticky bit to the user?
I'm creating a server and client situation where I want to create a pipe so they can communicate.
I created the pipe in the server code with `mkfifo("fifo",1755);`:
- 1 so that only the user that created it (and root) can delete or rename it,
- 7 to give read, write and exec to the user, and
- 5 for both group and other, to give them only read and exec.
The problem is that later in the server code I open the FIFO to read from it with `open("fifo", O_RDONLY);`, but when I execute it, perror tells me that access to the FIFO is denied.
I went to see the permissions of the pipe and it says `p-wx--s--t`, so:
- `p` stands for pipe,
- `-` means the user has no read permission; I don't know how, when I gave it with the 7,
- `s` means the group executes as the user; I don't know how, since I gave 1, which supposedly should only give others the `t`, and that part was expected.
Do I have a misunderstanding of the permissions?
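A quick shell check points at the literal itself: in C, 1755 without a leading zero is decimal, which equals octal 03333, i.e. setgid (02000) plus sticky (01000) plus 0333 (-wx-wx-wx); after a typical umask of 022, that is exactly the observed `p-wx--s--t`:

```bash
printf '%o\n' 1755    # prints 3333: what mkfifo("fifo", 1755) actually asked for
# the intended mode needs an octal literal in the C source:  mkfifo("fifo", 01755);
```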
Joao Parente
(163 rep)
Apr 11, 2019, 10:01 AM
• Last activity: Apr 11, 2019, 05:21 PM
1
votes
2
answers
188
views
Two named PIPEs (PIPE_in/PIPE_out) connected with `tail -f` | String sent to PIPE_in doesn't reach PIPE_out
1. Create named PIPEs, `pipe_in` and `pipe_out`, by running:

       $ mkfifo pipe_in
       $ mkfifo pipe_out

2. Connect `pipe_in` to `pipe_out`:

       TERM0: $ tail -f pipe_in > pipe_out

3. Send the string `hello world!` to `pipe_in` and expect it to arrive at `pipe_out`:

       TERM1: $ tail -f pipe_out
       TERM2: $ echo "hello world!" > pipe_in

I can only see the string arriving at `pipe_out` if I kill the command in step 2.
It seems to be a buffering issue, so I decided to run all the commands above with `stdbuf -i0 -e0 -o0`, but that didn't work.
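One detail that fits the symptom: to print the last 10 lines, `tail` must first locate the end of its input, and on a pipe that means reading until EOF, which here only happens when the command in step 2 is killed. Relaying from the beginning avoids the wait (a sketch):

```bash
TERM0: $ cat pipe_in > pipe_out     # or: tail -n +1 -f pipe_in > pipe_out
TERM1: $ cat pipe_out
TERM2: $ echo "hello world!" > pipe_in
```

The `cat` relay exits when the first writer closes pipe_in; the `tail -n +1 -f` variant keeps following across writes without waiting for EOF before printing.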
fmagno
(113 rep)
Mar 11, 2019, 04:27 PM
• Last activity: Apr 7, 2019, 02:47 PM
Showing page 1 of 20 total questions