
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

-1 votes
1 answer
81 views
Serving a file (e.g. in Apache) from a named pipe (made with mkfifo)
Let's say I use Apache, and that I am in /var/www/html/. I do:

mkfifo test.tar
tar cvf - ~/test/* > test.tar &

In a browser, when trying to download http://localhost/test.tar I get: ERR_EMPTY_RESPONSE (didn't send any data). Is there a specific parameter of mkfifo that would make the pipe look like a regular file? The problem here seems to come from the fact that the file is a named pipe. In my real example, I might use different webservers (not Apache), like [CivetWeb](https://github.com/civetweb/civetweb), but first I want to check whether it works with Apache (which I use the most).
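For what it's worth, the empty response matches what a web server sees when it stat()s the path: a zero-length special file rather than a regular one, so there is no content length to serve. A quick way to look at the file the way Apache does, sketched in a throwaway directory with a locally created payload:

```shell
cd "$(mktemp -d)"              # scratch dir; nothing here touches /var/www
echo payload > file.txt
mkfifo test.tar
tar cf - file.txt > test.tar & writer=$!   # blocks in open() until a reader comes
sleep 0.2
stat -c '%F, size %s' test.tar             # what a server's stat() would see
kill $writer 2>/dev/null || true
```

No mkfifo option changes this; a FIFO is a different file type by design, which is why CGI-style streaming is the usual workaround.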
Basj (2579 rep)
Jul 13, 2025, 03:26 PM • Last activity: Jul 21, 2025, 07:11 AM
0 votes
2 answers
44 views
When opening a FIFO for reading+writing, and blocking - how to tell if the other side opened for reading or writing?
If I open a fifo for reading+writing in blocking mode:

fd = open("foo", O_RDWR);

Then it blocks until someone on the other end opens it for either reading or writing. But how do I know which?

if (???) {
    read(fd, ...);
} else {
    write(fd, ...);
}
close(fd);

(I need that close, because if I am writing I need to send an EOF to the other end. And any writer will send a single line through and close, so I also need to close if I read.) What do I put instead of the ??? to figure out what the other end did? Is non-blocking and select() my only option? Is there a way to inspect fd and see if it's ready for reading or writing?
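The blocking premise itself is worth testing: on Linux (POSIX leaves this unspecified), an O_RDWR open of a FIFO does not block at all, because the caller counts as both a reader and a writer. A quick check:

```shell
cd "$(mktemp -d)"
mkfifo foo
# if the open blocked, timeout would kill it and nothing would print;
# on Linux it returns immediately
timeout 1 bash -c 'exec 3<>foo && echo opened' > result
cat result
```

So the question of "which side showed up" cannot be answered from the open itself; readiness has to be probed afterwards, e.g. with select()/poll().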
Ariel (117 rep)
May 30, 2025, 06:30 PM • Last activity: May 31, 2025, 06:15 AM
0 votes
1 answer
3259 views
Named pipe buffer after process end
I am creating named pipes in Ubuntu 18 and 16 environments in C (compiled with gcc), using mkfifo() and open(). One of the things I noticed is that the named pipes remain in the filesystem after the process ends. My process is an endless process that runs in a while(1) loop because of my requirements, and the only way to exit is Ctrl-C or the kill command. I might add a Ctrl-C (SIGINT) handler to properly handle these situations, but this is not the question. Given that the named pipe remains in the filesystem (for example /tmp/named_pipe1), do I need to check whether the named pipe exists and delete it at the beginning of the process (because the file persists in the system), or is that redundant because, even though the file stays in the filesystem, its buffer is discarded and I can use it like a fresh FIFO? I don't want the FIFO buffers to be mixed when I Ctrl-C the previous run of the code and start a new one; I require an empty buffer when I restart the code. **Note:** The system is not restarted between runs of the process; just the process is re-run. Thanks in advance.
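On the buffer question: a FIFO's content lives in a kernel buffer tied to the open descriptors, not in the filesystem, so any unread data is discarded the moment the last descriptor closes (e.g. when the process is killed). A restart therefore always sees an empty FIFO; deleting and recreating it is optional hygiene, not a requirement. A sketch of that behavior:

```shell
cd "$(mktemp -d)"
mkfifo named_pipe1
exec 3<>named_pipe1      # hold the FIFO open read-write
echo "stale data" >&3    # bytes now sit in the kernel buffer
exec 3<&-                # last descriptor closed: the buffer is discarded
# a fresh reader finds nothing and blocks; timeout makes that visible
status=0
timeout 0.3 cat named_pipe1 > fresh || status=$?
echo "reader exit: $status" > verdict   # 124: timed out with nothing to read
cat verdict
```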
Max Paython (101 rep)
Jan 25, 2020, 09:47 PM • Last activity: Apr 22, 2025, 09:05 AM
0 votes
2 answers
115 views
Split named pipe into two ends: One read only, one write only
I'm trying to set up some inter-process communication, and as a first idea I want to use a named pipe to forward messages from one process to the other. The thing I'm getting stuck on is ensuring that the first process only ever writes to the pipe, and the second process only ever reads from the pipe. Now, normally I could just write the program for the first process to only ever write to the pipe, and vice versa for the second process, but I'm somewhat paranoid and want the program to throw an error if it ever even tries to open the pipe in the wrong mode. Essentially, what I'm trying to do is to "split" the pipe in half, so that the write-end is with my first program:
program1/
├── write-end-of-pipe
└── program1.c
and so that my read-end is with my second program:
program2/
├── read-end-of-pipe
└── program2.c
Ideally there could be actual files on the file system for the read/write ends of the pipe. ----- Right now, my best solution is the following: 1. Create a named pipe in program1 and change the file permissions to be write only by this user:
cd ~/program1
    mkfifo write-end-of-pipe
    chmod 200 write-end-of-pipe
2. Create a named pipe in program2 and change the file permissions to be read only by this user:
cd ~/program2
    mkfifo read-end-of-pipe
    chmod 400 read-end-of-pipe
3. Create a new user for forwarding, and have it run a daemon that forwards from the first named pipe to the second: (Assuming the user is already created:)
su - forwarding-daemon-user
    chmod 400 ~/program1/write-end-of-pipe
    chmod 200 ~/program2/read-end-of-pipe
    cat write-end-of-pipe > read-end-of-pipe &
----- The problem I have with this is twofold: 1. It requires two pipes instead of just one. 2. It requires a daemon. (My cat command earlier is not robust enough on its own, but it gets the point across.) Is there a better way to create two files for a pipe, one of which can only be written to, and the other only read? ----- Edit 1: For some context on what I want to use this inter-process communication for: I think it would be cool if I could have a "screen-share" where one process written in one language creates a screen (essentially a video stream), and another process in an entirely different language displays the screen it receives. (Example use case: One language makes it easy to render a game, but you want to write your GUI in another language.) The reason for my paranoia about reading/writing is that I want to be able to swap out different programs that create screens or view screens, and I don't want to worry about connecting two screen creators together (if I provide a read end and a write end, then one program gets the wrong end if I make such a mistake, and the error is caught immediately instead of later when a pipe overflows) or otherwise broken programs causing silent problems. Edit 2: I feel like I should give an explanation why having two separate users for the reading/writing is not an ideal solution for me. The main reason is that it is a significant step on the path towards containerization. Once you say, "I'll just control permissions via separate users/namespaces," you need to figure out how you are going to *set up those users/namespaces*, and that essentially means putting the program in a container. I want to avoid this because it's not as flexible; suddenly every program now needs a standardized way of being invoked, the named pipe connecting the two programs needs to be created by a special user that can peer into these containers, etc. 
Hence, I would prefer to avoid multiple users, and similar techniques that require wrapping the program to be called. A solution that *would* work well would be if there was some way to "mount" the named pipe so that it has two different file permissions from the two different locations to be written to/read from, but I don't think this is possible on Linux.
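The two-pipe forwarding scheme in steps 1-3 can be rehearsed in a single shell (the extra user and the chmods omitted, and a one-shot `cat` standing in for the daemon loop; the file names are the question's own):

```shell
cd "$(mktemp -d)"
mkfifo write-end-of-pipe read-end-of-pipe
# one-shot forwarder; the step-3 daemon would run this in a loop
cat write-end-of-pipe > read-end-of-pipe &
echo "frame 1" > write-end-of-pipe &       # producer side
head -n1 read-end-of-pipe > received       # consumer side
wait
cat received
```

The deliberate limitation remains visible here: the forwarder is a third party holding both FIFOs, which is exactly the daemon the question hopes to avoid.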
Joseph Camacho (109 rep)
Feb 28, 2025, 06:16 AM • Last activity: Mar 10, 2025, 01:52 PM
3 votes
1 answer
598 views
Using make variable in bash scripting as part of a makefile command
I've created a makefile command to automate a publishing workflow using Pandoc and the [Generic Preprocessor (GPP)](https://files.nothingisreal.com/software/gpp/gpp.html). It is as follows:
TODAY = $(shell date +'%Y%m%d-%H%M')
MACROS = utils/gpp/macros.md
TEMPLATE = utils/template.docx

GPP = utils/gpp/gpp.exe
MACROS = utils/gpp/_macros.pp

METADATA = metadata.yaml
FM = content/frontmatter/*.md
MM = content/mainmatter/*.md
CONTENT = $(FM) $(MM)

default: docx

docx:
	for file in $(CONTENT); do \
		fifo=$$(mktemp -u); \
		FIFOS+=("$$fifo"); \
		cat $(MACROS) "$$file" | \
		$(GPP) -DWORD -x -o "$$fifo" & \
	done; \
	pandoc metadata.yaml "$${FIFOS[@]}" -f markdown -t docx \
	--citeproc --csl $(CSL) \
	--reference-doc=$(TEMPLATE) \
	--file-scope \
	-o dist/$(TODAY).docx
It runs fine on the [Git terminal](https://git-scm.com/downloads) on Windows. However, on my Ubuntu 24.04 system, I'm getting a /bin/sh: 1: Syntax error: "(" unexpected error, and it points at the line for file in $(CONTENT); do \. I've tried the following to no avail: 1. for file in ( $(CONTENT) ); do \ 2. for file in "$(CONTENT)"; do \ Why is this happening, and how can I rectify it? (Yes, I'm aware that the GPP link has to be updated for Ubuntu, but it doesn't even reach that line.) The repository is public if anyone wants to try it locally: https://github.com/khalid-hussain/editorial-project-template
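The `"(" unexpected` error comes from make's default recipe shell: on Ubuntu, /bin/sh is dash, which has no arrays, so `FIFOS+=("$fifo")` dies at the `(` (Git's terminal uses bash as sh, which is why it works there). Adding `SHELL := /bin/bash` near the top of the makefile is the usual fix; the underlying difference is easy to confirm:

```shell
# dash (Ubuntu's /bin/sh) has no arrays; forcing bash per-makefile with
# `SHELL := /bin/bash` makes `FIFOS+=(...)` valid inside recipes
bash -c 'FIFOS+=("a"); FIFOS+=("b"); echo "${FIFOS[@]}"' > bash_ok
cat bash_ok   # a b
```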
Khalid Hussain (135 rep)
Feb 12, 2025, 08:00 PM • Last activity: Feb 13, 2025, 04:54 AM
0 votes
1 answer
88 views
How can I make writes to named pipe block if the reader closes, and vice versa?
Right now, if I write to a named pipe and then close the reader, the writer gets a SIGPIPE and then exits. For example,
$ mkfifo pipe
$ cat pipe & # read from the pipe in the background
$ cat > pipe # write to the pipe
line1
line2
...
If I then stop the reader process, the writer process dies because of the SIGPIPE it receives when the reader stops. I don't want this to happen. If the writer tries writing to the pipe *before* a reader appears, the writer blocks and waits for a reader; I want behavior like this. If the reader closes the pipe, the writer should block and wait for a new reader. Likewise, if I stop the writer process, the reader process closes because it sees an EOF (or rather, a 0-byte read). I would like it to block until a new writer appears instead. Does cat have a flag for doing this? If not, this is probably a pretty simple C program.
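One common workaround, sketched below: keep a single long-lived descriptor open on the FIFO in read-write mode. While it exists, writers always have a reader (no SIGPIPE, no blocking in open) and readers never receive EOF, so both sides simply wait for the next real peer:

```shell
cd "$(mktemp -d)"
mkfifo pipe
exec 3<>pipe            # long-lived read-write descriptor on the FIFO
echo line1 > pipe       # returns at once: fd 3 counts as a reader, no SIGPIPE
code=0
timeout 0.3 cat pipe > got || code=$?   # gets line1, then blocks: fd 3 is a
                                        # writer too, so no EOF ever arrives
echo "cat exit: $code" > verdict        # 124: cat was still waiting for more
exec 3<&-
cat got verdict
```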
Joseph Camacho (109 rep)
Jan 30, 2025, 07:00 PM • Last activity: Jan 30, 2025, 09:21 PM
5 votes
1 answer
2336 views
How should I set up a systemd service to auto-start a server and let me pass commands to it?
#### Goal: I'm trying to get a Minecraft server to run on computer boot with systemd on Fedora. I have a few self-imposed criteria that I need to meet to be able to properly manage my server(s): 1. It has to run as the minecraft system user I made with the home dir /opt/minecraft. I attempted this by adduser-ing and then adding the lines User=minecraft and WorkingDirectory=/opt/minecraft/ 2. It has to be scalable and work with an arbitrary number of servers. I attempted this by using a template service and then changing the WorkingDirectory line to WorkingDirectory=/opt/minecraft/%i to let me pass in a directory. 3. I have to be able to pass commands into it somehow. This is the one I'm stuck on. I have tried using a socket unit and then hooking that up to /run/minecraft%I, but I haven't been able to get that to work. If you aren't familiar with Minecraft servers, they have an interactive console that you can pass commands into. In the past, I have used tmux send with the server running in a tmux session, but the issue with that is that it doesn't start automatically and feels inelegant. #### Attempted solution: /usr/local/lib/systemd/system/minecraft@.service:
[Unit]
Description=Minecraft server: %i

# only run after networking is ready
After=network-online.target
Wants=network-online.target

[Service]
Type=simple

# restart if the server crashes
Restart=on-failure
RestartSec=5s

# set the input and outputs to a socket unit and the journal resp.
Sockets=minecraft@%i.socket
StandardInput=socket                     
StandardOutput=journal
StandardError=journal

# set the user and directory to the correct values
User=minecraft
WorkingDirectory=/opt/minecraft/%i/

# run the start script for the specified server
ExecStart=/bin/bash /opt/minecraft/%i/start.sh

[Install]
WantedBy=default.target
/usr/local/lib/systemd/system/minecraft@.socket:
[Unit]
Description=Socket for Minecraft server: %i

[Socket]
# listen to a pipe for input
ListenFIFO=%t/minecraft%I.stdin

Service=minecraft@%i.service
#### Problem: When I try to start the server with sudo systemctl start minecraft@1_17_1.service (I have the server installed in /opt/minecraft/1_17_1/), it fails:
Job for minecraft@1_17_1.service failed because of unavailable resources or another system error.
See "systemctl status minecraft@1_17_1.service" and "journalctl -xeu minecraft@1_17_1.service" for details.
This prompted me to run systemctl status minecraft@1_17_1.service:
● minecraft@1_17_1.service - Minecraft server: 1_17_1
     Loaded: loaded (/usr/local/lib/systemd/system/minecraft@.service; enabled; vendor preset: disabled)
     Active: activating (auto-restart) (Result: resources) since Thu 2021-11-04 14:37:27 EDT; 163ms ago
TriggeredBy: × minecraft@1_17_1.socket
        CPU: 0
And also journalctl -xeu minecraft@1_17_1.service
Nov 04 14:51:01 riley-fedora systemd: minecraft@1_17_1.service: Got no socket.
Nov 04 14:51:01 riley-fedora systemd: minecraft@1_17_1.service: Failed to run 'start' task: Invalid argument
Nov 04 14:51:01 riley-fedora systemd: minecraft@1_17_1.service: Failed with result 'resources'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel 
░░ 
░░ The unit minecraft@1_17_1.service has entered the 'failed' state with result 'resources'.
Nov 04 14:51:01 riley-fedora systemd: Failed to start Minecraft server: 1_17_1.
░░ Subject: A start job for unit minecraft@1_17_1.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel 
░░ 
░░ A start job for unit minecraft@1_17_1.service has finished with a failure.
░░ 
░░ The job identifier is 55890 and the job result is failed.
I saw that it was seemingly angry with my minecraft@.socket file, so I ran systemctl status minecraft@1_17_1.socket:
× minecraft@1_17_1.socket - Socket for Minecraft server: 1_17_1
     Loaded: loaded (/usr/local/lib/systemd/system/minecraft@.socket; static)
     Active: failed (Result: resources)
   Triggers: ● minecraft@1_17_1.service
     Listen: /run/minecraft1_17_1.stdin (FIFO)

Nov 04 14:52:35 riley-fedora systemd: minecraft@1_17_1.socket: Failed with result 'resources'.
Nov 04 14:52:35 riley-fedora systemd: Failed to listen on Socket for Minecraft server: 1_17_1.
Nov 04 14:52:41 riley-fedora systemd: minecraft@1_17_1.socket: Failed to open FIFO /run/minecraft1_17_1.stdin: Permission denied
Nov 04 14:52:41 riley-fedora systemd: minecraft@1_17_1.socket: Failed to listen on sockets: Permission denied
Nov 04 14:52:41 riley-fedora systemd: minecraft@1_17_1.socket: Failed with result 'resources'.
Nov 04 14:52:41 riley-fedora systemd: Failed to listen on Socket for Minecraft server: 1_17_1.
Nov 04 14:52:46 riley-fedora systemd: minecraft@1_17_1.socket: Failed to open FIFO /run/minecraft1_17_1.stdin: Permission denied
Nov 04 14:52:46 riley-fedora systemd: minecraft@1_17_1.socket: Failed to listen on sockets: Permission denied
Nov 04 14:52:46 riley-fedora systemd: minecraft@1_17_1.socket: Failed with result 'resources'.
Nov 04 14:52:46 riley-fedora systemd: Failed to listen on Socket for Minecraft server: 1_17_1.
So it seems like the issue has to do with permissions for the pipe I had it use. For good measure, I ran journalctl -xeu minecraft@1_17_1.socket
Nov 04 14:52:46 riley-fedora systemd: minecraft@1_17_1.socket: Failed to open FIFO /run/minecraft1_17_1.stdin: Permission denied
Nov 04 14:52:46 riley-fedora systemd: minecraft@1_17_1.socket: Failed to listen on sockets: Permission denied
Nov 04 14:52:46 riley-fedora systemd: minecraft@1_17_1.socket: Failed with result 'resources'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel 
░░ 
░░ The unit minecraft@1_17_1.socket has entered the 'failed' state with result 'resources'.
Nov 04 14:52:46 riley-fedora systemd: Failed to listen on Socket for Minecraft server: 1_17_1.
░░ Subject: A start job for unit minecraft@1_17_1.socket has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel 
░░ 
░░ A start job for unit minecraft@1_17_1.socket has finished with a failure.
░░ 
░░ The job identifier is 58598 and the job result is failed.
#### Question: What am I doing wrong? I have spent 4-ish hours on the *gasp* second, third, and even fourth pages of Google with no solution. I'm at a loss here, so any and all help is greatly appreciated.
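Setting the Permission denied failure aside, the FIFO-to-stdin wiring that the socket unit is meant to provide can be rehearsed without systemd. A sketch with `cat` standing in for the server console (file names are illustrative); the held-open descriptor mirrors what keeps the server from reading EOF between injected commands:

```shell
cd "$(mktemp -d)"
mkfifo server.stdin                      # illustrative name
cat server.stdin > console.log & srv=$!  # stand-in for the server's console
exec 9> server.stdin   # hold a writer open so the server never reads EOF
echo "say hello" > server.stdin          # inject one command
sleep 0.2
exec 9>&-              # drop the held end: now the server sees EOF
wait $srv
cat console.log
```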
Riley (65 rep)
Nov 4, 2021, 06:58 PM • Last activity: Dec 21, 2024, 04:24 AM
0 votes
3 answers
79 views
Terminal hangs while writing into a FIFO file
I have a program running under a systemd service and I'd like to pass some text/commands to it, so I tried doing so with a FIFO file:
.service file
[Unit]
Description=A Minecraft server service!

[Service]
EnvironmentFile=/etc/systemd/system/minecraft.env
WorkingDirectory=/srv/http/mc/mcserver/
ExecStart=/srv/http/mc/mcserver/start

When I try to write into the FIFO file (e.g. echo 'stop (or anything)' > minecraft_input or echo 'stop (or anything)' | tee minecraft_input) it just hangs with no output and nothing gets passed. (FIFO file created using mkfifo and has permissions 664.)

EDIT:

I accidentally pasted the wrong code. I also tried this:

.service file
[Unit]
Description=A Minecraft server service!
Requires=minecraft.socket

[Service]
EnvironmentFile=/etc/systemd/system/minecraft.env
WorkingDirectory=/srv/http/mc/mcserver/
ExecStart=/srv/http/mc/mcserver/start
StandardOutput=append:/srv/http/mc/mcserver/mcserver.log
StandartInput=socket

[Install]
WantedBy=multi-user.target
.socket file
[Socket]
ListenFIFO=/srv/http/mc/mcserver/minecraft_input
Service=minecraft.service

[Install]
WantedBy=sockets.target

But that doesn't work either. The echo command completes, but nothing happens/passes to the program.
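The hang itself is ordinary FIFO behavior rather than a systemd quirk: opening a FIFO for writing blocks until some process has it open for reading, so if the service isn't actually consuming minecraft_input, echo and tee sit inside open() forever. This is reproducible in isolation (same file name, created in a scratch directory):

```shell
cd "$(mktemp -d)"
mkfifo minecraft_input   # same name as in the question, created locally
rc=0
# nothing reads the FIFO, so the writing open() never returns;
# timeout turns the hang into exit status 124
timeout 0.3 sh -c 'echo stop > minecraft_input' || rc=$?
echo "write exit: $rc" > verdict
cat verdict
```

So the hang is a symptom that no reader is attached, which points back at the service's stdin wiring.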
slavekrouta (1 rep)
Dec 2, 2024, 06:39 PM • Last activity: Dec 6, 2024, 08:44 AM
1 vote
1 answer
88 views
How to keep a pipe open and discard data while the reader is blocked?
I have a camera capture program and video streaming program working together as rpicam-vid ... | go2rtc. My problem is that go2rtc only reads data from the pipe when someone opens the video stream, but I need rpicam-vid to run continuously even when the stream is not up. Is there a way to simply discard data when the reader is not pulling data?
Matthew Foran (13 rep)
Nov 19, 2024, 01:24 AM • Last activity: Nov 19, 2024, 12:52 PM
2 votes
1 answer
250 views
Knowing when bash is done running a command through a FIFO pipe
I'm trying to link a web-based terminal with bash. My current attempt to do so is spawning a shell with a FIFO pipe pointed at its input, like this:

Terminal 1
$ mkfifo pipe
$ bash < pipe
file1 file2 file4

Terminal 2
$ echo "ls" > pipe

As you might be able to tell, I am only getting command responses from the spawned shell in _Terminal 1_ (obviously). Is it possible for me to tell if bash is idle or not? I need to know when to show $PS1 on the client side. If I run something like apt-get install curl -y, the command is continuous and finishes when it finishes. I need to know when it is finished so that on the front-end I can show the terminal prompt. Any ideas?
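Since bash offers no out-of-band "idle" signal over a plain pipe, one common trick is to append a sentinel echo after every injected command and watch the output stream for it; once the sentinel appears, the command has finished and the prompt can be shown. A sketch (the sentinel string is arbitrary):

```shell
cd "$(mktemp -d)"
mkfifo pipe
bash < pipe > out 2>&1 &              # the shell under remote control
exec 3> pipe                          # held open so bash doesn't exit on EOF
echo 'echo hi; echo "__DONE__"' >&3   # the command, plus a sentinel after it
for _ in $(seq 1 50); do              # poll the output for the sentinel
    if grep -q __DONE__ out; then break; fi
    sleep 0.1
done
grep -c __DONE__ out > count || true
exec 3>&-                             # release the pipe; bash exits
wait
cat out
```

A front end would strip the sentinel line before display; commands whose own output could contain the sentinel need a per-command random marker.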
Kirk122 (21 rep)
Nov 1, 2018, 04:10 AM • Last activity: Oct 6, 2024, 07:38 AM
3 votes
1 answer
1113 views
How to pipe all output streams to another process?
Take the following Bash script 3-output-writer.sh:
echo A >&1
echo B >&2
echo C >&3
Of course, when run as . 3-output-writer.sh it gets the error 3: Bad file descriptor, because Bash doesn't know what to do with the 3rd output stream. One can easily . 3-output-writer.sh 3>file.txt, though, and Bash is made happy. But here's the question: how do I pipe all three streams into another process, so that it would have all three to work with? Is there any way other than creating three named pipes, as in,
mkfifo pipe1 pipe2 pipe3  # prepare pipes
. 3-output-writer.sh 1>pipe1 2>pipe2 3>pipe3 &  # background the writer, awaiting readers
3-input-reader pipe1 pipe2 pipe3  # some sort of reader, careful not to block
rm pipe1 pipe2 pipe3
?
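With bash, process substitution can stand in for the three explicit FIFOs: each `>(…)` gets its own anonymous reader and the shell does the plumbing. A sketch (run under bash; the short sleep just gives the substituted readers time to flush):

```shell
cd "$(mktemp -d)"
# each >(...) is an anonymous reader; no mkfifo bookkeeping needed
bash -c '
  { echo A >&1; echo B >&2; echo C >&3; } \
    1> >(sed "s/^/fd1: /" > r1) \
    2> >(sed "s/^/fd2: /" > r2) \
    3> >(sed "s/^/fd3: /" > r3)
'
sleep 0.5   # let the substituted readers drain and exit
cat r1 r2 r3
```

Under the hood bash implements `>(…)` with /dev/fd or FIFOs anyway, but the creation and cleanup stop being the caller's problem.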
Sinus tentaclemonster (139 rep)
Aug 6, 2020, 07:30 PM • Last activity: Oct 5, 2024, 01:36 PM
4 votes
1 answer
891 views
two piped commands, each needs to read password from stdin
Is there a way to sensibly do this: scp user@host:/path/to/file /dev/tty | openssl [options] | less without creating a file, and without having to supply either password directly in arguments? The problem is that both ask for a password, but the order in which they start (and therefore also the order in which they ask for the password) is undefined. It would be OK to first finish scp and then start openssl, but without a temporary file.

**Possible solutions so far**

- put the output of scp into a variable and then from the variable into openssl (would be feasible only for small files, plus I suspect there might be some problems with binary data, etc.)
- put passwords in files (_not good_)
- use keys (_better?_)
- use named pipes

**Named pipes version 1**

mkfifo pipe && {
  scp user@host:/path/to/file pipe   # asks for password, then waits
                                     # for read from pipe to finish,
                                     # which will only happen after the
                                     # password for openssl was supplied
                                     # => must ^Z and enter the password
                                     # => `pipe: Interrupted system call'
  openssl [options] -in pipe | less
}

**Named pipes version 2**

mkfifo pipe && {
  scp user@host:/path/to/file pipe & # asks for password (and works when
                                     # password is entered) despite being
                                     # put in background (what? how?
                                     # can someone explain?)
  openssl [options] -in pipe | less  # `bad password read'
}

**Named pipes version 3**

mkfifo pipe && {
  scp user@host:/path/to/file pipe | # asks for password first
  openssl [options] -in pipe | less  # asks for password after scp's
                                     # password has been entered
                                     # and everything works fine
}

Switching the commands around doesn't help.
`openssl [options] -in

Could someone

1. propose another solution,
2. explain scp's unusual behavior in example 1 (`Interrupted system call') (I'm assuming it's either a bug or something of a "security feature"),
3. explain how entering passwords works, e.g. how a task which was started in the background can read from stdin,
4. (related to 3.) explain why scp in example 2 prints the password prompt even if stdout and stderr are both redirected to /dev/null?
kyrill (154 rep)
Jul 3, 2016, 03:46 PM • Last activity: Oct 5, 2024, 10:30 AM
0 votes
2 answers
68 views
Difference in bash output redirections behavior
I am trying to understand the reason behind the difference in redirections of bash output. Here are two approaches: 1) Redirection of the output to a named pipe: /bin/bash -i 2>&1 1>/tmp/fifo 2) Redirection using an unnamed pipe to another script reading from stdin: /bin/bash -i 2>&1 | ./reader.sh _reader.sh_ is supposed to read line by line from stdin:
while read LINE; do
   echo ${LINE}    # do something with it here
done
exit 0
The question is: why, when redirecting to the fifo (approach #1), is the prompt information (username@host: $) not redirected, but when piping to another program/script (approach #2), it is?
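The order of the redirections explains it: `2>&1` points stderr at whatever stdout currently is. In approach #1 that duplication happens *before* `1>/tmp/fifo`, so stderr (where the interactive prompt goes) stays bound to the terminal; in a pipeline, stdout is already the pipe by the time `2>&1` runs, so the prompt follows it. A minimal demonstration:

```shell
cd "$(mktemp -d)"
# 2>&1 copies stderr to the *current* stdout, and only afterwards is
# stdout re-pointed at the file; so stderr keeps the original target
captured=$(sh -c 'echo out; echo err >&2' 2>&1 1>stdout_file)
echo "$captured" > went_to_old_stdout
cat stdout_file went_to_old_stdout
```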
Pauloss (1 rep)
Sep 16, 2024, 06:34 PM • Last activity: Sep 17, 2024, 02:04 PM
-1 votes
3 answers
203 views
Empty named pipe
I try to create a named pipe; however, when I store data in it, it is still empty.
$ mkfifo myfifo
$ cat > myfifo
123
123
123
^[[D
^C
$ ls > myfifo
^C
$ cat < myfifo
(no output)
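The pipe isn't so much empty as transient: data only passes through while a reader and a writer have it open at the same time, and interrupting the writer with ^C discards everything still buffered. Start the reader first (or in the background) and the data comes through:

```shell
cd "$(mktemp -d)"
mkfifo myfifo
cat myfifo > drained &   # a reader must be alive while the writer writes
echo 123 > myfifo        # completes once the reader has taken the data
wait
cat drained
```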
Irina (139 rep)
Aug 19, 2024, 03:11 PM • Last activity: Aug 20, 2024, 11:51 PM
1 vote
1 answer
1388 views
Write firmware over /dev/ttyUSB0 to embedded device
I'm using the last example in: this post. I keep getting wctx:file length=0 after the sz command. I'm actually opening the serial connection with minicom. Minicom is botching the transfer. So, after I get to press enter to begin update in the minicom window, at which point the normal course is to go into the menu, select S for send, choose Xmodem, locate the firmware file and select it, at which point the transfer begins and fails almost immediately. I move to a different virtual terminal window, and run sz -X -k -b -vvv - /tmp/sz_fifo > /dev/ttyUSB0 < /dev/ttyUSB0 after which I receive sz 0.12.21rc mode:1 sending s20806.lsz 0 blocks: give your Xmodem receive command now and I move into the minicom window where it still says to begin update press enter. I hit enter, and get this in the sz window: Xmodem sectors/kbytes sent 0/ 0k, retry Timeout on sector ACK, the last part repeated several times, then retry count exceeded followed by mode:0. I don't know if the fifo is empty, if I need to select the Xmodem protocol in minicom, or what! I'm about to try using Windows for this job. I am far from an expert on modem protocols and fifos, although I started somewhere around Debian Woody. In response to the comment, "hit enter before you hit s": using minicom, update works by accessing the serial interface of the embedded card. Two options are provided by the embedded card, CRCXmodem and 1K-Xmodem. I select 1K-Xmodem, and the response is, "detach terminal, change speed to 115,600, and reconnect". The card changes its own speed. Minicom reconnects after the speed change in port settings, and the embedded card says, "press enter to begin firmware update". At this point the written instructions in the manual for the embedded card read, "send firmware update file". I hit Ctrl-A + Z for the minicom menu, select 'S' for send, at which point I can choose a protocol. I choose Xmodem, and a file browser appears, which I use to select the firmware file.
Upon hitting enter, the update begins, shortly to fail. If I hit enter before I select 'S' in the minicom menu, nothing is sent, and the update fails. If I hit enter before accessing the minicom menu, the timer on the embedded card expires and the update fails. I was trying sz, because it worked in the link I provided above, using a named pipe. Without minicom, I don't know how to prepare the embedded system to receive the firmware, but minicom fails after that point. I was attempting to remedy that specific problem, especially since minicom has no option specifically for 1K-Xmodem, only Xmodem. Whereas, sz has the -k switch for 1K-Xmodem, as is specified in the options the card offers.
Brian (168 rep)
Aug 23, 2021, 03:31 AM • Last activity: Aug 11, 2024, 08:35 AM
0 votes
1 answer
87 views
Evaluating exit code of pipe or fifo
So I have something along the lines of:

set -eo pipefail
ssh localhost "ls; exit 1" | cat > output

I thought pipefail would prevent cat from writing to output (because of the exit code from the ssh session), but it seems it will not. I then tried creating a fifo, but I am unsure how to read the exit status of ssh without consuming the content in the fifo, because it seems it will wait until something "consumes" it.

set -eo pipefail
[[ -e /tmp/sshdump ]] && rm /tmp/sshdump;
mkfifo /tmp/sshdump;
ssh localhost "ls; exit 1" > /tmp/sshdump &
wait $! && cat > output

So, how do I: 1. avoid making the pipe exit early in the first example (and not reach cat)? 2. check the exit code of ssh without consuming the fifo content? If it matters, a POSIX-compatible solution is always appreciated, but bash is fine too. In my particular case I'm using Windows and git-bash, so maybe there are some quirks I'm not aware of. EDIT: full disclosure, I should probably have explained what I was trying to achieve from the get-go, rather than simplifying it to the point where the question doesn't properly reflect the intent. I was trying to pipe the output of "mysqldump" on a remote ssh shell directly to "wp db import" on my machine without creating intermediate files. I originally thought pipefail would stop the pipeline and thus prevent the import on my end from going through if there was a failure. This was desirable to me for the simplicity of not creating temporary files and, of course, less code.
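pipefail never stops the downstream command from running; the pipeline is started as a whole, and the option only changes the exit status the shell reports afterwards (in bash, `${PIPESTATUS[0]}` gives the left-hand command's own status). Gating an import on success therefore means staging the dump first, e.g. in a temporary file, and importing only if the producer exited 0. The behavior itself:

```shell
cd "$(mktemp -d)"
bash -c '
  set -o pipefail
  # the right-hand command always runs; pipefail only changes the
  # exit status the shell reports for the finished pipeline
  { echo data; exit 1; } | cat > output
  echo "pipeline status: $?" > status
'
cat output status
```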
PKSWE (103 rep)
Aug 1, 2024, 09:28 PM • Last activity: Aug 2, 2024, 10:52 AM
2 votes
2 answers
371 views
Finding the processes trying to open some FIFO
In my example, the wc program is trying to open the test FIFO or named pipe. These in-progress open syscalls seem not to be shown by fuser or lsof:
mknod /tmp/testpipe p
wc /tmp/testpipe &
timeout 0.2 strace -p $! |& timeout 0.1 cat; echo
strace: Process 10103 attached
open("/tmp/testpipe", O_RDONLY

fuser /tmp/testpipe  # no output
lsof | grep testpipe  # no output
How can I find processes that are trying to open some FIFO on Linux systems?
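A process still blocked inside open() has no file descriptor yet, which is exactly why fuser and lsof stay silent. What can be inspected instead is /proc: the fd directory has no entry for the FIFO, and on many kernels the wait channel names the blocking function (e.g. fifo_open, though wchan reporting may be disabled and print 0). A sketch:

```shell
cd "$(mktemp -d)"
mkfifo testpipe
wc testpipe & pid=$!     # parks inside open(), as in the question
sleep 0.3
# no descriptor for the FIFO exists yet, hence the lsof/fuser silence
ls -l /proc/$pid/fd > fds 2>/dev/null || true
grep -c testpipe fds > hits || true
# the wait channel often names the blocking function (kernel-dependent)
cat /proc/$pid/wchan 2>/dev/null; echo
kill $pid 2>/dev/null || true
cat hits
```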
Juergen (754 rep)
Jul 18, 2024, 12:21 PM • Last activity: Jul 18, 2024, 03:33 PM
2 votes
1 answer
422 views
Can FIFO or other thing not block on writer's access, and instead just drop data?
A FIFO is problematic in use because both reader and writer have to open it – if one of them is late, the other one is blocked inside the operating system. I have to implement a publishing mechanism – a program publishes its logs, and if anyone "cares" to listen, i.e. opens the publishing channel, he receives the messages. If no one "cares", the messages vanish – no problem. Support for no more than a single listener is also no problem. What can I use?
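One lightweight option that still uses a FIFO: publish with a non-blocking open. Opening a FIFO O_WRONLY|O_NONBLOCK fails with ENXIO when no process has it open for reading, so the publisher can drop the message and carry on instead of blocking; GNU dd exposes this as oflag=nonblock. A sketch:

```shell
cd "$(mktemp -d)"
mkfifo chan
# no listener: the non-blocking open fails with ENXIO, message is dropped
echo "log line 1" | dd of=chan oflag=nonblock 2>/dev/null || echo dropped > first
# with a listener, the very same publish goes through
cat chan > second & sleep 0.3
echo "log line 2" | dd of=chan oflag=nonblock 2>/dev/null
wait
cat first second
```

A writer can still block or fail once the pipe buffer fills with a stalled listener attached, so a real implementation would also write with O_NONBLOCK.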
Digger (23 rep)
May 20, 2018, 12:56 PM • Last activity: Jul 3, 2024, 04:39 PM
11 votes
2 answers
16728 views
Why does this script with a FIFO pipe not terminate?
This script:
#!/bin/bash
tmppipe=/tmp/temppipe
mkfifo $tmppipe
echo "test" > $tmppipe
cat $tmppipe
exit
does not terminate. I assume that the cat command is waiting for an EOF from the pipe; how do I send one?
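The script stops at `echo "test" > $tmppipe`: that redirection's open() blocks until some process opens the FIFO for reading, so `cat` is never reached. EOF is not something to send explicitly; it appears once the last writer closes its end. Backgrounding the writer resolves the deadlock, e.g.:

```shell
#!/bin/bash
cd "$(mktemp -d)"
tmppipe=/tmp/temppipe.$$      # unique-ish path instead of a fixed name
mkfifo "$tmppipe"
echo "test" > "$tmppipe" &    # writer backgrounded: its open() waits for cat
cat "$tmppipe" > result       # reader arrives; write completes; EOF ends cat
wait
rm "$tmppipe"
cat result
```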
Benubird (6082 rep)
Jun 4, 2015, 09:27 AM • Last activity: May 9, 2024, 11:05 AM
1 vote
0 answers
237 views
Redirecting fifo to python's stdin
$ mkfifo mypipe
$ echo "hello redirection" > mypipe
$ cat mypipe
$ echo "hello redirection" > mypipe
However if I put python in a loop
# pyecho.py
while True:
    with open("/dev/stdin", "r") as f:
        print(f.read())
Then after the initial iteration (requiring two writes), it goes back to working as expected.
Tom Huntington (109 rep)
Apr 12, 2024, 02:42 AM • Last activity: Apr 12, 2024, 02:49 AM