
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

4 votes
2 answers
4041 views
Startx as non-root user via SSH
I have a remote VM running Ubuntu 18.04 and would like to run VNC. I am using x11vnc, which requires an X server to be running. Currently, I'm connected through SSH. The VM has an Nvidia card, and after generating the xorg.conf with nvidia-xconfig, I can start an X session using startx, but only as root. Any subsequent connection via VNC is with root, which I want to avoid. The Device section in the xorg.conf file looks like this:
Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Tesla K80"
    BusID          "0:30:0"
EndSection
When trying to launch startx as a non-root user, I get the following:
/usr/lib/xorg/Xorg.wrap: Only console users are allowed to run the X server
If I change the /etc/X11/Xwrapper.config to allow anybody to startx, I get the following:
Couldn't get a file descriptor referring to the console
I've been reading that connecting via SSH doesn't mean that you're connected to a text console, which you need to run startx. Trying to change to a text console with chvt doesn't change anything. Is there any way that I can launch X via SSH?
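A possible workaround, sketched below as a minimal example (assuming the xvfb and x11vnc packages are installed): since an SSH session has no console attached, run a virtual-framebuffer X server instead of the console-bound Xorg. Note that Xvfb renders in software, so it bypasses the Nvidia card; openbox here is only a stand-in for whatever session you actually want.
Xvfb :1 -screen 0 1920x1080x24 &        # virtual X server, no console required
DISPLAY=:1 openbox &                    # stand-in: start your WM/session here
x11vnc -display :1 -localhost -forever  # attach VNC to the virtual display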
Markus Schlafli (141 rep)
Nov 26, 2019, 05:02 PM • Last activity: Jul 17, 2025, 03:02 AM
1 vote
1 answer
58 views
coproc redirect to stdout
Suppose I have some process that when ready will output to stdout. Even though I want my script to run asynchronously with respect to that process, I still want the script to block and wait on that first line. Modelled below with sleep and echo is exactly the behaviour I need in working order:
coproc monitor {
  sleep 2
  echo "init"
  sleep 1
  echo "foo"
  echo "bar"
  echo "baz"
}

read -u ${monitor} line
echo started
exec 3<&${monitor}
cat <&3 &

sleep 2
The script starts and creates the coprocess, then it waits on that first line via read -u, and finally it attaches fd 3 to ${monitor} so that we can then use cat in yet another background process to pipe stuff from monitor to the actual stdout. Thus we get:
# waits 2 seconds
started
# after 1 second:
foo
bar
baz
I am not too happy with these two lines, though:
exec 3<&${monitor}
cat <&3 &
Is there no better way of achieving this? It seems like a rather roundabout way of doing things. But everything I tried hasn't worked. For example cat <&"${monitor}" works, but then it blocks the script. { cat <&"${monitor}" ; } & gives Bad file descriptor for reasons I don't understand (but even then, I am still using yet another background process, which seems silly).
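A minimal sketch of an alternative, assuming bash 4.1+: the Bad file descriptor error arises because bash closes the coproc's descriptors in subshells, so the backgrounded group never sees ${monitor}. Duplicating the read end onto an ordinary fd in the main shell first (essentially what the question already does) is the usual fix; the {var} form at least lets bash pick the number:
exec {mon_fd}<&"${monitor[0]}"   # dup the coproc's read end onto a fresh fd
cat <&"$mon_fd" &                # the background cat inherits the plain fd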
Mathias Sven (273 rep)
May 27, 2025, 06:11 PM • Last activity: May 28, 2025, 04:11 PM
5 votes
2 answers
400 views
How (internally) does fd3>&fd1 after { fd1>&fd3 } put back (or not) original fd into fd1? ("bad file descriptor")
I'm reading an answer to https://stackoverflow.com/questions/692000/how-do-i-write-standard-error-to-a-file-while-using-tee-with-a-pipe/692009#692009 , https://stackoverflow.com/a/14737103/5499118 :

{ { ./aaa.sh | tee bbb.out; } 2>&1 1>&3 | tee ccc.out; } 3>&1 1>&2

As I've checked, it works as explained; the answer links to https://unix.stackexchange.com/a/18904/266260 , which links to https://unix.stackexchange.com/a/3540/266260 . I don't understand why { ... 1>&3 ... } 3>&1 works (how the later redirection reverses the effect of the former), because when I wanted to understand it, man bash says:

> Note that the order of redirections is significant. For example, the command
>
> ls > dirlist 2>&1
>
> directs both standard output and standard error to the file dirlist, while the command
>
> ls 2>&1 > dirlist
>
> directs only the standard output to file dirlist, because the standard error was duplicated from the standard output before the standard output was redirected to dirlist.

I've found https://unix.stackexchange.com/questions/248012/duplication-of-file-descriptors-in-redirection :

> Redirections are implemented via the dup family of system functions. dup is short for duplication and when you do e.g.:
>
> 3>&2
>
> you duplicate (dup2) file descriptor 2 onto file descriptor 3 ...

Therefore I understand 1>&3 duplicates 3 into 1 and they point to the same object from that command on. man dup:

> After a successful return, the old and new file descriptors may be used interchangeably. They refer to the same open file description.

From the dup explanation I expect 3>&1 to change nothing, as 3 and 1 are already the same. But apparently that is not the case, as omitting 3>&1 from

{ { ./aaa.sh | tee bbb.out; } 2>&1 1>&3 | tee ccc.out; } 3>&1 1>&2

results in

bash: 3: bad file descriptor

What (if any) is incorrect in explaining redirection with dup calls? What internally happens during 1>&3 and 3>&1? Maybe { } are important here, but I see they are used for grouping only, and per man bash:

> list is simply executed in the current shell environment.
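A minimal sketch of what the shell does underneath, which may make the ordering clearer: redirections are processed left to right, and N>&M means dup2(M, N), i.e. "make fd N a copy of whatever fd M points to right now". The copies are independent afterwards, which is why a later redirection can appear to undo an earlier one. The outer 3>&1 is also what brings fd 3 into existence in the first place; without it, 1>&3 inside the braces refers to a descriptor that was never opened, hence bad file descriptor:
exec 3>&1        # dup2(1, 3): fd 3 saves what stdout currently points to
exec 1>/tmp/log  # stdout now points at the file; fd 3 still at the terminal
echo "to the file"
exec 1>&3        # dup2(3, 1): stdout restored from the saved copy
exec 3>&-        # close the spare descriptor
echo "back on the terminal"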
Alex Martian (1287 rep)
Apr 19, 2025, 04:33 AM • Last activity: Apr 20, 2025, 01:34 PM
3 votes
3 answers
229 views
trap is ignored when dialog is running with custom BASH_XTRACEFD
I have the following script:
#!/bin/bash

exec 5> >(logger -t $0)
BASH_XTRACEFD="5"
set -x

trap _reboot INT

DIALOG_TITLE="This is fancy dialog title"

_reboot() {
        echo Exiting
        exit
}

dialog --title "${DIALOG_TITLE}" --yesno "Welcome to Dialog" 0 0
However, when I press CTRL+C, the script exits without printing anything. If I remove either set -x or BASH_XTRACEFD="5" it works fine: the trap is caught and the _reboot function is executed. But if I leave both in (which I need for debugging purposes), the trap loses all meaning. Interestingly, the exit code is 141 when I exit with CTRL+C; if I let the script run its natural course it exits with 0. However, if I remove either set -x or BASH_XTRACEFD="5" and then press CTRL+C, it exits with 55. For some reason this happens only while dialog is on screen. For example, if I do
while true; do
  sleep 5
done
and then exit with CTRL+C, the trap is executed. I need help figuring this out. EDIT: It might be worth mentioning that the bash version is 5.1.16, running on Alpine 3.16.
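A hedged observation and workaround sketch: exit status 141 is 128+13, i.e. death by SIGPIPE, which suggests the >(logger ...) process substitution has already gone away when the trap fires, so the next xtrace write kills the shell. Tracing to a plain file sidesteps the pipe entirely; the file can be shipped to syslog afterwards (logger's -f flag reads from a file):
exec 5>>/tmp/script-trace.log        # append the trace to a regular file
BASH_XTRACEFD=5
set -x
trap _reboot INT
# ... dialog invocation as before; after it exits:
logger -t "$0" -f /tmp/script-trace.log   # post-hoc upload of the trace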
Marko Todoric (437 rep)
Jul 4, 2022, 11:06 AM • Last activity: Apr 20, 2025, 09:57 AM
2 votes
1 answer
2608 views
What `ulimit` value applies to a `systemd` service?
I have a systemd service installed. It was creating some problems and I am speculating that it might be file descriptor stuff. The system-wide file descriptor limit is quite high:

$ cat /proc/sys/fs/file-max
378259

But the soft limit seems to be the default:

$ ulimit -Hn
1048576
$ ulimit -Sn
1024

Which of these limits applies to my systemd service? It is started as a dedicated user:

[Unit]
Description=Myservice
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
WorkingDirectory=/home/app
Restart=always
RestartSec=1
User=app
ExecStart=

And user app is a regular user (/etc/passwd):

app:x:1002:1002:APP user:/home/app:/bin/sh

In other words, what exact number triggers a limit for the systemd service I am running? What exact number will result in a failure when an additional fd is opened?
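A minimal sketch for inspecting and raising the per-service limit (myservice.service is a placeholder unit name): systemd services do not inherit shell ulimits, so the effective value is whatever /proc/<pid>/limits reports for the service's main process:
pid=$(systemctl show -p MainPID --value myservice.service)
grep 'Max open files' "/proc/$pid/limits"

# Raise it in the unit itself rather than via shell ulimits:
#   [Service]
#   LimitNOFILE=65536
systemctl edit myservice.service      # add the lines above, then:
systemctl restart myservice.service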
unsafe_where_true (333 rep)
Sep 28, 2020, 06:31 PM • Last activity: Apr 19, 2025, 12:06 AM
3 votes
1 answer
160 views
Bash redirections - handling several filenames specially (man pages)
Please confirm/correct me. I've found the related https://unix.stackexchange.com/questions/248012/duplication-of-file-descriptors-in-redirection but that does not answer my specific question. From *The GNU Bash Reference Manual*, section 3.6 Redirections:

> Bash handles several filenames specially when they are used in redirections, as described in the following table. If the operating system on which Bash is running provides these special files, bash will use them; otherwise it will emulate them internally with the behavior described below.
>
> /dev/fd/fd
>     If *fd* is a valid integer, file descriptor *fd* is duplicated.
>
> /dev/stdin
>     File descriptor 0 is duplicated.
>
> /dev/stdout
>     File descriptor 1 is duplicated.
>
> /dev/stderr
>     File descriptor 2 is duplicated.
>
> A failure to open or create a file causes the redirection to fail.
>
> Redirections using file descriptors greater than 9 should be used with care, as they may conflict with file descriptors the shell uses internally.

What of the above constitutes "special handling" and what is the "internal emulation behavior"? My guess is the emulation is done to simulate all aspects of the "special handling". Is that correct?
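A minimal illustration of the distinction, with the important caveat that on Linux these paths really exist (as symlinks into /proc), so bash opens them rather than emulating; the guaranteed duplication behavior is the >&N form:
echo "to stderr, via dup2" >&2            # always a duplication of fd 2
echo "to stderr, via path" > /dev/stderr  # open(2) on Linux; bash only falls
                                          # back to dup2 where the OS lacks the file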
Martian2020 (1443 rep)
Apr 11, 2025, 03:38 AM • Last activity: Apr 11, 2025, 10:05 AM
0 votes
0 answers
45 views
I want to print debug output on failure, but ERR trap doesn't fire
I want to print the debug output only on failure; this is my attempt:
#!/bin/bash
set -euo pipefail

dt-api() {
    # shellcheck disable=SC2086
    curl \
      --fail-with-body \
      --silent \
      --show-error \
      -H "X-Api-Key: $dt_api_key" \
      ${curl_opts-} \
      "$dt_api_url/api/v1$1" \
      "${@:2}"
}

post-sbom() {
    local -r project_name=$1
    local -r project_version=$2
    local -r bom_path=$(readlink -f "$3")

    dt-api "/bom" \
         -F "autoCreate=true" \
         -F "projectName=$project_name" \
         -F "projectVersion=$project_version" \
         -F "isLatest=true" \
         -F "bom=@$bom_path"
}
 

  debug_file=$(mktemp)
  >&2 echo "Debug file: $debug_file"
  readonly debug_file
  exec 3> "$debug_file"
  export BASH_XTRACEFD=3
#  trap 'rm -f "$debug_file"' EXIT
  trap 'touch ERR.marker; cat "$debug_file"' ERR
  set -x

  post-sbom foo bar target/bom.xml

  echo Done
If curl fails (e.g. 401 or 404) the trap is not invoked and the script exits at that point. Sample output from running rm ERR.marker; ./script:
rm: cannot remove 'ERR.marker': No such file or directory
Debug file: /tmp/tmp.auETKRfIWu
curl: (22) The requested URL returned error: 404
$ echo $?
22
$ cat ERR.marker
cat: ERR.marker: No such file or directory
I added the touch ERR.marker to rule out any redirection issues. I can see the file is NOT created in this case. If I change the trap to EXIT I get:
rm: cannot remove 'ERR.marker': No such file or directory
Debug file: /tmp/tmp.npNyUItX5o
curl: (22) The requested URL returned error: 404
+ post-sbom foo bar target/bom.xml
+ local -r project_name=foo
+ local -r project_version=bar
++ readlink -f target/bom.xml
+ local -r bom_path=/home/jakub/repos/permission-service/target/bom.xml
+ dt-api 2/bom -F autoCreate=true -F projectName=foo -F projectVersion=bar -F isLatest=true -F bom=@/home/jakub/repos/permission-service/target/bom.xml
+ curl --fail-with-body --silent --show-error -H 'X-Api-Key: INVALID' https://dependency-track.example.com/api/v12/bom  -F autoCreate=true -F projectName=foo -F projectVersion=bar -F isLatest=true -F bom=@/home/jakub/repos/permission-service/target/bom.xml
+ touch ERR.marker
+ cat /tmp/tmp.npNyUItX5o
If I change the trap from ERR to EXIT, then it prints the $debug_file when the script aborts, but the EXIT trap will also fire if the script completes successfully. I guess I could just use an EXIT trap and clear it at the end of the script. Using set -E makes the trap fire, but now it fires multiple times, which is not what I want. Maybe I could truncate the file after printing it. Another thing that seems to fix this is changing the line to:
post-sbom foo bar target/bom.xml || false
I guess that implies that a failing function call does not trigger the ERR trap? Maybe there is a better approach to printing debug output on failure?
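A common alternative, sketched here: without set -E the ERR trap is not inherited by shell functions, which fits the observed behaviour. Running the report from an EXIT trap and branching on the exit status fires exactly once on both the success and failure paths:
trap 'st=$?
      if [ "$st" -ne 0 ]; then
          echo "failed with status $st, dumping trace:" >&2
          cat "$debug_file" >&2
      fi
      rm -f "$debug_file"' EXIT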
Jakub Bochenski (325 rep)
Apr 9, 2025, 04:05 PM • Last activity: Apr 9, 2025, 05:48 PM
0 votes
1 answer
72 views
What is (if any) the file descriptor of /dev/tty?
The urgent issue of reading keyboard input inside a pipeline is solved by the answer in https://stackoverflow.com/questions/15230289/read-keyboard-input-within-a-pipelined-read-loop :

mycommand-outputpiped | while read line
do
  # do stuff
  read confirm < /dev/tty
done

Why does it work? Isn't the tty redirected to standard input? Can I get a file descriptor for /dev/tty and use read -u fd instead? TIA
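A minimal sketch of the fd variant, assuming bash 4.1+ for the {var} form: /dev/tty always names the controlling terminal of the process, regardless of where stdin is redirected, and you can hold it open on a descriptor once:
exec {tty_fd}</dev/tty            # open the controlling terminal once
mycommand-outputpiped | while read -r line; do
    read -r -u "$tty_fd" confirm  # keyboard input, independent of the pipe
done
exec {tty_fd}<&-                  # close it when done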
Martian2020 (1443 rep)
Apr 8, 2025, 04:02 AM • Last activity: Apr 8, 2025, 05:12 AM
2 votes
2 answers
472 views
what is the meaning and usage of 2>&1 | tee /dev/stderr
For the following command:

output=$(cat $file | docker exec -i CONTAINER COMMAND 2>&1 | tee /dev/stderr)

what is the meaning and usage of 2>&1 | tee /dev/stderr? I googled, and 2>&1 means "combine stderr and stdout into the stdout stream", while tee "reads from standard input and writes to standard output and one or more files simultaneously". But I cannot understand the combination 2>&1 | tee /dev/stderr:

1. Does this combine stderr and stdout into the stdout stream, but then write it to /dev/stderr (&2) again?
2. Why combine into stdout, only to redirect back to stderr?
3. And after writing to /dev/stderr, will the message be saved into the variable output in the command above?
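A minimal demonstration sketch of what the combination buys you: 2>&1 merges the command's stderr into the stream entering tee; tee then passes everything to its stdout, which the $( ) captures, and also copies it to /dev/stderr, which is typically the terminal. So you see the output live and it still lands in the variable:
output=$( { echo data; echo oops >&2; } 2>&1 | tee /dev/stderr )
echo "captured: $output"   # holds both "data" and "oops"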
user1169587 (133 rep)
Feb 19, 2025, 07:35 AM • Last activity: Feb 20, 2025, 09:06 AM
7 votes
1 answer
1396 views
Why does MacOS always append to a redirected file descriptor even when told to overwrite? Ubuntu only appends when strictly told to append
**Given the following code:**
out="$(mktemp)"
rm -f "$out"
clear

printf '%s\n' 0 >"$out"
{
	printf '%s\n' '1' >/dev/stdout
	printf '%s\n' '2' >/dev/stdout
} >"$out"
cat -e -- "$out"
rm -f "$out"
On Ubuntu this outputs:
2$
On MacOS this outputs:
1$
2$
**When explicitly appending, they behave consistently:**
out="$(mktemp)"
rm -f "$out"
clear

printf '%s\n' 0 >"$out"
{
	printf '%s\n' '1' >/dev/stdout
	printf '%s\n' '2' >>/dev/stdout
} >"$out"
cat -e -- "$out"
rm -f "$out"
On MacOS and Ubuntu this outputs:
1$
2$
**The most confusing example to me is this one:**
out="$(mktemp)"
rm -f "$out"
clear

printf '%s\n' 0 >"$out"
exec 3>>"$out"
{
	printf '%s\n' '1' >/dev/stdout
	printf '%s\n' '2' >/dev/stdout
} >&3
{
	printf '%s\n' '3' >/dev/stdout
	printf '%s\n' '4' >/dev/stdout
} >&3
cat -e -- "$out"
rm -f "$out"
exec 3>&-
Which on MacOS outputs:
0$
1$
2$
3$
4$
Which on Ubuntu outputs:
4$
I was expecting this on Ubuntu:
0$
2$
4$
---

I am thoroughly confused why this behaviour occurs, in this example and in [all the other examples I've devised to illustrate this discrepancy.](https://gist.github.com/balupton/cd779f3a39507f75d5956a67e5543ab8)

My questions:

- What is this discrepancy? What is happening? Is it intentional?
- Where else does this discrepancy apply? What are its origins?
- If this discrepancy is intentional, why was it justified? Which one should be the correct behaviour?
- What can be done to mitigate these differences when writing cross-OS scripts?
- Is shopt -o noclobber the appropriate response? Is this the true necessity of noclobber?
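A hedged explanation and a portable sketch: on Linux, /dev/stdout is a symlink into /proc/self/fd, so each > /dev/stdout is a fresh open(2) of the underlying file starting at offset 0, while on macOS it is a device that dup()s fd 1, sharing the existing offset. Writing >&1 duplicates the descriptor instead of re-opening a path, and behaves the same on both systems:
out="$(mktemp)"
{
    printf '%s\n' '1' >&1   # dup of the current stdout: one shared offset
    printf '%s\n' '2' >&1
} >"$out"
cat -e -- "$out"            # 1$ then 2$ on both Linux and macOS
rm -f "$out"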
balupton (634 rep)
Feb 11, 2025, 08:17 PM • Last activity: Feb 14, 2025, 08:18 AM
81 votes
4 answers
14019 views
Order of redirections
I don't quite understand how the computer reads this command:

cat file1 file2 1> file.txt 2>&1

If I understand, 2>&1 simply redirects standard error to standard output. By that logic, the command reads to me as follows:

1. Concatenate files file1 and file2.
2. Send stdout from this operation to file.txt.
3. Send stderr to stdout.
4. End?

I'm not sure what the computer's doing. By my logic, the command should be cat file1 file2 2>&1 > file.txt, but this is not correct.
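A minimal annotated sketch of how the shell actually processes this, left to right, before cat runs. N>&M means dup2(M, N): fd N becomes a copy of what fd M points to at that moment; it is not a persistent "forward stderr to stdout" rule:
# cat file1 file2 1> file.txt 2>&1
#   step 1: 1> file.txt   -> fd 1 now points at file.txt
#   step 2: 2>&1          -> fd 2 becomes a copy of fd 1, i.e. file.txt too
# cat file1 file2 2>&1 > file.txt
#   step 1: 2>&1          -> fd 2 copies fd 1 while it is still the terminal
#   step 2: 1> file.txt   -> only fd 1 moves to the file; stderr stays put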
iDontKnowBetter (1057 rep)
Apr 30, 2012, 10:28 PM • Last activity: Jan 20, 2025, 01:16 AM
1 vote
0 answers
68 views
How to solve uninterruptible sleep process deadlock without reboot
I have a process stuck in uninterruptible sleep. The problematic syscall is a read syscall towards /dev/fd0, which is not backed by a real floppy drive.

* I am trying to use modprobe and rmmod (both with the force option) to unload the floppy module but getting a resource busy error for both.
* kill -9 does not work either, since the process is uninterruptible.

Is it possible somehow to "reset" the floppy block device to solve this deadlock? I would like to avoid rebooting this VM unless absolutely necessary.
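A small diagnostic sketch (standard procps tooling) for confirming where the task is blocked before attempting any recovery; <pid> is a placeholder for the stuck process:
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'   # list D-state processes
cat /proc/<pid>/stack                           # root only: kernel stack of the stuck task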
MichaelAttard (31 rep)
Oct 18, 2024, 02:38 PM • Last activity: Oct 18, 2024, 03:14 PM
1 vote
1 answer
74 views
Are file permissions copied into the Open File Table?
I have a doubt about what the entry created in the *Open File Table* upon calling open() contains. The schema from bytebytego (figure: file descriptor table, open file table and inode, with permissions shown only in the inode) seems quite good for understanding the big picture of opening a file, but in the *Open File Table* I see nothing about file permissions (which are only depicted in the inode at the bottom). This source instead says:

> The *open file table* contains several pieces of information about each file:
>
> - the current offset (the next position to be accessed in the file)
> - a reference count (we'll explain below in the section about fork())
> - **the file mode (permissions)**,
> - ...

Since the file permissions are stored on disk in its inode, if the last source I mentioned is right, does it mean that the permissions are copied into the entry of the *Open File Table*? And if so, maybe the schema from bytebytego omitted this detail for the sake of simplicity?
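A minimal sketch of the observable behaviour, with the hedge that the "mode" a kernel keeps in the open file description is the access mode requested at open(2) (read/write/append), not a copy of the inode's permission bits: permissions are checked when the file is opened, and an already-open descriptor keeps working even if they change afterwards:
echo secret > /tmp/f
exec 3</tmp/f       # the permission check happens here, at open(2)
chmod 000 /tmp/f    # revoke all permissions on the inode
cat <&3             # still prints "secret": the open fd is unaffected
exec 3<&-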
pochopsp (113 rep)
Oct 8, 2024, 04:15 PM • Last activity: Oct 9, 2024, 12:47 PM
1 vote
1 answer
98 views
Separate stdout of an application
I have an application that takes a few command line parameters, then prints some text to the terminal, then starts writing data to a file. It has a parameter I can use to define the file name to write to (like -o some-file-name.bin). I would like to process that data (live) using another application, so I tried making the program output its data to stdout by specifying -o - and piping the output to another tool. However, unfortunately, that program isn't smart enough to notice that it is writing to stdout and that it should stop writing human-readable text there (as that messes up the binary data). So, usually, the tool writes text to stdout (which I don't care about) and data to a file. I would like to ignore all the text written to stdout and instead have it write the data to stdout so I can pipe it into another tool. However, when I run it with -o -, it writes both the (binary) data and the useless text to stdout, ruining the data. I tried -o >( my_other_tool ) instead of a pipe, but that resulted in Couldn't open output: /dev/fd/63: /dev/fd/63: No such file or directory. Is there another way - other than recompiling the tool and removing all the useless printf calls - to ignore the original stdout and send the data it would write to a file to stdout instead? EDIT: In case it is helpful, I am talking about nsntrace.
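A hedged workaround sketch using a named pipe: a FIFO has a real path on the filesystem, so the tool can open it like an ordinary output file (the >( ) failure suggests /dev/fd paths are not visible to it, plausibly because nsntrace sets up its own namespaces). The <args> below is a placeholder for whatever options you run it with:
mkfifo /tmp/trace.fifo
my_other_tool < /tmp/trace.fifo &     # consumer attaches to the pipe first
nsntrace -o /tmp/trace.fifo <args>    # the tool writes "to a file" as designed
wait
rm -f /tmp/trace.fifo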
Florian Bach (263 rep)
Jun 21, 2019, 01:46 PM • Last activity: Oct 6, 2024, 03:24 PM
0 votes
2 answers
665 views
STDOUT + STDERR output ... is there any difference between considering the output to be an empty string vs NULL
I'm writing some application code that is used to execute Linux shell commands, and it then logs the command details into an SQL database. This includes the output of STDOUT + STDERR (separately). After the command has been executed, and assuming the process didn't output anything... could there be any reason to leave the STDOUT/STDERR fields as NULL vs setting them to be empty strings? To put the question another way: is there technically any difference between these two things?

- A process that doesn't output anything to STDOUT
- A process that outputs an empty string to STDOUT (and nothing else)

And to put the question another way again: does it make sense to make these columns NOT NULL in SQL?
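A minimal sketch of why the two cases are indistinguishable at the OS level: "outputting an empty string" means writing zero bytes, which is the same observable event as not writing at all, so any NULL-vs-empty distinction has to come from your application's semantics, not from the process:
a=$(true)          # writes nothing at all
b=$(printf '')     # "writes" an empty string: zero bytes
[ "$a" = "$b" ] && echo "indistinguishable"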
LaVache (423 rep)
Dec 6, 2018, 03:02 AM • Last activity: Oct 6, 2024, 09:27 AM
14 votes
2 answers
2800 views
Why doesn't the process substitution <() work with ssh -F
I have some vagrant virtual machines. To log into them I issue the vagrant ssh command. I want to log into them using the regular ssh command. vagrant ssh-config outputs a suitable config file:

$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2201
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/cbliard/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

When writing this config to a file and using it with ssh -F, everything works fine:

$ vagrant ssh-config > /tmp/config
$ ssh -F /tmp/config default
=> logged successfully

When using the process substitution operator <(cmd) to avoid creating the temporary config file, it fails:

$ ssh -F <(vagrant ssh-config) default
Can't open user config file /proc/self/fd/11: No such file or directory

The same error happens when using <(cat /tmp/config):

$ ssh -F <(cat /tmp/config) default
Can't open user config file /proc/self/fd/11: No such file or directory

I am using zsh and I observe the same behavior with bash. What am I doing wrong here?
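A hedged sketch of why this fails and a zsh-specific workaround: ssh closes inherited descriptors beyond stdio before it reads its configuration, so the /proc/self/fd/11 path produced by <( ) is already dangling when ssh tries to open it. zsh's =( ) form substitutes a real temporary file instead of an fd path, and an explicit temp file works everywhere:
ssh -F =(vagrant ssh-config) default          # zsh only: real temp file
cfg=$(mktemp) && vagrant ssh-config >"$cfg" \
  && ssh -F "$cfg" default; rm -f "$cfg"      # portable fallback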
cbliard (402 rep)
Nov 20, 2013, 10:50 AM • Last activity: Sep 9, 2024, 06:07 AM
1 vote
1 answer
217 views
bash: script running in pm2 unable to access file descriptors files at /dev/fd/
I have script script.sh:
#!/usr/bin/env bash

# pm2 seems to always run in bash regardless of #!
echo "running in $(readlink -f /proc/$$/exe)"

# Redirect to both stdout(1) and stderr(2)
echo "hello" | tee /dev/fd/2
I am running it with pm2 start script.sh. I can see logs via pm2 logs like so:
tee: /dev/fd/2: No such device or address
Which means the script is unable to access /dev/fd. This problem doesn't happen when running directly in a bash terminal.

---

- About pm2: https://pm2.keymetrics.io/docs/usage/quick-start/
- Previous discussion: https://stackoverflow.com/questions/78885319/bash-redirecting-to-multiple-file-descriptors
- For reference: https://unix.stackexchange.com/questions/26926/file-descriptors-and-dev-fd
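A hedged workaround sketch: "No such device or address" is ENXIO, which is what Linux returns when you open(2) a /dev/fd path that refers to a socket - plausibly what pm2 attaches to the script's stderr for log capture. Writing via >&2 duplicates the descriptor instead of re-opening it by path, so it works no matter what stderr actually is:
both() { printf '%s\n' "$*"; printf '%s\n' "$*" >&2; }
both "hello"                   # one copy to stdout, one to stderr
echo "hello" | tee >(cat >&2)  # or keep tee, feeding a dup-based consumer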
Amith (313 rep)
Aug 23, 2024, 11:08 AM • Last activity: Sep 3, 2024, 10:24 AM
2 votes
1 answer
296 views
Shell/bash: Can I create a file descriptor to an existing file without emptying the file?
**Context:** I have cursory bash experience. I do not fully get file descriptors, just some basic usage. I'm trying to create a setup script. Most of it is done, but a few "kinks" remain. So here is a newbie question on logging and file descriptors!

**Goal(s):**

* Log all executions of the script to a *daily logfile*.
* If the script is executed multiple times in one day, the logfile is appended to. I do not want several logfiles per day; one rotation per day is plenty.
* I output (a) basic/overview/main info to the console (using e.g. echo [...] >&3), and (b) all "details" go to the logfile (all standard output, echo). (See sample script.)
* Avoid a log function: I would love to avoid having special log functions to call, but maybe that is the only way...

### Problem/obstacle:

When I create the file descriptor, it seems the logfile is reset/emptied.

**Example:** Again, see the sample script below. The first cat (row 3) indeed outputs the previous contents of the logfile. But after setting up the file descriptors for logging on row 5, the cat on row 7 always outputs nothing.

----

**Question(s):**

* **(A)** Can I use this approach and somehow create a file descriptor for an existing file, avoiding it "resetting"/emptying the existing file of its previous contents?
* **(B)** *If* that approach can *not* work, is there an alternative way that accomplishes my goals?

## Sample script
LOG_FILE="./$(date -u +%Y%m%dTZ).log.txt"
touch $LOG_FILE
cat $LOG_FILE
echo -----------------------
exec 3>&1 1>"$LOG_FILE" 2>&1 #TODO: suspect this file descriptor creation resets existing file. Investigate if/how this can be avoided.
echo +++++++++++++++++++++++ >&3
cat $LOG_FILE >&3 # this is always empty
echo +++++++++++++++++++++++ >&3
read -t 2
echo "${blu}=================================================="
echo Logging some. Time: $(date -Iseconds) 
echo "==================================================${end}"
------

*I have of course searched to try and find a solution, but for this problem it seems I cannot find any good discussions at all. Lots on file descriptors, but I have not managed to find anyone asking this question. I may be using the wrong keywords, of course. I found this related question, and others a bit like it.*

Thanks a lot for reading my question!
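A minimal fix sketch for goal (A): the emptying comes from 1>"$LOG_FILE", which opens the file with O_TRUNC. Opening it for appending instead (>>, i.e. O_APPEND) preserves the existing contents and keeps appending across multiple runs in the same day:
exec 3>&1 1>>"$LOG_FILE" 2>&1   # fd 3 = console; stdout/stderr append to the log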
mawi (23 rep)
Jul 9, 2024, 10:18 AM • Last activity: Jul 9, 2024, 12:43 PM
79 votes
6 answers
25685 views
How portable are /dev/stdin, /dev/stdout and /dev/stderr?
Occasionally I need to specify a "path-equivalent" of one of the standard IO streams (stdin, stdout, stderr). Since 99% of the time I work with Linux, I just prepend /dev/ to get /dev/stdin, etc., and this "*seems* to do the right thing". But, for one thing, I've always been uneasy about such a rationale (because, of course, "it seems to work" until it doesn't). Furthermore, I have no good sense for how portable this maneuver is. So I have a few questions:

1. In the context of Linux, is it safe (yes/no) to equate stdin, stdout, and stderr with /dev/stdin, /dev/stdout, and /dev/stderr?
2. More generally, is this equivalence "adequately *portable*"? I could not find any POSIX references.
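A hedged sketch of alternatives that avoid the question entirely: POSIX specifies the >&N duplication syntax, and many utilities accept "-" by convention, while the /dev/std* paths themselves are not required by POSIX:
cmd >&2             # portable replacement for  cmd > /dev/stderr
cmd <&0             # portable replacement for  cmd < /dev/stdin
cat -               # "-" conventionally means stdin for many utilities
[ -e /dev/stdin ] || echo "no /dev/stdin here"   # runtime feature test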
kjo (16299 rep)
Apr 13, 2012, 10:49 PM • Last activity: Jul 3, 2024, 11:30 PM
3 votes
2 answers
203 views
Creating a filename for an opened file descriptor
I have a file on the filesystem. I'm opening the file with the open(2) function to get the file descriptor to that file. Now I remove the file. But I still have the file descriptor, so I can read and write to that file without problems, because the filesystem will not remove the file allocation of my file until last file descriptor is closed. But after I remove the file, and while I still hold the file descriptor, can I somehow re-create (re-bind) the filename to that file descriptor? So the file would appear again on the filesystem, so it won't be removed when I close the file descriptor? (all I have is an opened file descriptor and nothing else). I'm mostly interested if this can be done on macOS (on Linux/glibc it seems to be possible to do by using linkat with the AT_EMPTY_PATH flag).
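A hedged Linux-only sketch: macOS has no /proc and I'm not aware of a supported re-link path there, but on Linux the data (though not the original name or inode link) can at least be salvaged through procfs while the descriptor stays open; note this copies the contents rather than re-binding the filename:
exec 3</path/to/file                      # hold the file open
rm /path/to/file                          # unlink; data persists while fd 3 lives
cat /proc/$$/fd/3 > /path/to/recovered    # salvage by copying
exec 3<&-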
antekone (722 rep)
Oct 5, 2023, 05:56 AM • Last activity: Jun 19, 2024, 09:16 AM