
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

3 votes
1 answer
200 views
Why doesn't SIGTSTP (^Z/Ctrl-Z/suspend) work when a process is blocked writing to a FIFO/pipe that is not yet open for reading? And what can be done then?
Sometimes a command stalls attempting to write to a FIFO/pipe that no other process is currently reading from, but typing Ctrl-Z to suspend the process by sending it the SIGTSTP signal (i.e. the *suspend* non-printing control character ^Z) does not work. Example:

$ mkfifo p
$ ll p
prw-rw-r-- 1 me me 0 Jul 30 16:27 p|
$ echo "Go!" >p    # Here & has been forgotten after the >p redirection
(stalled)
[Ctrl-Z]
^Z                 # Pressing Ctrl-Z just prints "^Z" on the terminal, nothing else happens
[Ctrl-D]           # Attempting to send the EOF character
(nothing)          # Doesn't print "^D" and does nothing

and this is the behavior even though SIGTSTP (or susp) is reported to be attached to ^Z:

$ stty -a
speed 38400 baud; rows 24; columns 91; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>;
swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V;
discard = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff -iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke -flusho -extproc

How come in this case the terminal does not catch the Ctrl-Z and deliver the resulting SIGTSTP signal to the process? Is it because redirecting with > to a FIFO/pipe puts the terminal in raw mode (cf. The TTY demystified)? But if so, why does "^Z" get printed on the screen, and why does Ctrl-D (EOF) neither print anything nor have any effect? Incidentally, is there an alternative way to send the stalled process to the background in such cases (instead of just terminating the process with Ctrl-C, i.e. ^C / SIGINT)?
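As for the last part, one workaround for the blocking itself (a sketch, assuming Linux semantics, where opening a FIFO read-write never blocks because the opener counts as its own reader):

```shell
mkfifo p
# On Linux, opening a FIFO O_RDWR succeeds immediately: the process is
# both a writer and a potential reader, so open() does not block.
exec 3<>p
echo "Go!" >&3    # returns at once instead of stalling
# ...a real reader can later drain the FIFO...
exec 3>&-         # close the descriptor when done
```

This avoids ever entering the blocked open() in which Ctrl-Z appeared to do nothing.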
The Quark (402 rep)
Aug 5, 2025, 08:47 AM • Last activity: Aug 5, 2025, 12:02 PM
0 votes
1 answer
2211 views
Subshell and process substitution
Apologies if this is a basic question - I'm stuck trying to solve a larger problem, and it's come down to how a shell script is invoked - directly (shellScript.sh) or using sh shellScript.sh. Here's a model for the problem. When I execute on bash:

cat <(echo 'Hello')

I see the output Hello. But when I use:

sh -c "cat <(echo 'Hello')"

I see errors:

sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `cat <(echo 'Hello')'

I've tried escaping the <, ( and ) in various combinations, but I don't see the output anywhere. What am I missing here? My actual problem is that I'm passing a <() as an input argument to a python script within a shell script, and while it works fine when I invoke the shell script using just the name, if I use sh to invoke it, I get errors similar to what I've shown above. Thank you!
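For context (hedged, since the exact behavior depends on what /bin/sh is on a given system): process substitution is a bash/ksh/zsh feature, not part of POSIX sh, and a shell running as sh may reject it exactly as shown. Asking for bash explicitly sidesteps the issue:

```shell
# <( ) is not POSIX syntax, so run the command string under bash:
bash -c "cat <(echo 'Hello')"    # prints: Hello
```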
Ram RS (111 rep)
May 9, 2017, 08:16 PM • Last activity: Jul 30, 2025, 06:24 AM
1 vote
2 answers
1077 views
Cannot Display Bash Functions within FZF Preview Window
**How do I get the FZF Preview Window to Display Functions from my Current Bash Environment?** I want to list my custom bash functions using FZF, and view the code of a selected function in the FZF Preview Window. However, it does not appear that the bash environment used by FZF to execute my command can see the functions in my terminal bash environment. For example:
$ declare -F | fzf --preview="type {3}"

/bin/bash: line 1: type: g: not found
However, the following works:
$ declare -F

declare -f fcd
declare -f fz
declare -f g

$ type g
g is a function
g ()
{
    search="";
    for term in $@;
    do
        search="$search%20$term";
    done;
    nohup google-chrome --app-url "http://www.google.com/search?q=$search " > /dev/null 2>&1 &
}

declare -F | fzf --preview="echo {3}"

g # my function g()
One reason I suspect that the FZF Preview Window environment may not be able to see my terminal environment is that they have different process IDs.
$ echo $BASHPID

1129439

$ declare -F | fzf --preview="echo $BASHPID"

1208203
**How do I get the FZF Preview Window to Display Functions from my Current Bash Environment?**
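One common workaround (a sketch: fzf runs the preview command in a new shell process, so the function must cross the process boundary via the environment) is bash's export -f:

```shell
# Functions live in the shell's memory, not the environment, so a child
# bash started by fzf cannot see them unless they are exported:
g() { echo "searching: $*"; }    # hypothetical stand-in for the real g()
export -f g                      # bash-only: pass the function in the environment
bash -c 'type -t g'              # prints: function

# so the preview could then be something like:
#   declare -F | fzf --preview='bash -c "type {3}"'
```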
user2514157 (225 rep)
Oct 21, 2022, 03:10 PM • Last activity: Jun 11, 2025, 07:38 AM
0 votes
2 answers
69 views
How to temporarily substitute the login shell for running a shell command/subprocess?
My login shell is Fish, but I would like to execute a shell command (apt install ...) **as if** my login shell was Bash. Is it possible to make a command/subprocess believe that my login shell is /usr/bin/bash without actually making it my login shell? Here is the context. When I try to install a certain package with apt install, the post-installation script fails to start some service, and there is an error message from fish about wrong syntax:
fish: Variables cannot be bracketed. In fish, please use "$XDG_RUNTIME_DIR".
XDG_RUNTIME_DIR="/run/user/$UID" DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus" systemctl --user start gpa
I tried executing apt install from Bash, but this did not help. However, if I change my login shell to Bash, then the installation succeeds without errors. Of course, I do not want to change the login shell back and forth to execute one command; I hope there is a more appropriate solution. (Of course, I do not understand why the post-installation script blindly uses the login shell, which is not good for it.) --- P.S. The package I was installing is GlobalProtect_deb-6.2.1.1-7.deb for Palo Alto GlobalProtect VPN. Some versions are available online, for example here: https://myport.port.ac.uk/connect-to-the-vpn-on-your-linux-device
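If the post-installation script picks its shell from the SHELL environment variable (an assumption; it could equally read /etc/passwd, in which case this won't help), overriding it for a single command may be enough:

```shell
# Hypothetical: run one command with $SHELL pointing at bash, without
# touching the login shell registered in /etc/passwd.
env SHELL=/usr/bin/bash sudo --preserve-env=SHELL apt install ./GlobalProtect_deb-6.2.1.1-7.deb
```

The --preserve-env=SHELL is needed because sudo normally resets SHELL to the target user's login shell.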
Alexey (2310 rep)
May 24, 2025, 02:14 PM • Last activity: May 24, 2025, 03:18 PM
173 votes
3 answers
193075 views
Do parentheses really put the command in a subshell?
From what I've read, putting a command in parentheses should run it in a subshell, similar to running a script. If this is true, how does it see the variable x if x isn't exported? x=1 Running (echo $x) on the command line results in 1. Running echo $x **in a script** results in nothing, as expected.
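The difference, in short: a ( ) subshell is a fork of the current shell and inherits its entire variable state, exported or not, while a script runs in a fresh process that receives only the environment. A sketch (assuming x is not already exported):

```shell
x=1                            # not exported
( echo "subshell: $x" )        # prints: subshell: 1  (a fork inherits all variables)
bash -c 'echo "script: $x"'    # prints an empty value (a new process sees only the environment)
export x
bash -c 'echo "script: $x"'    # prints: script: 1
```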
Igorio (7917 rep)
Jun 21, 2014, 05:50 PM • Last activity: May 24, 2025, 08:04 AM
0 votes
1 answer
2735 views
Launching a program via `xdg-open` from a subshell without blocking
I've noticed that calling xdg-open from a subshell will reliably block until the launched process is closed. I suspect there may be a reason for this, but I'm not sure as to why. For example, launching Nautilus doesn't block when calling xdg-open directly from the command line:

xdg-open ~/dir ; echo foo              # doesn't block

but invoking xdg-open from a subshell will reliably block the terminal:

var=$(xdg-open ~/dir ; echo foo)       # blocks
{ xdg-open ~/dir ; echo foo ; } | cat  # blocks

My understanding is that xdg-open detaches the launched process from the shell session so that it's no longer a subprocess. I'd therefore expect this to be different from e.g. invoking sleep 1 & in a subshell, for which it seems reasonable that the terminating subshell will block until all subprocesses have completed, i.e.

var=$(sleep 1 & echo foo)              # also blocks, but understandable.

But if xdg-open is detaching the process, what's causing the subshell to wait? In what may (?) be a partial answer, I've noticed that running { xdg-open ; ps ; } | cat shows that, depending on the program launched, those that block are also the ones that keep the tty as their controlling terminal. That begs the question why this happens, why it happens only in a subshell, and ultimately what's a good way to launch a desktop process from the terminal that will fully and reliably detach from it? Edit: fix syntax on bash.
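The blocking can be reproduced without xdg-open at all: a command substitution returns only when every write end of its pipe is closed, and a backgrounded child inherits the subshell's stdout. If the launched program keeps that fd open, the $( ) hangs. Redirecting the child's output releases the pipe (a sketch using sleep as a stand-in for the launched program):

```shell
# blocks ~2 seconds: the backgrounded sleep holds the write end of the
# command substitution's pipe through its inherited stdout
var=$(sleep 2 & echo foo)

# returns immediately: the child's stdout/stderr no longer point at the pipe
var=$(sleep 2 >/dev/null 2>&1 & echo foo)

# the same idea applied to xdg-open, plus setsid to drop the controlling tty:
var=$(setsid xdg-open ~/dir >/dev/null 2>&1 & echo foo)
```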
wardw (396 rep)
Dec 12, 2020, 08:08 PM • Last activity: May 14, 2025, 12:06 AM
4 votes
0 answers
83 views
Why is `fc` output different through a pipe in a subshell?
Why does echo "$(fc -l -1)" show the previous command, but echo "$(fc -l -1 | cat)" show the current command?
$ testfc () {
>     echo "$(fc -l -1)"
>     echo "$(fc -l -1 | cat)"
> }

$ echo foo
foo
$ testfc
1820     echo foo
1821     testfc
## More detail

I was testing things for [a rabbit-hole question](https://unix.stackexchange.com/questions/794387/multiline-command-substitution-syntax-errors-with-mysterious-1) and ended up down another rabbit hole. I created this function to try different ways of accessing the previous or current command history number with the fc and history built-ins:
testfc () {
    printf 'fc1\t';        fc -l -1
    printf 'fc1|\t';       fc -l -1 | cat
    printf '(fc1)\t%s\n'   "$(fc -l -1)"
    printf '(fc1|)\t%s\n'  "$(fc -l -1 | cat)" # this one is weird
    printf 'fc0\t';        fc -l -0
    printf 'fc0|\t';       fc -l -0 | cat
    printf '(fc0)\t%s\n'   "$(fc -l -0)"
    printf '(fc0|)\t%s\n'  "$(fc -l -0 | cat)"
    printf 'hist\t';       history 1
    printf 'hist|\t';      history 1 | cat
    printf '(hist)\t%s\n'  "$(history 1)"
    printf '(hist|)\t%s\n' "$(history 1 | cat)"
    str='\!'
    printf '@P\t%s\n'      "${str@P}"
    printf 'HC\t%s\n'      "$HISTCMD"
}
Generally, fc -l -0 and history 1 show the current command, and fc -l -1 shows the previous command. Their outputs don't change when run in a $() subshell or piped through cat. *Except* "$(fc -l -1 | cat)"!
1831 $ echo foo
foo

1832 $ testfc
fc1     1831     echo foo
fc1|    1831     echo foo
(fc1)   1831     echo foo
(fc1|)  1832     testfc   # <-- WHAT?
fc0     1832     testfc
fc0|    1832     testfc
(fc0)   1832     testfc
(fc0|)  1832     testfc
hist     1832  2025-05-02 15:10:59  testfc
hist|    1832  2025-05-02 15:10:59  testfc
(hist)   1832  2025-05-02 15:10:59  testfc
(hist|)  1832  2025-05-02 15:10:59  testfc
@P      1832
HC      1832

1833 $ fc -l -2
1831     echo foo
1832     testfc
## Context
$ echo $BASH_VERSION
5.2.37(1)-release
$ type fc
fc is a shell builtin
$ type history
history is a shell builtin
$ type cat
cat is /usr/bin/cat
Jacktose (533 rep)
May 2, 2025, 10:18 PM • Last activity: May 3, 2025, 09:24 PM
0 votes
0 answers
38 views
bash subshell execution behaves unexpectedly
I have a script which is supposed to fetch 2 URLs sequentially:

#!/bin/bash
wget_command='wget --restrict-file-names=unix https://www.example.com/{path1,path2}/'
$($wget_command)
echo $wget_command >> wget_commands.log

Running it results in this message:

--2025-04-30 09:13:49-- https://www.example.com/%7Bpath1,path2%7D/

^ the %7D is a problem

----------

Whereas, directly issuing:

wget --restrict-file-names=unix https://www.example.com/{path1,path2}/

results in the expected fetches and messages:

--2025-04-30 09:14:13-- https://www.example.com/path1/
...
--2025-04-30 09:14:14-- https://www.example.com/path2/

----------

It feels like {path1,path2} is triggering an expansion/interpolation when $($wget_command) goes to use it, but I am not sure how to verify nor avoid this problem.
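The root cause can be stated precisely: brace expansion happens before parameter expansion, so braces coming out of $wget_command are never expanded; they survive as literal characters that wget then URL-encodes. Storing the command as an array (bash-specific) expands the braces at assignment time. A sketch:

```shell
# Brace expansion runs before "$wget_command" is substituted, so the braces
# reach wget literally. Expanding them into an array first fixes that:
wget_command=(wget --restrict-file-names=unix https://www.example.com/{path1,path2}/)
printf '%s\n' "${wget_command[@]}"   # the two URLs are already separate words
"${wget_command[@]}"                 # runs the actual command
```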
MonkeyZeus (143 rep)
Apr 30, 2025, 01:30 PM
5 votes
2 answers
294 views
Duplicate stdout to pipe into another command with named pipe in POSIX shell script function
mkfifo foo
printf %s\\n bar | tee foo &
tr -s '[:lower:]' '[:upper:]' <foo
wait
rm foo
This is a working POSIX shell script of what I want to do: - printf %s\\n bar is symbolic for an external program producing stdout - tr -s '[:lower:]' '[:upper:]' is symbolic for another command that is supposed to receive the stdout and do something with it - tee duplicates stdout to named pipe foo And the output is as expected:
bar
BAR
Now I'd like to tidy up the code so it becomes external_program | my_function. Something like this:
f() (
  mkfifo foo
  tee foo &
  tr -s '[:lower:]' '[:upper:]' <foo
  wait
  rm foo
)
printf %s\\n bar | f
But now there is no output at all.
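One plausible culprit (an assumption based on POSIX job-control rules, not a verified diagnosis): in a non-interactive shell, a command started with & gets its stdin assigned from /dev/null, so the backgrounded tee inside the function never sees the pipe from printf. Backgrounding tr instead, which only reads from the FIFO via an explicit redirection, sidesteps this:

```shell
# Sketch: keep tee (which needs the function's stdin) in the
# foreground, and background the FIFO reader instead.
f() (
  mkfifo foo
  tr -s '[:lower:]' '[:upper:]' <foo &
  tee foo
  wait
  rm foo
)
printf '%s\n' bar | f   # prints bar and BAR (order may vary)
```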
finefoot (3554 rep)
Mar 9, 2025, 01:34 PM • Last activity: Mar 9, 2025, 03:19 PM
1 vote
0 answers
71 views
What's the logic in exiting early on failure in blocks and subshells in Bash?
In Bash blocks {} and subshells () the exit early doesn't work if there is an OR condition following it. Take for example
set -e
{ echo a; false; echo b; } || echo c
prints
a
b
and
set -e
{ echo a; false; echo b; false;} || echo c
prints
a
b
c
It seems to take the last executed command's exit code only. While this makes sense, given a semicolon instead of && is used, I'd expect the set -e to still make it exit on the first false and execute the error handling echo c code. Using && instead of ; does make it work, but that makes it messy when having multiple line blocks. Adding in set -e at the start of the block/subshell also has no effect. The reason this confuses me is because
set -e
{ echo a; false; echo b; }
prints
a
which means the exit-on-failure works when there's no || code following it. So I'd expect this to be the case with || code following it, executing it after the first failure in the block. Is there no way to achieve that without appending && after each line in the block?
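This is documented behavior: when a compound command is the left side of || or &&, set -e is suspended for the whole compound, not just its final command, and in bash even a set -e inside a ( ) there is ignored. One reliable escape is to move the block into a separate shell process, whose own -e is not subject to the caller's context (a sketch):

```shell
# A child bash's -e is not disabled by the parent's || context:
bash -ec 'echo a; false; echo b' || echo c
# prints:
# a
# c
```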
QuickishFM (141 rep)
Dec 20, 2024, 01:34 PM • Last activity: Dec 20, 2024, 02:06 PM
22 votes
2 answers
6621 views
Difference between subshells and process substitution
In bash, I want to assign my current working directory to a variable. Using a subshell, I can do this:

var=$(pwd)
echo $var
/home/user.name

If I use process substitution like so:

var=<(pwd)
echo $var
/dev/fd/63

I have understood that process substitution is primarily used when a program does not accept STDIN. It is unclear to me what a process substitution exactly does and why it assigns /dev/fd/63 to var.
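A compact way to see the difference: $( ) captures a command's output as text, while <( ) substitutes a filename from which that output can be read. A sketch:

```shell
var=$(pwd)        # var holds the text that pwd printed
echo "$var"       # e.g. /home/user.name

var=<(pwd)        # plain assignment: var holds the literal path /dev/fd/NN
                  # (that fd is typically gone by the time you try to open it)

cat <(pwd)        # the intended use: hand the path to a command that
                  # opens it as a file; prints the working directory
```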
PejoPhylo (385 rep)
Sep 20, 2017, 08:12 AM • Last activity: Dec 20, 2024, 10:58 AM
5 votes
1 answer
321 views
How to determine what is opening tmp files when I invoke a subshell with ksh
I'm experiencing extreme sluggishness in opening subshells (by using \` \` or $( ) command substitutions in scripts) while in ksh on some Linux servers. The same problem does not exist in sh or any other shell. strace indicates that the time is due to stat and openat calls against randomly named files in /tmp.

**test.sh**:

echo \`expr 1 + 1\`

**command**:

strace -tttT ksh test.sh

**output**:

. . .[snipped]. . .
1734368858.571604 stat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=708608, ...}) = 0
1734368859.184765 geteuid() = 1001
1734368859.184851 getegid() = 1002
1734368859.184879 getuid() = 1001
1734368859.184913 getgid() = 1002
1734368859.184946 access("/tmp", W_OK|X_OK) = 0
1734368859.185012 getpid() = 210594
1734368859.185055 openat(AT_FDCWD, "/tmp/sf0p.si0", O_RDWR|O_CREAT|O_EXCL, 0666) = 1
1734368860.771539 unlink("/tmp/sf0p.si0") = 0
. . .

Between stat and openat, my simple invocation of expr 1 + 1 **took more than 2 seconds of time.**

Questions:

1. Why is ksh creating files in /tmp, whereas none of the other shells (sh, bash, csh) do that?
2. How do I start diagnosing why these operations would take 1-2 seconds?

Server in question is at version: Linux 5.4.17-2136.322.6.4.el8uek.x86_64 (distribution Oracle Linux Server release 8.9). Ksh is version AJM 93u+ 2012-08-01.

**Update:** We've seen some artifacts of python creating empty directories under /tmp, which a few days ago I found to number 10,000 subdirs under /tmp. I deleted them all but it did not help performance. Forgive my lack of Unix knowledge, but am I right in assuming that once the directory inode is enlarged to list such an extensive # of subdirs/files, deleting those doesn't shrink the directory inode itself, but leaves a sparse structure that still has to be scanned through by all file accesses? The inode size of my /tmp (ls -ld /tmp) is currently 708KB. That's 172x larger than the starting size of 4096 bytes. Could that be what's slowing down the stat and openat calls that hit /tmp?
Paul W (183 rep)
Dec 16, 2024, 05:30 PM • Last activity: Dec 16, 2024, 08:08 PM
1 vote
0 answers
26 views
Why is the first sub-command of my remote command not executing or not affecting later sub-commands?
This command is a simplification of the problem I've come across:

ssh grinder1h.devqa sh -c "cd /etc/ssh && pwd"

This command gives an output of ***"/home/fetch"***, which is the remote user's home directory. I would expect the output to be ***"/etc/ssh"***, as the "cd" should run in the same shell as "pwd", and so should affect its output. In fooling around, I tried this command:

ssh grinder1h.devqa sh -c "pwd && cd /etc/ssh && pwd"

This command produces ***"/home/fetch"*** followed by ***"/etc/ssh"***, which is the expected output. So for this command, the first sub-command produces a result, and somehow affects the remaining sub-commands. It turns out that I can add any subcommand to the front of this command to cause the remaining parts of the command to work correctly. So this command produces ***"/etc/ssh"***, the desired result:

ssh grinder1h.devqa sh -c "X=1 && cd /etc/ssh && pwd"

So my question is, why does adding "X=1" to the front of this command affect its output? I could add any number of examples that act strangely. They all suggest that the first part of my remote command is either not being executed at all, or its effects are ignored. What am I missing? In case it matters, the client machine is a Mac, the remote host is running Alma Linux 9.5, and doing "sh --version" on the remote host produces this version info: GNU bash, version 5.1.8(1)-release (x86_64-redhat-linux-gnu)
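A plausible reading of this (a sketch of the quoting, assuming a POSIX remote shell): ssh joins its arguments with spaces and hands the result to the remote login shell, so the local quotes are consumed locally. The remote side then parses sh -c cd /etc/ssh && pwd, i.e. sh -c 'cd' with $0 set to /etc/ssh, followed by pwd run by the login shell in its home directory. Quoting the whole remote command once preserves the inner quotes:

```shell
# Broken: local quotes vanish; the remote shell sees
#   sh -c cd /etc/ssh && pwd
# so pwd runs in the login shell's cwd (the home directory).
ssh grinder1h.devqa sh -c "cd /etc/ssh && pwd"

# Fixed: one layer of quotes survives to the remote side:
ssh grinder1h.devqa 'sh -c "cd /etc/ssh && pwd"'
```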
CryptoFool (121 rep)
Dec 1, 2024, 06:17 PM
3 votes
0 answers
159 views
Understanding piping to a unix domain socket in the background
I have a Debian 12 (bookworm) host where I run three VMs using QEMU / KVM. To simplify VM management, each VM has a QEMU monitor socket. These sockets are /vm/1.sock, /vm/2.sock and /vm/3.sock. Among others, I can use such a socket to gracefully shutdown the respective VM; the command would be something like that:
printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/1.sock
So far, so good. This works as expected for each of the three VMs / sockets. Now I have a script that needs to shut down all of these VMs, where no delay must occur if one of the shutdown commands hangs or takes a long time. That means that I have to execute the command line shown above in the background. The respective part in the original version of that script is:
{ printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/1.sock; } &
{ printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/2.sock; } &
{ printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/3.sock; } &
Apart from the fact that this produces weird output (because the commands are executed asynchronously and their outputs are interleaved), it does not work as intended. **It shuts down one of the VMs, but not the other two.** During my tests, it was always VM #2 that got shut down, but I believe that this is pure random. [ Side note: VM #2 takes only 3 seconds or so to actually shut down, while VM #1 and VM #3 take 10 seconds or so; this *may* be the reason why it's always VM #2 that gets shut down. But let's put that aside for the moment; I wouldn't be able to explain it anyway and still believe that it's random that it's only VM #2 that gets shut down. ] Then I changed the passage shown above in the following way:
( printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/1.sock ) &
( printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/2.sock ) &
( printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/3.sock ) &
Of course, this version also produces weird output, **but otherwise works; it reliably shuts down all of the three VMs.** While I am glad to have a working solution, I would like to understand the matter. After having re-visited the relevant parts of the bash manual, I believe that both versions should shut down all VMs, but this is not the case. Why does the second version work, while the first version doesn't? Of course, I have read some similar questions on this site and elsewhere that deal with executing commands or pipes in background. From this research I got the impression that both variants should work. Some answers also proposed to move the & into the braces, like that:
(printf "%s\n" 'system_powerdown' | socat - unix-connect:/vm/1.sock &)
But I haven't tried that yet because I first would like to understand the difference between the first and the second version shown above.
Binarus (3891 rep)
Oct 17, 2024, 06:24 PM
0 votes
1 answer
69 views
How to exit a shell if the subshell exits with an error
There is a script, 1.sh. 1.sh starts 1a.sh and then 1b.sh. But how do I exit all the scripts, i.e. exit 1.sh and not start 1b.sh, if 1a.sh exits with an error?
user447274 (539 rep)
Oct 5, 2024, 04:05 PM • Last activity: Oct 6, 2024, 06:10 AM
0 votes
2 answers
2378 views
Passing a variable to subshell
Contrived example: #!/usr/bin/bash MYVAR=$(cat /somedir | grep -i myval) Now I want: #!/usr/bin/bash BASEDIR=/somedir MYVAR=$(cat [BASEDIR?] | grep -i myval) How should variable be passed to subshell?
Contrived example:

#!/usr/bin/bash
MYVAR=$(cat /somedir | grep -i myval)

Now I want:

#!/usr/bin/bash
BASEDIR=/somedir
MYVAR=$(cat [BASEDIR?] | grep -i myval)

How should the variable be passed to the subshell?
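A $( ) command substitution runs in a subshell forked from the current shell, so it already sees every shell variable, exported or not; plain (quoted) expansion is all that's needed. A sketch (grep -r stands in here, since cat on a directory would fail anyway):

```shell
#!/usr/bin/bash
BASEDIR=/somedir
# The subshell inherits BASEDIR; just expand it, quoted:
MYVAR=$(grep -ri myval "$BASEDIR")
```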
paulj (238 rep)
Feb 27, 2023, 04:34 PM • Last activity: Aug 30, 2024, 07:40 AM
-2 votes
1 answer
66 views
How does bash <command-argument> work?
About bash. In a new _tty_, when the echo $SHLVL command is executed, it displays 1 as expected. Now, if in the same _tty_ the bash command is executed and later the echo $SHLVL command again, it displays 2. Using the exit command is mandatory to exit, of course. Furthermore, I realized that each bash has its own _command history_ and the user is interacting with, or has access to, the current bash. So far, after doing some research, it seems _it is a kind of subshell_ (correct me if I am wrong), because mostly a subshell is created through the () approach instead. Just playing around, I executed the bash cat /etc/os-release command and nothing is printed. Therefore, *being curious*:

**Question**

* How does bash work?

As an _extra question_:

* Under what circumstances would the bash approach be mandatory?

**Observation**

In the answer it was indicated to expect an error message. Well, the error mentioned is correct (tested on Ubuntu and Fedora), but in my case the bash cat /etc/os-release command was applied as an argument to create a docker container based on Linux and no error was shown, which was the reason to create this post. Well, that is another story.
Manuel Jordan (2108 rep)
Aug 26, 2024, 03:04 PM • Last activity: Aug 26, 2024, 04:17 PM
6 votes
3 answers
3466 views
Why must I put the read command into a subshell when using a pipeline?
The command df . can show us which device we are on. For example:

me@ubuntu1804:~$ df .
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb1       61664044 8510340  49991644  15% /home

Now I want to get the string /dev/sdb1. I tried like this but it didn't work: df . | read a; read a b; echo "$a" - this command gave me an empty output. But df . | (read a; read a b; echo "$a") will work as expected. I'm kind of confused now. I know that (read a; read a b; echo "$a") is a subshell, but I don't know why I have to make a subshell here. As I understand it, x|y will redirect the output of x to the input of y. Why can't read a; read a b; echo $a get the input, but a subshell can?
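The key fact: each element of a bash pipeline runs in its own subshell, so df . | read a runs read in a child that sets a and immediately exits; only the (read a; read a b; echo "$a") grouping keeps all three commands in the one subshell that still holds the variable. Two common ways to avoid the issue entirely (a sketch; the awk field index assumes df's usual output layout):

```shell
# Capture the value instead of read-ing it in a pipeline element:
a=$(df . | awk 'NR==2 {print $1}')
echo "$a"    # e.g. /dev/sdb1

# Or, in bash scripts, run the last pipeline element in the current shell:
#   shopt -s lastpipe    # takes effect when job control is off, i.e. in scripts
```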
Yves (3401 rep)
Dec 1, 2020, 02:58 AM • Last activity: Jul 23, 2024, 06:23 PM
6 votes
2 answers
1294 views
Why is the subshell created by the background control operator (&) not displayed under pstree
I understand that when I run exit it terminates my current shell, because the exit command runs in the same shell. I also understand that when I run exit & the original shell will not terminate, because & ensures that the command is run in a sub-shell, so exit terminates that sub-shell and returns to the original shell. But what I do not understand is why commands with and without & look exactly the same under pstree, in this case sleep 10 and sleep 10 &.

4669 is the PID of the bash under which first sleep 10 and then sleep 10 & were issued, and the following output was obtained from another shell instance during this time:

# version without &
$ pstree 4669
bash(4669)───sleep(6345)

# version with &
$ pstree 4669
bash(4669)───sleep(6364)

Shouldn't the version with & contain one more spawned sub-shell (e.g. in this case with PID 5555), like this one?

bash(4669)───bash(5555)───sleep(6364)

PS: The following was omitted from the beginning of the pstree output for better readability:

systemd(1)───slim(1009)───ck-launch-sessi(1370)───openbox(1551)───/usr/bin/termin(4510)───bash(4518)───screen(4667)───screen(4668)───
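The missing level can be checked directly: for a simple command, & does not spawn an intermediate shell; bash forks once and the child execs sleep, so the background sleep's parent is the shell itself. A sketch (assuming a ps that supports -o ppid=):

```shell
sleep 10 &
ppid=$(ps -o ppid= -p "$!" | tr -d ' ')
echo "$ppid" "$$"    # the same PID twice: sleep is a direct child of this shell
```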
Wakan Tanka (825 rep)
Jan 20, 2016, 10:22 PM • Last activity: May 30, 2024, 06:33 AM
0 votes
1 answer
334 views
Is it possible to give an existing Flatpak application permission to run another Flatpak?
You can give a Flatpak permissions to access files/folders outside of its sandbox using the techniques described in this Ubuntu Stack Exchange QA. **But is it possible to give an existing Flatpak application permission to run another Flatpak application?** The catch here (which is a bit of a Catch-22) is that to run a Flatpak application, the flatpak executable needs to be run, and it is typically located in /bin or /usr/bin. And /bin is a reserved path, according to the Flatpak documentation. Thus, a Flatpak application cannot call flatpak itself unless it is stored somewhere atypical, which could, in theory, violate the effectiveness of its sandboxing functionality. So is it possible or impossible to give an existing Flatpak application permission to run another Flatpak application?
Amazon Dies In Darkness (281 rep)
May 9, 2024, 07:45 AM • Last activity: May 10, 2024, 11:11 AM
Showing page 1 of 20 total questions