
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
5033 views
Execute a shell script with sudo inside from PHP
I want to execute a command-line command as root from a PHP file. In my case I want to view the crontab file in my PHP page. Running the `sudo` command in PHP is not recommended, and on my Raspberry Pi with Raspberry Pi OS the PHP line `echo (shell_exec("sudo more /var/spool/cron/crontabs/pi"));` does not work. So I created a shell script, **crontab.sh**:

```bash
#!/bin/bash
echo "Start Crontab"
sudo more /var/spool/cron/crontabs/pi
echo "End Crontab"
```

and I created a PHP page, **crontab.php** (fragment):

```
Who am I: ".(exec("whoami")); echo "
"; echo (shell_exec("/var/www/html/cmd/crontab.sh")); ?>
```

When I execute crontab.sh on the command line, it seems to work, because I see the crontab text. When I load the PHP page, I do not see the crontab text. I only see:

```
Get Current User: pi
Who am I: www-data
Start Crontab
End Crontab
```

What can I do? I had a cron job that used rsync to copy the `crontab -e` file to a cron.txt file every hour. It worked, but I do not want to view an (old) copy.

Edit: My problem is that the task that starts with sudo gives zero output. In the comments I got the suggestion to use `sudo crontab -l`. That's better than the line I used, because it gives the root crontab and I just did not know of the `-l` option. But the problem is still there: there is zero output.
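One reproducible piece of this puzzle: `shell_exec()` only captures stdout, so anything the script's commands write to stderr (as a failing `sudo` typically does) silently disappears. A minimal sketch of the effect, using a hypothetical `/tmp/demo_stderr.sh` in which `ls` on a missing directory stands in for the failing `sudo` call:

```shell
# Hypothetical stand-in for crontab.sh: one command fails and writes to stderr.
cat > /tmp/demo_stderr.sh <<'EOF'
#!/bin/sh
echo "Start"
ls /nonexistent-dir-for-demo   # fails; its message goes to stderr only
echo "End"
EOF
chmod +x /tmp/demo_stderr.sh

echo "--- stderr discarded (roughly what shell_exec sees) ---"
/tmp/demo_stderr.sh 2>/dev/null
echo "--- stderr merged into stdout with 2>&1 ---"
/tmp/demo_stderr.sh 2>&1
```

Appending `2>&1` inside the `shell_exec()` string is a quick way to surface the real error message (often a sudo password/tty complaint) in the PHP page.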
MacQwerty (21 rep)
Apr 4, 2021, 01:43 PM • Last activity: May 8, 2025, 07:01 PM
0 votes
2 answers
489 views
Cron to find exec cp and redirect output with time / date per line?
I am trying to set up a `cron` task to run a set of basic shell commands. I need to find any files created in the last day, copy them to another folder, and generate a log stating, line by line, the date and time of each file copy operation. The two shell commands run separately, but I need to combine them into one and schedule it via cron. When I try to extend the first command (`find`), cron does not execute the task and gives errors. If I run it manually, it works.

```
find /dir/ -type f -mtime -1 -exec cp -v -a --parents "{}" /dir2/ \; >> /dir2/LogsCopiaDBs_$(date +%d-%m-%Y).txt

exec &> >(while read line; do echo "$(date +'%h %d %Hh%Mm%Ss') $line" >> /dir2/LogsCopiaDBs.txt; done;)
```

Any idea?
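The two pieces can be combined by piping `cp -v` output through a timestamping loop, so only that one command group needs scheduling. A sketch under assumed throwaway paths (`/tmp/findcp_src`, `/tmp/findcp_dst` and `/tmp/findcp.log` stand in for `/dir/`, `/dir2/` and the log):

```shell
mkdir -p /tmp/findcp_src /tmp/findcp_dst
touch /tmp/findcp_src/a.txt /tmp/findcp_src/b.txt

# cp -v prints one line per copied file; the loop prefixes each line
# with a timestamp before it reaches the log.
find /tmp/findcp_src -type f -mtime -1 -exec cp -v {} /tmp/findcp_dst \; |
while IFS= read -r line; do
    printf '%s %s\n' "$(date +'%h %d %Hh%Mm%Ss')" "$line"
done > /tmp/findcp.log

cat /tmp/findcp.log
```

When pasting a command like this into a crontab, remember that `%` is special there and must be escaped as `\%`.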
Edson S Freitas (1 rep)
Nov 13, 2015, 02:25 PM • Last activity: Mar 17, 2025, 07:28 AM
0 votes
1 answer
140 views
script appears to run with no progress
This script is to find a list of files stored in a text file and, if the files are found, copy them to a specific location. So far I have had success running the portions up to, but not including, the portion that actually copies the files. When I add the code to copy the file, starting with exec, the script no longer appears to work and makes no progress. I would like to understand what is locking this script up and how to make it work correctly. Thanks!

```bash
#!/bin/bash
#Find files from a list in a file and copy them to a common folder
mapfile -t filelist < filelist.txt
for file in "${filelist[@]}"; do
    xargs find ~ -name '${filelist[@]}' -exec mv -t ~/Document/foundfiles/ {} +;
done
```
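For reference, the loop above mixes `xargs` with a literal single-quoted `'${filelist[@]}'`, which never expands. A portable sketch of what the loop was presumably meant to do, using hypothetical `/tmp/flsearch` and `/tmp/flfound` directories in place of `~` and `~/Document/foundfiles` (and GNU `cp -t`):

```shell
root=/tmp/flsearch; dest=/tmp/flfound
rm -rf "$root" "$dest"; mkdir -p "$root/sub" "$dest"
touch "$root/sub/wanted.txt" "$root/other.log"
printf 'wanted.txt\n' > /tmp/fl_list.txt

# One find per listed name; the name is double-quoted so it expands,
# and cp (not mv) copies the matches to the destination.
while IFS= read -r name; do
    find "$root" -name "$name" -exec cp -t "$dest" {} +
done < /tmp/fl_list.txt

ls "$dest"
```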
user289380
May 8, 2018, 12:43 AM • Last activity: Mar 17, 2025, 07:24 AM
3 votes
1 answer
187 views
exec cp fails from script, yet works when issued directly
I have a script that copies SQL backups to a Windows server. Here's the line from /etc/fstab:

```
//my.win.box/share$ /winshare cifs credentials=/etc/credfile,dom=mydomain,uid=0,gid=0,file_mode=0600,dir_mode=0700 0 0
```

Here's the backup script, backup.sh:

```
# copy zipped sql exports to /winshare/db
find /backups/sql/*sql.gz -mtime +1 -exec cp {} /winshare/db \;
```

Logged in with root privileges (in this case, as root):

```
$ ./backup.sh
cp: cannot create regular file `/winshare/db/mydb_20130301.sql.gz': Permission denied
```

Yet if I issue the command from a prompt, rather than through the script:

```
$ find /backups/sql/*sql.gz -mtime +1 -exec cp {} /winshare/db \;
```

the file(s) are copied as expected. Again, logged in as root here. What could be causing the in-script command to fail, yet the identical command to work from the console?
a coder (3343 rep)
Apr 23, 2013, 07:55 PM • Last activity: Mar 14, 2025, 03:38 PM
4 votes
2 answers
1664 views
'exec fish' at the very bottom of my '.zshrc' - is it possible to bypass it?
I prefer to use the fish shell on macOS most of the time, but I refrain from making it the login shell, because I think it may sometimes cause problems. So I simply added `exec fish` at the very bottom of my `.zshrc`, and that's it. This means my login shell is still zsh and my `$PATH` modifications are still in `.zprofile`, but when I open the terminal, I don't have to switch to fish manually. This works fine for me. But then, what if I nevertheless want to switch from fish to zsh for some specific reason? Is it possible somehow? (Because if I simply type `zsh` and press Return, I will immediately be switched back to fish, of course.)

**Update:** There is a workaround using `zsh -c`, but it works with some commands only.
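One common pattern is to guard the `exec` with an environment variable, so that e.g. `NO_FISH=1 zsh` (the variable name is made up here) starts a plain zsh. Since fish may not be installed wherever this is read, the sketch below uses `echo` as a stand-in for the real `exec fish` line:

```shell
# In .zshrc the guarded line would be `exec fish`; echo stands in here.
maybe_switch() {
    if [ -z "$NO_FISH" ]; then
        echo "would exec fish"
    else
        echo "staying in zsh"
    fi
}
maybe_switch            # default: switch to fish
NO_FISH=1 maybe_switch  # escape hatch: stay in zsh
```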
jsx97 (1347 rep)
Dec 16, 2024, 11:22 AM • Last activity: Dec 16, 2024, 06:45 PM
2 votes
0 answers
47 views
In the context of {ARG_MAX}, how robust is `find . -exec sh -c 'exec tool "$@" extra-arg' find-sh {} +`?
Suppose I want to do this:

```
find . -exec tool {} extra-arg +
```

It doesn't work and I know why: `-exec … {} +` does not allow `extra-arg`(s) between `{}` and `+`. So be it. It seems I can inject the `extra-arg` by using a shell, like this:

```
find . -exec sh -c 'exec tool "$@" extra-arg' find-sh {} +
```

My question is: **how robust is this method?** I mean in the context of `{ARG_MAX}` and the possible "argument list too long" error. I know `find … -exec … {} +` is supposed to group pathnames in sets to avoid the error. At first glance the command from the expansion of `tool "$@" extra-arg` should be shorter than what `find` executes, so if `find` manages to avoid "argument list too long" then `exec tool …` will avoid it as well; but:

1. I'm not sure if the relevant structure will contain `tool` or `/full/absolute/path/to/tool`, which may be long and thus possibly make the structure exceed `{ARG_MAX}`, despite the effort to not exceed `{ARG_MAX}` taken by `find` earlier.
2. I'm not sure if `sh` may add something to the relevant structure and thus possibly make the structure exceed `{ARG_MAX}`, despite the effort to not exceed `{ARG_MAX}` taken by `find` earlier.

**Assuming that `find` does a good job avoiding "argument list too long", can I assume my `exec tool …` invoked in the inner shell will avoid the error as well?**

Clarification:

- Yes, I'm asking about my attempted solution, like in the XY problem; but I'm *deliberately* asking about my attempted solution because I want to understand its possible flaws. I know I can use `-exec tool {} extra-arg \;` and avoid the problem. I know I can pipe (preferably null-terminated strings) to `xargs` and let `xargs` deal with `{ARG_MAX}`. If my attempted solution is not robust and some improvement can make it robust, then I'm interested, but only if `-exec … {} +` is kept.
- I know that some implementations of `find` do not try to get as close to `{ARG_MAX}` as possible, but according to the POSIX specification `find` does not have to leave any room we could take advantage of later. I do not have any particular implementation of `find` in mind. In this matter I will appreciate generic answers or answers that compare many implementations.
- In the context of (1), the question is specifically about Linux. If a good answer mentions or compares to other Unices then fine, I don't mind extending the scope this way; but I do not require this.
- In the context of (2), I do not have any particular implementation of `sh` in mind.
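For intuition about the batching, each `+`-terminated group becomes exactly one inner-shell invocation, which is visible by printing `$#`. A small sketch with a throwaway `/tmp/argbatch` directory (and `: tool …` as a placeholder for the real command):

```shell
dir=/tmp/argbatch
rm -rf "$dir"; mkdir -p "$dir"
touch "$dir/a" "$dir/b" "$dir/c"

# All three paths fit in one batch, so sh runs once with $# = 3;
# extra-arg is appended once per batch, not once per file.
find "$dir" -type f -exec sh -c 'echo "batch of $# file(s)"; : tool "$@" extra-arg' find-sh {} +
```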
Kamil Maciorowski (24294 rep)
Nov 28, 2024, 01:45 PM
2 votes
1 answer
116 views
Why does exec in bash script run by cron not preserve $PATH?
I have the following cron job setup on Debian 12:

`/etc/cron.d/jonathan-test`:

```
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

* * * * * jonathan /home/jonathan/test1.sh >> /home/jonathan/test.log 2>&1
```

`/home/jonathan/test1.sh`:

```bash
#!/usr/bin/env bash

export PATH="/home/jonathan/mytestdir:${PATH}"
echo "test1.sh -> PATH=${PATH}"
export PAAATH="this_is_a_test"
echo "test1.sh -> PAAATH=${PAAATH}"
exec "${HOME}/test2.sh"
```

`/home/jonathan/test2.sh`:

```bash
#!/usr/bin/env bash

echo "test2.sh -> PATH=${PATH}"
echo "test2.sh -> PAAATH=${PAAATH}"
```

When it runs, it writes the following to `/home/jonathan/test.log`:

```
test1.sh -> PATH=/home/jonathan/mytestdir:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test1.sh -> PAAATH=this_is_a_test
test2.sh -> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test2.sh -> PAAATH=this_is_a_test
```

As you can see, the `$PATH` variable is not preserved by `exec`. This is a contrived, simplified example of my actual problem, which is with running [pyenv](https://github.com/pyenv/pyenv) from a cron job. If I change my cron.d file to this:

```
SHELL=/bin/bash
PYENV_ROOT=/opt/pyenv
PATH=/opt/pyenv/shims:/opt/pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

* * * * * jonathan python --version >> /home/jonathan/test.log 2>&1
```

then I get this written to the output file:

```
/opt/pyenv/libexec/pyenv-exec: line 24: pyenv-version-name: command not found
```

It correctly executes `/opt/pyenv/shims/python`, which is just a bash script that runs `pyenv exec python --version`. It correctly executes `/opt/pyenv/bin/pyenv`, which is a symlink to `/opt/pyenv/libexec/pyenv`, which is a bash script that modifies `$PATH` to include `/opt/pyenv/libexec` (and yes, it does export it!) and executes `/opt/pyenv/libexec/pyenv-exec`, which is another bash script that tries to do `PYENV_VERSION="$(pyenv-version-name)"` on line 24, resulting in the error above, because `/opt/pyenv/libexec` isn't in `$PATH`. I've narrowed it down to the simplified example above. The exact same pyenv setup, with only environment variables and without the shell integration, works just fine when not run from cron. FYI, there's no sudo anywhere in this, and I can reproduce it as other users too, so it doesn't seem to be related to `secure_path` in `/etc/sudoers`.
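As a baseline, `exec` outside of cron does hand the exported environment, `PATH` included, to the new program unchanged, which is what makes the log above surprising. A quick sanity check, with a hypothetical `/tmp/inner_path.sh` playing the role of test2.sh:

```shell
cat > /tmp/inner_path.sh <<'EOF'
#!/bin/sh
echo "inner PATH=$PATH"
EOF
chmod +x /tmp/inner_path.sh

# The exec'd script sees the modified, exported PATH.
sh -c 'export PATH="/tmp/mytestdir:$PATH"; exec /tmp/inner_path.sh' | grep mytestdir
```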
jnrbsn (189 rep)
Nov 25, 2024, 01:26 AM • Last activity: Nov 27, 2024, 03:30 PM
0 votes
1 answer
47 views
Are reads of /proc/pid/environ atomic in Linux 6.x (e.g: 6.1.99)?
When a process execs, looking at the kernel code for `environ_read()`, it seems that if the `mm_struct` doesn't yet exist / is null, or the `env_end` member of that `mm_struct` is null, `environ_read()` will return 0 ~immediately. My question is: are there protections WRT fork/exec races such that (pseudo-code ahead)

```
if ((pid = fork) != 0)
    execv*("/bin/exe", {"exe", "-l"}, &envp)
read("/proc/${pid}/environ")
```

cannot:

**A:** erroneously read a zero-length env due to races between execve and the subsequent read (e.g. assuming the user-space program that issues the read is multi-threaded or performing asynchronous IO)

**B:** erroneously read a partial env (assuming user-space code is not causing a short read due to a bug in user-space code)

**C:** erroneously read the parent's env

Are reads from /proc/pid/environ atomic?
Gregg Leventhal (109 rep)
Nov 3, 2024, 02:51 PM • Last activity: Nov 19, 2024, 02:23 PM
1 vote
1 answer
156 views
How to load android binaries in Debian environment?
I am trying to run `adbd` from within a chrooted environment. I can run it fine with Android's `LD_LIBRARY_PATH=$PWD ./linker64 $PWD/adbd`. When I try to run `./adbd` I get: `bash: ./adbd: cannot execute: required file not found`. Running with `strace $PWD/adbd` returns:

```
execve("/root/adbd", ["/root/adbd"], 0x7fcfe8dfd0 /* 8 vars */) = -1 ENOENT (No such file or directory)
strace: exec: No such file or directory
+++ exited with 1 +++
```

What dynamic linker is missing, and on which path?
Bret Joseph (491 rep)
Jul 4, 2023, 03:44 PM • Last activity: Jul 26, 2024, 08:59 PM
0 votes
0 answers
74 views
Understanding AFL behaviour for fork and execv; Why `/proc/<pid>/maps` does not show the loaded binary
TL;DR: Why does the process map in `/proc/<pid>/maps` not show where the executed binary is loaded?

I am trying to do some post-mortem analysis of the fuzzed program once it finishes. Basically what I am seeing is that `/proc/<pid>/maps` shows what looks to be the memory map of the parent instead of the child. I was unable to replicate the behavior on a smaller scale, but I'll provide a patch to [github.com/google/AFL](https://github.com/google/AFL). (Add an empty row after the last `else` row if the patch fails to apply.)

```diff
diff --git a/afl-fuzz.c b/afl-fuzz.c
index 46a216c..e31125f 100644
--- a/afl-fuzz.c
+++ b/afl-fuzz.c
@@ -2283,6 +2283,14 @@ EXP_ST void init_forkserver(char** argv) {

 }

+static void read_map(int pid, char *map) {
+    FILE *proc;
+    char path;
+    sprintf(path, "/proc/%d/maps", pid);
+    proc = fopen(path, "r");
+    fread(map, 4096, 1, proc);
+    fclose(proc);
+}

 /* Execute target application, monitoring for timeouts. Return status
    information. The called program will update trace_bits[]. */
@@ -2423,7 +2431,14 @@ static u8 run_target(char** argv, u32 timeout) {

   if (dumb_mode == 1 || no_forkserver) {

-    if (waitpid(child_pid, &status, 0)
```

To reproduce:

```
input/test1
./afl-fuzz -i input -o output -n -- /bin/ls
```

What you'll probably see is the map for afl-fuzz. The patch reads `/proc/<pid>/maps` while waiting for the execv to finish; once it's finished, it prints the last map that was read. I'm curious what I am missing in my reading of the map, or what else triggers this. (Also, I don't think this is about AFL itself; it's just where I've seen it. It would be nice to see a smaller program that replicates the behavior, since I was unable to replicate it myself.)
sorin the turtle (1 rep)
May 19, 2024, 05:31 PM • Last activity: May 22, 2024, 09:37 AM
2 votes
1 answer
267 views
How to pass a parameter to a command that's being executed by exec in bash?
I'm trying to start a subprocess with a unique process name from a bash script. This works:

```
bash -c "exec -a MyUniqueProcessName './start_service' &"
```

but my problem is that I want to pass a parameter to `start_service`. If I do something like

```
bash -c "exec -a MyUniqueProcessName './start_service param' &"
```

or

```
bash -c "exec -a MyUniqueProcessName './start_service $myvar' &"
```

then it fails, complaining about

```
/start_service param: No such file or directory
```

What am I doing wrong?
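The quotes make `./start_service param` a single word, so `exec` looks for a file literally named `start_service param`. Passing the argument as a separate word fixes it. A runnable sketch with a hypothetical `/tmp/start_service_demo.sh` standing in for `start_service`:

```shell
cat > /tmp/start_service_demo.sh <<'EOF'
#!/bin/sh
echo "arg1=$1"
EOF
chmod +x /tmp/start_service_demo.sh

# Broken: the whole quoted string is looked up as one file name.
bash -c "exec -a MyUniqueProcessName '/tmp/start_service_demo.sh param'" 2>/dev/null \
    || echo "single-word lookup failed as expected"

# Fixed: command and parameter are separate words.
bash -c "exec -a MyUniqueProcessName /tmp/start_service_demo.sh param"
```

With a variable, keep the command name and the quoted expansion as separate words inside the outer quotes, e.g. `'./start_service' \"$myvar\"`.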
Adam Arold (133 rep)
Mar 20, 2024, 01:36 PM • Last activity: Mar 20, 2024, 04:44 PM
4 votes
1 answer
225 views
POSIX Shell: `exec` with changed arg0
I want to exec a program and control its arguments, including arg0, and its environment. Using C I could go for execve. Can I do this in POSIX shell?
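POSIX sh itself offers no way to set argv[0], but the `-a` option to `exec` in bash, ksh93 and zsh does, and the environment can be controlled with exported variables or `env`. A Linux-only sketch, reading `/proc/self/cmdline` to show the result:

```shell
# cat is exec'd with argv[0] set to "customname"; cmdline is NUL-separated.
bash -c 'exec -a customname cat /proc/self/cmdline' | tr '\0' ' '
echo
```

For the environment alone, `env -i VAR=value cmd` gives full control without any shell extension.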
Patrick B&#246;ker (563 rep)
Mar 15, 2024, 09:45 PM • Last activity: Mar 16, 2024, 10:55 AM
0 votes
1 answer
298 views
How to perform strace on shell without changing your current shell?
I use `strace` to trace the behavior of a `bash` process. The purpose is to find out the order in which `bash` loads its configuration files. I am running the following command under `zsh`:

```sh
strace -e openat bash
```

After running this command, I end up in a new `bash` shell, but I don't want that to happen. Is there any way to trace the `bash` command without actually starting a new interactive bash shell? I searched online but couldn't find anything. I was trying this with `exec`: `strace -e openat "$(exec bash)" 2>&1`, but my shell still changes from zsh to bash.
Visrut (137 rep)
Jan 23, 2024, 11:41 AM • Last activity: Jan 23, 2024, 01:08 PM
16 votes
1 answer
1290 views
Is the behavior of bash -c "<single command>" documented?
It's quite known that when running `bash -c "COMMAND"` (at least in the common versions in Linux), when there's a single command without any [metacharacter](https://www.gnu.org/software/bash/manual/bash.html#index-metacharacter) (except for _space_, _tab_ or _newline_), the `bash -c` process will not fork, but rather replace itself by executing COMMAND directly with the execve system call as an optimization, so the result will be only one process.

```shellsession
$ pid=$$; bash -c "pstree -p $pid"
bash(5285)───pstree(14314)
```

If there's any metacharacter (such as a redirection) or more than one command (which requires a metacharacter anyway), bash will fork for every command it executes.

```shellsession
$ pid=$$; bash -c ":; pstree -p $pid"
bash(5285)───bash(28769)───pstree(28770)

$ pid=$$; bash -c "pstree -p $pid 2>/dev/null"
bash(5285)───bash(14403)───pstree(14404)
```

Is this an undocumented optimization feature (which means it's not guaranteed), or is it documented somewhere and guaranteed?

---

_Note:_ I assume that not all versions of bash behave like that, and that on some versions that do, it's just considered an implementation detail and not guaranteed; but I wonder if maybe there are at least some bash versions that explicitly support this and document the condition for it. For instance, if there's a single `;` character after the command, without any second command, bash will still execve without forking.

```shellsession
$ pid=$$; bash -c "pstree -p $pid ; "
bash(17516)───pstree(17658)
```

### Background to my question

As I mentioned, this behavior is quite well known [1][2] by experienced bash users, and I've been familiar with it for a long time. Some days ago I encountered the following comment to [Interactive bash shell: Set working directory via commandline options](https://unix.stackexchange.com/questions/766111/interactive-bash-shell-set-working-directory-via-commandline-options/766113#comment1461163_766113) where @dave_thompson_085 wrote:

> bash automatically execs (i.e. replaces itself with) the last (or only) command in -c.

I responded that it's only true if there's a single command. But then I wondered: are there some versions of bash where maybe the last command **is** execed and not forked, even if there's another command before it? And in general, are there cases where this behavior is guaranteed? Do certain bash versions expose (and elaborate on) this feature outside of the source code?

### Additional references

* [1] [Why is there no apparent clone or fork in simple bash command and how it's done?](https://unix.stackexchange.com/questions/466496/why-is-there-no-apparent-clone-or-fork-in-simple-bash-command-and-how-its-done)
* [2] [Why bash does not spawn a subshell for simple commands?](https://unix.stackexchange.com/questions/401020/why-bash-does-not-spawn-a-subshell-for-simple-commands)
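The optimization can also be probed without pstree: if `bash -c` execs its single command, that command's parent is the caller itself, while a preceding command leaves an intermediate bash in between. A sketch relying on the common-bash behavior described above (so the first branch is not guaranteed on every version):

```shell
mypid=$$
bash -c 'sh -c "echo \$PPID"'    > /tmp/ppid_single   # one simple command
bash -c ':; sh -c "echo \$PPID"' > /tmp/ppid_double   # preceded by ':'

if [ "$(cat /tmp/ppid_single)" = "$mypid" ]; then
    echo "single command: bash -c replaced itself (child's parent is the caller)"
fi
if [ "$(cat /tmp/ppid_double)" != "$mypid" ]; then
    echo "two commands: an intermediate bash remained"
fi
```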
aviro (6925 rep)
Jan 10, 2024, 10:43 AM • Last activity: Jan 11, 2024, 08:01 AM
0 votes
2 answers
98 views
Creating an option for 'exec > >(tee -ia $OUT)' to skip stdout
I'm trying to **modify** a script that uses the following:

```bash
# first portion of script
# ...
exec > >(tee -ia $OUT)
# ...
# second portion of script
```

The problem I have with this script is that it produces voluminous output to stdout (my terminal). The script author included no options for eliminating the terminal output. I would like to add an ***option*** that removes the stdout output and puts it solely in the file `$OUT`. Here's what I've tried:

```bash
TERM_OPT="OFF"

# first portion of script
# ...

if [ $TERM_OPT != "OFF" ]; then
     exec > >(tee -ia $OUT)
else {

# ...
# second portion of script 

} > $OUT
fi
```

This seems to work, but I'm unsure about the use of *curly braces* `{}` in this context, as the [GNU section on Grouping Commands](https://www.gnu.org/software/bash/manual/html_node/Command-Grouping.html) (*seems to*) state that a semicolon `;` is required following the list. But adding a `;` or leaving it off seems to make no difference. I've wondered if I should use parentheses `()` instead of curly braces, but this causes everything inside the `()` to execute in a subshell. I'm not particularly keen on that, as it's someone else's script and the *subshell* implications are unclear to me (I didn't try this). The other thing I tried *seemed like* a hack, but I read of others using it, and it seems to work OK also:

```bash
TERM_OPT="OFF"

# first portion of script
# ...

if [ $TERM_OPT != "OFF" ]; then
     exec > >(tee -ia $OUT)
else
     exec > >(tee -ia $OUT 1> /dev/null)
fi

# ...
# second portion of script
```

I like this as it seems more *self-contained*, but that's not much of a consideration AFAICT.

**So the question is:** What's the correct way to do this? By that, I mean what's the correct way to **opt out** of the terminal output after an `exec > >(tee -ia $OUT)`? Is one of these solutions preferable to the other, or do I need to do something completely different?
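One tidy shape for the option is to branch only on the redirection itself: `tee` when terminal output is wanted, a plain append redirection when it is not. A sketch following the question's `TERM_OPT`/`OUT` names (the `tee` branch is shown as a comment so the sketch also parses under plain sh):

```shell
OUT=/tmp/termopt.log
: > "$OUT"
TERM_OPT="OFF"

exec 3>&1                    # keep a handle on the original stdout
if [ "$TERM_OPT" != "OFF" ]; then
    :   # terminal + file, as in the original: exec > >(tee -ia "$OUT")
else
    exec >> "$OUT"           # file only; no tee process needed
fi

echo "logged line"           # stands in for the second portion of the script

exec 1>&3 3>&-               # restore the terminal
cat "$OUT"
```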
Seamus (3772 rep)
Jan 8, 2024, 07:26 AM • Last activity: Jan 8, 2024, 08:20 AM
1 vote
2 answers
1129 views
Redirecting stderr to temporary fd in a Bash script
I see that many questions have been asked and answered on SE about redirections in Bash using *exec*, but none seem to answer my question. What I'm trying to accomplish is to redirect all output to *stderr* in a script to a temporary fd and restore it to *stderr* only in case of non-successful script termination. The ability to restore the contents of *stderr* is needed in case of a non-successful termination, for returning information about the error to the caller. Redirecting *stderr* to `/dev/null` would discard both irrelevant noise and useful information. As much as possible, explicit temporary files should be avoided, as they add a number of other issues (see https://dev.to/philgibbs/avoiding-temporary-files-in-shell-scripts). The contents of *stdout* should be returned transparently to the caller. It will be ignored by the caller in case of non-successful termination of the script, but the script shouldn't need to care about that. The reason for doing this is that the script is called from another program that considers any output to *stderr* an error, instead of testing for a non-zero exit code. In the example below, *openssl* echoes a progress indicator to *stderr* which does not indicate an error and can be ignored upon successful completion of the command. Below is a simplified version of my script (note that *openssl* here is only a placeholder for arbitrary and possibly more complex instructions):

```bash
#!/bin/bash
set -euo pipefail

# Redirect stderr to temp fd 3
exec 3>&2

# In case of error, copy contents of fd 3 to stderr
trap 'cat &2' ERR

# Upon exit, restore original stderr and close fd 3
trap 'exec 2>&3 3>&-' EXIT

# This nonetheless outputs the progress indicator to stderr
openssl genpkey -algorithm RSA
```

But I'm obviously missing something, as the script keeps printing out the contents of *stderr* upon successful termination, but now blocks upon failure. I guess I'm confused about how redirections with *exec* work. What am I getting wrong?
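A variant that side-steps the fd bookkeeping entirely is to capture stderr into a variable with an fd swap and replay it only on failure, while stdout passes through untouched. A hedged sketch (`run_quiet` is a made-up helper name):

```shell
run_quiet() {
    # 2>&1 captures stderr into the variable; 1>&3 routes stdout past the capture.
    { err=$("$@" 2>&1 1>&3); status=$?; } 3>&1
    if [ "$status" -ne 0 ]; then
        printf '%s\n' "$err" >&2   # replay the buffered stderr on failure
    fi
    return "$status"
}

run_quiet sh -c 'echo result; echo progress-noise >&2; exit 0'   # noise dropped
run_quiet sh -c 'echo fatal-error >&2; exit 1' 2>/tmp/runquiet.err || true
cat /tmp/runquiet.err
```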
mesr (429 rep)
Nov 17, 2022, 03:41 PM • Last activity: Dec 9, 2023, 03:30 PM
1 vote
4 answers
10497 views
After exec() in the ls command: does the parent process print the output to the console, or the child?
I have a simple doubt about the execution of the command `ls`. From the research I have done on the internet, I understood the points below.

1. When we type the `ls` command, the shell interprets that command.
2. Then the shell process forks, creating the child process, and the parent (shell) executes the wait() system call, effectively putting itself to sleep until the child exits.
3. The child process inherits all the open file descriptors and the environment.
4. The child process (still running the shell) executes an exec() of the ls program, causing the ls binary to be loaded from the disk (filesystem) and executed in the same process.
5. When the ls program runs to completion, it calls exit(), and the kernel sends a signal to its parent indicating that the child has terminated.

My doubt starts from here: as soon as ls finishes its tasks, does it send the result back to the parent process, or does it display the output to the screen? If it sends the output back to the parent, is it using pipe() implicitly?
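Point 3 already contains the answer: the child writes straight to the stdout file descriptor it inherited, so nothing is sent back to the parent and no implicit pipe() exists. A shell-level illustration: parent and exec'd child land in the same redirect target because they share the inherited fd:

```shell
{
    echo "parent writes here"
    sh -c 'echo "child writes to the same inherited fd"'   # stand-in for ls
} > /tmp/sharedfd.txt
cat /tmp/sharedfd.txt
```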
Subi Suresh (513 rep)
Mar 14, 2013, 06:28 PM • Last activity: Nov 28, 2023, 04:46 PM
0 votes
1 answer
81 views
How to execute a subshell directly
I have this:

```bash
timeout 25 bash -c '
for i in {1..9}; do
  if read line < "$my_fifo"; then
    if test "$line" != "0"; then
      exit 0
    fi
  fi
done
'
```

I don't really like that bash can't do this:

```
timeout 25 (...)
```

I don't understand why `()` is not considered a program by itself, just an anonymous program... anyway, my goal is to achieve the above without the need to use `bash -c 'xyz'`, since I won't get syntax highlighting inside the quotes, etc. Is there a workaround for this?
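If the goal is just a time-limited body without `bash -c`, one workaround is a background job plus a watchdog that kills it, which keeps the body as ordinary highlighted code. A sketch with shortened times (`sleep 5` stands in for the fifo-reading loop, a 1-second watchdog stands in for the 25 seconds):

```shell
work() { sleep 5; }              # placeholder for the fifo-reading loop
work &
pid=$!
( sleep 1; kill "$pid" 2>/dev/null ) &   # the watchdog

wait "$pid"; status=$?
echo "status=$status" > /tmp/watchdog.status   # non-zero: timed out
cat /tmp/watchdog.status
```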
Alexander Mills (10734 rep)
Nov 22, 2023, 04:00 AM • Last activity: Nov 22, 2023, 05:11 AM
3 votes
1 answer
264 views
How to stop redirection via exec
I want to redirect some output of a script to a file.

```bash
REDIRECT_FILE="foo.txt"

echo to console

# starting redirect
exec > $REDIRECT_FILE

echo to file
echo ...
echo ...

# stop redirect
exec >&-

echo end of script
```

"end of script" should be written to stdout. But instead there is a "Bad file descriptor" error message. `exec 1>&-` does not work either. What is the correct way to stop the redirection?
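Closing stdout with `exec >&-` leaves the script with no fd 1 at all, hence the "Bad file descriptor". The usual fix is to save the original stdout on a spare fd first and restore it rather than close it (a hypothetical `/tmp/redir.out` stands in for `$REDIRECT_FILE`):

```shell
exec 3>&1                 # save the original stdout on fd 3
exec > /tmp/redir.out     # start the redirect

echo "to file"

exec 1>&3 3>&-            # restore stdout, then drop the spare copy
echo "end of script"      # reaches the terminal again
```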
Andy A. (227 rep)
Nov 20, 2023, 01:34 PM • Last activity: Nov 20, 2023, 02:51 PM
0 votes
3 answers
3702 views
Parent process always printing output after child
Consider the following code running under Solaris 11.3:

```c
int main(void) {
    pid_t pid = fork();
    if (pid > 0) {
        printf("[%ld]: Writing from parent process\n", getpid());
    }
    if (pid == 0) {
        execl("/usr/bin/cat", "/usr/bin/cat", "file.c", (char *) 0);
        perror("exec failed");
        exit(1);
    }
}
```

Whenever I run it, the "Writing from parent" line is always output last. I would not be surprised by that result if my school task weren't to use wait(2) in order to print that line only after the child process has finished. Why does this happen, and how do I ensure that this line is printed before the child process executes cat (or that the order is at least undefined), so I can safely use wait(2) or waitpid(2) to tackle that?
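Whatever the unsynchronized order turns out to be, the shell analog of wait(2) pins the parent's message after the child's output, which is the ordering the school task asks for. Sketch:

```shell
sh -c 'sleep 1; echo "child output (cat stand-in)"' &
pid=$!

wait "$pid"                         # shell analog of wait(2)/waitpid(2)
echo "parent: after the child finished"
```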
Dmitry Serov (103 rep)
Mar 9, 2016, 05:21 PM • Last activity: Nov 20, 2023, 02:48 PM