
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
1 answer
153 views
What is a parked thread in Linux kernel?
What is a parked thread in the context of the Linux kernel, i.e. a thread that is in the `TASK_PARKED` state? How does this state differ from `TASK_INTERRUPTIBLE` and `TASK_UNINTERRUPTIBLE`? From which state can a thread be woken faster, in general, and in the particular case of using `kthread_parkme` / `kthread_unpark` for waiting instead of `[s]wait_event_...` / `[s]wake_up_...`? I know that waitqueues support multiple waiters, but I am interested only in a single sleeper/waker pair.
Andrey Pro (179 rep)
Apr 19, 2025, 01:34 PM • Last activity: Apr 23, 2025, 11:24 AM
3 votes
2 answers
589 views
Linux wait command returning for finished jobs
I came across some weird behavior when using the wait command for running parallel jobs in a bash script. For the sake of simplicity I have reduced the problem to the following bash script:
```bash
#!/bin/bash

test_func() {
  echo "$(date +%M:%S:%N): start $1"
  sleep $1
  echo "$(date +%M:%S:%N): end $1"
}

i=0
for j in {5..9}; do
  test_func $j &
  ((i++))
  sleep 3
done
echo "$(date +%M:%S:%N): No new processes, waiting for all to finish"
while [ $(pgrep -c -P$$) -ge 1 ]; do
  echo "$(date +%M:%S:%N): $(pgrep -P$$ -d' ')"
  wait -n $(pgrep -P$$ -d' ')
  echo "$(date +%M:%S:%N): next $i"
  ((i++))
done
```
The above script spawns 5 parallel runs of `test_func`, each of which sleeps for `j` seconds. I've added timestamps to each line of output to show the timing. The output of running this script is as follows:
```
03:53:854843895: start 5
03:56:855729952: start 6
03:58:856136029: end 5
03:59:856388725: start 7
04:02:857016376: end 6
04:02:857508665: start 8
04:05:857895265: start 9
04:06:857738397: end 7
04:08:858666941: No new processes, waiting for all to finish
04:08:864528182: 3837265 3837297
04:08:875479745: next 5
04:08:881049792: 3837265 3837297
04:08:892058494: next 6
04:08:899310728: 3837265 3837297
04:08:910466324: next 7
04:08:916130505: 3837265 3837297
04:10:858746305: end 8
04:10:859380011: next 8
04:10:864975972: 3837297
04:14:859172632: end 9
04:14:859818377: next 9
```
As can be seen from the output above, the script spawns all 5 processes, of which 3 end before the end of the for loop (due to the `sleep 3`). At this point there are 2 processes still running, which are given correctly by the `pgrep` command with IDs 3837265 and 3837297. However, the `wait` command in the while loop then immediately returns (< 0.1 seconds) for the next three calls, without any other process finishing (as shown by the `pgrep` output), even though I give it the process IDs to wait for.

As far as I can tell (and from some experimentation), the `wait` command immediately returns once for each of the `test_func` calls that finished before it was first called (three times in this case), before actually waiting. What I don't understand is why this is the case, especially since I supply the process IDs to wait for. For context, I'm using Ubuntu 20.04.6 and GNU bash, version 5.0.17(1).
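A minimal sketch of an alternative that sidesteps the observed behavior, assuming it is enough to wait for each child in turn: record each PID at launch and pass it to plain `wait`, which blocks until that specific child exits (or returns its saved status at once if it already has):

```bash
#!/bin/bash
# a sketch: record each child's PID and wait on each one explicitly
pids=()
for j in {5..9}; do
    sleep "$j" &
    pids+=("$!")
done
for pid in "${pids[@]}"; do
    wait "$pid"                      # blocks until this particular child exits
    echo "child $pid finished ($?)"  # $? is that child's exit status
done
```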
Dylan Callaghan (55 rep)
Sep 20, 2024, 01:06 PM • Last activity: Sep 20, 2024, 09:04 PM
7 votes
2 answers
2208 views
What determines whether a script's background processes get a terminal's SIGINT signal?
```bash
#!/usr/bin/env bash

sleep 3 && echo '123' &
sleep 3 && echo '456' &
sleep 3 && echo '999' &
```
If I run this and send SIGINT by pressing Ctrl-C in the terminal, the script still echoes the `123`... output. I assumed this is because the jobs are somehow detached? However, if I add a `wait < <(jobs -p)` (wait for all background jobs to finish) to the end of the script, run it, and send SIGINT, then the `123`... output is **not** displayed. What explains this behavior? Is `wait` somehow intercepting the signal and passing it to the background processes? Or is it to do with some sort of state of whether a process is "connected" to a terminal or not? I found one possibly relevant question on this, but I couldn't figure out how it relates to the above behaviour: https://unix.stackexchange.com/questions/564726/why-sigchld-signal-was-not-ignored-when-using-wait-functions
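One detail worth verifying here (a Linux-specific sketch, since it reads /proc): when job control is off, as it is in a script, POSIX requires asynchronous commands to start with SIGINT ignored, which is visible in the child's ignored-signal mask:

```bash
#!/usr/bin/env bash
# a sketch: bit 0x2 of SigIgn covers signal 2 (SIGINT); it should be set
# for a background job started by a non-interactive shell
sleep 30 &
grep SigIgn "/proc/$!/status"
kill "$!"
```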
Chris Stryczynski (6603 rep)
Jun 21, 2020, 10:18 AM • Last activity: Jun 30, 2024, 11:24 PM
6 votes
3 answers
12939 views
What is the relation between SIGCHLD and `waitpid()` or`wait()`?
If I am correct, a process waits for its children to terminate or stop by calling the `waitpid()` or `wait()` function. What is the relation between the SIGCHLD signal and the `waitpid()` or `wait()` functions?

- Is it correct that when a process calls `waitpid()` or `wait()`, it suspends itself until a child process terminates/stops, which is the same as until a SIGCHLD signal is sent to it? (`pause()` suspends the current process until any signal is sent to it, so I wonder if `waitpid()` is similar, except that it resumes only when SIGCHLD is sent?)
- When SIGCHLD is sent to a process which has been suspended by calling `waitpid()`, what is the order between executing the SIGCHLD handler and resuming from the suspension in `waitpid()`? (In the following example from Computer Systems: A Programmer's Perspective, the SIGCHLD handler calls `waitpid()`.)

Thanks.
```c
/* Capitalized helpers (Fork, Sio_puts, Sio_error, Sleep, unix_error) and
   MAXBUF come from the book's csapp.h wrappers. */
#include "csapp.h"

void handler(int sig)
{
  int olderrno = errno;

  while (waitpid(-1, NULL, 0) > 0) {
    Sio_puts("Handler reaped child\n");
  }
  if (errno != ECHILD)
    Sio_error("waitpid error");
  Sleep(1);
  errno = olderrno;
}

int main()
{
  int i, n;
  char buf[MAXBUF];

  if (signal(SIGCHLD, handler) == SIG_ERR)
    unix_error("signal error");

  /* Parent creates children */
  for (i = 0; i < 3; i++) {
    if (Fork() == 0) {
      printf("Hello from child %d\n", (int)getpid());
      exit(0);
    }
  }

  /* Parent waits for terminal input and then processes it */
  if ((n = read(STDIN_FILENO, buf, sizeof(buf))) < 0)
    unix_error("read");

  printf("Parent processing input\n");
  while (1)
    ;

  exit(0);
}
```
Tim (106420 rep)
Oct 27, 2020, 12:29 AM • Last activity: May 17, 2023, 11:27 AM
1 vote
1 answer
113 views
Does POSIX sh require expanding $! in order to keep a reference to the child process?
## Spec

According to [this online POSIX Spec](https://pubs.opengroup.org/onlinepubs/9699919799/), in Shell & Utilities, Shell Command Language, Section 2.9.3 (Lists) has the following to say about Asynchronous Lists:

> When an element of an asynchronous list (the portion of the list ended by an `&`, such as *command1*, above) is started by the shell, the process ID of the last command in the asynchronous list element shall become known in the current shell execution environment; see Shell Execution Environment. This process ID shall remain known until:
>
> - The command terminates and the application waits for the process ID.
>
> - Another asynchronous list is invoked before "$!" (corresponding to the previous asynchronous list) is expanded in the current execution environment.
>
> The implementation need not retain more than the {CHILD_MAX} most recent entries in its list of known process IDs in the current shell execution environment.

Other relevant quotations:

- From idem., Section 2.12 Shell Execution Environment:

  > A shell execution environment consists of the following:
  >
  > […]
  >
  > - Process IDs of the last commands in asynchronous lists known to this shell environment; see Asynchronous Lists

- From Shell & Utilities, Utilities, wait:

  > When an asynchronous list (see Asynchronous Lists) is started by the shell, the process ID of the last command in each element of the asynchronous list shall become known in the current shell execution environment; see Shell Execution Environment.
  >
  > […]
  >
  > If the wait utility is invoked with no operands, it shall wait until all process IDs known to the invoking shell have terminated and exit with a zero exit status.
  >
  > […]
  >
  > The known process IDs are applicable only for invocations of wait in the current shell execution environment.

## Confusion

In Bash, `help wait` specifies "all currently active child processes." The spec specifies only all *known* process IDs, and it seems that invoking another async list causes the previous process ID to be "forgotten." That would make the following program wait only 5 seconds, if I've interpreted the POSIX spec correctly:
```sh
#! /bin/sh
sleep 10 &
sleep 5 &
wait
```
Since no use of `$!` appears before `sleep 5 &`, is the ID for `sleep 10` forgotten and not waited on? Similarly, inserting `: $!` between the two lines would cause it not to be forgotten, right?

Let me try to lay out my logic a little more clearly:

1. SPEC: The known proc. IDs are part of the shell execution environment. Let's imagine this is some list, since implementation details don't matter.
2. SPEC: Running `command &` causes the proc. ID for this asynchronous command to be "known" (_i.e._, in the list).
   - SPEC (dubiously, but see the very first quote): Running `command2 &` without expanding `$!` in some way causes the proc. ID for the previous command to become _no longer known_! (But the proc. ID for `command2` is now known.)
3. SPEC: `wait` with no arguments uses only the known proc. IDs.
4. Conclusion: Therefore, not forcing expansion of `$!` between asynchronous commands causes `wait` to forget to wait on some child processes.

Some have argued that `wait` waits on all processes. The spec says "known" processes, which has a specific definition. Some have argued that "known" refers to "in `$!`" even though the spec says no such thing (and further, "known process IDs" is a plural quantity while `$!` is not). I agree this makes no practical sense; a shell doesn't forget about my jobs when I start a new one. So where am I misreading the spec?

## Question

- Does POSIX actually require using `$!` in order for `wait` with no operands to behave sensibly, such as when starting asynchronous lists in a loop?
- Does any shell (POSIX or otherwise) actually implement the spec this way (_i.e._, practically, can I avoid adding `: $!` to expand the async proc. ID, and thereby prevent forgetting it, in programs that use nullary `wait`)?
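For comparison, a minimal sketch that sidesteps the "known PID" question entirely by capturing `$!` after each asynchronous list (which also satisfies the "expanded" condition in the first quote) and passing explicit operands to `wait`:

```sh
#!/bin/sh
# a sketch: capture $! immediately and wait on explicit PIDs
sleep 10 & p1=$!
sleep 5  & p2=$!
wait "$p1" "$p2"
```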
D. Ben Knoble (552 rep)
Apr 24, 2023, 05:06 PM • Last activity: Apr 24, 2023, 06:52 PM
0 votes
1 answer
178 views
Is it possible to defer reaping of background processes in bash?
If I just run `sleep 1 &` in bash, the `sleep` process gets reaped almost instantly after it dies. This happens whether job control is enabled or disabled. Is there a way I can make bash hold off on reaping the process until I do something like `wait` or `fg`? E.g.:
```bash
sleep 1 &
sleep 2
ps -ef | grep defunct # I want this to show the sleep 1 process
wait
ps -ef | grep defunct # But now it should be gone
```
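For what it's worth, a sketch that does produce a visible zombie, by arranging for the background child's parent to be a process that never calls wait():

```bash
#!/bin/bash
# a sketch: exec replaces the inner bash with sleep, so the backgrounded
# child ends up with a parent that never reaps it
bash -c 'sleep 1 & exec sleep 5' &
sleep 2
ps -ef | grep defunct | grep -v grep   # the sleep child shows as <defunct>
```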
Joseph Sible-Reinstate Monica (4220 rep)
Apr 19, 2023, 01:06 AM • Last activity: Apr 19, 2023, 09:17 AM
3 votes
1 answer
1432 views
using wait (bash posix) and fail if one process fails in a script
I am writing a script that executes a bunch of commands in the background all at once and then waits for them all to finish:
```sh
#!/bin/sh -fx
./p1 &
./p2 &
./p3 &
./p4 &
./p5 &
wait
```
The problem is that if one or more of these processes fail, it just keeps going. How can I execute all of these commands at the same time and check if one or more fail?
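A minimal sketch of the usual pattern, assuming it is enough to learn that at least one command failed: record each PID and wait on each individually, since `wait PID` returns that child's exit status:

```sh
#!/bin/sh
# a sketch: wait on each recorded PID and fail if any child failed
pids=
for cmd in ./p1 ./p2 ./p3 ./p4 ./p5; do
    "$cmd" & pids="$pids $!"
done
rc=0
for pid in $pids; do
    wait "$pid" || rc=1
done
exit "$rc"
```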
user567972 (31 rep)
Apr 5, 2023, 02:49 PM • Last activity: Apr 5, 2023, 03:29 PM
1 vote
2 answers
107 views
How to add file markers to check if script is already running
I wonder if I can get some help with a project I'm working on. I have a Synology NAS, and I found a Community Package that autoruns a script of my creation any time a USB drive is plugged in. My script copies image and movie files from all USB drives/Sandisk cards listed in the script into a specific folder on the Synology. The autorun package runs the script **every** time each drive is plugged in. The problem is that if I plug in four USB drives one after the other within 15 seconds, it copies all four drives four times. Instead I want it to wait 15 seconds to allow me to plug in all USBs, and then copy all drives once. My script is:
```bash
#!/bin/bash
#
var=$(date +"%FORMAT_STRING")
now=$(date +"%m_%d_%Y_%s")
printf "%s\n" $now
today=$(date +%m-%d-%Y-%s)
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB1/usbshare/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB2/usbshare/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB3/usbshare/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
rsync -avz --prune-empty-dirs --include "*/" --include="*."{cr2,CR2,mov,MOV,mpg,MPG,dng,DNG,jpg,JPG,jpeg,JPEG} --exclude="*" /volumeUSB4/usbshare4-2/ /volume1/KingstonSSD/Camera_Loads/Sandisk-${today}
```
My goal is to have the script wait a given time (say 15 seconds) to allow me to plug in all four USBs, and then, after 15 seconds, run the code one time. I guess I need to check whether the code is already running for any of the USBs plugged in: terminate the current script if so, copy files if not. I found the snippet below; I'm wondering if I can tweak it and add it to mine to check if any other instances of my script are running, and terminate if so or copy files if not:
```bash
if [ "$(ps -ef | grep "script.sh" | grep -v grep | wc -l)" -gt 1 ]; then
  echo "RUNNING...."
else
  echo "NOT RUNNING..."
fi
```
Any chance anyone would help with a solution? Thanks in advance!
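A sketch of one way to get both the single instance and the settle delay, assuming util-linux `flock` is available on the NAS (the lock-file path is an arbitrary choice):

```bash
#!/bin/bash
# a sketch: the first invocation takes the lock, waits for all drives,
# and copies once; later invocations see the lock held and exit
exec 9>/tmp/usb-copy.lock
flock -n 9 || exit 0   # another instance is already handling the copy
sleep 15               # give the remaining USB drives time to be plugged in
# ... the four rsync commands from the script above go here ...
```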
Matt (13 rep)
Feb 11, 2023, 10:30 PM • Last activity: Feb 12, 2023, 11:20 AM
1 vote
1 answer
11330 views
Curl returning with no response and does not wait for `wait=x seconds`
I call an async service that takes ~80 seconds to respond. I run:
```
curl -v -X POST https://hostname.com/service/v2/predict  \
  -H 'x-api-key: somekey' \
  -H 'x-request-id: longfiles' \
  -H "Authorization: Bearer dfv651df8fdvd" \
  -H 'Prefer: respond-async, wait=200' \
  -F 'contentAnalyzerRequests={"inputtest": "this is a test"}' \
  -F infile=@/mnt/file/stream-01865caa-b2e0-40e4-b298-1502fcc65045.json
```
The command specifies `wait=200`, but curl returns in ~60 seconds. And since the service takes ~80 seconds to respond, I get no response (but I do get a response if I use `wait=1000`). Why?

---

Output of the curl query with `-v`:
```
> Prefer: respond-async, wait=200
> Content-Length: 19271573
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------5873f0b92dd68547
>
< HTTP/1.1 100 Continue
< HTTP/1.1 202 Accepted
< Server: openresty
< Date: Fri, 07 Oct 2022 21:55:33 GMT
< Content-Length: 0
< Connection: keep-alive
< x-request-id: longfiles
< vary: Origin,Access-Control-Request-Method,Access-Control-Request-Headers
< location: https://hostname.com/service/v2/status/longfiles
< retry-after: 1
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: Authorization,Content-Type,X-Api-Key,User-Agent,If-Modified-Since,Prefer,location,x-transaction-id,retry-after,cache-control
< Access-Control-Allow-Methods: GET, POST, PUT ,DELETE, OPTIONS,PATCH
< Access-Control-Expose-Headers: location, retry-after, x-request-id, x-transaction-id, cache-control
< Access-Control-Max-Age: 86400
<
* Connection #0 to host hostname.com left intact
```
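Given the 202 Accepted with `location:` and `retry-after:` headers above, the service appears to implement the usual async pattern of polling a status URL; a sketch using the hostnames and headers from the question:

```sh
# a sketch: poll the status URL from the 202 response until the job is done
while :; do
    code=$(curl -s -o response.json -w '%{http_code}' \
        -H 'x-api-key: somekey' \
        -H "Authorization: Bearer dfv651df8fdvd" \
        https://hostname.com/service/v2/status/longfiles)
    [ "$code" != 202 ] && break   # anything other than 202 means finished
    sleep 1                       # matches the retry-after header above
done
```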
Franck Dernoncourt (5533 rep)
Oct 7, 2022, 09:25 PM • Last activity: Oct 10, 2022, 08:26 AM
0 votes
0 answers
45 views
block in bash so that no more than 5 child processes are running at a time?
Is there a way to block in bash, something like this:

```bash
function do_work {
  wait -c 5  ### wait until 5 or fewer cp's are running
  # now do whatever else you wanted
}
```
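There is no `wait -c`, but a sketch of the same idea with builtins that do exist, assuming bash 4.3 or later for `wait -n`:

```bash
# a sketch: block until the number of running jobs drops below 5
do_work() {
    while [ "$(jobs -rp | wc -l)" -ge 5 ]; do
        wait -n   # returns as soon as any one background job exits
    done
    # now do whatever else you wanted
}
```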
Alexander Mills (10734 rep)
Sep 29, 2022, 07:33 AM
0 votes
0 answers
242 views
Bash script logging not working as expected with rsync jump server
I'm using the shell script below to run rsync on a remote host, by ssh'ing to a jump host and from there ssh'ing again and initiating the rsync. I need to capture the start and end dates in a log, but when I execute the script, it prints both log lines at the same time. To be more precise, the script starts on ServerA, jumps to ServerB, jumps again to ServerC, and then runs the rsync command. I need to capture the start time and end time of the rsync command on ServerA, but the end time is not working as expected: once the script starts, it records both the start and end times in logs.txt while I can see the rsync process is still progressing.

~~~
date="date +%T-%D"
i=folder
echo "Log started at $date for $i" >> logs.txt
ssh -T -i key user1@HOST1 "ssh -T user2@HOST2 > logs.txt
fi
~~~

I need the script to stay active until the rsync process finishes on the remote host and only then proceed to the "log ended" part. I already tried the wait command, but that did not help. Any ideas appreciated.
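For what it's worth, a sketch of the intended timing: note that `date="date +%T-%D"` stores the command string rather than its output, so each log line needs its own command substitution, and the ssh invocation itself blocks until the remote command finishes (the hosts and paths below stand in for the real ones):

```bash
#!/bin/bash
# a sketch: evaluate date at each log point; ssh returns only after
# the remote rsync has finished
i=folder
echo "Log started at $(date +%T-%D) for $i" >> logs.txt
ssh -T -i key user1@HOST1 "ssh -T user2@HOST2 rsync -a /src/ /dest/"
echo "Log ended at $(date +%T-%D) for $i" >> logs.txt
```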
user183980 (153 rep)
Jun 20, 2022, 08:14 AM • Last activity: Jun 21, 2022, 12:47 PM
1 vote
1 answer
334 views
How to kill all background and spawned processes of a bash script in its pre-exit handler?
I'm using the wait -n technique to perform max_jobs parallel tasks:
```bash
#!/usr/bin/env bash

cleanup() {
    echo "cleaning up..."
}

trap "cleanup" EXIT

do_task() {
    echo "doing task" "$1"  " ..."
    sleep 3s
}

main_task() {
    for ((j = 0; j < 10; j++)); do
        ((i++ < max_jobs)) || wait -n
        do_task "$j" &
    done
    wait
}

i=0
max_jobs=4

main_task
```
How can I kill all jobs and processes spawned by this script (in the `cleanup` handler) if I hit Ctrl+C? I tried `kill 0` in `cleanup`, but it doesn't seem to kill the dangling `do_task` jobs. Note that if I press Ctrl+C (SIGINT) in the first 3 seconds, it kills the script. But if I wait until 5 s and then press Ctrl+C, suddenly one dangling process consumes 100% of the CPU as if it were stuck in an infinite loop. I have to eyeball that process in htop and send SIGKILL to it manually.
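A sketch of a cleanup handler that targets the script's own jobs explicitly, using `jobs -p` to list the PIDs of background jobs still known to this shell:

```bash
cleanup() {
    echo "cleaning up..."
    local pids
    pids=$(jobs -p)                            # PIDs of our still-live jobs
    [ -n "$pids" ] && kill $pids 2>/dev/null   # unquoted: one word per PID
    wait 2>/dev/null                           # reap them before exiting
}
```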
Zeta.Investigator (1190 rep)
Mar 4, 2022, 02:59 PM • Last activity: Mar 7, 2022, 12:11 PM
7 votes
1 answer
762 views
Why wait in this script is not executed after all subshells?
In this script, which pulls all git repositories:
```shell
#!/bin/bash

find / -type d -name .git 2>/dev/null | 
while read gitFolder; do
    if [[ $gitFolder == *"/Temp/"* ]]; then
        continue;
    fi
    if [[ $gitFolder == *"/Trash/"* ]]; then
        continue;
    fi
    if [[ $gitFolder == *"/opt/"* ]]; then
        continue;
    fi
    parent=$(dirname $gitFolder);
    echo "";
    echo $parent;
    (git -C $parent pull && echo "Got $parent") &
done 
wait
echo "Got all"
```
the `wait` does not wait for all the `git pull` subshells. Why is that, and how can I fix it?
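The usual explanation is that the `|` runs the `while` loop in a subshell, so the backgrounded `git pull`s are children of that subshell rather than of the script, and the outer `wait` never sees them. A sketch that keeps the loop, and therefore the jobs, in the main shell:

```shell
#!/bin/bash
# a sketch: feed the loop via process substitution instead of a pipe,
# so the & jobs belong to this shell and wait can collect them
while read -r gitFolder; do
    parent=$(dirname "$gitFolder")
    (git -C "$parent" pull && echo "Got $parent") &
done < <(find / -type d -name .git 2>/dev/null)
wait
echo "Got all"
```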
Saeed Neamati (841 rep)
Jan 27, 2022, 04:53 PM • Last activity: Jan 28, 2022, 04:24 PM
4 votes
2 answers
1134 views
Why or how does killing the parent process clean the zombie child processes in linux?
Consider this example:

```c
#include <stdio.h>    /* printf */
#include <stdlib.h>   /* system */
#include <unistd.h>   /* fork, sleep */

int main() {
    pid_t pid = fork();
    if (pid > 0) {
        printf("Child pid is %d\n", (int)pid);
        sleep(10);
        system("ps -ef | grep defunct | grep -v grep");
    }
    return 0;
}
```

In this example, the child process remains a zombie until the parent process terminates. How did this zombie process get cleaned up without being reaped by any process?

```
$ ./a.out
Child pid is 32029
32029 32028  0 05:40 pts/0    00:00:00 [a.out] <defunct>
$ ps -p 32029
  PID TTY          TIME CMD
```
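The short answer is that the zombie was in fact reaped: once the parent exits, the child is reparented to init (or the nearest subreaper), which waits on it. A sketch to observe the reparenting from a shell:

```bash
# a sketch: the inner bash exits immediately, orphaning its sleep;
# the sleep's PPID then becomes 1 (or a subreaper's PID)
bash -c 'sleep 30 & echo "child pid: $!"'
ps -o pid,ppid,stat,comm -C sleep
```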
shawdowfax1497 (123 rep)
Dec 31, 2021, 12:18 AM • Last activity: Dec 31, 2021, 01:42 AM
2 votes
2 answers
10077 views
Why should parent process wait (to terminate) until all of its child process terminates?
I know there is no enforcement for the parent process to wait until all its child processes terminate; however, it's a convention that is followed. Furthermore, I know that if the parent process terminates before its child process does, then the child process becomes an orphan and is adopted by the *init* process. But what I don't understand is: what is the problem if the child process becomes an orphan and gets adopted by the *init* process? Why should it be attached to the parent process itself until the parent terminates?
Darshan L (279 rep)
Dec 30, 2018, 06:49 AM • Last activity: Nov 22, 2021, 12:08 PM
2 votes
1 answer
1228 views
Why doesn't the 2nd command wait for the output of the 1st (piping)?
I'm currently reading M. Bach's "The Design of the UNIX® Operating System". I read about the main shell loop; look at the `if (/* piping */)` block. If I understood correctly, piping allows treating the 1st command's output as the 2nd command's input. If so, why isn't there code that makes the 2nd command wait for the 1st to terminate? Without that, piping seems nonsensical: the 2nd command can start executing without its input being ready.

[Image: Main Shell Loop]
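A pipe itself provides the synchronization being asked about: a read on an empty pipe simply blocks until the writer produces data or closes its end, so the 2nd command can safely start before its input is ready. A minimal sketch:

```bash
# a sketch: the reader blocks on the empty pipe for ~2 seconds,
# then prints "got: ready" once the writer produces a line
{ sleep 2; echo ready; } | { read -r line; echo "got: $line"; }
```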
pigeon_gcc (39 rep)
Nov 15, 2021, 09:06 PM • Last activity: Nov 15, 2021, 09:33 PM
4 votes
2 answers
1535 views
zombie process reap without "wait"
I know that if a subprocess does not get reaped properly, it becomes a zombie, and you can see it with the `ps` command. Also, the `wait [pid]` command will wait for a subshell running in the background until it finishes, and reap it. I have a script like this:
```bash
#!/bin/bash
sleep 5 &

tail -f /dev/null
```
My question is: I don't use `wait` after `sleep 5 &`, and the parent shell will never terminate because of `tail`, so why does the `sleep 5 &` not become a zombie? I see it disappear from `ps` after finishing; I'm not sure who reaps it.
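A sketch that shows the parent shell reaping the child on its own, with no `wait` anywhere in the script:

```bash
#!/bin/bash
# a sketch: bash collects the background child's status as a side effect
# of its own job bookkeeping, so no <defunct> entry remains
sleep 5 &
child=$!
sleep 6
ps -o pid,stat,comm -p "$child" || echo "PID $child already reaped by bash"
```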
chengdol (303 rep)
Apr 11, 2021, 06:58 AM • Last activity: Apr 11, 2021, 09:42 PM
0 votes
0 answers
381 views
Closing stdout required for wait on sub process to finish?
I have a question about https://mywiki.wooledge.org/BashFAQ/106. Have a look at this code:
```bash
exec > >(tee myfile)
pspid=$!

# ... stuff ...
echo "A"
cat file
echo "B"
# end stuff

# All done.  Close stdout so the proc sub will terminate, and wait for it.
exec >&-
wait $pspid


# what happens if we delete line exec >&- ?
# what if ...stuff... do not finish before >&- ?
```
Logan Lee (249 rep)
Feb 16, 2021, 02:35 AM
0 votes
2 answers
51 views
Will "$!" reliably return the correct ID with "&"?
In my tests, I always get the correct result so far with this:
```
[fabian@manjaro ~]$ sleep 10 & echo $!
[1] 302657
302657
```
But `sleep` and `echo` are executed simultaneously here, so I would expect that it can sometimes happen that `echo` executes before the value of `$!` is set properly. Can this happen? Why hasn't it happened so far for me? My ultimate goal: execute two tasks in parallel and then wait for both before moving on. The current plan is to use something like `foo & bar; wait $!`. Will this always work, or can it sometimes wait for an arbitrary older background process (or for nothing at all, if `$!` is empty)?
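A sketch of why the capture is safe: `$!` is assigned by the shell itself as part of launching the asynchronous command, before the next command in the list runs, so reading it immediately is race-free; capturing it into a named variable also guards against a later `&` overwriting it:

```bash
# a sketch with hypothetical tasks foo and bar
foo() { sleep 2; }
bar() { sleep 1; }
foo & foo_pid=$!   # $! already holds foo's PID at this point
bar                # runs in the foreground; does not touch $!
wait "$foo_pid"    # waits specifically for foo
```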
Fabian Röling (369 rep)
Dec 13, 2020, 06:02 PM • Last activity: Dec 13, 2020, 09:51 PM
9 votes
1 answer
2574 views
Press SPACE to continue (not ENTER)
I know this question has been already asked & answered, but the solution I found listens for space **and enter**:

```bash
while [ "$key" != '' ]; do
  read -n1 -s -r key
done
```

Is there a way (in **bash**) to make a script that will wait **only** for the space bar?
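A sketch of one way to accept only the space bar in bash: with `IFS=` the space is not stripped by `read`, so pressing Space yields `' '` while Enter yields an empty string and the loop continues:

```bash
# a sketch: loop until the single key read is exactly a space
key=''
until [ "$key" = ' ' ]; do
    IFS= read -r -s -n1 key
done
```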
adazem009 (661 rep)
Oct 17, 2020, 07:03 AM • Last activity: Oct 17, 2020, 07:17 AM