Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1 vote · 1 answer · 2498 views
Why does my process terminate upon log out despite nohup and disown?
I have an executable (a server made with Unity) which I want to continue to run after I log out. All the interwebs say that I should be able to accomplish this with `nohup` or `disown` or both. But it doesn't work. I do:

nohup /usr/bin/myexecutable &
disown -h

and check the process list at this point, and my process is running. Then I exit and ssh back in, check the process list again, and it is gone. I have tried it with and without `disown`, with and without the `-h` flag, etc., but nothing stops the process from disappearing when I exit the shell.
I can do it with `screen` or `tmux`, but I'm trying to set up a simple launch script that nontechnical people on the staff can just log in and run. Any ideas?
(I am running Ubuntu 16.04.6 LTS.)
EDIT: Someone suggested this question, but as in the comment from @mathsyouth, `disown` does not work for me. @Fox, the author of the only answer to that question, says "If the program you are using still dies on shell-exit with disown, it is a different problem." So it appears this is a different problem. (And as mentioned above, his suggestion of using `screen` is not helpful in my use case.)
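One thing such a launch script could try (a sketch, not a confirmed fix for this particular Unity server; the log path is illustrative): start the process in its own session with `setsid` and detach all three standard streams, so no terminal hangup can reach it:

#!/bin/bash
# Launch-script sketch: run the server in a brand-new session so it has
# no controlling terminal to receive SIGHUP from, and drop the tty fds.
setsid /usr/bin/myexecutable </dev/null >>/tmp/myexecutable.log 2>&1 &

If the process still dies, it is also worth checking whether systemd-logind is configured with `KillUserProcesses=yes`, which reaps a user's processes on logout regardless of nohup.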
Joe Strout
(131 rep)
Jul 7, 2020, 05:57 PM
• Last activity: May 25, 2025, 07:02 PM
4 votes · 1 answer · 218 views
Run in background avoiding any job control message from the shell
Let's define a shell function (here the shell is Bash) and test it:

$ s () { xterm -e sleep 5 & }
$ s
[1] 307926
$
[1]+  Done                    xterm -e sleep 5
$

With my specific meaning of *better*, I can redefine `s` like this:

$ s () { xterm -e sleep 5 & disown ; }
$ s
[1] 307932
$
$

(no message from the shell when the job is finished).
Here I have to ask: is it possible to define `s` so that I have

$ s () { pre_magic xterm -e sleep 5 post_magic ; }
$ s
$
$

i.e., suppress the job info printed on the terminal by the shell?
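For reference, one known shape of such magic (a sketch; it gives up job control over the xterm entirely): start the background job inside a subshell, so the interactive shell never registers a job and therefore prints nothing:

$ s () { ( xterm -e sleep 5 & ) ; }
$ s
$    # no [1] line on launch, and no Done message afterwards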
gboffi
(1376 rep)
Apr 17, 2025, 01:52 PM
• Last activity: Apr 17, 2025, 02:21 PM
6 votes · 2 answers · 1722 views
bg command not sending process to background
After pausing the process with Ctrl+Z, I attempted to send it to the background with the `bg` command. Unfortunately, the process isn't sent to the background, and it reappears to be running in the foreground. Then I press Ctrl+Z again in order to pause it again. Unfortunately, the key combination no longer responds, and the process is still ongoing. To make matters worse, the command is a `for` loop over many items. If I hit Ctrl+C, the job resumes on the next iteration, and so on until the last iteration.
Running on tmux inside iTerm on macOS.
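A first diagnostic for this situation (a sketch; `<PID>` is a placeholder taken from the `jobs -l` output): check what state the shell and the kernel think the job is in, since a job that keeps landing back in the foreground is often being stopped again by terminal I/O:

$ jobs -l                        # job number, PID, and Stopped/Running state
$ ps -o pid,stat,comm -p <PID>   # 'T' in STAT means it is actually stopped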
Faxopita
(179 rep)
Apr 10, 2023, 11:47 PM
• Last activity: Mar 7, 2025, 10:14 PM
0 votes · 0 answers · 25 views
spawn process with existing session ID (setsid, maybe use GDB)
How to create a new process with the session ID (setsid) of an existing process? I have an idea using GDB which is partly working, but I'm also thankful for other approaches.

There seems to be no syscall where I can specify a session ID for a process. But I was inspired by another conversation where GDB is being used. So my idea is to use GDB to fork a process from an existing one with the desired session ID.
The basic idea seems to work. But it suffers from a segmentation fault in the original process.
# start some long-running process in its own session
setsid --fork bash -c 'echo $$; sleep 1000'
# outputs its PID == SESSION_ID (let's assume "100")
gdb --init-eval-command='set follow-fork-mode child' -p SESSION_ID
# in gdb:
call (int)fork()
# PROBLEM: original command outputs:
# Segmentation fault (core dumped) sleep 1000
call (int)execl("/bin/sleep", "sleep", "50", (char *)0)  # execl's argument list must be NULL-terminated
ps --sid SESSION_ID -o pid,sid,args
#   101  100 sleep 50
#   102  100 sleep 120
So the original process "100" has crashed. But the new process successfully started.
Any idea how to avoid the segfault?
Related: Is there a way to change the process group of a running process?
kolAflash
(11 rep)
Feb 26, 2025, 10:46 AM
• Last activity: Feb 26, 2025, 10:47 AM
2 votes · 0 answers · 57 views
bash ctrl-z to send a for loop in background, only the currently run command is resumed when bg
poc:
$ for i in $( seq 1 10 ); do echo "begin $i"; sleep 4 ; echo " end $i" ; done
begin 1
end 1
begin 2
end 2
begin 3
## after 1 second, I : ctrl-z
^Z
[1]+  Stopped                 sleep 4
$ fg # wait or no wait : same result
sleep 4
$ # the loop wasn't continued.
I notice that when pressing ctrl-z, only the currently running command (sleep) is shown as being suspended. What happens to the surrounding loop? And how could I have the whole for loop suspended and continued? With `set -o` settings? Or do I need to create a script and send that to the background? Or just surround it with { } or ( ) (as sketched below)?
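On that last idea, a sketch of the subshell variant: ( ) makes the loop a single separate process, so Ctrl+Z suspends the loop itself rather than only the current sleep, and fg resumes it where it left off:

$ ( for i in $( seq 1 10 ); do echo "begin $i"; sleep 4 ; echo " end $i" ; done )
^Z
[1]+  Stopped                 ( for i in ...; done )
$ fg    # the whole loop continues from where it stopped, not just one sleep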
Olivier Dulac
(6580 rep)
Feb 20, 2025, 10:19 AM
• Last activity: Feb 20, 2025, 10:44 AM
2 votes · 1 answer · 84 views
Why does bash (executing the script) stay in the foreground group when executing commands in a script?
I am using the following version of bash:

GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)

When I start some command (e.g. `./hiprogram`) directly from the terminal, bash forks itself and `exec`s the command, and at the same time it places the new process in a new group that becomes the foreground group, while its own group moves to the background.

However, when I create a script that runs `./hiprogram`, the bash that executes the script forks/execs in the same way, but now it stays in the same group as the command and thus stays in the foreground group. Why?
The only reason I can see for this is that the bash instance executing the script must be able to receive signals intended for the foreground group, such as CTRL+C, and react in the right way (for CTRL+C, that would mean to stop further execution of the script). Is that the only reason? AIs say that bash executing the script also remains in the foreground group to manage job control, but that explanation doesn’t quite make sense to me—after all, job control works just fine in the first case when commands are executed directly from the terminal and bash is not part of the foreground group.
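A way to observe both layouts directly (a sketch; `pts/3` is a placeholder for the tty that `./hiprogram` is attached to): compare each process's group with the terminal's foreground group, once for a direct launch and once from inside a script:

# Run from another terminal while ./hiprogram is running:
ps -o pid,ppid,pgid,tpgid,stat,comm -t pts/3
# Direct launch: hiprogram's PGID == TPGID and the interactive bash sits in
# a different group. Script launch: the script's bash and hiprogram share
# the PGID that equals TPGID, i.e. both are in the foreground group.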
Yakog
(517 rep)
Feb 19, 2025, 10:44 AM
• Last activity: Feb 19, 2025, 03:57 PM
1 vote · 1 answer · 2208 views
What exactly does it mean to run a process in the "background"?
I want to understand a little bit better what a background process is. The question came to life as a result of reading this line of code:
/usr/sbin/rsyslogd -niNONE &
[Source](https://github.com/Mailu/Mailu/blob/c2d85ecc3282cdbc840d14ac33da7b5f27deddb3/core/postfix/start.py#L94)
The documentation says:
> -i pid file
> Specify an alternative pid file instead of the default
> one. This option must be used if multiple instances of
> rsyslogd should run on a single machine. To disable
> writing a pid file, use the reserved name "NONE" (all
> upper case!), so "-iNONE".
>
> -n Avoid auto-backgrounding. This is needed especially if
> the rsyslogd is started and controlled by init(8).
[Source](https://man7.org/linux/man-pages/man8/rsyslogd.8.html)
The ampersand `&` seems to request that the command be run in the background; see, for example, [here](https://unix.stackexchange.com/a/86253/70683).
If my understanding is correct, pid files [are used with daemons](https://unix.stackexchange.com/a/12816/70683), that is, when a program is run in the background.

So at face value it seems that the command in question first tells the program not to run in the background with `-n`, then specifies NONE for the pid file to indicate it is not a daemon1, and then right after that specifies `&` to send it into the background.

I cannot make a lot of sense of that. Is the background that the process would normally enter different from the background it is sent to by using `&`? From all I read, it seems that the only meaning of the background is that the shell is not blocked. In this respect, asking the process not to auto-background and then backgrounding it does not make a lot of sense.
Is there something here I'm missing? What is exactly the background? (And who is responsible for deleting the pid file, while we are at it?)
---
1 - in a docker container context, where the question arose from, the existence of the pid file can cause problems when the container is stopped and then restarted. It is not clear to me what is responsible for deleting the pid files; some sources suggest that it's [the init system, such as systemd](https://unix.stackexchange.com/a/256130), while others [imply that it's the program's responsibility](https://stackoverflow.com/a/688365/18625995). However, if a process is killed with SIGKILL the program might not have an opportunity to delete it, so a subsequent container restart will fail because the pid file will already be there, but is expected not to be.
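The distinction can be made concrete (a sketch): `&` is pure shell bookkeeping, while daemonizing is something the program does to itself, and that is what `-n` disables:

# '&' only tells the shell not to wait; the child is otherwise ordinary:
sleep 100 &
ps -o pid,ppid,tty,stat,comm -p $!   # parent is still the shell, tty attached
# A self-daemonizing program instead forks, calls setsid() to shed the
# terminal, and lets the parent exit, so it reparents to PID 1. rsyslogd's
# -n turns that dance off; the trailing '&' merely unblocks the shell.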
Andrew Savinykh
(453 rep)
Aug 24, 2022, 04:59 AM
• Last activity: Jan 30, 2025, 04:55 AM
0 votes · 0 answers · 86 views
How Do SSH-Launched Long-Running Background Jobs Detach Without nohup or disown?
When running a long-running command in the background over SSH from a non-interactive shell script, I noticed the process continues running on the remote machine **without** using `nohup`, `disown`, or similar tools.
Remote Environment (SSH target):
- Linux 6.12.9
- OpenSSH 9.9p1, OpenSSL 3.3.2
- Login Shell: bash 5.2.37
- Also for non-interactive sessions (verified by `ssh -T $HOST "echo \$SHELL"`)
- Distribution: NixOS 24.11
On the client side, I can execute:
# Closing outgoing FDs (stdout and stderr) important to end
# SSH session immediately (EOF). We also need a non-interactive
# session (-T).
ssh -T $HOST "long_running_command >/dev/null 2>/dev/null &"
to start a long-running command on the remote without having to keep the SSH session alive.
I expected that background jobs would terminate or receive SIGHUP when the SSH session ends. However, the process is automatically reparented to PID 1 (init) and keeps running. I can verify this using `htop`, `ps`, et al.
Why does this work **without** `nohup` or `disown`?
- Why does it just work like that? Why are no SIGHUP or similar signals being sent to `long_running_command`?
- Why does job control (`&`) work in bash in non-interactive mode?
- Who decides that the running background job will switch ownership to the init process? Bash? Is this documented?
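Some of this can be checked directly on the remote (a sketch): with `-T` there is no pseudo-terminal, hence no terminal hangup to deliver, and the reparenting is done by the kernel rather than by bash:

# No pty means no controlling terminal, so no terminal-driven SIGHUP;
# and bash's huponexit only applies to interactive login shells anyway:
ssh -T $HOST 'tty; shopt -p huponexit'
# After the remote bash exits, the kernel reparents the orphan to PID 1
# (or the nearest subreaper), which is observable afterwards with:
ssh $HOST 'ps -o pid,ppid,tty,stat,comm -C long_running_command'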
phip1611
(101 rep)
Jan 17, 2025, 08:43 AM
9 votes · 3 answers · 376 views
Bash script containing sudo - unreliable background resume (bg)
I have the following simple bash script (called `test.sh`) that shows disk usage in the root dir. `sudo` is needed to list all directories (and I'm not asked for a `sudo` password).
#!/bin/bash
sudo du -h --max-depth=1 / 2> /dev/null
The directory is in my path, and then I run the script like this (to get output to a text file):
$ ./test.sh >> ./test.txt
Now, if I suspend the job with Ctrl+Z, I get this:
^Z
[1]+  Stopped                 ./test.sh >> ./test.txt
If I then resume in the background with `bg`, I still get:
$ bg
[1]+ ./test.sh >> ./test.txt &
[1]+  Stopped                 ./test.sh >> ./test.txt
$ jobs
[1]+  Stopped                 ./test.sh >> ./test.txt
*(Additional tries with `bg` may result in the script actually resuming in the background after 2-3 tries, but it seems sporadic...)*
However, if I resume with `fg`, then the script runs in the foreground:
$ fg
./test.sh >> ./test.txt
And the result is written to `test.txt`:
3.8M /root
4.0K /authentic-theme
4.0K /srv
72K /tmp
3.2G /snap
4.0K /media
8.4M /etc
0 /proc
0 /sys
4.0K /cdrom
16K /opt
16K /lost+found
24K /dev
4.3M /run
263G /mnt
14M /home
19G /var
245M /boot
3.8G /usr
297G /
If I modify the script to *not* use `sudo` (and instead run the script with `sudo`), then I can resume to background normally with `bg`, and the script is run:
$ sudo ./test.sh >> ./test.txt
^Z
[1]+  Stopped                 sudo ./test.sh >> ./test.txt
$ bg
[1]+ sudo ./test.sh >> ./test.txt &
$ jobs
[1]+  Running                 sudo ./test.sh >> ./test.txt &
The same happens if I run the entire command with `sudo`, but not as a script:
$ sudo du -h --max-depth=1 / 2> /dev/null >> ./test.txt
^Z
[1]+  Stopped                 sudo du -h --max-depth=1 / 2> /dev/null >> ./test.txt
$ bg
[1]+ sudo du -h --max-depth=1 / 2> /dev/null >> ./test.txt &
$ jobs
[1]+  Running                 sudo du -h --max-depth=1 / 2> /dev/null >> ./test.txt &
Can anybody explain what's going on here? Why can you resume a command that uses `sudo` (as well as a script run with `sudo`) in the background, but when the script contains the exact same command using `sudo`, background resume with `bg` apparently doesn't work correctly?
I'm using Ubuntu 22.04.1 with default Bash version 5.1.16.
**Edit #1:** I can inform that I have set up `alias sudo='sudo '` to allow commands using `sudo` to use other aliases. However, I tested both with and without this alias, and I got the same erratic `bg` resume behavior in either case.
**Edit #2:** `jobs -l` gives the following normal information:

$ jobs -l
[1]+ 1074808 Stopped                 ./test.sh >> ./test.txt
**Edit #3:** I'm normally running in `tmux`, but I also tested without `tmux`, and the issue still persists.
**Edit #4:** Besides my SuperMicro server, I also have a Raspberry Pi and an Ubuntu VM for testing on my laptop (Aorus X5). This is where it gets really strange:
- On my Ubuntu VM (on VMWare under Windows 10), this problem *does NOT* occur at all. It correctly resumes with `bg` the first time in all cases.
- On my Raspberry Pi, the problem is present as well - it usually takes 2-3 tries with `bg` until it correctly resumes.
I'm beginning to think I need to test with regards to software that is running on both my main server and my Raspberry Pi, but not on my VM.
**Edit #5:** Setting `stty -tostop` before running the script unfortunately does not really help the problem. Most of the time, it still takes 2-3 tries to resume correctly. A few times it succeeds on the first try, but I think this is more chance than anything else.
**Edit #6:** These are the services running on my Raspberry Pi:
$ systemctl --type=service --state=running
UNIT LOAD ACTIVE SUB DESCRIPTION
atd.service loaded active running Deferred execution scheduler
containerd.service loaded active running containerd container runtime
cron.service loaded active running Regular background program processing daemon
dbus.service loaded active running D-Bus System Message Bus
docker.service loaded active running Docker Application Container Engine
getty@tty1.service loaded active running Getty on tty1
irqbalance.service loaded active running irqbalance daemon
ModemManager.service loaded active running Modem Manager
networkd-dispatcher.service loaded active running Dispatcher daemon for systemd-networkd
packagekit.service loaded active running PackageKit Daemon
polkit.service loaded active running Authorization Manager
prometheus-node-exporter.service loaded active running Prometheus exporter for machine metrics
rsyslog.service loaded active running System Logging Service
serial-getty@ttyS0.service loaded active running Serial Getty on ttyS0
smartmontools.service loaded active running Self Monitoring and Reporting Technology (SMART) Daemon
snapd.service loaded active running Snap Daemon
ssh.service loaded active running OpenBSD Secure Shell server
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running User Login Management
systemd-networkd.service loaded active running Network Configuration
systemd-timesyncd.service loaded active running Network Time Synchronization
systemd-udevd.service loaded active running Rule-based Manager for Device Events and Files
udisks2.service loaded active running Disk Manager
unattended-upgrades.service loaded active running Unattended Upgrades Shutdown
unbound.service loaded active running Unbound DNS server
user@1000.service loaded active running User Manager for UID 1000
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
26 loaded units listed.
I believe these are the ones that I installed and activated (and are running on both the SuperMicro and the Raspberry Pi):
containerd.service loaded active running containerd container runtime
docker.service loaded active running Docker Application Container Engine
prometheus-node-exporter.service loaded active running Prometheus exporter for machine metrics
smartmontools.service loaded active running Self Monitoring and Reporting Technology (SMART) Daemon
unbound.service loaded active running Unbound DNS server
**Things to test:**
- Check if `sudo` configuration without `NOPASSWD` makes a difference.
- Disable installed services that are common between my SuperMicro server and Raspberry Pi.
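One more diagnostic that may be worth adding to that list (a sketch; the `use_pty` angle is an assumption, not a confirmed cause): `sudo` stays attached to the terminal to relay signals, so a backgrounded `sudo` that touches the tty can be stopped with SIGTTIN/SIGTTOU, and sudoers options can differ between the machines that behave differently:

# Compare terminal-related sudoers options across the machines:
sudo grep -rE 'use_pty|requiretty|NOPASSWD' /etc/sudoers /etc/sudoers.d/
# Reproduce and see exactly which process is stopped ('T' in STAT):
./test.sh >> ./test.txt &
sleep 1; ps -o pid,ppid,stat,cmd --ppid $! -p $!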
Artur Meinild
(792 rep)
Nov 24, 2022, 01:38 PM
• Last activity: Jan 7, 2025, 07:30 PM
801 votes · 6 answers · 433924 views
Difference between nohup, disown and &
What are the differences between
$ nohup foo
and
$ foo &
and
$ foo &
$ disown
Lesmana
(28027 rep)
Nov 9, 2010, 04:16 PM
• Last activity: Dec 25, 2024, 11:40 PM
0 votes · 1 answer · 113 views
How to run linux command in jobs, when to run bash script
(Screenshot: https://i.sstatic.net/iVtPyZcj.png) I want to run those commands as jobs in Linux, so that when I type `jobs` I can manage them. How do I do that in a bash script?
lovespring
(845 rep)
Nov 17, 2024, 10:49 AM
• Last activity: Nov 17, 2024, 04:27 PM
0 votes · 0 answers · 37 views
job array with conditional statements
I'm working with a job array where I'm controlling for the potential execution of the various steps of a script multiple times. In this case, only the missing ones will be processed using appropriate `if` statements. As part of the script I need to rename folders before the last command, so I wish for the `if` statements to check also the newly named folders on subsequent runs; however, I'm missing the logic on how to do so... Below, an example of the folders I visit with the array the first time:
/path/to/INLUP_00165
/path/to/INLUP_00169
/path/to/INLUP_00208
/path/to/INLUP_00214
/path/to/INLUP_00228
/path/to/INLUP_00245
/path/to/INLUP_00393
/path/to/INLUP_00418
which will become, after the first execution, the following:
/path/to/INLUP_00165-35
/path/to/INLUP_00169-27
/path/to/INLUP_00208-35
/path/to/INLUP_00214-32
/path/to/INLUP_00228-32
/path/to/INLUP_00245-34
/path/to/INLUP_00393-29
/path/to/INLUP_00418-32
This is the script I'm using
#!/bin/bash
#SBATCH --nodes=1 --ntasks=1 --cpus-per-task=12
#SBATCH --time=200:00:00
#SBATCH --mem=80gb
#
#SBATCH --job-name=name1
#SBATCH --output=name_1.out
#SBATCH --array=[1-8]%8
#
#SBATCH --partition=bigmem
#SBATCH --exclude=node5
NAMES=$1
h=$(sed -n "$SLURM_ARRAY_TASK_ID"p $NAMES)
#load modules
ID="$(echo ${h}/*.fastq.gz | cut -f 1 -d '.' | sed 's#/path/to/INLUP_[0-9]\+/##')"
readarray -t cond $h/log.txt; find . -maxdepth 1 -type f,l -not -name '*.filt.fastq.gz' -not -name '*.txt' -not -name '*.bin' -delete
HIGH=$(grep '\[M::ha_analyze_count\] highest\:' $h/log.txt | tail -1 | sed 's#\[M::ha_analyze_count\] highest\: count\[##' | sed 's#\] = [0-9]\+$##') #highest peak
LOW=$(grep '\[M::ha_analyze_count\] lowest\:' $h/log.txt | tail -1 | sed 's#\[M::ha_analyze_count\] lowest\: count\[##' | sed 's#\] = [0-9]\+$##') #lowest peak
SUFFIX="$(echo $(( ($HIGH - $LOW) "*" 3/4 + $LOW )))" #estimated homozygous coverage
mv $h $h-${SUFFIX}
HOM_COV="$(echo ${h} | sed 's#/path/to/INLUP_[0-9]\+-##')"
last command tool &> $h-${SUFFIX}/log_param.txt
fi
With the next code block, I thought to parse the number after the `-` in the new folders' names by storing it into an array to check element by element, starting with the first and assigning it to the first folder, and so on.
readarray -t cond < <(
for filename in INLUP_00*
do
printf "$filename \n" | sed 's#INLUP_[0-9]\+-##'
done
)
How can I link it to the file I feed as input, which is unchanged and contains the original paths to the folders? Maybe something related to the way a path is associated with the TASK ID, as in `h=$(sed -n "$SLURM_ARRAY_TASK_ID"p $NAMES)`. Let me know, thanks in advance!
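One way to close that loop (a sketch, assuming the `-SUFFIX` is the only change ever made to a folder's name): resolve the possibly-renamed directory by globbing from the original path, so the input file can stay untouched:

# $h is the original path from the unchanged input file
h=$(sed -n "${SLURM_ARRAY_TASK_ID}p" "$NAMES")
if [ ! -d "$h" ]; then
    # the folder was renamed on a previous run, e.g. INLUP_00165-35:
    # take the (single) match of the original name plus a suffix
    for d in "$h"-*; do
        [ -d "$d" ] && h=$d && break
    done
fi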
Matteo
(209 rep)
Nov 10, 2024, 05:26 PM
1 vote · 1 answer · 58 views
Prevent SIGINT propagation from subshell to parent shell in Zsh
I need to prevent SIGINT (Ctrl-C) from propagating from a subshell to its parent shell functions in Zsh.
Here's a minimal example:
function sox-record {
local output="${1:-$(mktemp).wav}"
(
rec "${output}" trim 0 300 # Part of sox package
)
echo "${output}" # Need this to continue executing after Ctrl-C
}
function audio-postprocess {
local audio="$(sox-record)"
# Process the audio file...
echo "${audio}"
}
function audio-transcribe {
local audio="$(audio-postprocess)"
# Send to transcription service...
transcribe_audio "${audio}" # Never reached if Ctrl-C during recording
}
The current workaround requires trapping SIGINT at every level, which leads to repetitive, error-prone code:
function sox-record {
local output="${1:-$(mktemp).wav}"
setopt localtraps
trap '' INT
(
rec "${output}" trim 0 300
)
trap - INT
echo "${output}"
}
function audio-postprocess {
setopt localtraps
trap '' INT
local audio="$(sox-record)"
trap - INT
# Process the audio file...
echo "${audio}"
}
function audio-transcribe {
setopt localtraps
trap '' INT
local audio="$(audio-postprocess)"
trap - INT
# Send to transcription service...
transcribe_audio "${audio}"
}
When the user presses Ctrl-C to stop the recording, I want:
1. The `rec` subprocess to terminate (working)
2. The parent functions to continue executing (requires trapping SIGINT in every caller)
I know that:
- SIGINT is sent to all processes in the foreground process group
- Using `setsid` creates a new process group but prevents signals from reaching the child
- Adding `trap '' INT` in the parent requires all callers to also trap SIGINT to prevent propagation
Is there a way to isolate SIGINT to just the subshell without requiring signal handling in all parent functions? Or is this fundamentally impossible due to how Unix process groups and signal propagation work?
---
I took a look at [this question](https://unix.stackexchange.com/questions/80975/preventing-propagation-of-sigint-to-parent-process) , and I tried this:
function sox-record {
local output="${1:-$(mktemp).wav}"
zsh -mfc "rec "${output}" trim 0 300" &2 || true
echo "${output}"
}
While this works when I just call `sox-record`, when I call a parent function like `audio-postprocess`, Ctrl-C doesn't do anything. (And I have to use `pkill` to kill `rec`.)
function audio-postprocess {
local audio="$(sox-record)"
# Process the audio file...
echo "${audio}"
}
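One mechanism worth noting here (a sketch of a partial consolidation, not an escape from the process-group semantics): `trap '' INT` marks SIGINT as ignored, and an ignored disposition is inherited across fork/exec by every child, whereas a trap with a real handler is reset to default in the child. A handler therefore keeps the function alive while still letting `rec` die on Ctrl-C:

function sox-record {
    local output="${1:-$(mktemp).wav}"
    setopt localtraps
    # Non-empty handler: this shell catches INT and continues, but rec
    # is exec'd with the default disposition and is killed by Ctrl-C.
    trap ':' INT
    rec "${output}" trim 0 300
    trap - INT
    echo "${output}"
}

Callers in the same foreground process group still receive the SIGINT, though, which is exactly the propagation being asked about; isolating it fully seems to require giving `rec` its own foreground process group, which is what the `zsh -mfc` variant attempts.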
HappyFace
(1694 rep)
Nov 3, 2024, 04:34 PM
• Last activity: Nov 3, 2024, 06:07 PM
348 votes · 13 answers · 1034706 views
How to terminate a background process?
I have started a wget on a remote machine in the background using `&`. Suddenly it stops downloading. I want to terminate its process, then re-run the command. How can I terminate it?

I haven't closed its shell window. But as you know, it doesn't stop with Ctrl+C or Ctrl+Z.
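For reference, a sketch of the usual way out, since the job is still known to that same shell:

$ jobs -l        # shows the job number and PID, e.g. "[1]+ 12345 Running wget ..."
$ kill %1        # send SIGTERM by job spec (equivalently: kill 12345)
$ kill -9 %1     # last resort if it ignores SIGTERM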
Mohammad Etemaddar
(13227 rep)
Dec 12, 2013, 07:11 AM
• Last activity: Oct 9, 2024, 01:07 PM
2 votes · 1 answer · 650 views
Killing a process when some other process is finished
Given `a | b`, I'd like to kill `b` when `a` is finished. `b` is an interactive process which doesn't terminate when `a` is finished (`fzf` in my case), and the whole `a | b` is executed in a `$()` subshell.

So far what I came up with was

echo $({ sleep 5 & a=$!; { wait $a; kill $b; } } | { fzf & b=$!; })
`sleep` represents `a`, and `fzf` represents `b`; the result in the example is used by `echo`, but in my case it'd be an argument for `ssh`. It seems that `$b` is not the PID of `fzf`; it's empty. As far as I understand, this shouldn't be the case, since I've used `{}` and not `()`, so it's not executed in a subshell.
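A note and a sketch (names like `pidfile` are illustrative): each segment of a pipeline runs in its own subshell regardless of `{}` versus `()`, so `b=$!` set on the right-hand side is never visible on the left. Publishing the PID through a temporary file sidesteps that:

pidfile=$(mktemp)
result=$( { sleep 5; kill "$(cat "$pidfile")" 2>/dev/null; } \
          | { fzf & echo $! > "$pidfile"; wait; } )

Here the kill happens inside the same pipeline once the producer (`sleep 5`, standing in for `a`) finishes, and `wait` lets `fzf`'s output be captured until it is terminated.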
lennoff
(121 rep)
Dec 10, 2018, 10:14 PM
• Last activity: Oct 6, 2024, 09:59 AM
16 votes · 6 answers · 10337 views
Is there any way to exit “less” follow mode without stopping other processes in pipe?
Often times I find myself in need of having the output in a buffer with all the features (scrolling, searching, shortcuts, ...) and I have grown accustomed to `less`. However, most of the commands I use generate output continuously. Using `less` with continuous output doesn't really work the way I expected.
For instance:
while sleep 0.5
do
echo "$(cat /dev/urandom | tr -cd 'a-zA-Z0-9' | head -c 100)"
done | less -R
This causes `less` to capture the output until it reaches maximum terminal height, and at this point everything stops (hopefully still accepting data), allowing me to use movement keys to scroll up and down. This is the desired effect.

Strangely, when I catch up with the generated content (usually with PgDn), `less` locks and follows new data, not allowing me to use movement keys until I terminate with ^C and stop the original command. This is not the desired effect.
Am I using `less` incorrectly? Is there any other program that does what I wish? Is it possible to "unlock" from this mode?
Thank you!
normalra
(163 rep)
Apr 19, 2015, 09:08 AM
• Last activity: Sep 10, 2024, 05:28 PM
4 votes · 1 answer · 211 views
Why do backgrounded commands in Zsh functions not show correctly in jobs?
In Bash 5.2, the output of `jobs` after either of the following is identical modulo job numbers:

sleep 3
# press C-z

s() { sleep 3; }
s
# press C-z

In both, `jobs` produces something like

[1]+  Stopped                 sleep 3
---
In Zsh 5.9 (x86_64-apple-darwin20.6.0), the first produces similar enough output containing the `sleep` command:

[1] + suspended  sleep 3

The second produces almost useless output:

[1] + suspended
---
I have [lots of functions that invoke Vim with different arguments or completions](https://github.com/benknoble/Dotfiles/blob/854f9498cb90af3b84cb961a9e97cf0009970f31/links/zsh/vim.zsh); suspending Vim during the execution of any of them via C-z or `:suspend` puts such a useless entry in `jobs`. It's not uncommon for me to have 2 or more such jobs, in which case it is rather difficult to keep them straight.

Why does Zsh do this, and is there a way for me to fix it?
D. Ben Knoble
(552 rep)
Nov 15, 2022, 06:38 PM
• Last activity: Jul 26, 2024, 03:16 PM
289 votes · 5 answers · 644982 views
How to suspend and bring a background process to foreground
I have a process originally running in the foreground. I suspended it with Ctrl+Z, and then resumed it in the background with `bg`.

I wonder how to suspend a process running in the background? How can I bring a background process to the foreground?

**Edit:** The process outputs to stderr, so how shall I issue the command `fg` while the process is outputting to the terminal?
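For reference, a sketch of the round trip with job specs:

$ kill -STOP %1   # suspend a background job (Ctrl+Z only reaches the foreground one)
$ bg %1           # resume it in the background again
$ fg %1           # bring it to the foreground; the shell still reads what you
                  # type even while the job's output is scrolling past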
Tim
(106420 rep)
Aug 8, 2012, 12:45 PM
• Last activity: Jul 23, 2024, 10:46 AM
2 votes · 1 answer · 87 views
Is there a way to `exec` a pipeline or exit shell after launching a pipeline?
I'm writing a shell wrapper script that is supposed to act as a pager (receiving input on stdin and performing output on a tty).
This wrapper prepares environment and command lines, then launches two processes in a pipeline: first one is a non-interactive filter and the last one is the actual pager that needs to control a tty (or, rather, it spawns the actual pager itself):
#!/bin/bash
...
col -bx | bat "${bat_args[@]}" --paging=always
The resulting process tree is thus:
\- wrapper.sh
|- col -bx (dies)
\- bat ...
\- less ...
The `col -bx` process exits after filtering the input and is reaped by the shell.

---

Is it possible to get rid of the shell process, such that it won't hang around for as long as the pager is running?
I thought of a workaround by using process substitution:
exec bat "${bat_args[@]}" --paging=always < <(col -bx)
However, the `col -bx` process is not reaped by the shell and remains in the Z state. Is there a "right" way to write this wrapper?
intelfx
(5699 rep)
Jun 13, 2024, 01:09 AM
• Last activity: Jun 18, 2024, 05:45 AM
6 votes · 2 answers · 1294 views
Why is subshell created by background control operator (&) not displayed under pstree
I understand that when I run `exit` it terminates my current shell, because the `exit` command runs in the same shell. I also understand that when I run `exit &` the original shell will not terminate, because `&` ensures that the command is run in a sub-shell, so `exit` terminates that sub-shell and returns back to the original shell. But what I do not understand is why commands with and without `&` look exactly the same under `pstree`, in this case `sleep 10` and `sleep 10 &`. 4669 is the PID of the bash under which first `sleep 10` and then `sleep 10 &` were issued, and the following output was obtained from another shell instance during this time:
# version without &
$ pstree 4669
bash(4669)───sleep(6345)
# version with &
$ pstree 4669
bash(4669)───sleep(6364)
Shouldn't the version with `&` contain one more spawned sub-shell (e.g. in this case with PID 5555), like this one?

bash(4669)───bash(5555)───sleep(6364)
PS: The following was omitted from the beginning of the `pstree` output for better readability:
systemd(1)───slim(1009)───ck-launch-sessi(1370)───openbox(1551)───/usr/bin/termin(4510)───bash(4518)───screen(4667)───screen(4668)───
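What does differ can be seen with ps instead of pstree (a sketch): for a simple command, bash forks exactly once either way; `&` only changes the bookkeeping:

$ sleep 10 &
$ ps -o pid,ppid,pgid,tpgid,stat,comm -p $!
# Same parent (4669) and no intermediate bash. Under job control, both the
# foreground and background sleep get their own PGID; the background one
# simply isn't the terminal's foreground group (PGID != TPGID) and the
# shell doesn't wait for it.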
Wakan Tanka
(825 rep)
Jan 20, 2016, 10:22 PM
• Last activity: May 30, 2024, 06:33 AM
Showing page 1 of 20 total questions