
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

13 votes
3 answers
9514 views
How to launch gedit from terminal and detach it (just as the "subl" command does)?
To open a file to edit in gedit I run gedit sample.py &. But with Sublime Text it is simply subl sample.py. This opens the file to edit and it doesn't run in the background (in my shell). How would I do that with gedit? I tried exec /usr/bin/gedit "$@" (copied from /usr/bin/subl) but it works like gedit &. Or an alias, ged="gedit $file &", should do. What can I substitute for $file in the alias?
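A minimal wrapper-script sketch in the spirit of subl, assuming gedit is on the PATH (save it as ged somewhere in $PATH):
#!/bin/sh
# ged: open files in gedit, detached from the terminal
nohup gedit "$@" >/dev/null 2>&1 &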
Sandjaie Ravi (1055 rep)
Jul 11, 2015, 10:05 AM • Last activity: Jul 12, 2025, 03:57 AM
0 votes
1 answer
42 views
How to redirect output from a disowned process to a file
A borgmatic backup command that runs for many hours: long_running_cmd &> file.txt I did Ctrl+Z, then bg, then disown to keep the command running in case my laptop goes to sleep or disconnects. I thought the whole command chain just got executed in the background, but I noticed it no longer wrote into the file. How do I ensure that the output of a command is written to the file even after I disconnect my ssh connection? (*if possible with default commands that can be found in a Debian headless server installation*)
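One robust pattern, as a sketch and assuming the command can simply be restarted: set up the redirection and the detaching before disconnecting, rather than suspending an already-running job:
# long_running_cmd stands for the borgmatic invocation from the question
nohup long_running_cmd >file.txt 2>&1 &
disown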
Destro (15 rep)
Jun 3, 2025, 05:50 AM • Last activity: Jun 3, 2025, 10:18 AM
6 votes
1 answer
2907 views
Mocking a pseudo tty (pts)
We would like to run some curses based apps in the background. These apps use curses and get the current tty port name, which is used internally to map log files and other context-terminal associations. In some tests, just redirecting the input of curses apps that don't read the keyboard to a known pts worked, so they can be executed in the background as long as I reserve a tty (or pseudo tty) for that. Is it possible to mock a tty, or to have a reserved pts for such automated purposes? We plan to launch them through crontab.
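One possible sketch using the script utility from util-linux, which allocates a fresh pseudo-tty for its child (the app name is illustrative):
# -q quiet, -e return the child's exit code, -c run this command
script -qec "./my-curses-app" /dev/null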
Luciano (1189 rep)
Jul 8, 2015, 09:10 PM • Last activity: May 27, 2025, 12:03 PM
2 votes
1 answer
3305 views
Run executable in the background, detached from the terminal, with arguments
I can run an executable in the background and detach it from the terminal with these commands:
$ nohup ./executable &
or
$ ./executable &
$ disown
But none of them work if I send arguments to the executable:
$ nohup ./executable argument &
or
$ ./executable argument &
$ disown
I have tried combining it with shell here-strings and pipes, but that does not work either:
$ nohup ./executable <<<$'argument' &
or
$ ./executable <<<$'argument' &
$ disown
or
$ nohup echo -e "argument" | ./executable &
or
$ echo -e "argument" | ./executable &
$ disown
EDIT: The "./executable" program accepts any number of parameters, like "./executable arg1 arg2" etc... I think the problem is that "&" gets absorbed by "./executable" as a parameter. Also, it is written in Go, if that is of any help.
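For reference, a sketch of the usual form: the & is consumed by the shell and is never passed to the program, so arguments can simply follow the executable:
nohup ./executable arg1 arg2 >out.log 2>&1 &
disown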
Azkron (21 rep)
Apr 21, 2021, 09:43 PM • Last activity: May 23, 2025, 01:07 AM
3 votes
2 answers
1163 views
Unable to run flatpak independently of the script that started it
I've always successfully used an ampersand to start an application in the background from within a script. Such background applications seem to be detached from the script that started them, meaning that when I terminate the script, the applications keep running. I now have a script that first starts a number of applications, some of them flatpaks, and then starts an endless while loop to monitor my IP address to check that the VPN is still working.
#!/bin/bash
start application 1 &
start application 2 &
start flatpak application 1 &
start flatpak application 2 &

while true
do
   check ip address
   sleep 180
done
When I terminate the script with Ctrl+C, the non-flatpak applications keep running (as expected), but the flatpaks are terminated. However, this only happens if the script has this while loop: if I remove the while loop, then all applications keep running when I terminate the script, including the flatpaks. So far, I've tried: - calling the applications with the ampersand:
/usr/bin/flatpak run --command=de.manuel_kehl.go-for-it de.manuel_kehl.go-for-it &
- calling the applications with nohup:
nohup /usr/bin/flatpak run --command=de.manuel_kehl.go-for-it de.manuel_kehl.go-for-it &
- calling disown after starting the application
/usr/bin/flatpak run --command=de.manuel_kehl.go-for-it de.manuel_kehl.go-for-it &
disown
None of these approaches help. Can someone explain why this happens? And is there a solution? I would like to keep all applications, including the flatpaks, running when I terminate the script. Thanks. (Using bash on Linux Mint 19.3 MATE.)
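One thing worth trying, as a sketch only (flatpak apps also run under a per-application supervisor, so this may not be sufficient): start the flatpak in its own session, so the script's process group, and the SIGINT from Ctrl+C, can no longer reach it:
setsid /usr/bin/flatpak run --command=de.manuel_kehl.go-for-it de.manuel_kehl.go-for-it >/dev/null 2>&1 </dev/null &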
wayan (101 rep)
Sep 11, 2022, 09:57 AM • Last activity: May 15, 2025, 07:17 AM
3 votes
1 answer
2349 views
SSH fork kills connection
I am using a Linux script which has the task of forwarding control of the system to remote support. In this script one of the commands is an ssh port forward command that will forward the port of the Video Live Stream of a remote camera. The system with the remote camera is an unknown, and thus assumed to always be behind a firewall, and it also has a user who lacks the knowledge to port forward their router or acquire a dynamic DNS. To overcome this, the "CLIENT" system, or the camera computer, executes the command below: ssh -R 8089:dc-bb7925e7:8085 -p 2250 user@remoteserver.com -fNT which forwards the CLIENT port for the camera feed, 8085, to port 8089 on the remote support server. Remote support is supposed to be able to go to localhost:8089 and view the live stream. The problem is that this does not work. Once I insert the -f flag into the command, the command breaks and forwards nothing. Regardless of the flag, the problem is that when this ssh command executes, all other scripts and processes which are supposed to be running get put on hold because of the TTY, which does not allow the script to exit until the connection is broken. So I tried using -f to fork the ssh into the background. This does not work, as the port does not get forwarded, and I can not figure out why. What I need is for the port to be forwarded and then forgotten about while the connection remains open. It is important that remote support has control over ssh while the client system still operates normally. What am I doing wrong? If I do not use -fNT, then this functions normally, only all other scripts are not executed. This is a Debian system.
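For comparison, a sketch with the options placed before the destination, which is the form ssh documents; -f backgrounds ssh only after authentication, -N runs no remote command, and -T allocates no tty:
ssh -fNT -p 2250 -R 8089:dc-bb7925e7:8085 user@remoteserver.com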
RootWannaBe (131 rep)
Oct 19, 2014, 09:58 AM • Last activity: Apr 28, 2025, 02:05 AM
23 votes
5 answers
22097 views
Ctrl-C with two simultaneous commands in bash
I want to run two commands simultaneously in bash on a Linux machine. Therefore, in my ./execute.sh bash script I put:
command 1 &
command 2
echo "done"
However, when I want to stop the bash script and hit Ctrl+C, only the second command is stopped. The first command keeps running. How do I make sure that the complete bash script is stopped? Or in any case, how do I stop both commands? Because in this case, no matter how often I press Ctrl+C, the command keeps running and I am forced to close the terminal.
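A common pattern for this, sketched with illustrative command names: trap SIGINT and forward it to the script's whole process group, then wait for both children:
#!/bin/bash
trap 'kill 0' INT   # Ctrl+C now kills every process in this script's group
command1 &          # illustrative commands
command2 &
wait
echo "done"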
maero21 (333 rep)
Jan 1, 2014, 01:54 PM • Last activity: Mar 21, 2025, 11:25 AM
1 votes
1 answer
37 views
Process run with nohup gets killed on client_loop: send disconnect: Broken pipe
I have observed the following and want to understand why: First, I run a Node server that listens on a port on a remote server using:
nohup my-app &
Next there are two cases:
1. I log out of the remote server using exit. I call this a graceful logout.
2. I do not log out gracefully, but instead have a forceful logout due to my local machine going to sleep after an extended period of inactivity or something like that. In this case I see a message printed on the screen that says client_loop: send disconnect: Broken pipe
When I log back into the machine, in case 1 the process is still running, but in case 2 it is terminated. Why? And to add a further twist, cases 1 and 2 behave the same on Mac but not on Windows (using WSL2). I am very curious to understand what's going on here and why Windows (WSL2) and Mac behave differently.
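A sketch of the usual belt-and-braces form that survives both cases, assuming the app can simply be restarted this way: a new session, no controlling tty, and all three standard descriptors redirected:
setsid nohup my-app >app.log 2>&1 </dev/null &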
morpheus (135 rep)
Mar 8, 2025, 12:38 AM • Last activity: Mar 8, 2025, 06:58 PM
6 votes
2 answers
1722 views
bg command not sending process to background
After pausing the process with Ctrl+Z, I attempted to send it to the background with the bg command. Unfortunately, the process isn't sent to the background, and appears to still be running in the foreground. Then I do a Ctrl+Z again in order to pause it again. Unfortunately, the key combination is no longer responding, and the process is still ongoing. To make matters worse, the command is a for loop over many items. If I hit Ctrl+C, the job will resume at the next iteration, and so on until the last iteration. Running in tmux inside iTerm on macOS.
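A sketch of one workaround for the underlying issue (Ctrl+Z only stops the loop's current command; the loop itself belongs to the interactive shell and keeps going): run the loop in a subshell so it is a single suspendable job. The loop body here is illustrative:
( for f in *.dat; do process "$f"; done )
# Ctrl+Z now suspends the whole subshell; bg resumes the entire loop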
Faxopita (179 rep)
Apr 10, 2023, 11:47 PM • Last activity: Mar 7, 2025, 10:14 PM
2 votes
1 answer
2788 views
Job Control: How to save output of background job in a variable
Using Bash in OSX. My script has these 2 lines:
nfiles=$(rsync -auvh --stats --delete --progress --log-file="$SourceRoot/""CopyLog1.txt" "$SourceTx" "$Dest1Tx" | tee /dev/stderr | awk '/files transferred/{print $NF}') &
nfiles2=$(rsync -auvh --stats --delete --progress --log-file="$SourceRoot/""CopyLog2.txt" "$SourceTx" "$Dest2Tx" | tee /dev/stderr | awk '/files transferred/{print $NF}')
When I use the & after the first line (to run the two rsync commands in parallel), my later call to $nfiles returns nothing. Code:
osascript -e 'display notification "'$nfiles' files transferred to MASTER," & return & "'$nfiles2' transferred to BACKUP," & return & "Log Files Created" with title "Copy Complete"'
Can't figure out what's going on. I need the 2 rsyncs to run simultaneously.
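A sketch of one way around this (a backgrounded command substitution runs in a subshell, so the parent never sees the variable): write each result to a temporary file, wait, then read the files. rsync_cmd1 and rsync_cmd2 stand for the two pipelines from the question:
tmp1=$(mktemp); tmp2=$(mktemp)
rsync_cmd1 | awk '/files transferred/{print $NF}' >"$tmp1" &
rsync_cmd2 | awk '/files transferred/{print $NF}' >"$tmp2" &
wait
nfiles=$(<"$tmp1"); nfiles2=$(<"$tmp2")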
user192259 (51 rep)
Sep 30, 2016, 12:48 AM • Last activity: Mar 1, 2025, 02:01 PM
0 votes
0 answers
111 views
How to wait for the end of a non-child process?
I have here a not really well-behaved app, partially out of my control. Sometimes it stops, and I want to restart it with some extra options. So, if it exits, I want to start my script. The poor man's solution would be to check in a loop whether it is running. I am thinking of a better solution. If it were a child process, or if it had at least some console-only debug mode, that would be very simple. But it does not: it actually daemonizes itself into a dbus service, and it does that as a closed-source app... However, fortunately I have a shell environment to deal with it. But I really, really don't like polling the process to see if it is still running, even though that would be trivial. I want to *wait for the event that it has exited*. I repeat, I have no control over it; it is a closed-source tool, restarting itself as a dbus service. I can find it with a script, and then I could wait for its exit, but how? How can I wait for the exit of a non-child process?
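One common idiom, sketched with illustrative names: GNU tail can watch an arbitrary PID and return when that process exits (it does poll internally, but hides the loop):
pid=$(pgrep -f the-dbus-app)      # however the process is found
tail --pid="$pid" -f /dev/null    # blocks until that PID exits
./my-restart-script.sh            # illustrative restart script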
peterh (10448 rep)
Feb 24, 2025, 01:37 PM • Last activity: Feb 24, 2025, 08:23 PM
0 votes
2 answers
81 views
Why can't & and ; be used together (two processes started, first in the background)?
I tried this simple test, ping [SOME IP] &;ls, expecting the output of ping to overlap with the listing. Instead, I got an error: > bash: syntax error near unexpected token `;' Adding spaces does not help. If the semicolon is escaped, the first command starts, then an error: > ;: command not found It almost works to enclose the ping in parentheses, (ping [SOME IP] &);ls The ls runs to completion, then the ping starts. I could achieve that more easily by typing ls;ping ... Is it possible to start two processes together, when the first (or both) are in the background?
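For reference, a sketch of the working form: & is itself a command terminator, so the ; is simply dropped (the host name is illustrative):
ping some.host & ls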
Peter Bill (526 rep)
Feb 20, 2025, 09:31 AM • Last activity: Feb 20, 2025, 12:57 PM
1 votes
3 answers
797 views
How to disown a command in the busybox shell?
I'm trying to write a quick-and-dirty shell script daemon to run on a home router that has a busybox shell, which doesn't support disown. Is there any way to do either of the following? - Run a command like command & and then disown it once it's in the background. - Run a command "directly" in the background (i.e., not using &).
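A sketch of the classic double-fork idiom, which works in busybox ash (the daemon name is illustrative): start the command inside a subshell that exits immediately, so the child is reparented to init and never appears in the shell's job table:
( my_daemon >/dev/null 2>&1 </dev/null & )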
joshlf (395 rep)
Sep 29, 2023, 08:05 PM • Last activity: Feb 13, 2025, 08:27 AM
1 votes
1 answer
2208 views
What exactly does it mean to run a process in the "background"?
I want to understand a little bit better what a background process is. The question came to life as a result of reading this line of code:
/usr/sbin/rsyslogd -niNONE &
[Source](https://github.com/Mailu/Mailu/blob/c2d85ecc3282cdbc840d14ac33da7b5f27deddb3/core/postfix/start.py#L94) The documentation says: > -i pid file > Specify an alternative pid file instead of the default > one. This option must be used if multiple instances of > rsyslogd should run on a single machine. To disable > writing a pid file, use the reserved name "NONE" (all > upper case!), so "-iNONE". > > -n Avoid auto-backgrounding. This is needed especially if > the rsyslogd is started and controlled by init(8). [Source](https://man7.org/linux/man-pages/man8/rsyslogd.8.html) The ampersand & seems to be a request that the command be run in the background, see, for example, [here](https://unix.stackexchange.com/a/86253/70683) . If my understanding is correct, pid files [are used with daemons](https://unix.stackexchange.com/a/12816/70683) , that is, when a program is run in the background. So at face value it seems that the command in question first tells the program not to run in the background with -n, then specifies NONE for the pid file, to indicate it is not a daemon1, and then right after that specifies & to send it into the background. I cannot make a lot of sense of that. Is the background that the process would normally enter different from the background it is sent to by using &? From all I read, it seems that the only meaning of the background is that the shell is not blocked. In this respect, asking the process not to auto-background and then backgrounding it does not make a lot of sense. Is there something here I'm missing? What exactly is the background? (And who is responsible for deleting the pid file, while we are at it?) --- 1 - in a docker container context, where the question arose from, the existence of the pid file can cause problems when the container is stopped and then restarted. It is not clear to me what is responsible for deleting the pid files; some sources suggest that it's [the init system, such as systemd](https://unix.stackexchange.com/a/256130) , while others [imply that it's the program's responsibility](https://stackoverflow.com/a/688365/18625995) . However, if a process is killed with SIGKILL, the program might not have an opportunity to delete it, so a subsequent container re-start will fail because the pid file will already be there, but is expected not to be.
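To make the distinction concrete, a small interactive sketch: & only means the shell does not wait; the process keeps its session and controlling terminal, unlike a program that daemonizes itself:
sleep 300 &                      # shell prints a job number and PID and returns immediately
jobs                             # the job is still known to this shell
ps -o pid,ppid,tty,stat -p $!    # still has a controlling tty; a daemon would show "?"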
Andrew Savinykh (453 rep)
Aug 24, 2022, 04:59 AM • Last activity: Jan 30, 2025, 04:55 AM
0 votes
0 answers
86 views
How Do SSH-Launched Long-Running Background Jobs Detach Without nohup or disown?
When running a long-running command in the background over SSH from a non-interactive shell script, I noticed the process continues running on the remote machine **without** using nohup, disown, or similar tools. Remote Environment (SSH target):
- Linux 6.12.9
- OpenSSH 9.9p1, OpenSSL 3.3.2
- Login Shell: bash 5.2.37
- Also for non-interactive sessions (verified by ssh -T $HOST "echo \$SHELL")
- Distribution: NixOS 24.11
On the client side, I can execute:
# Closing outgoing FDs (stdout and stderr) important to end
# SSH session immediately (EOF). We also need a non-interactive
# session (-T).
ssh -T $HOST "long_running_command >/dev/null 2>/dev/null &"
to start a long running command on the remote without having to keep the SSH session alive. I expected that background jobs would terminate or receive SIGHUP when the SSH session ends. However, the process is automatically reparented to PID 1 (init) and keeps running. I can verify this using htop, ps, et al. Why does this work **without** nohup or disown?
- Why does it just work like that? Why is no SIGHUP or similar event being sent to long_running_command?
- Why does job control (&) work in bash in non-interactive mode?
- Who decides that the running background job will switch ownership to the init process? Bash? Is this documented?
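The reparenting can be observed directly; a sketch, with the <PID> placeholder to be filled in from the first command's output:
ssh -T "$HOST" 'sleep 600 >/dev/null 2>&1 & echo $!'
# after that ssh session has ended:
ssh -T "$HOST" 'ps -o pid,ppid,stat,comm -p <PID>'   # PPID now shows 1 (or a user systemd instance)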
phip1611 (101 rep)
Jan 17, 2025, 08:43 AM
0 votes
1 answer
5330 views
Suspend and then resume a process in a Python script - Linux
I am trying to see if there is a way I can suspend and then resume a process in a Python script. I get the process pid using os.getpid() and then I suspend the process using suspend(). Is there a way to resume the process without having to manually type fg in a shell? Here is my code:
#!/usr/bin/env python
import time
import psutil
import os

sm_pid = os.getpid()
p = psutil.Process(sm_pid)

print "Going to suspend"
p.suspend()

time.sleep(5)
p.resume()

print "process resumed"
hama_ROW (1 rep)
Nov 27, 2018, 12:45 AM • Last activity: Jan 16, 2025, 03:14 PM
2 votes
2 answers
102 views
Should launching background jobs cause a race condition?
I tested the following with both bash and dash, and it seems to reliably hang:
[ -p pipe ] || mkfifo pipe
i=0
while [ $i -lt 10 ]; do
    <pipe cat &
    : $(( i+=1 ))
done
echo x >pipe
wait
If we lower the limit from 10 to 1, it doesn't hang (at least not reliably). Sleeping keeps it from hanging. I think that the echo is happening before all background jobs have opened the pipe, so EOF is written to the pipe, closing the pipe and terminating _some_ background jobs. The late arrivals then get stuck waiting for an EOF that never comes. Is this expected behavior? It sure looks like a bug to me, but I'm not sure whether it's my bug or the shell's. Also: I know I can just write echo x | { }, and that would be neater, but I'm working around [some dash behavior I've already reported as a bug](https://lore.kernel.org/dash/CAFKqKCrEXCkyFTx8SqOHx=LHYyKpfa6scjcrMCxA=Hoo0p9yMA@mail.gmail.com/) . EDIT: The version below terminates consistently in bash, so it seems likely that it is _intended_ to be possible to launch a group of background processes and feed them from a single pipeline without doing anything special.
echo hello world | {
    i=0
    while [ $i -lt 10 ]; do
        cat &
        : $(( i+=1 ))
    done
}
wait
EDIT: opening the pipe in the parent (r/w mode to avoid [blocking](https://www.man7.org/linux/man-pages/man7/fifo.7.html)) and duplicating the resulting file descriptor for each background process does not seem to solve the issue and results in hangs even after sleeping.
[ -p pipe ] || mkfifo pipe
exec 3<>pipe
i=0
while [ $i -lt 10 ]; do
    <&3 cat &
    : $(( i+=1 ))
done
echo x >pipe
wait
Pablo Repetto (31 rep)
Dec 16, 2024, 02:27 PM • Last activity: Dec 20, 2024, 12:22 PM
1 votes
1 answer
862 views
How to place a qemu vm in the background (meaning it should run among the processes, but nothing should be shown on the screen)
I would like the qemu vm you see below to run among the processes, so I don't want to see any monitor, graphic, or terminal window; nothing should be shown on the screen.
/usr/local/bin/qemu-system-x86_64 -machine q35 \
-cpu kvm64,hv_relaxed,hv_time,hv_synic -m 1G -nographic \
-drive file=Debian-warp.img,format=raw -rtc base=localtime \
-device usb-ehci,id=usb,bus=pcie.0,addr=0x3 -device usb-tablet \
-device usb-kbd -smbios type=2 -nodefaults \
-netdev tap,id=mynet0,ifname=tap20,script=no,downscript=no \
-device e1000,netdev=mynet0,mac=52:55:00:d1:55:01 \
-device ich9-ahci,id=sata \
-drive if=pflash,format=raw,readonly=on,file=/usr/local/share/edk2-qemu/QEMU_UEFI_CODE-x86_64.fd \
-drive if=pflash,format=raw,file=/usr/local/share/edk2-qemu/QEMU_UEFI_VARS-x86_64.fd
I tried passing the parameter -nographic: it is able to put the vm in the background, but I'm not able to ping the IP number assigned to Debian, so something does not work within the vm. Instead, below you see what happens if I use the parameter -daemonize: (screenshot) It presents two problems: 1) it does not ping, and 2) I see that little window on the top left of the screen, which I don't want to see. The vm is based on Debian 12 and starts with the grub manager. It waits for 3 seconds and, if no keys are pressed, it boots the os. I've configured automatic login for the user, so when it is authenticated, the script below runs automatically:
function jumpto {
    label=$1
    cmd=$(sed -n "/$label:/{:a;n;p;ba};" $0 | grep -v ':$')
    eval "$cmd"
    exit
}

start=${1:-"start"}
jumpto $start

start:
warp-cli disconnect
OLD_IP="$(curl -s api.ipify.org)"
sudo iptables -A POSTROUTING -t nat -s 192.168.1.5 -j MASQUERADE
warp-cli connect
NEW_IP="$(curl -s api.ipify.org)"
echo Connected to Cloudflare Warp...
echo OLD IP is $OLD_IP , NEW IP is $NEW_IP

mid:
if [ "$OLD_IP" = "$NEW_IP" ]
then
    echo OLD IP is $OLD_IP , NEW IP is $NEW_IP : it does not work anymore, reconnecting...
    sleep 10
    jumpto foo
else
    echo OLD IP is $OLD_IP , NEW IP is $NEW_IP : it still works.
    sleep 10
fi
jumpto mid

foo:
warp-cli disconnect
OLD_IP="$(curl -s api.ipify.org)"
warp-cli connect
NEW_IP="$(curl -s api.ipify.org)"
echo OLD IP is $OLD_IP , NEW IP is $NEW_IP : it works again.
jumpto mid
If, instead of -nographic, I use -vga std, the vm works as expected, but this is not what I want. I want the vm to run hidden.
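A sketch of the flags that usually matter here (shown alone, to be merged into the full command above): -display none creates no window at all but, unlike -nographic, does not re-route the serial console, and -daemonize detaches QEMU from the terminal:
/usr/local/bin/qemu-system-x86_64 -machine q35 -m 1G \
    -display none -daemonize \
    -drive file=Debian-warp.img,format=raw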
Marietto (579 rep)
May 14, 2024, 08:17 PM • Last activity: Dec 18, 2024, 05:40 PM
467 votes
12 answers
690624 views
How can I run a command which will survive terminal close?
Sometimes I want to start a process and forget about it. If I start it from the command line, like this:
redshift
I can't close the terminal, or it will kill the process. Can I run a command in such a way that I can close the terminal without killing the process?
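Two common sketches of this, assuming the process does not need the terminal afterwards:
nohup redshift >/dev/null 2>&1 &
# or, detaching from the session entirely:
setsid redshift >/dev/null 2>&1 </dev/null &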
Matthew (5637 rep)
Nov 12, 2010, 10:57 PM • Last activity: Dec 17, 2024, 11:00 AM
0 votes
1 answer
94 views
Is it possible that a systemctl-managed process would not run without me being logged in to the server?
I am very new to linux and I need some help. I am trying to put blockchain node processes in the background so that they run without my being logged in. I am using systemctl to run my process in the background. Here are my .service files. (Root dir, command, identifiers and user were redacted from the files; they are not being imported.) Also, the cmds work in the "foreground", that being when I simply run them in the terminal. 1st:
[Unit]
Description=MY_DESC
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
User=USER
WorkingDirectory=ROOT_DIR
ExecStart=MY_COOL_CMD
StandardOutput=journal
StandardError=journal
SyslogIdentifier=MY_ID
StartLimitInterval=0
LimitNOFILE=65536
LimitNPROC=65536

[Install]
WantedBy=multi-user.target
2nd service file
[Unit]
Description=MY_DESC
After=network.target 1st-service.service

[Service]
Type=simple
Restart=always
RestartSec=1
User=USER
WorkingDirectory=HOME_DIR
Environment="DAEMON_NAME=CMD_DIR"
Environment="DAEMON_HOME=HOME_DIR"
Environment="DAEMON_ALLOW_DOWNLOAD_BINARIES=false"
Environment="DAEMON_RESTART_AFTER_UPGRADE=true"
Environment="UNSAFE_SKIP_BACKUP=true"
ExecStart=MY_COOL_CMD
StandardOutput=journal
StandardError=journal
SyslogIdentifier=MY_ID
StartLimitInterval=0
LimitNOFILE=65536
LimitNPROC=65536

[Install]
WantedBy=multi-user.target
I enabled both files successfully. They seem to be running when I am logged in, but not when I am away. There are no errors, but when I check the logs, the block numbers did not rack up enough (last time it had been a week since I logged out, and the block height had jumped up by only 100,000 blocks, which is not a lot). Also, when I try to check the logs in the past (between 6 and 7 hours ago), the logs show no entries... Here is what I get when I ask for status. 1st:
my.service - MY_DESC
     Loaded: loaded (/etc/systemd/system/my.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-11-29 22:57:00 CET; 1 week 0 days ago
   Main PID: 1940037 (geth)
      Tasks: 23 (limit: 19099)
     Memory: 9.8G
        CPU: 4d 21h 2min 16.071s
     CGroup: /system.slice/my.service
             └─1940037 MY_COOL_CMD
2nd file
my.service - my
     Loaded: loaded (/etc/systemd/system/my.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2024-11-29 23:02:23 CET; 1 week 0 days ago
   Main PID: 1940098 (cosmovisor)
      Tasks: 28 (limit: 19099)
     Memory: 4.7G
        CPU: 3d 4h 49min 4.542s
     CGroup: /system.slice/my.service
             ├─1940098 CMD_HERE
             └─1940111 CMD_HERE
That is when I am logged in. If there is anything else that you need to see to give me some guidance in this matter please let me know. Thank you very much for your time and expertise!
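One way to check whether the units really logged nothing in a given window, as a sketch using the unit name from the status output:
journalctl -u my.service --since "7 hours ago" --until "6 hours ago"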
tonymasek (9 rep)
Dec 7, 2024, 08:02 PM • Last activity: Dec 8, 2024, 07:17 AM