
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
2114 views
Apache server sometimes gets stuck for minutes, with requests getting backlogged and waiting too long to be processed
I've got a production server with **Apache 2.4.38** on **Debian 10**, and sometimes the web server stops functioning properly and doesn't immediately respond to the HTTP requests it receives (requests to all virtual hosts on it are completely unresponsive, no matter what they reverse proxy to). It either fixes itself immediately after a restart, or, after staying like this for a while (seconds or even minutes), it suddenly starts sending a flood of HTTP responses all at once.

CPU and RAM usage seem to be fine, so it's definitely not that. I don't know what exactly is going on or why it's doing this. I've also changed the mpm_event.conf settings; they are currently set to this:
        StartServers             2
        ServerLimit              100
        MinSpareThreads          25
        MaxSpareThreads          75
        ThreadLimit              128
        ThreadsPerChild          25
        MaxRequestWorkers        400
        MaxConnectionsPerChild   5000
There are some errors I've seen in the Apache error log though:
[Tue Mar 22 19:53:38.339703 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 29595 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339777 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 26190 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339825 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 27903 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339889 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 16907 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.339933 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 26880 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340000 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 15384 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340041 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 24971 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340091 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 9780 still did not exit, sending a SIGKILL
[Tue Mar 22 19:53:38.340130 2022] [core:error] [pid 3375:tid 140244229465216] AH00046: child process 26317 still did not exit, sending a SIGKILL
What settings can I change to fix this issue?
BitMonster (35 rep)
Mar 22, 2022, 06:32 PM • Last activity: Aug 2, 2025, 01:01 AM
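A quick arithmetic sanity check on the mpm_event values quoted in the question can be scripted. This is a minimal sketch with the question's values hard-coded; the rule it checks is that MaxRequestWorkers can never exceed ServerLimit times ThreadsPerChild, the hard ceiling on simultaneous worker threads in mpm_event:

```shell
# Values copied from the mpm_event.conf in the question above.
SERVER_LIMIT=100
THREADS_PER_CHILD=25
MAX_REQUEST_WORKERS=400

# mpm_event can never run more than ServerLimit * ThreadsPerChild threads,
# so a MaxRequestWorkers above that ceiling gets capped.
capacity=$((SERVER_LIMIT * THREADS_PER_CHILD))
if [ "$MAX_REQUEST_WORKERS" -le "$capacity" ]; then
    echo "OK: MaxRequestWorkers $MAX_REQUEST_WORKERS fits under the ceiling of $capacity"
else
    echo "WARNING: MaxRequestWorkers $MAX_REQUEST_WORKERS exceeds the ceiling of $capacity"
fi
```

The settings above pass this check, which hints the stalls may not be a capacity problem at all: AH00046 is typically logged when child processes refuse to exit during a restart or stop, so something periodically restarting Apache (log rotation is a common trigger) is worth investigating.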
1 votes
0 answers
18 views
Process Maps in s390x linux systems
I am working on a debugger for Linux on s390x and have the whole disassembler etc. set up for reading the ELF file. The debugger runs against the process using the base address taken from the process maps. However, on s390x the process map doesn't have a read-only mapping containing only the ELF headers, and the first mapping doesn't start with the ELF magic bytes, unlike on linux x86_64 and linux arm64. This affects my debugger, since the addresses are set according to this. Also, to set up breakpoints, ptrace provides `#define S390_BREAKPOINT_U16 ((__u16)0x0001)`. When I write this over the opcode, the breakpoint is hit correctly, but when I restore the original opcode, the opcode 4 bytes ahead gets placed at this position for some reason. I think the missing ELF header magic bytes most probably mess things up; even if I set the breakpoint at the start of a function like main, SIGILL is sometimes hit.
well-mannered-goat (31 rep)
Jul 31, 2025, 03:35 PM
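For the base-address part of the question above, a minimal sketch of reading the lowest mapping of a process's main executable from /proc/&lt;pid&gt;/maps; the current shell ($$) stands in for the debuggee's PID, and on s390x that first mapping may not carry the ELF magic, as the asker observes:

```shell
# Find the base address of a process's main executable from its maps.
# $$ (this shell) stands in for the debuggee's PID.
exe=$(readlink "/proc/$$/exe")
base=$(awk -v exe="$exe" '$NF == exe { split($1, a, "-"); print a[1]; exit }' "/proc/$$/maps")
echo "base of $exe: 0x$base"
```

For PIE binaries this base must be added to the ELF virtual addresses; comparing the first bytes at that address against \x7fELF is a cheap cross-check before arming breakpoints.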
0 votes
1 answer
83 views
From one process, is there any way to know whether another instance of the same program is running?
I am designing a feature that redirects the log file if multiple processes of this program are running. I guess this requires me to somehow find out whether another instance of the same program (executable?) is still running. Is this achievable from user mode? How?
PkDrew (111 rep)
Jul 29, 2025, 01:35 AM • Last activity: Jul 29, 2025, 09:50 AM
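One common user-mode pattern for the question above is to take an advisory lock on a well-known file with flock(1): the first instance gets the lock, and any later instance fails immediately and can switch its log file. A minimal sketch; the lock path is a hypothetical choice, one fixed path per program:

```shell
# Hypothetical per-program lock path.
LOCKFILE=/tmp/myprog.lock

# Open the lock file on fd 9 and try a non-blocking exclusive lock.
exec 9>"$LOCKFILE"
if flock -n 9; then
    echo "no other instance running: keeping the default log file"
else
    echo "another instance holds the lock: redirecting the log"
fi
```

The kernel releases the lock automatically when the process exits, so there is no stale-PID-file problem to clean up.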
653 votes
16 answers
820944 views
What if 'kill -9' does not work?
I have a process I can't kill with `kill -9 `. What's the problem in such a case, especially since I am the owner of that process. I thought nothing could evade that `kill` option.
tshepang (67482 rep)
Jan 10, 2011, 07:51 PM • Last activity: Jul 28, 2025, 03:20 PM
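The usual culprits for the question above are processes in uninterruptible sleep (state D, stuck in a kernel call, often on I/O such as NFS) and zombies (state Z, already dead and waiting for their parent to reap them); SIGKILL resolves neither. A quick sketch to spot both:

```shell
# List processes whose state SIGKILL cannot fix:
# D = uninterruptible sleep (stuck in the kernel), Z = zombie (already dead).
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^[DZ]/'
```

A D-state process goes away only when the kernel operation completes (or the machine reboots); a zombie goes away only when its parent calls wait, so killing or restarting the parent is the remedy there.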
33 votes
12 answers
29786 views
Process descendants
I'm trying to build a process container. The container will trigger other programs, for example a bash script that launches background tasks with '&'. The important feature I'm after is this: when I kill the container, everything that has been spawned under it should be killed. Not just direct children, but their descendants too.

When I started this project, I mistakenly believed that when you killed a process its children were automatically killed too. I've sought advice from people who had the same incorrect idea. While it's possible to catch a signal and pass the kill on to children, that's not what I'm looking for here.

I believe what I want to be achievable, because when you close an xterm, anything that was running within it is killed unless it was nohup'd, including orphaned processes. That's what I'm looking to recreate. I have an idea that what I'm looking for involves Unix sessions. If there was a reliable way to identify all the descendants of a process, it would be useful to be able to send them arbitrary signals too, e.g. SIGUSR1.
Craig Turner (430 rep)
Jun 11, 2011, 10:59 AM • Last activity: Jul 24, 2025, 05:07 PM
2 votes
2 answers
4389 views
How to find out which core a thread is running on?
Let's say we have a CPU-intensive application called `multi-threaded-application.out` running on top of Ubuntu with a PID of 10000. It has 4 threads with TIDs 10001, 10002, 10003, and 10004. I want to know, at any given time, which core each of these threads is being scheduled on. I tried `/proc/<pid>/task/<tid>/status`, but I couldn't find any information regarding the core ID that is responsible for running the given thread. This question is somewhat related to this one. Any help would be much appreciated.
Michel Gokan Khan (133 rep)
Sep 5, 2020, 05:20 PM • Last activity: Jul 14, 2025, 06:02 PM
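For the question above, ps already exposes this: the PSR column is the processor a thread last ran on, and -L lists one row per thread. A sketch against the current shell; swap $$ for the application's PID:

```shell
# One row per thread: thread ID, last CPU it ran on, and command name.
ps -L -o tid,psr,comm -p $$
```

The same number is field 39 (`processor`) of /proc/&lt;pid&gt;/task/&lt;tid&gt;/stat; note it is the CPU the thread last ran on, which can change at any moment unless the thread is pinned with taskset.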
42 votes
6 answers
11886 views
Does pressing ctrl-c several times make the running program close more quickly?
I often start to read a huge file and then want to quit after a while, but there is a lag from pressing Ctrl + C to the program stops. Is there a chance of shortening the lag by pressing the Ctrl + C key several times? Or am I wasting my keypresses?
The Unfun Cat (3451 rep)
Jan 15, 2014, 07:33 AM • Last activity: Jul 8, 2025, 10:00 AM
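Each press is simply one more SIGINT delivered to the foreground process group: it does not speed up whatever cleanup the program is already doing, although a program may choose to exit harder on a second signal. The per-press delivery can be seen with a trap counter; a toy sketch, not the asker's program:

```shell
# Count delivered SIGINTs: every "press" is one more signal, nothing else.
count=0
trap 'count=$((count + 1))' INT

kill -s INT $$     # simulate the first Ctrl+C
kill -s INT $$     # ...and a second one
echo "received $count SIGINTs"
```

So the extra keypresses are not wasted (each one is delivered), but unless the program reacts specially to repeats, they do not shorten the lag.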
2 votes
4 answers
14126 views
Find the parent of a process
I am trying to write a script to help with computer security. I am trying to look for open ports, find the PID, and find what called it. I have it working, where my output looks something like this:

    IPV4 - 1234 - 2566/nc
    Running from: /bin/nc.openbsd
    Command run: nc -l 1234

I was able to get those values from netstat, /proc/$PID/exe and /proc/$PID/cmdline. However, in the nature of looking for backdoors, there may be a script on my computer somewhere that calls nc. Is it possible, from the PID of nc, to find the original script's location? I've tried looking at the other files in /proc/$PID/* to no avail. Say in /etc/rc.local I put the line `nc -l 1234`: could I get something that would tell me that the nc command was opened by /etc/rc.local?
Connor Quick (27 rep)
Nov 24, 2014, 05:52 PM • Last activity: Jul 6, 2025, 01:06 PM
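The immediate answer to the question above sits in the PPid field of /proc/$PID/status (or ps -o ppid=); walking up that chain shows who launched nc. The caveat is that if the launching script has already exited, the process has been reparented to PID 1 and the trail is cold. A sketch using the current shell as the stand-in PID:

```shell
# Who started this process? Read the parent PID, then its command line.
pid=$$
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
echo "parent of $pid is $ppid: $(ps -o args= -p "$ppid")"
```

For the /etc/rc.local case specifically, the backgrounded nc usually outlives the rc script, so auditd (or forkstat) logging execs at launch time is the more reliable tool than after-the-fact inspection.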
5 votes
2 answers
1887 views
Measuring peak memory usage of many processes
I have a bash script that calls various other scripts, one of which has a bunch of commands that launch scripts inside a screen session like this:

    screen -S $SESSION_NAME -p $f -X stuff "$CMD\n"

Will running my top script with /usr/bin/time -v capture the peak memory usage of everything? I want to run this script as a cron job, but I need to know how much memory it will take before I cause problems for other users on the machine. Thanks
Matt Feldman (51 rep)
Oct 10, 2016, 05:33 PM • Last activity: Jul 2, 2025, 08:44 AM
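Most likely not: /usr/bin/time -v reports the maximum resident set size only for the command it launches and the descendants that command waits for. Scripts stuffed into an already-running screen session become children of the screen server, not of time, so their memory is invisible to it. A per-process peak can instead be read from the VmHWM (high-water mark) line of each /proc/&lt;pid&gt;/status; a sketch, shown for the current shell:

```shell
# Peak resident set size of one process since it started, from /proc.
# For the screen-launched scripts, read /proc/<their pid>/status instead.
grep VmHWM "/proc/$$/status"
```

To cover a whole tree regardless of parentage, running the job inside one cgroup and reading its memory.peak (cgroup v2) is the more robust route.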
55 votes
1 answer
65070 views
Why is most of the disk IO attributed to jbd2 and not to the process that is actually using the IO?
When monitoring disk IO, most of the IO is attributed to jbd2, while the original process that caused the high IO is attributed a much lower IO percentage. Why? Here's iotop's example output (other processes with IO < 1% omitted): [iotop screenshot omitted]
Sparkler (1109 rep)
Feb 8, 2017, 11:24 PM • Last activity: Jun 24, 2025, 08:01 AM
2 votes
2 answers
2658 views
Killing all python scripts except grep process and a specific python script
How can I run a command in bash to kill all Python scripts except a script called test.py and the grep process itself, in case we are using something like `ps -ef | grep`? I think I can use something like `pgrep python` to avoid matching the grep process, but how do I also exclude the test.py script? I know there is an option to do `grep -v`; is there an option to do `pgrep -v`?

**Clarification**: "except grep process" means that when we do, for example, `ps -ef | grep test1.py`, we also get the PID of the grep that produced this result. I don't want to try to kill it, as that process no longer exists by the time the results are shown.
user12345 (21 rep)
Sep 15, 2016, 03:13 PM • Last activity: Jun 23, 2025, 01:02 PM
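pgrep and pkill never match their own process (unlike ps | grep), so the grep-PID half of the question above disappears by itself; only test.py needs excluding. A dry-run sketch that prints instead of killing; swap the echo for kill once the list looks right:

```shell
# List python processes, skipping any whose command line mentions test.py.
for pid in $(pgrep -f python); do
    cmdline=$(tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null)
    case $cmdline in
        *test.py*) continue ;;                 # spare test.py
        *) echo "would kill $pid: $cmdline" ;;
    esac
done
```

Matching the full command line with -f is what lets the script name be inspected; plain pgrep python only sees the process name.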
3 votes
1 answer
2143 views
How do I find what process is running in a particular GNU screen window?
## Problem I need to find what process is running on a particular window in screen (in a reasonable amount of time). ## Scenario I need to use the Session Name and Window Title to find the process running therein. It needs to not be super slow. Also potentially noteworthy: I'm using byobu as a wrapp...
## Problem

I need to find what process is running on a particular window in screen (in a reasonable amount of time).

## Scenario

I need to use the Session Name and Window Title to find the process running therein. It needs to not be super slow. Also potentially noteworthy: I'm using byobu as a wrapper for screen.

## What I've tried

* Searching the internet
* Reading the screen man page (not for the faint of heart). *(Okay, I didn't read all of it, but I did most of the relevant sections and searched it very thoroughly for anything that might be useful.)*
  - What I learned:
    + The only way to gain the information I might need from screen (by calling screen) is through the use of its command line flags.
      * -Q will allow you to query certain commands, but none of these provided everything that I need. The one that I'm using returns the window number.
        - number - what I'm using to get the window number
        - windowlist - allows you to get a custom-formatted string of information, but the session PID is not one of the things you can ask for
        - echo, info, lastmsg, select, time, title are the other ones, and none of these looked useful
      * -ls lists the active sessions. It prepends the PID to the session name, so this is how I'm currently getting the session PID.
    + Once I have the PID of the shell running in a specific window, I can check its environment variables for WINDOW. This variable is set to the window number. This is what I'm using to match the process to the window.
    + There is no single command that will allow me to return the session PID and a map of the window titles to window numbers. Also, I could find no way to deterministically find the session ID and the window-title-to-window-number map outside of calling screen.
* Trial and error / digging through environment variables
* Writing a script

## My Script

I wrote a script that seems to successfully solve the problem, but it takes a little over 0.75 seconds to run. This is far too long for what I need done, but more importantly, far too long when a server is waiting for its completion to send the response to an HTML request. Here is the script:
#!/bin/bash
# Accept the name of a GNU/screen session & window and return the process running in its shell
SessionName=$1
TabName=$2

# ====== Averages 0.370 seconds ======
# This finds the window number given the session name and window title
# The screen command queries screen for the window number of the specified 
# window title in the specified session.
# Example output of screen command: 1 (Main)
# Example output after cut command: 1
TargetTabNum=$(screen -S $SessionName -p $TabName -Q number | cut -d ' ' -f1)

# ====== Averages 0.370 seconds ======
# This finds the session PID given the session name.
# The screen command prints the list of session IDs
# Example output of screen command:
#     There is a screen on:
#             29676.byobu     (12/09/2019 10:23:19 AM)        (Attached)
#     1 Socket in /run/screen/S-{username here}.
# Example output after sed command: 29676
SessionPID=$(screen -ls | sed -n "s/\s*\([0-9]*\)\.$SessionName\t.*/\1/p")

# ====== The rest averages 0.025 seconds ======
# This gets all the processes that have the session as a parent,
# loops through them checking the WINDOW environment variable for
# each until it finds the one that matches the window title, and
# then finds the process with that process as a parent and prints its
# command and arguments (or null if there are no matching processes)
ProcessArray=( $(ps -o pid --ppid $SessionPID --no-headers) )
for i in "${ProcessArray[@]}"
do
    ProcTabNum=$(tr '\0' '\n' < "/proc/$i/environ" | sed -n 's/^WINDOW=//p')
    if [ "$ProcTabNum" = "$TargetTabNum" ]; then
        ps -o args= --ppid "$i"
        exit 0
    fi
done

echo "null" >&2
exit 1
As you can see, the problem commands are the two screen commands. I can get rid of one of them by searching for a screen process launched with the session name, but this feels kind of flaky and I'm not sure it would be deterministic:
SessionPID=$(ps -eo pid,command --no-headers | grep -i "[0-9]* screen.*\-s $SessionName " | grep -v grep | cut -d ' ' -f1)
## Goal

I would like to have a fast, reliable way to determine the process currently running in a specific screen window. I feel like I'm just missing something, so I would be very grateful if one of you spots it! (I'm still fairly new to StackExchange, so any feedback on my question is welcome!)
UrsineRaven (118 rep)
Dec 10, 2019, 07:19 PM • Last activity: Jun 21, 2025, 07:08 PM
1 votes
2 answers
2426 views
How to forcefully close a new terminal from a script, when Profile Preferences -> Command -> "When command exits" is set to "Hold the terminal open"
I'm trying to open two new terminals from a script and run .sh files in them, but after all the commands finish I want one of the terminals to be closed. I'm using the `gnome-terminal -e "sh patterns.sh";` command to open a new terminal and run the script.
MayankD (43 rep)
Jan 18, 2018, 12:21 PM • Last activity: Jun 20, 2025, 10:07 AM
4 votes
0 answers
60 views
Recovering text of Unsaved document from process memory? (frozen Xed window - process still "running" but in Sleeping status)
The window of my text editor Xed froze with unsaved documents just as I was doing 'File'->'Save as...' to save them... [How ironic.] Since the process still exists, I am trying to recover the text of those documents from the process memory (/proc/$pid/mem). [There was quite some text, thus the effort.] I first tried the following bash script, based on `dd` and taken from this page:
#!/bin/bash

if [ -z "$1" ]; then
  echo "Usage: $0 <pid>"
  exit 1
fi
if [ ! -d "/proc/$1" ]; then
  echo "PID $1 does not exist"
  exit 1
fi

while read -r line; do
  mem_range="$(echo "$line" | awk '{print $1}')"
  perms="$(echo "$line" | awk '{print $2}')"

  if [[ "$perms" == *"r"* ]]; then
    start_addr="$(echo "$mem_range" | cut -d'-' -f1)"
    end_addr="$(echo "$mem_range" | cut -d'-' -f2)"

    echo "Reading memory range $mem_range..."
    dd if="/proc/$1/mem" of="/dev/stdout" bs=1 skip="$((16#$start_addr))" count="$((16#$end_addr - 16#$start_addr))" 2>/dev/null
  fi
done < "/proc/$1/maps"
The script outputs some data; however, I am dubious about the approach because:

- I get this error/warning message due to the `skip` part: `dd: /proc/4063214/mem: cannot skip to specified offset`, and I have then read somewhere that since /proc/$pid/mem is a virtual file it cannot be skipped.
- The first comment in this post says that `dd` cannot be used, however... (i) the comment is quite ancient by now, so I am wondering if it is still valid, (ii) despite the `cannot skip to specified offset` warning message from `dd`, it appears that shifting the offset value shifts the output accordingly, and (iii) with `dd` I recover the same data as with the approach mentioned in this other answer. So, should `dd` be working in the end?
- Still under the same post, someone mentioned the `gcore` command, which can generate a 'core file' of a process. However, I am not sure what a 'core file' is and if this is what I need.

My questions are:

- Is using the `dd` command as in the script above a valid path in the end?
- Should I rather stick to the Python-based approach proposed in this post mentioned above? (Or some other method like the `gcore` command?)
- Would there be some alternative path to my issue using other files from the '/proc/$pid' folder of Xed?
The Quark (402 rep)
Jun 19, 2025, 01:47 PM • Last activity: Jun 19, 2025, 03:25 PM
0 votes
2 answers
2340 views
Monitor process pid for change
I have a service that is always supposed to be running. I’d like to know when the service ever stops or gets restarted. I’ve thought about referencing the service’s PID. If it gets restarted, it would get a new pid. So I would like to send an alert or email whenever the pid changes. What’s the least...
I have a service that is always supposed to be running. I’d like to know when the service ever stops or gets restarted. I’ve thought about referencing the service’s PID. If it gets restarted, it would get a new pid. So I would like to send an alert or email whenever the pid changes. What’s the least intrusive way to do this? Right now I have a cron job writing the pid to a file every 5 minutes. Is there a Linux tool that can monitor this file for pid changes? Or should I have some other thing like a Python script running outside somewhere that can pull this file and monitor that way?
Vince (1 rep)
Mar 28, 2019, 11:13 PM • Last activity: Jun 19, 2025, 10:04 AM
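A least-intrusive sketch of the cron approach in the question above: compare the current PID against the one recorded on the previous run and alert only on change. The service name and state-file path here are hypothetical placeholders; on a systemd machine, a unit with OnFailure= or watching the journal for restarts avoids polling entirely:

```shell
SERVICE=cron                   # hypothetical service name
STATE=/tmp/last_pid            # hypothetical state file

current=$(pgrep -o -x "$SERVICE" || echo none)
previous=$(cat "$STATE" 2>/dev/null || echo unset)
echo "$current" > "$STATE"

if [ "$current" != "$previous" ]; then
    echo "PID changed: $previous -> $current"   # hook your mail/alert here
else
    echo "PID unchanged: $current"
fi
```

pgrep -o picks the oldest matching process (the original daemon rather than a fresh worker), which keeps the comparison stable across forks.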
0 votes
2 answers
79 views
How do I prevent script continuation when the process run queue is full?
Environment: shell is BusyBox bash 3.2 running in what started out as Ubuntu Server many years ago, but has since been tweaked a great deal by the manufacturer of this particular box to become a custom embedded OS.

I've encountered a problem that appears to be related to process queuing. I'm hoping someone can advise if there's a better solution than the one described below. I have a bash script that launches several daemons consecutively and in the foreground (no background launches, and job control is disabled). After each launch, I grab the return code $? and check it to ensure there were no issues. Towards the end of this list, (at somewhat random times) launch commands are run, and the following command runs immediately afterward (as if I had just launched the previous one in the background with &). After some investigation, it seems the daemon launching command was received and is queued. The process state will either be sleeping or runnable (and eventually running). **I need my script to wait for each launch to complete before continuing.** After locating the process ID by grepping ps, I tried `wait $pid` but this failed to do so. The usual $! is empty. I've now constructed a loop that watches for the launch process ID directory in /proc to disappear; when it does, it's safe to assume the launch process completed. Is there a way to force my script to wait for each command to finish before proceeding? Has anyone encountered this behaviour before?
OneCheapDrunk (43 rep)
Jun 7, 2025, 08:30 AM • Last activity: Jun 15, 2025, 05:01 AM
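Two details are worth folding into the /proc-watching loop the asker already built: `wait` only works on children of the shell that calls it, which is why a PID found by grepping ps could not be waited on; and a zombie keeps its /proc/&lt;pid&gt; directory until reaped, so checking the state field avoids a loop that never ends. A sketch under those assumptions:

```shell
# True while the PID exists and is not a zombie (field 3 of /proc/<pid>/stat).
still_running() {
    state=$(awk '{print $3}' "/proc/$1/stat" 2>/dev/null)
    [ -n "$state" ] && [ "$state" != "Z" ]
}

sh -c 'sleep 0.3' &
pid=$!
while still_running "$pid"; do
    sleep 0.1
done
echo "launch $pid finished; safe to continue"
```

Note that a daemon which double-forks exits its first process quickly by design, so "launcher exited" and "daemon ready" are different events; polling for the daemon's socket or PID file is the readiness check.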
31 votes
2 answers
21611 views
iotop showing 1.5 MB/s of disk write, but all programs have 0.00 B/s
I don't understand iotop's output: it shows ~1.5 MB/s of disk write (top right), but all programs show 0.00 B/s. Why? [iotop screenshot omitted] The video was taken as I was deleting the contents of a folder with a few million files [using](https://unix.stackexchange.com/a/79656/16704) `perl -e 'for(<*>){((stat)[9]<(unlink))}'`, on Kubuntu 14.04.3 LTS x64. iotop was launched using `sudo iotop`.
Franck Dernoncourt (5533 rep)
Dec 8, 2015, 10:17 PM • Last activity: Jun 9, 2025, 06:06 PM
0 votes
1 answer
41 views
Is there any advantage to changing process priorities using a kernel module instead of nice / chrt?
I'm working on a project where I want to study the impact of process priority on system behavior. I know that tools like nice, renice, and chrt can change the priority or scheduling policy (e.g., SCHED_FIFO, SCHED_RR, etc.) from user space using system calls. However, I'm wondering: is there any technical or practical advantage to adjusting process priority using a kernel module instead of via user-space tools like nice or chrt? Have you encountered cases where a kernel module offered more control or precision in setting scheduling parameters than user-space methods? Any insights or examples would be appreciated!

**Edit**: More specifically: can setting the priority directly in the kernel (e.g., during process creation or in a module) reduce the chance of early interruptions or scheduling delays? Is there any behavioral or performance gain from assigning SCHED_FIFO 99 at the earliest possible point, compared to launching the process with `chrt -f 99`? I'm working in a forensics-related context where I want the memory acquisition process to be as undisturbed and deterministic as possible.
RustySyntax (1 rep)
Jun 8, 2025, 08:58 PM • Last activity: Jun 9, 2025, 09:18 AM
7 votes
2 answers
397 views
How can I find the location of PRI in /proc?
I have sshd with a PID of 1957:

    mohsen@debian:~$ ps ax -o pid,nice,pri,cmd | grep 1957
     1957  -2  21 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups

According to the above, my nice number is -2 and my pri number is 21. Looking in /proc/1957, I can't find either the nice number or the pri number:

    root@debian:~# cd /proc/1957
    root@debian:/proc/1957# cat sched
    sshd (1957, #threads: 1)
    -------------------------------------------------------------------
    se.exec_start                :     211942985.934983
    se.vruntime                  :            31.031644
    se.sum_exec_runtime          :            23.385935
    se.nr_migrations             :                   14
    nr_switches                  :                   57
    nr_voluntary_switches        :                   18
    nr_involuntary_switches      :                   39
    se.load.weight               :              1624064
    se.avg.load_sum              :                 4366
    se.avg.runnable_sum          :              4470824
    se.avg.util_sum              :              1557504
    se.avg.load_avg              :                  147
    se.avg.runnable_avg          :                   95
    se.avg.util_avg              :                   33
    se.avg.last_update_time     :       211909716609024
    se.avg.util_est              :                   38
    policy                       :                    0
    prio                         :                  118
    clock-delta                  :                   89
    mm->numa_scan_seq            :                    0
    numa_pages_migrated          :                    0
    numa_preferred_nid           :                   -1
    total_numa_faults            :                    0
    current_node=0, numa_group_id=0
    numa_faults node=0 task_private=0 task_shared=0 group_private=0 group_shared=0

Where in /proc are the pri and nice numbers stored?
PersianGulf (11308 rep)
Jun 5, 2025, 05:42 AM • Last activity: Jun 5, 2025, 10:29 PM
1 vote
1 answer
5042 views
nginx stop is not working and nginx is creating new process after killing processes
**nginx version: nginx/1.8.0**

I am trying to stop nginx with the command /etc/init.d/nginx stop; however, it does not return any success message. Then I tried to view the nginx processes with `pidof nginx`, and it returns the following PIDs: 58058 58057. ***My first query is: why is nginx not stopping?***

Another thing I tried is to kill the processes, so for the above-mentioned **PIDs** I ran kill 58058 & kill 58057. The processes are killed, but surprisingly new processes are created automatically. When I checked again with pidof nginx, this time it returned 2 more new processes: 58763 58762. ***My second query is: how are these processes being created automatically?***

I know the following query is off topic; however, I also want to make changes to the configuration file under sites-available. ***Is there any way the config file changes can be applied without restarting the nginx server?*** (This is why I am restarting nginx.) As we generally do with the nginx.conf file, with service nginx reload or /etc/init.d/nginx reload.

My configuration files (pastebin links) are the following: 1. /etc/init/nginx.conf 2. /etc/init.d/nginx 3. /etc/nginx/nginx.conf 4. php5/fpm/pool.d/www.conf

    root@BS-Web-02:/var/run# cat nginx.pid
    58762
    root@BS-Web-02:/var/run# pidof nginx
    58763 58762
    root@BS-Web-02:/var/run# kill 58762
    root@BS-Web-02:/var/run# pidof nginx
    3809 3808
    root@BS-Web-02:/var/run# cat nginx.pid
    3808

I tried the following solutions, but they didn't work: 1. Why doesn't stopping the nginx server kill the processes associated with it? 2. Not able to stop nginx server

**P.S. I am using Varnish on port 80 and nginx on 8080**
Sukhjinder Singh (111 rep)
Sep 9, 2015, 05:36 AM • Last activity: Jun 4, 2025, 07:00 PM