Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
2 answers
2210 views
Nginx and Munin via CGI, no CSS, no graphs
I'm new to Munin and Nginx. I've installed and configured Munin and created an Nginx server block. I can see the index page generated by Munin, listing the different nodes, but when I click on a host to see the graphs, all I get is an HTML page without CSS and without graphs. More precisely, the same HTML code is returned for the web page, for the CSS, and even for favicon.ico, and no graphs are loaded (there are no 404s, for example). I followed [this tutorial](http://munin-monitoring.org/wiki/MuninConfigurationMasterCGI). Here is my Nginx server block:

server {
    listen 80;
    server_name munin.armagnac.[masked].com;

    location ^~ /cgi-bin/munin-cgi-graph/ {
        access_log off;
        fastcgi_split_path_info ^(/cgi-bin/munin-cgi-graph)(.*);
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/munin/fcgi-graph.sock;
        include fastcgi_params;
    }

    location /static/ {
        alias /etc/munin/static/;
    }

    location / {
        fastcgi_split_path_info ^(/munin)(.*);
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/munin/fcgi-html.sock;
        include fastcgi_params;
    }
}

I have no errors and nothing in the logs. As said above, a node page is almost blank (screenshot: node webpage). There is no CSS because every other resource returns the same HTML page (screenshot: css and favicon are html). Again, there is nothing in the logs, and the HTML and graph CGIs are working fine. I don't know whether the configuration problem is on the Nginx side or on the Munin side. OS: Ubuntu Server 15.04
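One likely cause, offered as an editorial sketch rather than a confirmed fix: because the site is served at the root rather than under /munin, the regex ^(/munin)(.*) in the location / block never matches, so PATH_INFO arrives empty and munin-cgi-html answers every URL (page, CSS, favicon) with the index page, which is exactly the symptom described. Passing the full request path instead may be enough:

location / {
    # when Munin is served at the site root, PATH_INFO must carry the whole request path
    fastcgi_param PATH_INFO $uri;
    fastcgi_pass unix:/var/run/munin/fcgi-html.sock;
    include fastcgi_params;
}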
Morgan Touverey Quilling (503 rep)
Jun 30, 2015, 10:31 AM • Last activity: Jul 24, 2025, 07:02 AM
-2 votes
0 answers
31 views
Serve File Directory of Another server in nginx
This is a follow-up to https://unix.stackexchange.com/q/126745/762527. If I want to serve a file directory from another server, what would be the process, apart from rsync/scp?
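If the goal is to expose another machine's directory without copying files around, two common approaches are reverse-proxying to that machine or mounting the remote directory and serving it as local static files. A sketch with placeholder hostnames and paths:

# Option 1: let this nginx proxy listings and downloads straight to the other server
location /files/ {
    proxy_pass http://files.internal.example/files/;
}

# Option 2: mount the remote directory (NFS, sshfs, ...) and serve it directly
location /mounted-files/ {
    alias /mnt/remote-files/;
    autoindex on;
}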
SanthoshKumar M (1 rep)
Jul 21, 2025, 08:38 AM
3 votes
1 answer
2018 views
Error setting up nginx for multiple ReactJS apps on the same server?
I'm building a solution with multiple modules. Each module is a ReactJS app, and I'm trying to configure nginx to publish them all under the same domain. For example:

- http://application-domain/auth
- http://application-domain/admin
- http://application-domain/dashboard
- http://application-domain/sales

My public directory for nginx looks like this:
/var/www
├── /auth
├── /admin
├── /dashboard
└── /sales
Here auth, admin, dashboard, and sales are subfolders, one per project. My nginx server conf:

server {
    listen 9000 default_server;
    listen [::]:9000 default_server;
    server_name localhost;
    index index.html;

    location / {
        root /var/www/auth;
    }
    location /admin {
        root /var/www;
    }
    location /dashboard {
        root /var/www;
    }
    location /sales {
        root /var/www;
    }
}

Each project's subfolder has a similar structure (screenshot omitted). The problem is that when I access http://application-domain/admin, for example, the application tries to load the static files from the root instead of from the project subfolder:

GET http://localhost:9000/static/js/main.6314dcaa.js net::ERR_ABORTED

whereas the correct request would fetch the files from the admin subfolder, like this:

GET http://localhost:9000/admin/static/js/main.6314dcaa.js

What is the correct nginx configuration for this?
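Part of the fix is usually outside nginx, since a Create React App build emits asset URLs relative to the site root: each app needs to be built knowing its prefix (for example "homepage": "/admin" in its package.json, or PUBLIC_URL=/admin at build time). With that in place, per-app blocks along these lines should serve the bundles and keep client-side routing working; a sketch, assuming the builds live in /var/www/<app>:

location /admin {
    root /var/www;                               # /admin/... maps to /var/www/admin/...
    try_files $uri $uri/ /admin/index.html;      # unknown paths fall back to the app's index.html
}

location /dashboard {
    root /var/www;
    try_files $uri $uri/ /dashboard/index.html;
}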
DBs (31 rep)
Aug 26, 2018, 02:10 PM • Last activity: Jul 20, 2025, 11:10 PM
0 votes
1 answer
3092 views
500 error on index.php with nginx + php-fpm
Before starting, I'd like to say this is my first experience with a VPS; I have an Ubuntu 18.04 64-bit minimal server.

> For everything I have tried so far I didn't use a complex application, just a plain HTML file with a Hello message and a blank WordPress installation.

To begin with, I'm installing Vesta Panel because it's easier for me to handle some basic tasks and configuration. To install this panel I'm using nginx + php-fpm. After I install WordPress with this configuration I get a 500 error with this message:

2020/06/23 23:09:09 [error] 12335#12335: *11 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: example.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9001", host: "example.com"

This error only appears when I try to access the WordPress index page, or any file with the name index.php; an index.html file loads properly.

---

After that I restored the VPS and installed Vesta Panel using nginx + apache. With this configuration, WordPress works as expected: when I access my domain example.com, the steps to create a WordPress website appear as expected. With both configurations, the folder holding all the website files is /home/admin/web/{domain.com}/public_html.

---

Edit: as requested in the comments, I'm adding more information to the question. systemctl status php-fpm.service returns:

php-fpm.service - LSB: starts php7.2-fpm
   Loaded: loaded (/etc/init.d/php-fpm; generated)
   Active: active (exited) since Wed 2020-06-24 01:44:40 UTC; 10h ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0 (limit: 614)
   CGroup: /system.slice/php-fpm.service

Jun 24 01:44:40 agdevision.com.br systemd: Starting LSB: starts php7.2-fpm...
Jun 24 01:44:40 agdevision.com.br systemd: Started LSB: starts php7.2-fpm.

sudo journalctl -u php-fpm.service returns:

-- Logs begin at Fri 2019-03-08 08:44:31 UTC, end at Wed 2020-06-24 12:08:35 UTC. --
Jun 24 01:44:40 agdevision.com.br systemd: Starting LSB: starts php7.2-fpm...
Jun 24 01:44:40 agdevision.com.br systemd: Started LSB: starts php7.2-fpm.
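Connection refused on fastcgi://127.0.0.1:9001 means nothing is listening there, and Active: active (exited) with Tasks: 0 suggests the FPM pool is not actually running. A hedged checklist (the port comes from the question; the pool file location may differ on a Vesta install):

# is anything listening where nginx expects it?
ss -tlnp | grep 9001

# in the php7.2-fpm pool configuration, make the listen address match nginx's fastcgi_pass
listen = 127.0.0.1:9001

# then restart the real service and check that it stays up
systemctl restart php7.2-fpm
systemctl status php7.2-fpm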
celsomtrindade (101 rep)
Jun 24, 2020, 12:46 AM • Last activity: Jul 19, 2025, 02:06 PM
0 votes
2 answers
70 views
Nginx unable to create log directory
On every reboot sudo systemctl status nginx reports:
Jul 07 12:43:46 myhost nginx[409]: nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
Jul 07 12:43:47 myhost nginx[409]: 2025/07/07 12:43:47 [emerg] 409#409: open() "/var/log/nginx/access.log" failed (2: No such file or directory)
Jul 07 12:43:47 myhost nginx[409]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jul 07 12:43:47 myhost systemd: nginx.service: Control process exited, code=exited, status=1/FAILURE
Jul 07 12:43:47 myhost systemd: nginx.service: Failed with result 'exit-code'.
Jul 07 12:43:47 myhost systemd: Failed to start A high performance web server and a reverse proxy server.
Running sudo mkdir /var/log/nginx followed by systemctl restart nginx solves the problem. Here is cat /etc/nginx/sites-enabled/myproject:
server {
    listen 8000;
    server_name 192.168.0.10;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sammy/myprojectdir;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock ;
    }
}
What am I missing in order to avoid manually creating the directory on every reboot?
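A directory under /var/log that disappears at boot usually means something volatile or a cleanup job is involved; rather than recreating it by hand, it can be declared so systemd recreates it at every boot. A sketch using systemd-tmpfiles (the root:adm ownership shown is the Debian/Ubuntu convention and may need adjusting):

# /etc/tmpfiles.d/nginx-log.conf
# recreate the nginx log directory at boot if it is missing
d /var/log/nginx 0755 root adm -

Running sudo systemd-tmpfiles --create applies it immediately; an alternative is a drop-in for nginx.service with LogsDirectory=nginx, which has systemd create /var/log/nginx whenever the unit starts.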
jjk (445 rep)
Jul 7, 2025, 11:41 AM • Last activity: Jul 7, 2025, 05:00 PM
1 vote
1 answer
4179 views
nginx: How to handle 404 directly in a reverse proxy for some filenames (*.txt) only?
I have a complex nginx setup where a front nginx on ports 80 and 443 handles all outside access, including TLS. For files under /texts, the frontend nginx should proxy requests to a second, backend nginx, which modifies existing text files on the fly in a complicated process, using up CPU and other resources. For *.txt files that do not exist (404) I do not want to bother the backend at all; instead the client should be given a default file, /texts/default.txt, directly. However, currently non-existent files are still only handled by the backend's error_page 404 line. Existing files are served without a problem; the proxy works. This is my config:
frontend-nginx.conf:
http {
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  frontend.example.org;
        root         /srv/www;

        location /texts/ {

            location ~ \*.txt$ {
                root /srv/www/backend;

                ####### the next line has absolutely no effect
                try_files $uri /texts/default.txt;
            }

            proxy_pass          http://localhost:90;
            proxy_redirect      http://localhost:90/ /;
            proxy_set_header    Host             $host;
            proxy_set_header    X-Real-IP        $remote_addr;
            proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_set_header    X-Client-Verify  SUCCESS;
            proxy_set_header    Upgrade          $http_upgrade;
            proxy_set_header    Connection       "upgrade";
            proxy_http_version  1.1;

            proxy_redirect off;
        }
    }
    # https goes here, all the same except TLS
}
backend-nginx.conf:
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;

    server {
        listen       127.0.0.1:90;

        root /srv/www/backend;
        charset utf-8;

        expires -1;  # no-cache
        location ~ /..*\.txt$ {
            # longer cache time for text files
            expires 10m;

            # this actually works but only here in the backend
            error_page  404 @404;
        }

        location @404 {
            return 302 $scheme://frontend.example.org/texts/default.txt
        }
    }
}
I have included that (so far useless) statement in the frontend config file, which looks to me as if it should handle 404 redirects to default.txt, but when I run wget -v http://frontend.example.org/texts/notexist.txt I only get the redirect issued inside the backend (so proxying does take place).
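An editorial note rather than a confirmed answer: in the inner location, the regex \*.txt$ escapes the asterisk, so it only matches URIs containing a literal * character and never fires for ordinary .txt requests; everything falls through to the outer /texts/ block and is proxied, which is why the try_files line appears to do nothing. One way to serve the default locally while still proxying existing files relies on if, with all the usual "if is evil" caveats; a sketch, assuming /srv/www/backend on the frontend mirrors the files the backend reads:

location /texts/ {

    location ~ \.txt$ {
        root /srv/www/backend;

        # missing file: rewrite to the default and serve it from this frontend
        if (!-f $request_filename) {
            rewrite ^ /texts/default.txt break;
        }

        # existing file: hand it to the backend for the on-the-fly processing
        if (-f $request_filename) {
            proxy_pass http://localhost:90;
        }
    }

    # everything else under /texts/ keeps going to the backend as before
    proxy_pass http://localhost:90;
    # (proxy_set_header and the other proxy directives from the original block go here)
}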
Ned64 (9256 rep)
Apr 2, 2020, 04:00 PM • Last activity: Jul 7, 2025, 06:04 AM
0 votes
2 answers
1957 views
Fedora - Nginx PHP-FPM - constantly changing FPM Socket to root
Apologies in advance if my terminology is not on par. I've just set up my work dev machine on Fedora Workstation with Nginx and multiple PHP versions (using Remi Collet's software collection). I have PHP 5.6.30 listening on port 9056 and PHP 7.0.19 listening on port 9070, and this works perfectly.

This morning I decided to try running both PHP instances using FPM sockets, which initially worked until I restarted the PHP-FPM service (this resulted in a 502 Bad Gateway in the browser, and a (13) Permission Denied error in the nginx error.log). Using PHP 5.6 as an example: when I first started the php56-php-fpm service, which generated /opt/remi/php56/root/var/run/php-fpm/www.sock, I changed the generated www.sock file's user and group to nginx:nginx. After restarting php56-php-fpm I found that www.sock had been reset to root:root. Granted, I won't be restarting FPM constantly, but there must be a way to set some defaults on the .sock file?

My FPM conf files look like this:

- /opt/remi/php56/root/etc/php-fpm.d/www.conf: https://pastebin.com/EasyHyEs
- /etc/opt/remi/php70/php-fpm.d/www.conf: https://pastebin.com/dhT8AEJK
- /etc/nginx/nginx.conf: https://pastebin.com/tMuAFnGM
- /etc/nginx/conf.d/default.conf: https://pastebin.com/UjkrcaYw

I realise this sounds like a pain to get working correctly, considering that I am just doing this for local development, and that I did have it all working correctly using ports 9056 and 9070. But I've read that there are some speed benefits to sockets versus TCP, and anything that would speed up my local dev environment is worth the effort.

So my questions:

1. What in my config is incorrect that is causing www.sock to be reset to root:root after restarting the respective FPM service?
2. Is it really worth moving away from ports in favour of sockets?
3. [slightly off-topic] Using Remi Collet's software collection, I see that the two PHP packages install to different locations: /opt/remi/php56 and /etc/opt/remi/php70. For the sake of consistency, should I consider moving either one of these to a more common location?

Thank you
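For question 1, the usual answer is not to chown the socket by hand but to have each pool create it with the right ownership; php-fpm recreates the socket on every (re)start, which is why manual changes vanish. A sketch of the relevant pool directives (the listen path is the one from the question):

; in /opt/remi/php56/root/etc/php-fpm.d/www.conf
listen = /opt/remi/php56/root/var/run/php-fpm/www.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660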
maGz (993 rep)
May 25, 2017, 12:23 PM • Last activity: Jun 27, 2025, 12:04 PM
0 votes
1 answer
1931 views
Where is the nginx.lock file?
I am using CentOS 7.2, and I installed nginx 1.9.14 from source:

wget -c http://nginx.org/download/nginx-1.9.14.tar.gz
tar zxf nginx-1.9.14.tar.gz
wget -O nginx-ct.zip -c https://github.com/grahamedgecombe/nginx-ct/archive/v1.2.0.zip
unzip nginx-ct.zip
cd nginx-1.9.14/
./configure --add-module=../nginx-ct-1.2.0 --with-openssl=../openssl --with-http_v2_module --with-http_ssl_module
make
make install

**Question 1:** I need to add the path of nginx.lock to a shell script, but I can't find it in /var/lock/. Where is it?

**Question 2:** The shell script is like this; is it OK?

#! /bin/bash
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
#
# processname: nginx
# config:      /etc/nginx/nginx.conf
# pidfile:     /var/run/nginx/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/nginx.lock

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
sunshine (171 rep)
Apr 9, 2016, 02:19 PM • Last activity: Jun 12, 2025, 07:05 AM
1 vote
1 answer
3829 views
Redirect to the encode url present in query parameter in NGINX
Nginx is not redirecting correctly when the value of the redirect URL is encoded using encodeURIComponent.

location /redirect/ {
    return 307 $arg_target_url;
}

When I enter this URL in the browser:
mylocalserver.com/redirect/?target_url=example.com%3Fx%3Dy%26z%3Dk
it gets redirected to
mylocalserver.com/redirect/example.com%3Fx%3Dy%26z%3Dk
The expectation is that it should be redirected to the URL example.com?x=y&z=k. However, when $arg_target_url contains a plain (unencoded) value, it does work.
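nginx never percent-decodes $arg_* values, so the still-encoded string is emitted as a relative Location and the browser resolves it against /redirect/. Plain configuration cannot decode it; one hedged option, assuming the njs module is available (file and function names here are arbitrary), is a tiny js_content handler:

# nginx.conf
load_module modules/ngx_http_js_module.so;   # only needed if njs is built as a dynamic module

http {
    js_import redir from /etc/nginx/redirect.js;

    server {
        location /redirect/ {
            js_content redir.go;
        }
    }
}

// /etc/nginx/redirect.js
function go(r) {
    // $arg_target_url is still percent-encoded; decode it before redirecting
    r.return(307, decodeURIComponent(r.variables.arg_target_url));
}
export default { go };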
Peter (11 rep)
Jan 7, 2021, 05:43 AM • Last activity: Jun 7, 2025, 08:03 PM
0 votes
1 answer
3900 views
dpkg errors when trying to install nginx
I'm trying to install nginx.

1. First: sudo curl -sL https://deb.nodesource.com/setup_lts.x | sudo -E bash -

   Result: everything looks okay:

   > ## Run sudo apt-get install -y nodejs to install Node.js 16.x and npm
   > ## You may also need development tools to build native addons:
   >      sudo apt-get install gcc g++ make
   > ## To install the Yarn package manager, run:
   >      curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null
   >      echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
   >      sudo apt-get update && sudo apt-get install yarn

2. Then: sudo apt install -y nodejs nano nginx

   It looks okay, but it asks me to run apt --fix-broken install:

   You might want to run 'apt --fix-broken install' to correct these.
   The following packages have unmet dependencies:
    nginx : Depends: nginx-core (= 1.18.0-6ubuntu11) but it is not going to be installed or
                     nginx-full (>= 1.18.0-6ubuntu11) but it is not going to be installed or
                     nginx-light (>= 1.18.0-6ubuntu11) but it is not going to be installed or
                     nginx-extras (>= 1.18.0-6ubuntu11) but it is not going to be installed
   E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).

3. I run apt --fix-broken install and there are errors:

   dpkg: error processing archive /var/cache/apt/archives/nodejs_16.13.0-deb-1nodesource1_amd64.deb (--unpack):
    trying to overwrite '/usr/share/doc/nodejs/api/fs.html', which is also in package nodejs-doc 12.22.5~dfsg-5ubuntu1
   dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
   Errors were encountered while processing:
    /var/cache/apt/archives/nodejs_16.13.0-deb-1nodesource1_amd64.deb
   needrestart is being skipped since dpkg has failed
   E: Sub-process /usr/bin/dpkg returned an error code (1)

I don't know what to do next, but it looks like nginx is not installed.
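The blocker is the NodeSource nodejs package trying to overwrite a file owned by Ubuntu's nodejs-doc, which leaves dpkg half-finished and drags the nginx dependencies down with it. A hedged way out using standard apt/dpkg commands:

# remove the conflicting doc package, then let apt repair the rest
sudo apt-get remove -y nodejs-doc
sudo apt --fix-broken install

# if dpkg still complains about the overwrite, force that single file conflict and retry
sudo dpkg -i --force-overwrite /var/cache/apt/archives/nodejs_16.13.0-deb-1nodesource1_amd64.deb
sudo apt --fix-broken install

# nginx can then be installed on its own
sudo apt-get install -y nginx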
Mediator (101 rep)
Oct 30, 2021, 05:46 PM • Last activity: Jun 5, 2025, 02:00 PM
1 vote
1 answer
5042 views
nginx stop is not working and nginx creates new processes after the old ones are killed
**nginx version: nginx/1.8.0**

I am trying to stop nginx with the command /etc/init.d/nginx stop, but it does not return any success message. I then looked at the nginx processes with pidof nginx, which returns the PIDs 58058 58057.

**My first question is: why is nginx not stopping?**

Another thing I tried is killing the processes by PID, with kill 58058 and kill 58057. The processes are killed, but surprisingly new processes are created automatically: when I check again with pidof nginx, it returns two new PIDs, 58763 58762.

**My second question is: how are these processes being created automatically?**

I know the following query is off topic, but I also want to make changes to the configuration file under sites-available. Is there any way the config file changes can be applied without restarting the nginx server? (This is why I am restarting nginx.) With nginx.conf I would normally use service nginx reload or /etc/init.d/nginx reload.

My configuration files (pastebin links):

1. /etc/init/nginx.conf
2. /etc/init.d/nginx
3. /etc/nginx/nginx.conf
4. php5/fpm/pool.d/www.conf

> root@BS-Web-02:/var/run# cat nginx.pid
> 58762
> root@BS-Web-02:/var/run# pidof nginx
> 58763 58762
> root@BS-Web-02:/var/run# kill 58762
> root@BS-Web-02:/var/run# pidof nginx
> 3809 3808
> root@BS-Web-02:/var/run# cat nginx.pid
> 3808

I tried the following solutions, but they didn't work:

1. Why doesn't stopping the nginx server kill the processes associated with it?
2. Not able to stop nginx server

**P.S. I am using Varnish on port 80 and nginx on 8080.**
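The respawning is the give-away: /etc/init/nginx.conf is an Upstart job, and Upstart restarts processes that are killed by hand, so the SysV script and kill are fighting the supervisor. A hedged sketch of commands for an Upstart-managed nginx; the reload also answers the sites-available question, since any included config change is picked up without a restart:

# manage the daemon through Upstart rather than /etc/init.d or kill
sudo initctl status nginx
sudo initctl stop nginx

# apply configuration changes (including files included from sites-enabled) without stopping
sudo nginx -t && sudo nginx -s reload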
Sukhjinder Singh (111 rep)
Sep 9, 2015, 05:36 AM • Last activity: Jun 4, 2025, 07:00 PM
0 votes
1 answer
6271 views
How to do a redirect from the root location in nginx without preventing access to sub locations?
How do I get nginx to redirect to another path only if the root path is requested? Here is part of my server configuration:

server {
    listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    # Make site accessible from http://localhost/
    server_name wiki wiki.leerdomain.lan;

    # Note: There should never be more than one root in a
    # virtual host
    # Also there should never be a root in the location.
    #root /var/www/nginx/;

    rewrite ^/$ /rootWiki/ redirect;

    location ^~ /rootWiki/ {
        resolver 127.0.0.1 valid=300s;
        access_log ./logs/RootWiki_access.log;
        error_log ./logs/RootWiki_error.log;

        proxy_buffers 16 4k;
        proxy_buffer_size 2k;
        proxy_set_header Host $host;
        proxy_set_header X-Real_IP $remote_addr;

        rewrite /rootWiki/(.*) /$1 break;
        proxy_pass http://192.168.1.200:8080;
    }

    location ^~ /usmle/ {
        access_log ./logs/usmle_access.log;
        ...

When I configure it as above, I am unable to access any of the sub-locations under root. The root path does forward to /rootWiki/, but I receive a 502 Bad Gateway instead of the application on port 8080. When I remove the line

rewrite ^/$ /rootWiki/ redirect;

I am able to access the rootWiki application and all the sub-locations from root just fine. It seems to me like it should work, but it does not.
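For the narrow question of redirecting only the root path, an exact-match location is usually cleaner than a server-level rewrite and cannot interfere with the other locations (a sketch):

location = / {
    return 302 /rootWiki/;
}

The 502 on /rootWiki/ itself is a separate issue between nginx and the upstream on port 8080; the RootWiki error log configured in that location should say why.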
leeand00 (4937 rep)
Jan 25, 2018, 06:22 PM • Last activity: May 20, 2025, 09:03 PM
0 votes
1 answer
5600 views
connect() to unix:///tmp/uwsgi_dev.sock failed (2: No such file or directory) while connecting to upstream
I am trying to run a Django application using uwsgi + nginx, and the cronjob command is:
* * * * * /usr/local/bin/lockrun --lockfile /path/to/lock_file -- uwsgi --close-on-exec -s /path/to/socket_file --chdir /path/to/project/folder/ --pp .. -w project_name.wsgi -C666 -p 32 -H /path/to/virtualenv/ 1>> /path/to/success_log 2>> /path/to/error_log
but I receive this error in the nginx error log file:
2019/11/20 06:45:21 [crit] 1986#1986: *2 connect() to unix:///path/to/socket_file failed (2: No such file or directory) while connecting to upstream, client: xxx.xxx.xx.xxx, server: localhost, request: "GET /auth/status/ HTTP/1.1", upstream: "uwsgi://unix:///path/to/socket_file:", host: "xx.xxx.xx.xxx", referrer: "http://xx.xxx.xx.xxx/"
The path for the socket_file is the same in the nginx configuration file and in the cronjob command. Does anyone have an idea where I'm making a mistake?
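A few things commonly produce exactly this error even when the paths look identical (a hedged checklist, since the real paths are masked in the question): the cron-started uwsgi may have died (its error log will say), the socket's directory may not exist or be writable for the cron user, and if the socket lives under /tmp, a PrivateTmp=true setting on nginx.service gives nginx a private /tmp in which the socket can never appear. A sketch:

# did uwsgi actually start and create the socket? (path taken from the question title)
ls -l /tmp/uwsgi_dev.sock
tail /path/to/error_log            # the cron job's 2>> target

# if the socket is under /tmp, move it somewhere both services can see, e.g. /run/uwsgi/,
# and point nginx's uwsgi_pass at exactly the same path
uwsgi -s /run/uwsgi/dev.sock --chmod-socket=666 ...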
aji prabhakaran (1 rep)
Nov 20, 2019, 07:00 AM • Last activity: May 18, 2025, 02:07 AM
0 votes
2 answers
2105 views
Need to parse nginx access.log by minute, in chunks, for statistics - is there a better way?
I need to collect current data from the nginx access.log every minute, to monitor the number of requests and errors. It's nginx's frontend log, with a lot of requests. It's formatted, rotated, then archived every hour. I know how to parse requests per minute and various errors per minute; what I don't know is how to get one minute's worth of logs. I am trying

timeout 60s tail -f /var/log/veryfastmovingaccess.log >> 60s_log.tmp

and then awk parses the tmp file, cleans it, and tail is restarted. Am I doing it wrong?
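The timed tail -f can drop lines between runs, and nothing guards against duplicates. A more robust pattern is to remember the byte offset last read and emit only what was appended since, resetting when the log is rotated; a plain-shell sketch (state and output paths are arbitrary):

#!/bin/sh
log=/var/log/veryfastmovingaccess.log
state=/var/tmp/access_log.offset

last=$(cat "$state" 2>/dev/null || echo 0)
size=$(stat -c %s "$log")

# if the file shrank, it was rotated: start again from the beginning
[ "$size" -lt "$last" ] && last=0

# emit only the newly appended lines for this run's statistics
tail -c +"$((last + 1))" "$log" > /tmp/last_minute.log
echo "$size" > "$state"

The logtail utility from the logcheck package does the same bookkeeping, if a ready-made tool is preferred.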
Dach (1 rep)
Jul 8, 2015, 02:44 PM • Last activity: May 17, 2025, 02:04 AM
0 votes
1 answer
82 views
Allow specific IP addresses through iptables with Wireguard
I have a number of self-hosted services on my home server, running Arch Linux.

## Context

A number of these are held in Docker containers (each with their own Docker compose file), though one (Jellyfin) is installed via pacman. Using Nginx Proxy Manager, I have set up proxy sites for each service (using the http scheme, the local IP, and the respective port for the service), and generated an SSL certificate for the site and its subdomains.

## Issue

My problem comes in that I am using Mullvad VPN with Wireguard, following the guide available here. Pertinent to this question is the section on the kill switch and local network sharing (available here). Implementing these two iptables rules for when Wireguard is up or down works well for ssh connections. The rule, to be clear, is as follows:
REJECT     all  --  0.0.0.0/0           !192.168.0.0/24       mark match ! 0xca6c ADDRTYPE match dst-type !LOCAL reject-with icmp-port-unreachable
This is as per the guide provided by Mullvad, linked above. Though I can ssh to the server, when attempting to connect to the Docker services via my browser I am met with a 502 error. Conversely, flushing the OUTPUT chain rules (of which there is only one) allows the services to work. The same is true if I run wg-quick down wg-proxy.conf.

## Question

What I am wondering is whether, in the same way the iptables rule rejects all traffic except the LAN IPs, I can do the same for 'WAN' IPs. Put another way, I would like to keep the iptables rules set by Wireguard above, but also have my services stop producing a 502 error and be accessible *only* to addresses on my local network (ideally, if an explanation could be given with any answers so that I could extend the rule to a future Tailscale setup, that would be much appreciated). My apologies if I have formatted my question poorly here or omitted any information, as I am very new to this process. Thank you for reading.
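Two hedged observations rather than a confirmed fix. First, a 502 from Nginx Proxy Manager while the kill-switch rule is active usually means the proxy can no longer reach its upstream; since the rule only exempts 192.168.0.0/24, host traffic towards the Docker bridge networks (172.16.0.0/12 by default) is rejected, so exempting those ranges is worth trying. Second, restricting the proxied services to LAN clients can be done in nginx itself with allow/deny, which NPM lets you add via an access list or custom configuration. A sketch:

# assumption: the default Docker bridge range; adjust to the output of `docker network inspect`
iptables -I OUTPUT -d 172.16.0.0/12 -j ACCEPT

# inside the proxy host's nginx/NPM configuration: only LAN clients may connect
allow 192.168.0.0/24;
deny all;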
twelfth (26 rep)
May 10, 2025, 11:31 PM • Last activity: May 12, 2025, 01:12 AM
2 votes
1 answer
2697 views
How to configure nginx for multiple URLs that provide different resources
I've come across several examples demonstrating how to set up multiple domains with Nginx, but my URLs are serving resources that are distinct from the web page domain. Are there any differences in terms of configuration requirements between these scenarios?
celcin (146 rep)
Dec 20, 2019, 12:16 PM • Last activity: Apr 25, 2025, 07:03 AM
0 votes
1 answer
2380 views
What is the equivalent of localhost in Debian using nginx?
Every nginx config guide I find is about setting up the server for, say, example.com. But I don't have a domain name, and I want to set up a local DNS name, something like localhost in Windows with the Apache that comes with XAMPP. I want two server blocks on different ports: one for the API and one for the frontend. I have created two files.

/etc/nginx/conf.d/chubak.conf:

server {
    listen 85;
    server_name chubak.com;

    access_log /srv/logs/vue.access.log;
    error_log /srv/logs/vue.error.log;

    gzip_static on;

    # root /srv/default;
    root /var/www/chubak.com/html;
    index index.html;

    location / {
        add_header 'Access-Control-Allow-Origin' '*';
        try_files $uri $uri/ /index.html;
    }

And /etc/nginx/conf.d/api.chubak.conf:

server {
    listen 180;
    server_name api.chubak.com;

    access_log /var/www/api.chubak.com/logs/api.access.log;
    error_log /var/www/api.chubak.com/logs/api.error.log;

    root /var/www/api.chubak.com/html;
    index index.php index.html;

    client_max_body_size 128M;

    location / {
        try_files $uri $uri/ /index.php?_url=$uri&$args;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi.conf;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_read_timeout 600;
        fastcgi_intercept_errors on;
        gzip off;
        fastcgi_index index.php;
    }

And I've created index.html files in the /var/www/site/html folder, but I don't know how to access them. As I said, the tutorials always assume that you have a domain name pointed at your server.
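Without a registered domain, the "local DNS" part is just /etc/hosts on whatever machine runs the browser; the nginx side already works by port. A sketch, using the names from the question:

# /etc/hosts on the machine you browse from
# (use the server's LAN IP instead of 127.0.0.1 when browsing from another machine)
127.0.0.1   chubak.com api.chubak.com

The two server blocks are then reachable as http://chubak.com:85/ and http://api.chubak.com:180/. Plain http://localhost:85/ also works without any hosts entry, because server_name only matters when several server blocks share the same listen port.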
Major Despard (57 rep)
Apr 17, 2019, 05:59 PM • Last activity: Apr 20, 2025, 02:07 AM
0 votes
1 answer
51 views
Cannot access static files for subdomain in nginx
I have a static website and a Django app backend running with gunicorn on port 8000 on a server. I am collecting the CSS files for the static website into /mywebsite-deployment/staticDir/homepage/css/static_website.css, and the CSS for the Django templates into /mywebsite-deployment/staticDir/dashboard/css/dashboard.css. I am able to access the static website at mywebsite.com and www.mywebsite.com with CSS, and the Django backend at app.mywebsite.com, but its CSS doesn't load. The template in which I am trying to access it looks like this:
{% load static %}
My Django app settings.py looks like this:
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'staticFiles'),
]

STATIC_ROOT = '/mywebsite-deployment/staticDir/'

STATIC_URL = 'static/'
This is my nginx config:
ssl_certificate         /etc/letsencrypt/live/mywebsite.co/fullchain.pem;    # managed by Certbot
    ssl_certificate_key     /etc/letsencrypt/live/mywebsite.co/privkey.pem;      # managed by Certbot
    
    root            /mywebsite-deployment/staticDir/;
    index           /homepage/index.html;
    error_page 404 500 502 503 504  /error.html;
    proxy_intercept_errors on;      # If proxy errors, let nginx process it
    
    # error_log /var/log/nginx/error.log info;
    
    server {
        # If host isn't a mywebsite host, close the connection to prevent host spoofing
        server_name _;
        listen 80 default_server;
        listen 443 ssl default_server;
    
        return 444;
    }
    
    # Redirect HTTP requests to HTTPS
    server {
        server_name mywebsite.co www.mywebsite.co app.mywebsite.co;
        listen 80;
        return 301  https://$host$uri ;  # managed by Certbot
    }
    
    # Handle HTTPS static webserver requests
    server {
        server_name mywebsite.co www.mywebsite.co;
        listen 443  ssl;    # managed by Certbot
    }
    
    # Proxy HTTPS app requests to backend gunicorn service
    server {
        server_name     app.mywebsite.co;
        listen 443  ssl;    # managed by Certbot
    
        location / {    # Catch all - Match everything else not matched in any above location blocks within this server block
            proxy_redirect off;    # Stop redirects from proxy
    
            proxy_connect_timeout 3;    # Abort if service is unreachable
            proxy_read_timeout 3;       # Abort if the service is unresponsive
    
            include     proxy_params;
            proxy_pass  http://localhost:8000;
        }
    
        # If Gunicorn Django proxy app throws an error (like 404), nginx will handle it and show custom error page
        location /error.html {
            internal;
        }
    }
When I inspect the template HTML in the browser, I see the URL it's trying to load is https://app.mywebsite.co/static/dashboard/css/dashboard.css, but that doesn't load. I can access the homepage CSS as:
://www.mywebsite.co/homepage/css/static_website.css
and the dashboard CSS as:
://www.mywebsite.co/dashboard/css/dashboard.css
How can I use the right path to view the CSS in the dashboard? I feel like this is an nginx config issue, but I am new to nginx and don't know how to solve this. Please help.
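Nothing in the app.mywebsite.co server block handles /static/, so those requests fall into the catch-all location and are proxied to gunicorn, which does not serve static files. A sketch of the usual fix, reusing STATIC_ROOT from the question's settings.py:

server {
    server_name app.mywebsite.co;
    listen 443 ssl;

    # serve the collected static files directly; STATIC_URL = 'static/' maps here
    location /static/ {
        alias /mywebsite-deployment/staticDir/;
    }

    location / {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }
}

With that in place, https://app.mywebsite.co/static/dashboard/css/dashboard.css maps to /mywebsite-deployment/staticDir/dashboard/css/dashboard.css on disk.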
Vee (3 rep)
Apr 7, 2025, 02:24 AM • Last activity: Apr 7, 2025, 03:29 PM
0 votes
0 answers
45 views
I can't connect using my public ip in nginx
I have set up port forwarding on my router, disabled my firewall, and tried many different things in nginx.conf, changing settings like server_name, but nothing works. I have also tried sites like canyouseeme.org to test the connection, and tried other devices. Another thing I have tried is rebooting and using a different port. My Linux distro is Arch Linux. Here is my nginx.conf:
#user http;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen 80;
	listen [::]:80;
        server_name 192.168.1.182;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   /srv/http;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /srv/http;
        }


#EVERYTHING UNDERNEATH IS JUST OPTIONAL IN CASE I NEED TO DO SOMETHING




        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1 ;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}
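Since the config itself listens on all addresses (listen 80 with no address restricts nothing, and server_name does not affect reachability), the usual suspects are outside nginx: the router's WAN address may actually sit behind carrier-grade NAT, the ISP may block inbound port 80, or testing from inside the LAN may be misled by missing hairpin NAT. A hedged checklist:

# on the server: confirm nginx is listening on all interfaces
ss -tlnp | grep ':80'

# from another LAN device: this should already work
curl -I http://192.168.1.182/

# compare the router's reported WAN address with what the internet sees;
# if they differ, you are behind CGNAT and port forwarding cannot work
curl https://ifconfig.me

# finally, test the public IP from OUTSIDE the LAN (e.g. a phone on mobile data), not from inside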
ChmodRWX (1 rep)
Apr 4, 2025, 11:22 AM • Last activity: Apr 4, 2025, 12:13 PM
1 vote
0 answers
1123 views
nginx can't follow symlinks even after "disable_symlinks off;" setting
I have a django/gunicorn/nginx based site. When the static files change and are deployed to the server, they are uploaded to the /home/username/src/static folder; the website itself is served from /home/username/src. I then copy the static folder to /var/www and restart nginx, and everything works. To make things simpler and reduce the possibility of error, I decided to create a symlink in /var/www pointing to /home/username/src/static. I also added the disable_symlinks off; directive in nginx.conf and restarted gunicorn and nginx. I am now getting net::ERR_ABORTED 403 (Forbidden) errors. What should I do?
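disable_symlinks off is already nginx's default, so that directive changes nothing; a 403 through a symlink into /home almost always means the nginx worker user cannot traverse one of the directories on the target path (home directories are often mode 0700 or 0750). A sketch of how to check and fix it; the worker user differs by distro (http on Arch, www-data on Debian/Ubuntu, nginx on RHEL), so substitute accordingly:

# see which path component denies access; every directory needs at least --x for the worker user
sudo namei -l /var/www/static/some.css        # hypothetical file reached through the symlink
sudo -u http ls /home/username/src/static     # substitute the actual worker user

# one possible fix: grant traverse permission along the path
sudo chmod o+x /home/username /home/username/src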
user1933205 (119 rep)
Apr 3, 2024, 06:23 PM • Last activity: Mar 19, 2025, 10:01 AM