Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
1
answers
2854
views
Docker: Restricting inbound and outbound traffic using iptables
We have a lot of applications that run on Linux servers using Docker.
As an example, let us say my application runs on **ServerA** as a container (Docker).
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
df68695a00f1 app/myapp:latest "/run.sh" 2 weeks ago Up 2 days 0.0.0.0:50423->3000/tcp reallymyapp
The app is listening on the port 50423 on the host (mapped to port 3000 on the container).
The DNS (endpoint) that is used to access the app is pointing to the HAProxy host (say **ServerB**), that routes the traffic to **ServerA:50423**.
Everything works well so far.
The security team in our org raised a concern that all external source IPs are potentially allowed to connect to such Docker hosts (like **ServerA**) and they want us to restrict traffic to allow only a specific IP (**ServerB** which is a load balancer) to access the containers and vice versa (**ServerA** to **ServerB**). We would then allow connectivity from our users' machines to **ServerB**/load balancer only.
Now, I followed the Docker documentation and tried to insert the following rules into the DOCKER-USER chain using iptables:
iptables -I DOCKER-USER -i ekf192 -s 10.1.2.10,10.1.2.11,10.1.2.12 -j ACCEPT
iptables -I DOCKER-USER -i ekf192 -j DROP
ACCEPT all -- 10.1.2.10 anywhere
ACCEPT all -- 10.1.2.11 anywhere
ACCEPT all -- 10.1.2.12 anywhere
LOG all -- anywhere anywhere LOG level info prefix "IPTables Dropped: "
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Please note that we need both incoming and outgoing traffic from/to these hosts (10.1.2.10, 10.1.2.11, 10.1.2.12).
Now, as per my (limited) knowledge of iptables, these rules should drop all incoming requests except those originating from the mentioned IP addresses, and vice versa, i.e. allow outgoing traffic to the mentioned IPs.
The incoming traffic works as expected but the outgoing traffic to these HOSTS is getting dropped.
I am scratching my head over this and cannot figure out what is going wrong, not to mention that I really struggle to understand how iptables rules work. The kernel log shows the dropped packets:
Jan 12 16:24:43 sms100394 kernel: IPTables Dropped: IN=docker0 OUT=ekf192 MAC=02:42:09:37:a0:14:02:42:ac:11:00:02:08:00 SRC=172.17.0.2 DST=10.1.2.10 LEN=40 TOS=0x00 PREC=0x00 TTL=63 ID=40235 DF PROTO=TCP SPT=3000 DPT=42579 WINDOW=242 RES=0x00 ACK FIN URGP=0
Jan 12 16:24:44 sms100394 kernel: IPTables Dropped: IN=docker0 OUT=ekf192 MAC=02:42:09:37:a0:14:02:42:ac:11:00:02:08:00 SRC=172.17.0.2 DST=10.1.2.11 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=TCP SPT=3000 DPT=45182 WINDOW=29200 RES=0x00 ACK SYN URGP=0
Jan 12 16:24:45 sms100394 kernel: IPTables Dropped: IN=docker0 OUT=ekf192 MAC=02:42:09:37:a0:14:02:42:ac:11:00:02:08:00 SRC=172.17.0.2 DST=10.1.2.12 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=TCP SPT=3000 DPT=45182 WINDOW=29200 RES=0x00 ACK SYN URGP=0
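One hedged reading of the log (an assumption, not a confirmed diagnosis): the dropped packets are forwarded from docker0 out of ekf192, i.e. the container-to-ServerB direction, and the ACCEPT rules only match the source address of traffic arriving on ekf192, so the outbound leg has no ACCEPT before a DROP. A minimal sketch that makes the DOCKER-USER filtering symmetric, using the interface name and IPs from the question (each -I prepends, so the commands are ordered to leave the ACCEPTs above the DROPs):
iptables -I DOCKER-USER -i ekf192 -j DROP
iptables -I DOCKER-USER -o ekf192 -j DROP
# allow traffic forwarded out to the load balancers, and replies for tracked connections
iptables -I DOCKER-USER -o ekf192 -d 10.1.2.10,10.1.2.11,10.1.2.12 -j ACCEPT
iptables -I DOCKER-USER -i ekf192 -s 10.1.2.10,10.1.2.11,10.1.2.12 -j ACCEPT
iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT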
Koshur
(1399 rep)
Jan 12, 2021, 05:39 PM
• Last activity: Aug 5, 2025, 01:01 PM
0
votes
1
answers
1927
views
How can I access the web console of CodeReady Containers, installed on CentOS Docker container, from Host machine?
I have this scenario:
- a HOST machine running Debian that runs Docker containers.
- a CentOS Docker container that has **CodeReady Containers (CRC)** installed on it. CRC works in the container, via the command line, without problems.
I want to access, from the HOST machine, the CRC web console that is served at https://console-openshift-console.apps-crc.testing (on a specific IP in the hosts file).
I found this [RedHat guide for accessing CRC remotely](https://www.openshift.com/blog/accessing-codeready-containers-on-a-remote-server/). But how can I apply it to the Docker container setup?
And above all, do I really need it?
----------
I had to make the following **changes to haproxy.conf**:
global
log 127.0.0.1 local0
debug
defaults
log global
mode http
timeout connect 5000
timeout check 5000
timeout client 30000
timeout server 30000
frontend apps
bind CONTAINER_IP:80
bind CONTAINER_IP:443
option tcplog
mode tcp
default_backend apps
backend apps
mode tcp
balance roundrobin
option ssl-hello-chk
server webserver1 CRC_IP:6443 check
frontend api
bind CONTAINER_IP:6443
option tcplog
mode tcp
default_backend api
backend api
mode tcp
balance roundrobin
option ssl-hello-chk
server webserver1 CRC_IP:6443 check
and **enabling forwarding** for the container:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
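For reference, the host-side name resolution assumed throughout is a hosts-file entry on the Debian machine pointing the CRC hostnames at the container; the IP below is a hypothetical placeholder, not a value from the question:
# /etc/hosts on the Debian HOST machine (sketch)
172.17.0.2   console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing api.crc.testing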
I can successfully call the URL https://console-openshift-console.apps-crc.testing from the HOST machine! But I get this error:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
Anyway, the network part is solved. Now I don't know why I get this error!
Kambei
(171 rep)
Jul 18, 2020, 11:22 AM
• Last activity: Jun 4, 2025, 09:04 AM
0
votes
0
answers
17
views
HAproxy 2.6.12 TCP LB with NoMachine NX servers (SSH-like)
I tried to install HAProxy 2.6.12 in TCP mode to do load balancing (round robin) between 2 NoMachine 8.16.1 ECS (Enterprise Cloud Server) accepting NX protocol (SSH-like).
NoMachine is a remote desktop solution.
I use 4 VM (Debian 12):
- 1 NoMachine Client (NX or SSH)
- 1 HAProxy
- ECS 1 = 1st member of the cluster
- ECS 2 = 2nd member of the cluster
It's working but I get a server identity warning each time I connect to an ECS of the cluster. :(
The RSA public keys of the 2 ECS of the cluster are not saved together in a file called "/home/my_user/.nx/config/hosts.crt".
It seems that each time I connect to an ECS, its public key overwrites the key of the other ECS already in the hosts.crt file.
I actually don't understand the logic of this behaviour.
NB: ECS supports SSH protocol and it works like a charm, I get a server identity warning only the 1st time I connect to HAProxy (I see the public key of HAProxy server in /home/my_user/.ssh/known_hosts).
Is there any solution at the HAProxy level, or is it purely an NX protocol (no longer open source since 2008!) problem?
SSH could be an alternative to NX but it is less performant...
Thanks!
haproxy.cfg (a little bit messy ;) )
global
#log /dev/log local0
#STP
log 127.0.0.1:514 local0 info
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
#STP
mode tcp
#STP
#option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#STP
frontend ecs-in
mode tcp
# HAProxy listening port
bind *:4000
default_backend backend-ecs
# redundant with default ?
log global
backend backend-ecs
mode tcp
balance roundrobin
# ECS listening port
# NO send-proxy : NO go-mmproxy
server ecs1 172.16.104.175:4000 check
server ecs2 172.16.104.178:4000 check
#server ecs3 172.16.104.179:4000 check
# redundant with section 'global' ?
log 127.0.0.1:514 local0 info
frontend ecs-in-ssh
mode tcp
# HAProxy listening port
bind *:22
default_backend backend-ecs-ssh
# redundant with section default ?
log global
backend backend-ecs-ssh
mode tcp
balance roundrobin
# ECS listening port
server ecs1 172.16.104.175:22 check
server ecs2 172.16.104.178:22 check
#server ecs3 172.16.104.179:22 check
# redundant with section 'global' ?
log 127.0.0.1:514 local0 info
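One direction worth sketching (an assumption on my part, not a verified fix): pin each client to a single ECS with source-address persistence, so the host key written to hosts.crt always comes from the same server for a given client:
backend backend-ecs
    mode tcp
    balance source        # hash on the client IP instead of roundrobin
    server ecs1 172.16.104.175:4000 check
    server ecs2 172.16.104.178:4000 check
This does not explain why NoMachine overwrites hosts.crt instead of appending to it, but it would at least keep the presented server identity stable per client.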
Steph_P92
(1 rep)
May 20, 2025, 07:58 PM
0
votes
1
answers
2000
views
haproxy SSL Load balancer empty response
my haproxy.cfg
global
maxconn 50000
defaults
timeout connect 10s
timeout client 30s
timeout server 30s
log global
mode http
option httplog
maxconn 3000
frontend bravo
bind 0.0.0.0:443 ssl crt /etc/haproxy.ssl/haproxy.pem
mode tcp
acl app1 path_beg -i /abc
acl app2 path_beg -i /xyx
use_backend back_app1 if app1
use_backend back_app2 if app2
backend back_app1
server host1 192.168.2.32:8444/xvx
server host2 192.168.2.33:8444/xvx backup
backend back_app2
server host1 192.168.2.32:8444
server host2 192.168.2.33:8444 backup
When I browse to https://haproxy/abc or https://haproxy/xyx I get:
This page isn’t working
haproxy didn’t send any data.
ERR_EMPTY_RESPONSE
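A hedged sketch of an HTTP-mode variant (assuming path-based routing is the goal): path_beg needs decoded HTTP traffic, which mode tcp bypasses, and a server line cannot carry a URL path such as /xvx:
frontend bravo
    bind 0.0.0.0:443 ssl crt /etc/haproxy.ssl/haproxy.pem
    mode http                      # path_beg only works on HTTP-processed traffic
    acl app1 path_beg -i /abc
    acl app2 path_beg -i /xyx
    use_backend back_app1 if app1
    use_backend back_app2 if app2
backend back_app1
    mode http
    # rewrite the path with http-request set-path if the backend expects /xvx
    server host1 192.168.2.32:8444 check
    server host2 192.168.2.33:8444 check backup
This is only a sketch of the direction, not a verified fix for the ERR_EMPTY_RESPONSE.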
Nasir Mahmood
(178 rep)
Sep 16, 2020, 07:03 PM
• Last activity: May 18, 2025, 09:06 AM
0
votes
1
answers
7068
views
kubectl connection issues on Ubuntu 22.04LTS with Kubernetes 1.26.0
I have installed kubeadm, kubelet and kubectl using the sudo apt install command, and they are all version 1.26.0. I use an Ubuntu 22.04 LTS machine. I just wanted to test and use Kubernetes on my local computer. But I don't know why I get the following error messages whenever I try kubectl commands like kubectl get pods/nodes/etc:
kubectl get pods
E1212 08:13:11.758898 4321 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s ": dial tcp 192.168.1.2:6443: connect: connection refused
E1212 08:13:11.759479 4321 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s ": dial tcp 192.168.1.2:6443: connect: connection refused
E1212 08:13:11.761197 4321 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s ": dial tcp 192.168.1.2:6443: connect: connection refused
E1212 08:13:11.762636 4321 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s ": dial tcp 192.168.1.2:6443: connect: connection refused
E1212 08:13:11.764083 4321 memcache.go:238] couldn't get current server API group list: Get "https://192.168.1.2:6443/api?timeout=32s ": dial tcp 192.168.1.2:6443: connect: connection refused
The connection to the server 192.168.1.2:6443 was refused - did you specify the right host or port?
I don't know why I get this error message.
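A hedged sketch of the checks I would run first, assuming the cluster was created with kubeadm init on this machine (these are the standard kubeadm post-install steps, not commands taken from the question):
# is the node agent up, and did the control plane ever start?
sudo systemctl status kubelet
# point kubectl at the admin kubeconfig written by kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown "$(id -u):$(id -g)" $HOME/.kube/config
kubectl get nodes
If kubeadm init was never run (or failed), nothing is listening on 192.168.1.2:6443 at all, which would also produce exactly this connection-refused error.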
best_of_man
(161 rep)
Dec 12, 2022, 04:33 PM
• Last activity: Oct 11, 2023, 05:11 AM
2
votes
0
answers
471
views
Add timezone to HAProxy logs
I have HAProxy (2.2.9) running on a Debian 11 machine and I would like to add the timezone to the HAProxy logs like Apache has: [25/Jul/2023:14:27:40 +0200].
I have found that I can use the [log-format](http://docs.haproxy.org/2.2/configuration.html#8.2.4) parameter in the configuration defaults settings. In combination with the [ltime](http://docs.haproxy.org/2.2/configuration.html#7.3.1-ltime) converter I have formatted the following: [%[date,ltime(%d/%b/%Y:%H:%M:%S %z)]], which kind of works.
The problem I'm having right now is that it displays the time like this: [25/Jul/2023:12:27:40 +0000], where the time is UTC and no timezone offset is applied.
Taking a look at the servers time using timedatectl
displays the following:
Local time: Tue 2023-07-25 14:37:12 CEST
Universal time: Tue 2023-07-25 12:37:12 UTC
RTC time: Tue 2023-07-25 12:37:13
Time zone: Europe/Amsterdam (CEST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
Which to me looks like the timezone and time are set correctly.
So what am I missing?
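One thing I would check (an assumption, not a verified fix): whether the HAProxy process itself sees the system timezone, for example by giving the service an explicit TZ through a systemd drop-in:
# /etc/systemd/system/haproxy.service.d/timezone.conf  (hypothetical drop-in)
[Service]
Environment="TZ=Europe/Amsterdam"
followed by systemctl daemon-reload and a restart of HAProxy, then re-checking what %z expands to.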
Mr. Diba
(400 rep)
Jul 25, 2023, 12:41 PM
0
votes
1
answers
290
views
Adding SSL certificates to HAProxy with certbot
I am trying to configure my Nginx server to act as my primary load balancer. I have done the necessary package installation with certbot but the problem comes in when I try to configure my haproxy.cfg file.
All the default haproxy configurations are left untouched, so I added these lines:
frontend www-https-frontend
bind *:80
bind *:443 ssl crt /etc/letsencrypt/archive/www.example.tech/fullchain.pem
http-request redirect scheme https unless { ssl_fc }
http-request set-header X-Forwarded-Proto https
default_backend www-backend
backend www-backend
balance roundrobin
server web-01 54.90.15.228:80 check
server web-02 35.153.66.157:80 check
But when I run sudo haproxy -c -f /etc/haproxy/haproxy.cfg, I get this error:
[NOTICE] (70084) : haproxy version is 2.5.14-1ppa1~focal
[NOTICE] (70084) : path to executable is /usr/sbin/haproxy
[ALERT] (70084) : config : parsing [/etc/haproxy/haproxy.cfg:39] : 'bind *:443' : unable to stat SSL certificate from file '/etc/letsencrypt/archive/www.codingbro.tech/fullchain.pem' : No such file or directory.
[ALERT] (70084) : config : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] (70084) : config : Fatal errors found in configuration.
When the certificate was generated, they provided this exact path for the fullchain: /etc/letsencrypt/archive/www.example.tech/fullchain.pem. Any help would be much appreciated.
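A hedged sketch of the usual certbot-to-HAProxy step (paths assume the live directory of the issued certificate; HAProxy expects the certificate and the private key concatenated into a single PEM for the crt argument):
sudo mkdir -p /etc/haproxy/certs
sudo bash -c 'cat /etc/letsencrypt/live/www.example.tech/fullchain.pem \
                  /etc/letsencrypt/live/www.example.tech/privkey.pem \
                  > /etc/haproxy/certs/www.example.tech.pem'
# then in haproxy.cfg:
#     bind *:443 ssl crt /etc/haproxy/certs/www.example.tech.pem
Also note that the ALERT references /etc/letsencrypt/archive/www.codingbro.tech/... while the config shown uses www.example.tech, so the path HAProxy is actually reading may simply not exist (or the domain was redacted inconsistently in the question).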
Santagotthejuice
(1 rep)
Jul 8, 2023, 05:06 PM
• Last activity: Jul 9, 2023, 03:02 AM
1
votes
1
answers
3173
views
Why is HAProxy forwarding HTTP2 requests as HTTP 1.1?
I'm trying to configure a load balancer between 2 servers with HAProxy; this is my configuration:
frontend haproxynode
bind *:443 ssl crt /etc/ssl/private/isel.pem alpn h2,http/1.1
mode tcp
default_backend backendnodes
backend backendnodes
balance roundrobin
option forwardfor
server node1 192.168.1.5:80 check
server node2 192.168.1.6:80 check
This is my network: (network diagram screenshot omitted)
I'm running a WordPress server in both server VMs, and I'm using h2load to do the benchmarking.
When I use the command
h2load -n 30 -c 30 https://192.168.1.26/blog/
I get the results shown in the screenshots (omitted here). As you can see, the application protocol is h2, but in Wireshark the requests appear as HTTP 1.1. Why?
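A hedged note and sketch (assumptions, not a verified diagnosis): the h2 that h2load reports is negotiated on the TLS connection between the client and HAProxy; nothing in this configuration requests HTTP/2 on the proxy-to-server leg, and the backends are plain port-80 servers, so seeing HTTP/1.1 there in Wireshark is expected. In HTTP mode, HAProxy 2.0+ can speak cleartext HTTP/2 (h2c) to the servers with proto h2, provided the backends accept it:
frontend haproxynode
    bind *:443 ssl crt /etc/ssl/private/isel.pem alpn h2,http/1.1
    mode http
    default_backend backendnodes
backend backendnodes
    mode http
    balance roundrobin
    option forwardfor
    server node1 192.168.1.5:80 proto h2 check   # h2c toward the backend
    server node2 192.168.1.6:80 proto h2 check
Without proto h2 (or alpn h2 plus ssl on the server lines), the proxy-to-server leg stays HTTP/1.1, which matches the capture.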


Rodrigo Pina
(11 rep)
Jan 26, 2021, 12:13 AM
• Last activity: Jun 27, 2023, 08:01 AM
0
votes
1
answers
412
views
HAProxy over Tomcat returns Error 403
I have an old server which runs a Tomcat service on port 8080. For various reasons (including securing access from clients) I had to set up an HAProxy server in front of it, secured with an SSL cert.
This is the relevant HAProxy config:
frontend myservice
mode tcp
option tcplog
option logasap
log global
option tcpka
bind 10.10.10.10:80
bind 10.10.10.10:443 ssl crt /etc/ssl/haproxy/myservice.example.org.pem
acl secure dst_port eq 443
http-response add-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload;"
http-response replace-header Set-Cookie (.*) \1;\ Secure if secure
use_backend bck_myservice if { hdr(Host) -i myservice.example.org myservice }
default_backend bck_deny
backend bck_myservice
mode tcp
balance leastconn
option prefer-last-server
server oldserver.example.org oldserver.example.org:8080 weight 1 check port 8080 inter 2000 rise 2 fall 5 ssl verify none
backend bck_deny
mode http
http-request deny
10.10.10.10 is the VIP of the new service, mapped to myservice.example.org.
Accessing http://oldserver.example.org:8080 works fine as usual.
**The problem:** https://myservice.example.org results in an error "403 Forbidden". Accessing that URL does not seem to hit the Tomcat backend, as there is no trace of it in the Tomcat logs. (Note: the HAProxy config used to have mode http but it resulted in an error "503 Service Unavailable".)
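One hedged reading (not a verified diagnosis): with mode tcp the hdr(Host) routing and the http-response directives are not applied to the stream, so requests likely fall through to default_backend bck_deny, whose http-request deny returns exactly a 403 without ever touching Tomcat. A trimmed sketch of an all-HTTP variant, assuming Tomcat on 8080 speaks plain HTTP (so no ssl keywords on the server line):
frontend myservice
    mode http
    bind 10.10.10.10:80
    bind 10.10.10.10:443 ssl crt /etc/ssl/haproxy/myservice.example.org.pem
    http-response add-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload;"
    use_backend bck_myservice if { hdr(host) -i myservice.example.org myservice }
    default_backend bck_deny
backend bck_myservice
    mode http
    balance leastconn
    server oldserver oldserver.example.org:8080 check
backend bck_deny
    mode http
    http-request deny
The earlier 503 in mode http could come from the ssl verify none health checks failing against a plain-HTTP port and marking the server down, which is worth ruling out first.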
dr_
(32068 rep)
Jun 14, 2023, 12:23 PM
• Last activity: Jun 15, 2023, 07:16 AM
0
votes
0
answers
54
views
centos 7 haproxy.service stop sometimes
I'm on CentOS 7 with cPanel & WHM. I have HAProxy bound to port 80 and it's working fine, but every day at a random time HAProxy stops, so I have to start the service again (the service is enabled). Please, any help.
In the haproxy log I find:
...haproxy-systemd-wrapper: exit, haproxy RC=0
and no details about why it stopped.
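A hedged list of things I would look at (standard commands, not from the question) to find out what stops the service; the cPanel log path assumes a default cPanel install:
journalctl -u haproxy --since "-2 days" --no-pager | tail -n 50    # who issued the stop?
grep -iE 'haproxy|oom' /var/log/messages | tail -n 50              # OOM kills, restarts
grep -ri haproxy /usr/local/cpanel/logs/ 2>/dev/null | tail -n 20  # cPanel service management
On cPanel/WHM, anything else wanting port 80 (Apache restarts managed by cPanel, chkservd actions) is also worth ruling out.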
GhostSt94
(1 rep)
Mar 2, 2023, 10:00 AM
• Last activity: Mar 2, 2023, 11:14 AM
0
votes
1
answers
473
views
What hostname to put in main.cf for self-hosted postfix, behind HAProxy?
Pfsense (HAproxy as reverse proxy)—->Unraid
I run postfix on Debian Bullseye VM (under Unraid) on my home server. It is up and running. I can send the mail out but can’t receive any incoming mail. I’m wondering whether I’ve set a wrong host name or not. At home local network, I can access my Debian server with either debiantest or debiantest.local.
When installing Debian, I input hostname “debiantest”, domain “mydomain.com”.
My mx record at cloudflare for “mydomain.com” is mail.mydomain.com.
In postfix main.cf, I tried specifying the hostname as debiantest, debiantest.local, and debiantest.mydomain.com. Same results, i.e. I can send mail out but can't receive any incoming mail.
Any suggestions are welcome.
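A hedged sketch of the main.cf naming I would start from, assuming mail.mydomain.com is the public MX name (all values are placeholders from the question):
# /etc/postfix/main.cf (sketch)
myhostname = mail.mydomain.com
mydomain = mydomain.com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
inet_interfaces = all
Note that the hostname alone will not make inbound mail arrive: port 25 from the Internet still has to reach the VM, and HAProxy on pfSense is typically set up as an HTTP(S) reverse proxy, so SMTP needs its own port forward or TCP frontend.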
bthoven
(1 rep)
Jan 13, 2023, 05:25 PM
• Last activity: Jan 13, 2023, 06:12 PM
1
votes
1
answers
692
views
Run Haproxy 2.6.7 as service on CentOS 9
I want to run HAProxy 2.6.7 on my CentOS 9.
I have downloaded and compiled the project with the USE_SYSTEMD flag enabled and installed the compiled binary. Here are the commands I have used:
make TARGET=linux-glibc USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_OPENSSL=1 SSL_INC=/usr/include SSL_LIB=/usr/lib ADDLIB=-ldl ADDLIB=-lpthread USE_PROMEX=1 USE_SYSTEMD=1
make install
mkdir -p /etc/haproxy
mkdir -p /var/lib/haproxy
touch /var/lib/haproxy/stats
ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy
cp examples/haproxy.init /etc/init.d/haproxy
chmod 755 /etc/init.d/haproxy
systemctl daemon-reload
systemctl start haproxy.service
The last instruction returns the following:
haproxy.service: Can't open PID file /run/haproxy.pid (yet?) after start: Operation not permitted
haproxy.service: Failed with result 'protocol'.
Failed to start SYSV:...
/run/systemd/generator.late/haproxy.service:20: PIDFile= references a path below legacy directory /var/run/, updating /var/run/haproxy.pid
Running sudo haproxy -f /etc/haproxy/haproxy.cfg with the -c and -d flags doesn't show any problems. Any suggestions?
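A hedged sketch of running the binary under a native unit instead of the SysV script from examples/ (the unit below is modeled on the one shipped in the HAProxy source tree under admin/systemd/, with assumed paths), since the -Ws master-worker mode plus Type=notify is what USE_SYSTEMD enables:
# /etc/systemd/system/haproxy.service (sketch)
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg
ExecStart=/usr/local/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
Type=notify
Restart=always

[Install]
WantedBy=multi-user.target
Then systemctl daemon-reload and systemctl enable --now haproxy, after removing /etc/init.d/haproxy so the SysV generator stops producing the legacy unit that is failing here.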
kavehmb2000
(131 rep)
Dec 11, 2022, 04:04 PM
• Last activity: Dec 17, 2022, 07:02 AM
0
votes
1
answers
4574
views
HAPROXY traffic logs not logging in /var/log/haproxy.log
I have the below configuration in /etc/haproxy/haproxy.cfg:
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
#log 127.0.0.1 local2
log /dev/log local0 debug
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
ssl-default-bind-options no-sslv3
tune.ssl.default-dh-param 2048
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:5000
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
log /dev/log local0 debug
option tcplog
use_backend static if url_static
default_backend app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
server app3 127.0.0.1:5003 check
server app4 127.0.0.1:5004 check
I configured rsyslog to send HAProxy logs to /var/log/haproxy.log, but I see only the HAProxy service start/stop logs; I don't see any logs for requests that go through HAProxy.
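A hedged sketch of the rsyslog side I would pair with this (the choice of UDP and the file name are assumptions): with chroot /var/lib/haproxy the /dev/log socket is usually not reachable from inside the chroot, so traffic logs never reach rsyslog even though service start/stop messages (which come from systemd, not from the log directive) still show up:
# /etc/rsyslog.d/haproxy.conf (sketch)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.*    /var/log/haproxy.log
# and in the haproxy.cfg global section:
#     log 127.0.0.1:514 local0 debug
The alternative is to keep log /dev/log and have rsyslog create a socket inside the chroot with $AddUnixListenSocket /var/lib/haproxy/dev/log.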
vijaykumar nemannavar
(1 rep)
May 8, 2022, 02:34 PM
• Last activity: Oct 20, 2022, 02:54 PM
4
votes
1
answers
1533
views
How to redirect rsyslog messges from a specific unix socket to a different log file without duplication?
I have been trying to implement separate logging for haproxy.
But I end up with duplicate logging and can't separate logs based on the input socket or facility alone.
My sample configuration in haproxy:
**Global configuration:**
log /dev/log len 1024 format local local0 debug
**Frontend -1 configuration (for Web requests):**
log /dev/request-log len 1024 format local local1 debug
**Frontend-2 configuration (for DB requests):**
log /dev/db-log len 1024 format local local2 debug
So here I was basically trying to redirect logs to different sockets and additionally I also used different facilities for each, because I don't know how to redirect messages based on input socket.
And in rsyslog configuration, I added the following:
$AddUnixListenSocket /var/lib/haproxy/dev/log
local0.* /var/log/haproxy/haproxy.log
$AddUnixListenSocket /var/lib/haproxy/dev/request-log
local2.* /var/log/haproxy/requests.log
$AddUnixListenSocket /var/lib/haproxy/dev/db-log
local3.* /var/log/haproxy/db.log
But all the above log files contain the same logging, i.e. web request logging, DB logging and other HAProxy logging are all duplicated across these three files, as well as in the default /var/log/messages.
Complete rsyslog.conf:
$ModLoad imuxsock
$ModLoad imjournal
$WorkDirectory /var/lib/rsyslog
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$SystemLogSocketName /run/systemd/journal/syslog
$OmitLocalLogging on
$IMJournalStateFile imjournal.state
*.info;mail.none;authpriv.none;cron.none /var/log/messages
authpriv.* /var/log/secure
mail.* -/var/log/maillog
cron.* /var/log/cron
*.emerg :omusrmsg:*
uucp,news.crit /var/log/spooler
local7.* /var/log/boot.log
local1.* /var/log/keepalived.log
$AddUnixListenSocket /var/lib/haproxy/dev/log
local0.* /var/log/haproxy/haproxy.log
$AddUnixListenSocket /var/lib/haproxy/dev/request-log
local2.* /var/log/haproxy/requests.log
$AddUnixListenSocket /var/lib/haproxy/dev/db-log
local3.* /var/log/haproxy/db.log
Note: the same issue occurs with keepalived.log, which is also mentioned in the above config.
I see logging works correctly without any duplication if I use something like this instead of using facility:
:programname, startswith, "haproxy" {
/var/log/haproxy/haproxy.log
stop
}
But it is unwanted extra processing when it could be easy to filter out messages based on input socket or facility name.
Can anyone help me understand why the duplication is happening here, but not for the other default sections like cron, mail, authpriv, etc., where there is no duplication? Or how can I redirect messages based on the input socket?
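One hedged reading of the duplication: selector lines such as local0.* are evaluated against every message rsyslog receives, whichever socket it arrived on, so all three HAProxy sockets feed the same global rule set (and *.info catches them for /var/log/messages too). A sketch of per-socket routing with rulesets (RainerScript syntax; the ruleset names are mine, and this assumes an rsyslog version whose imuxsock input supports ruleset binding):
module(load="imuxsock")

input(type="imuxsock" Socket="/var/lib/haproxy/dev/request-log" CreatePath="on" ruleset="haproxy-requests")
input(type="imuxsock" Socket="/var/lib/haproxy/dev/db-log"      CreatePath="on" ruleset="haproxy-db")

ruleset(name="haproxy-requests") {
    action(type="omfile" file="/var/log/haproxy/requests.log")
    stop
}
ruleset(name="haproxy-db") {
    action(type="omfile" file="/var/log/haproxy/db.log")
    stop
}
Messages bound to a ruleset never reach the default rules, so the stop also keeps them out of /var/log/messages.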
GP92
(915 rep)
Oct 14, 2022, 11:45 AM
• Last activity: Oct 14, 2022, 04:41 PM
0
votes
1
answers
1340
views
haproxy: command not found
I am trying to use haproxy on Linux. I installed it using sudo apt install haproxy, but after the install is complete, when I run haproxy -vv I get the error haproxy: command not found. I tried running sudo systemctl status haproxy.service -l --no-pager and it shows that the haproxy load balancer is active (running). What is happening? Why is the haproxy command not found? Thanks in advance.
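A hedged sketch of the first things I would check (standard Debian/Ubuntu paths, not taken from the question): the package installs the binary into /usr/sbin, which is often not on a regular user's PATH, while the systemd unit runs it by full path, so the service can be active even though your shell cannot find the command:
dpkg -L haproxy | grep bin        # where did the package put the binary?
ls -l /usr/sbin/haproxy
echo "$PATH"                      # is /usr/sbin listed for this user?
/usr/sbin/haproxy -vv             # call it by its full path (or use sudo)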
seriously
(113 rep)
Sep 12, 2022, 01:53 PM
• Last activity: Sep 12, 2022, 03:42 PM
30
votes
5
answers
108993
views
Does HAProxy support logging to a file?
I've just installed **haproxy** on my test server.
Is there a way of making it write its logs to a local file, rather than syslog?
This is only for testing so I don't want to start opening ports / cluttering up syslog with all my test data.
Unfortunately, the only information I can find all revolves around logging to a syslog server.
I tried using:
log /home/user/ha.log local0
in my config. But that told me:
[ALERT] 039/095022 (9528) : sendto logger #1 failed: No such file or directory (errno=2)
when I restarted. So I created the file with touch /home/user/ha.log and restarted, at which point I got:
[ALERT] 039/095055 (9593) : sendto logger #1 failed: Connection refused (errno=111)
Is this possible, or am I going to have to configure syslog etc. to see my test data?
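HAProxy's log directive only targets syslog (an IP:port or a UNIX datagram socket), which is why pointing it at a plain file produces exactly these sendto errors; it cannot append to a file by itself. A hedged sketch of the smallest local workaround, assuming rsyslog (the snippet file name is mine):
# /etc/rsyslog.d/49-haproxy-test.conf (sketch)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.*    /home/user/ha.log
# and in haproxy.cfg:   log 127.0.0.1 local0
Newer HAProxy releases can also log to stdout when run in the foreground (for example log stdout format raw daemon), which may be enough for a test box.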
IGGt
(2547 rep)
Feb 9, 2016, 10:07 AM
• Last activity: Sep 7, 2022, 10:25 AM
1
votes
1
answers
539
views
Where is a mistake in HAProxy configuration?
I have a problem with HAProxy configuration. I have several backend servers and in most cases, redirections are being done correctly. It works everywhere except one backend, where Rocket.Chat is configured.
HAProxy is redirecting traffic for two domains and subdomains to the correct Virtual Machines, which are installed on my two bare-metal Hyper-V Servers. Everything is connected via OpenVPN with split-tunnelling. All domains and subdomains are correctly set up in Cloudflare DNS.
The problem is that one redirection is not working properly. I have many backends, but only the one with Rocket.Chat is being forwarded to the incorrect backend.
**Frontend and backend configuration:**
frontend main
bind *:5000
acl url_static path_beg -i /static /images /javascript /stylesheets
acl url_static path_end -i .jpg .gif .png .css .js
use_backend static if url_static
default_backend app
#Frontend for traffic to VPN.
frontend https--in
bind 0.0.0.0:443 ssl crt /etc/cert/
mode http
option httplog
use_backend cloud if { hdr_dom(host) -i cloud.domain.dev } { dst_port 443 }
use_backend cloud if { hdr_dom(host) -i www.cloud.domain.dev } { dst_port 443 }
use_backend steldev if { hdr_dom(host) -i www.domain.dev } { dst_port 443 }
use_backend steldev if { hdr_dom(host) -i domain.dev } { dst_port 443 }
use_backend chat if { hdr_dom(host) -i chatx.domain.dev } { dst_port 443 }
use_backend adsb if { hdr_dom(host) -i adsb.domain2.xyz } { dst_port 443 }
use_backend monitoring if { hdr_dom(host) -i monitoring.domain.dev } { dst_port 443 }
use_backend itsm if { hdr_dom(host) -i itsm.domain.dev } { dst_port 443 }
use_backend itsm if { hdr_dom(host) -i itsm.domain2.xyz } { dst_port 443 }
use_backend board if { hdr_dom(host) -i board.domain2.xyz } { dst_port 443 }
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
server app3 127.0.0.1:5003 check
server app4 127.0.0.1:5004 check
backend cloud
balance leastconn
option httpclose
cookie JSESSIONID prefix
server node1 10.11.12.2:80 cookie A check
backend monitoring
balance leastconn
option httpclose
cookie JSESSIONID prefix
server node1 10.11.12.7:80 cookie A check
backend steldev
balance leastconn
option httpclose
cookie JSESSIONID prefix
server node1 10.11.12.4:80 cookie A check
backend chat #That one is forwarding to steldev on 10.11.12.4:80, not chat.
balance roundrobin
server node1 10.11.12.5:3000 cookie A check
backend itsm
balance leastconn
option httpclose
cookie JSESSIONID prefix
server node1 10.11.12.9:80 cookie A check
backend board
balance leastconn
option httpclose
cookie JSESSIONID prefix
server node1 10.11.12.10:80 cookie A check
backend adsb
balance leastconn
option httpclose
cookie JSESSIONID prefix
server node1 10.11.12.3:88 cookie A check
I completely do not know what is wrong with chatx.domain.dev (**chat** backend) configuration.
If this is important: CentOS 8.3, kernel 4.18, HAProxy v1.8.23
Do any of you see a mistake in my configuration? I'm pretty new to HAProxy.
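One hedged reading (an assumption, not a verified diagnosis): hdr_dom matches the Host value as dot-delimited domain parts, so domain.dev also matches chatx.domain.dev, and use_backend rules are evaluated in the order written, which means the steldev rule fires before the chat rule is ever reached. A sketch that puts the more specific rule first and uses exact host matching:
use_backend chat    if { hdr(host) -i chatx.domain.dev } { dst_port 443 }
use_backend steldev if { hdr(host) -i domain.dev www.domain.dev } { dst_port 443 }
Simply moving the existing chat line above the two steldev lines should have the same effect.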
BrejSki
(31 rep)
Mar 20, 2021, 12:09 PM
• Last activity: Jun 14, 2022, 04:05 PM
2
votes
1
answers
1445
views
Where does each of my local facilities logs to in Unix?
I was using the local0 facility to log info in HAProxy. What I don't understand is which file each of my facilities (local0 ... local7) logs to.
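For context, a facility such as local0 has no fixed destination file; where it ends up is decided entirely by the syslog daemon's rules, and the local facilities usually have no dedicated rule until you add one (beyond catch-alls like /var/log/messages or the journal). A hedged sketch of how to see and set that mapping on a typical rsyslog box (paths are common defaults, not from the question):
# where do the localN facilities currently go on this machine?
grep -R "local[0-7]" /etc/rsyslog.conf /etc/rsyslog.d/ 2>/dev/null
# example rule one could add, e.g. in /etc/rsyslog.d/haproxy.conf:
#     local0.*    /var/log/haproxy.log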
Himanshuman
(321 rep)
May 18, 2022, 04:44 PM
• Last activity: May 19, 2022, 09:00 AM
0
votes
1
answers
120
views
Does the "offical" HAProxy ingress controller provide drain support?
We're exploring using the "official" HAProxy ingress controller (https://www.haproxy.com/documentation/kubernetes/latest/) and require traffic to continue to be sent to a pod that is in a terminating state, allowing stateful client/back-end communications to complete before the pod terminates. This is handled via "drain support" in JCMorais' HAProxy ingress controller (https://haproxy-ingress.github.io/).
We are unable to find any equivalent in the "official" ingress controller. Does it exist?
user3546411
(101 rep)
Nov 4, 2021, 08:21 PM
• Last activity: Nov 4, 2021, 09:21 PM
0
votes
2
answers
3667
views
Getting "ERROR 1364 (HY000): Field 'ssl_cipher' doesn't have a default value" Error using mariadb
While creating a haproxy_check user in MariaDB, I am getting the error "ERROR 1364 (HY000): Field 'ssl_cipher' doesn't have a default value". What do I need to do?
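For what it's worth, a hedged note (an assumption about the cause): this error typically appears when a row is inserted into mysql.user directly, because NOT NULL columns such as ssl_cipher then receive no value; creating the account with the normal statement avoids it. A sketch, with the user name taken from the question and the host part assumed:
mysql -u root -p -e "CREATE USER 'haproxy_check'@'%'; FLUSH PRIVILEGES;"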
Aftab Ali
(1 rep)
Apr 8, 2019, 08:09 AM
• Last activity: Oct 18, 2021, 05:17 PM
Showing page 1 of 20 total questions