Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
6
votes
3
answers
2451
views
Port forwarding and load balancer in Ubuntu Server 12.04
I am looking to create a load balancing server. Essentially, here is what I want to do:
I have a public IP address, let's say 1.1.1.1, and a second public IP address, let's say 2.2.2.2. I have a website, www.f.com, pointing to 1.1.1.1 via an A record. I want that Ubuntu server to forward traffic like this:
- Port 80 traffic is forwarded to 2.2.2.2 on ports 60,000 and 60,001.
- Port 443 traffic is forwarded to 2.2.2.2 on ports 60,010 and 60,011.
- Port 25 traffic is forwarded to 2.2.2.2 on ports 60,020 and 60,021.
The port forwarding is more important than being able to load balance.
I look forward to some responses. Servers 1.1.1.1 and 2.2.2.2 are both running Ubuntu 12.04 Server Edition.
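In iptables terms, what I have in mind is roughly the sketch below (untested; the `statistic` match in `nth` mode alternates new connections between the two destination ports — only the first packet of a connection hits the NAT table, so this is effectively per-connection):

```shell
# Sketch only: forward port 80 on 1.1.1.1 to 2.2.2.2:60000/60001,
# alternating connections between the two ports.
# Analogous rules would cover 443 -> 60010/60011 and 25 -> 60020/60021.
sysctl -w net.ipv4.ip_forward=1

# Every 2nd new connection goes to 60000...
iptables -t nat -A PREROUTING -p tcp --dport 80 \
  -m statistic --mode nth --every 2 --packet 0 \
  -j DNAT --to-destination 2.2.2.2:60000
# ...the rest fall through to 60001
iptables -t nat -A PREROUTING -p tcp --dport 80 \
  -j DNAT --to-destination 2.2.2.2:60001

# Rewrite the source address so replies return through this box
iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp -j MASQUERADE
```

Is something along these lines the right direction?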
Matthew St Nicholas Iverson
(69 rep)
Dec 3, 2012, 02:32 AM
• Last activity: Jun 7, 2025, 05:04 AM
0
votes
0
answers
130
views
Can bonding mode 6 load-balance received traffic across the slaves?
Does mode 6 balance received traffic across the slave interfaces?
OS: Ubuntu 18.04.6 LTS; bonding driver: v3.7.1; kernel: 4.15.0-213-generic
In mode 6 I set the parameters `bond_arp_interval 100`, `arp_validate 3`, and `bond_arp_ip_target ip1,ip2`, then restarted the interface. Afterwards I get only *ip1* from the file */sys/class/net/bond6/bonding/arp_ip_target*, but *0* from both */sys/class/net/bond6/bonding/arp_interval* and */sys/class/net/bond6/bonding/arp_validate*. I then pushed traffic to the destination host with `iperf3`: all traffic from the different source hosts, each with a different ARP record (the same destination IP with different MAC addresses belonging to the destination host's mode 6 bonding slaves), is always received by the same slave interface on the destination host. Maybe the ARP records do not update properly in the subnet, so receive traffic load balancing can't happen.
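For reference, this is roughly how I applied those parameters through sysfs (a sketch: the bond name `bond6` matches the paths above, `ip1`/`ip2` are placeholders, and the ARP-monitor settings usually have to be changed while the bond is down):

```shell
# Configuration sketch: set ARP-monitor parameters on bond6 via sysfs
ip link set bond6 down
echo 100  > /sys/class/net/bond6/bonding/arp_interval
echo 3    > /sys/class/net/bond6/bonding/arp_validate    # 3 = "all"
echo +ip1 > /sys/class/net/bond6/bonding/arp_ip_target   # +/- adds/removes a target
echo +ip2 > /sys/class/net/bond6/bonding/arp_ip_target
ip link set bond6 up

# Read back what the driver actually accepted
grep . /sys/class/net/bond6/bonding/arp_interval \
       /sys/class/net/bond6/bonding/arp_validate \
       /sys/class/net/bond6/bonding/arp_ip_target
```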
I also tested `bond_arp_interval 100`, `arp_validate 3`, and `bond_arp_ip_target ip1` in mode 1, and there it works; see this Red Hat solution. Maybe ARP probes are not suitable for mode 6? Why does mode 6 claim it can achieve receive traffic balancing?
How does the bonding driver initiate an ARP reply to the peers to update their ARP records? I can't find any other parameter that controls this.
Linux Ethernet Bonding Driver HOWTO :
> Receive load balancing is handled by Address Resolution Protocol (ARP) negotiation and table mapping to the relevant group interface.
>
> Hence, peers learn the hardware address of the bond and the balancing of receive traffic collapses to the current slave. This is handled by sending updates (ARP Replies) to all the peers with their individually assigned hardware address such that the traffic is redistributed.
Questions:
1. How does bonding achieve mode 6 **receive traffic load balancing** across the slaves?
2. Why doesn't it work with the **arp monitor** in mode 6?
3. Could it work in a **Distributed VXLAN Gateway** with dynamically learned ARP entries and ARP broadcast suppression?
4. When two Distributed VXLAN Gateway (Q3) leaves learn the **same host IP ARP entry but with different MAC addresses** from the local network (switch port), what would they do?
VictorLee
(37 rep)
Sep 29, 2024, 03:04 PM
• Last activity: May 23, 2025, 05:05 AM
2
votes
2
answers
2527
views
keepalived no route to host, firewall issue?
I have a simple two server config of keepalived. The master/backup selection is working fine but I can't connect to the VIP from the backup server. When I try connecting, on the master I can see ARP requests from the backup server and responses from the master; on the backup server I only see the requests (i.e., I don't see the ARP responses from the master).
Master keepalived.conf:
    vrrp_script haproxy-check {
        script "/usr/bin/pgrep python"
        interval 5
    }

    vrrp_instance haproxy-vip {
        state MASTER
        priority 101
        interface eth0
        virtual_router_id 47
        advert_int 3
        unicast_src_ip 192.168.122.4
        unicast_peer {
            192.168.122.9
        }
        virtual_ipaddress {
            192.168.122.250
        }
        track_script {
            haproxy-check weight 20
        }
    }
Backup keepalived.conf:
    vrrp_script haproxy-check {
        script "/usr/bin/pgrep python"
        interval 5
    }

    vrrp_instance haproxy-vip {
        state BACKUP
        priority 99
        interface eth0
        virtual_router_id 47
        advert_int 3
        unicast_src_ip 192.168.122.9
        unicast_peer {
            192.168.122.4
        }
        virtual_ipaddress {
            192.168.122.250
        }
        track_script {
            haproxy-check weight 20
        }
    }
ip addr on master:
2: eth0: mtu 1458 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:9e:e8:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.4/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0
valid_lft 55567sec preferred_lft 55567sec
inet 192.168.122.250/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::571a:df5f:930c:2b57/64 scope link noprefixroute
valid_lft forever preferred_lft forever
And on backup:
2: eth0: mtu 1458 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:2e:59:3d brd ff:ff:ff:ff:ff:ff
inet 192.168.122.9/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0
valid_lft 79982sec preferred_lft 79982sec
inet6 fe80::f816:3eff:fe2e:593d/64 scope link
valid_lft forever preferred_lft forever
tcpdump from master:
# tcpdump -nni eth0 arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:44:06.299398 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:06.299435 ARP, Reply 192.168.122.250 is-at fa:16:3e:9e:e8:18, length 28
11:44:07.298939 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:07.298985 ARP, Reply 192.168.122.250 is-at fa:16:3e:9e:e8:18, length 28
11:44:08.300920 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:08.300954 ARP, Reply 192.168.122.250 is-at fa:16:3e:9e:e8:18, length 28
11:44:09.303039 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:09.303062 ARP, Reply 192.168.122.250 is-at fa:16:3e:9e:e8:18, length 28
And from the backup:
# tcpdump -nni eth0 arp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:44:39.430367 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:40.431810 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:41.433847 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:42.435979 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
11:44:43.437814 ARP, Request who-has 192.168.122.250 tell 192.168.122.9, length 28
I don't believe it's a firewall issue (iptables -L | grep -i arp doesn't show anything). Is there a kernel setting that could be causing the issue? Any suggestions for debugging?
The OS is CentOS 7; keepalived is 2.1.5.
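For debugging, I assume the usual suspects are the kernel's ARP and reverse-path-filter knobs; a quick way to dump them on both hosts for comparison (read-only, no root needed; the values printed are whatever the kernel currently has, not recommendations):

```shell
# Print ARP- and reverse-path-filter-related settings ("all" and "default" scope)
for f in arp_ignore arp_announce arp_filter rp_filter; do
    printf '%-12s all=%s default=%s\n' "$f" \
        "$(cat /proc/sys/net/ipv4/conf/all/$f)" \
        "$(cat /proc/sys/net/ipv4/conf/default/$f)"
done
```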
user693861
(131 rep)
Jul 28, 2020, 03:51 PM
• Last activity: Apr 27, 2025, 11:02 PM
1
votes
1
answers
1837
views
Route proxy traffic through a different interface with identical upstream gateways
My question is similar to the threads below, with a small caveat not answered by any of them:
https://unix.stackexchange.com/questions/21093/output-traffic-on-different-interfaces-based-on-destination-port
https://serverfault.com/questions/648460/load-balancing-network-traffic-using-iptables
https://unix.stackexchange.com/questions/58635/iptables-set-mark-route-diferent-ports-through-different-interfaces
https://unix.stackexchange.com/questions/12085/only-allow-certain-outbound-traffic-on-certain-interfaces
I have three network devices: eth0 (router), eth1 (connected to internet gw1), and eth2 (connected to internet gw2)
eth0 -> eth1 (ip via dhcp) -> gw1 (192.168.0.1 netmask 255.255.255.0) -> internet
eth0 -> eth2 (ip via dhcp) -> gw2 (192.168.0.1 netmask 255.255.255.0) -> internet
I am running Polipo on the router and want the traffic of everyone connecting to Polipo to go out through eth2. Everyone else's traffic will be routed via eth1.
The problem is that both eth1 and eth2 get their IP addresses via DHCP, and gw1 is identical to gw2. This is part of our infrastructure, and they will both have the same IP address (i.e. gw1 and gw2 are both 192.168.0.1 with netmask 255.255.255.0).
All the answers in the threads mentioned above involve distinct subnets and IP addresses to isolate/mark gateway traffic. In my case this is not an option: I have to accomplish this without changing anything about gw1/gw2 and without involving static IP addresses on eth1 and eth2. Is this even possible?
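For what it's worth, the closest I can imagine is marking the proxy's traffic and using a device-qualified route in a separate table, something like this sketch (assuming Polipo runs as its own user, here called `proxy`; the `dev eth2` qualifier is what disambiguates the two identical 192.168.0.1 gateways):

```shell
# Sketch: a default route out eth2 in its own routing table
ip route add default via 192.168.0.1 dev eth2 table 102
ip rule add fwmark 2 table 102

# Mark packets generated by the proxy user so they use table 102
iptables -t mangle -A OUTPUT -m owner --uid-owner proxy -j MARK --set-mark 2
# Make sure marked packets leave with eth2's source address
iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
```

Would something like this survive the DHCP renewals, or is it doomed from the start?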
iptables
(11 rep)
Dec 21, 2014, 02:07 AM
• Last activity: Jun 10, 2024, 05:01 PM
1
votes
0
answers
479
views
Can Linux load balance between multiple routes to the same subnet?
I'm looking for a way to load balance between multiple routes to the same subnet under Linux.
The scenario is that we have multiple Linux machines (EC2 instances), each configured with a Strongswan VPN client. Each one has access to the same subnet on a remote site.
We'd like to configure other machines to route to that subnet through those machines. The problem is that the VPN tunnels are rate-limited, so we don't want to route everything through one server. It's also possible that one could go down for any reason, so we'd like a solution with some level of failover.
I've been unable to find much information on the subject.
Is this something that typically requires dedicated hardware, or did I miss something in iptables / nftables?
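For context, what I'm imagining is something like a multipath (ECMP) route; a sketch with hypothetical addresses (VPN hosts 10.0.1.10 and 10.0.1.20, remote subnet 172.16.0.0/16):

```shell
# One route to the remote subnet with two next hops; the kernel
# spreads flows across them and avoids a next hop it considers dead
ip route add 172.16.0.0/16 \
    nexthop via 10.0.1.10 weight 1 \
    nexthop via 10.0.1.20 weight 1
```

Is that the standard approach, or is this the point where dedicated hardware usually comes in?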
Philip Couling
(20391 rep)
Mar 4, 2022, 10:43 AM
1
votes
0
answers
352
views
Nginx in UDP load balancing continue to send to an unreachable upstream server
I use nginx to load balance traffic coming from UDP syslog sources to Logstash.
Configuration:
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    worker_rlimit_nofile 1000000;

    # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
    include /usr/share/nginx/modules/*.conf;

    events {
        worker_connections 1024;
    }

    include /etc/nginx/conf.d/*.conf;

    # UDP syslog load balancing
    stream {
        server {
            listen 514 udp;
            proxy_pass logstash_servers;
            proxy_timeout 1s;
            proxy_responses 0;
            proxy_bind $remote_addr transparent;
        }
        upstream logstash_servers {
            server 192.168.2.90:514 fail_timeout=10s;
            server 192.168.2.95:514 fail_timeout=10s;
        }
    }
It works fine, but if one of my upstream Logstash servers is down, nginx does not take it into account and I receive only half of the messages on the remaining upstream Logstash server.
How can I tell nginx to use only the remaining Logstash server when one is down?
Atreiide
(11 rep)
Sep 30, 2021, 09:39 AM
• Last activity: Sep 30, 2021, 09:57 AM
0
votes
1
answers
103
views
Constant concurrent connections drain my server storage
I apologize in advance if this question is in a wrong forum, this is my first question here!
My client has hosting with Aliyun Cloud (Alibaba Cloud in China). I've deployed a microsite to their servers, which has following structure:
microsite.com -> CDN1 -> SLB -> 2x ECS -> DB ECS
oss.microsite.com -> CDN2 -> OSS
ECS instances under the SLB have sticky sessions and serve only HTML responses. All other files (JS, CSS, etc.) are served from the OSS domain. These instances also use the database to store session data (e.g. user IP address, timestamp of last activity, etc.).
After 3 weeks, the database instance ran out of its 40GB of storage space. When I looked into it, I saw 23 million session entries.
The ECS instances are under a constant 100-150 concurrent connections, day and night, 24/7, although the number of actual users (we use GA for tracking) is maybe 10-15 per day (the campaign hasn't started yet).
I am baffled, as the client's IT says this is "normal" and not an "attack" because an attack would be "much more severe". They have no explanation for where this traffic comes from. I can, however, see a constant flow of requests in the access log (tail -f access.log).
These are always there, day and night, whenever I SSH in. GA is empty, except when I open the microsite or someone from the client side does (the link hasn't been pushed to media yet).
Does anyone have any idea what this is? It seems to me like some attempt to drain the server's resources, or some unsuccessful DDoS. But because it stays at 100-200 concurrent connections, no firewall/security rule is triggered by Aliyun. I don't have access to the Aliyun console; I can only SSH into the servers.
I simply can't believe this is "normal". On Cloudflare I had options for bot protection, JavaScript challenges, etc. Aliyun seems to have nothing. Or they simply don't care.
Some technical info:
All ECS instances are on Ubuntu 20.04. Web service is Apache2, with PHP7.4 and PHP7.4-FPM running. Database instance has MySQL8. Database instance only allows connections from web server instances, and those allow HTTP connection only from SLB (Server Load Balancer, equivalent to Elastic Load Balancer on AWS). This means that all traffic still has to come through SLB to instances under it.
Has anyone experienced anything like this? How can I protect my backend from it if they are unable to do it?
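For anyone who wants to see what I see: this is how I summarize the top requesting IPs from the access log (standard Apache combined log format assumed, client IP in the first field; the log path may differ per vhost):

```shell
# Count requests per client IP, busiest first
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
```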
Siniša
(111 rep)
Aug 29, 2021, 09:17 AM
• Last activity: Sep 7, 2021, 07:31 AM
0
votes
0
answers
270
views
Can I have multiple routes to the same remote LAN?
I am trying to connect two pfSense boxes with multiple OpenVPN tunnels. Currently, the automatic route update makes only one route active.
For example, I am connecting the 192.168.10.0/24 and 192.168.33.0/24 LANs via 192.168.27.0/24 and 192.168.29.0/24 tuns, and only the routes via the 27 network get added.
What will happen if I add an equivalent route via 29? Will it work out of the box? Will it load balance?
Dims
(3425 rep)
Jul 8, 2021, 02:35 PM
• Last activity: Jul 9, 2021, 02:37 PM
0
votes
1
answers
538
views
Haproxy: replace any active failing server with backup
My haproxy configuration is like this:

    backend my-liveBackend
        timeout connect 5s
        timeout server 600s
        mode http
        balance uri len 52
        server my-live-backend1 10.80.1.161:8080 check
        server my-live-backend2 10.80.1.162:8080 check
        server my-live-backend3 10.80.1.163:8080 check
        server my-live-backend4 10.80.1.164:8080 check
        server my-live-backend5 10.80.10.165:8080 check backup
        server my-live-backend6 10.80.10.166:8080 check backup
        server my-live-backend7 10.80.10.167:8080 check backup
        server my-live-backend8 10.80.10.168:8080 check backup

I understand that haproxy will only activate a backup if ALL non-backup servers fail:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-backup
> When "backup" is present on a server line, the server is only used in load
> balancing when all other non-backup servers are unavailable.
We actually need a number of servers active (ideally 4) and some as backups for when we do maintenance on the active ones.
Is there an option that allows this? Something like "I want at least 4 servers always active".
I couldn't find anything in the documentation to do that.
My expectation would be like this:
- live1 UP
- live2 DOWN
- live3 UP
- live4 UP
- live5 BACKUP UP ACTIVE <<== replaces 2 while 2 is in maintenance
- live6 BACKUP UP INACTIVE
- live7 BACKUP UP INACTIVE
- live8 BACKUP UP INACTIVE
When a non-backup server fails, haproxy doesn't activate a backup server to replace it:

Gui13
(168 rep)
Feb 10, 2021, 08:17 AM
• Last activity: Feb 10, 2021, 12:41 PM
0
votes
0
answers
72
views
Website STILL reachable over HTTP after Nginx redirect config
I use AWS EC2 to build a website behind a load balancer.
However, the pentesting result shows that the website can still be reached over HTTP.
I have checked the Nginx config file, and we do have an HTTP-to-HTTPS redirect:
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        return 301 https://$server_name$request_uri;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        # disable checking file size for upload
        client_max_body_size 0;

        location / {
            proxy_pass http://127.0.0.1:8000/;
            proxy_set_header HOST $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /flower/ {
            # rewrite ^/flower/(.*)$ /$1 break;
            proxy_pass http://127.0.0.1:5555/;
            proxy_set_header HOST $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # Settings for a TLS enabled server.
    server {
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;
        server_name _;
        root /usr/share/nginx/html;
Why does the website still allow communication over HTTP?
Django
(1 rep)
Jan 8, 2021, 02:47 AM
• Last activity: Jan 8, 2021, 06:11 AM
0
votes
1
answers
6402
views
How to NGINX reverse proxy to a backend server which has a self-signed certificate?
I have a small network with a webserver and an OpenVPN Access Server (with its own web interface). I have only 1 public IP and want to be able to point subdomains to websites on the webserver (e.g. website1.domain.com, website2.domain.com) and point the subdomain vpn.domain.com to the web interface of the OpenVPN Access Server.
After some googling I think the way to go is to set up a proxy server. NGINX seems to be able to do this with its "proxy_pass" function. I got it working for HTTP backend URLs (websites), but it does not work for the OpenVPN Access Server web interface, as that forces HTTPS. I'm fine with HTTPS and prefer to use it for the websites hosted on the webserver as well. By default a self-signed certificate is installed, and I want to use self-signed certificates for the other websites too.
How can I "accept" self-signed certificates for the backend servers? I found that I need to generate a certificate and define it in the NGINX reverse proxy config, but I do not understand how this works, as for example my OpenVPN server already has an SSL certificate installed. I'm able to visit the OpenVPN web interface via https://direct.ip.address.here/admin but get a "This site cannot deliver a secure connection" page when I try to access the web interface via Chrome.
My NGINX reverse proxy config:
    server {
        listen 443;
        server_name vpn.domain.com;
        ssl_verify_client off;

        location / {
            # app1 reverse proxy follow
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass https://10.128.20.5:443;
            proxy_ssl_verify off;
        }

        access_log /var/log/nginx/access_log.log;
        error_log /var/log/nginx/access_log.log;
    }

    server {
        listen 80;
        server_name website1.domain.com;

        location / {
            # app1 reverse proxy follow
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://10.128.11.20:80;
        }

        access_log /var/log/nginx/access_log.log;
        error_log /var/log/nginx/access_log.log;
    }
**A nearby thought...**
Maybe NGINX is not the right tool for this at all (now or in the long term)? Let's assume I can fix the certificate issue I currently have and we need more backend web servers to handle the traffic: is it possible to scale the NGINX proxy as well, like a cluster or load balancer or something? Should I look for a completely different tool?
CodeNinja
(231 rep)
May 29, 2020, 07:14 AM
• Last activity: Nov 16, 2020, 04:59 PM
1
votes
0
answers
367
views
linux server sudden slow downs without high load
I have a web server with the following specs:
i7-4770K, 32GB RAM, 500GB SSD (1000 Mbit/s connection).
The server is dedicated for the server side of an Android dating application. The app has around 15k daily users.
The app regularly experiences slow downs to the point where it barely serves any content. The slow down happens suddenly, and I've noticed that when the app is fast the load is around 2-3 (top command), and when it becomes extremely slow, the load drops to below 1.
What could be the issue? I’ve attached screenshots of the top command showing which processes are using the CPU/RAM, etc..
This is the Apache config:
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 2048
MaxRequestWorkers 1200
MaxConnectionsPerChild 10000
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 300
Timeout 300
output of some commands during the slow down
Command# top
Command# sar -q
Command# uptime



Merry Smith
(11 rep)
Oct 17, 2020, 11:46 AM
1
votes
0
answers
378
views
Download Booster: combine ethernet and WiFi bandwidth (load-balancing internet connections)
I'd like to know how I can use **simultaneously** two (or more) internet connections, *i.e. Ethernet, WiFi, but also 3G/LTE, ...*
For example, I would use Ethernet from my ADSL router and WiFi from an Android phone used as a 4G router; how do I **merge** them in order to get a single internet connection?
This is a bleeding-edge request, but after some searching I found at least the right keyword: *load balancing*. It looks like a server-only feature, though, or one that needs dedicated hardware, *e.g. a Ubiquiti router*.
A feature that mimics a download booster, called *Turbo Mode* if I remember correctly, has been found in some Android smartphones (early Samsung S-series models, as I recall); in this case, the phone combines 4G with WiFi.
Thus, Android being Linux-based, is there a way to do it on my Arch Linux laptop?
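From what I've gathered so far, the closest built-in mechanism seems to be a multipath default route; a sketch with made-up gateway addresses (this balances new connections across the links, it does not truly merge bandwidth for a single download):

```shell
# Replace the default route with one that has two next hops;
# new connections are spread across Ethernet and WiFi
ip route replace default scope global \
    nexthop via 192.168.1.1  dev eth0  weight 1 \
    nexthop via 192.168.43.1 dev wlan0 weight 1
```

Is that the way to go, or is there something better suited to a laptop?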
mattia.b89
(3398 rep)
Jun 11, 2017, 08:40 AM
• Last activity: May 1, 2020, 10:14 AM
1
votes
1
answers
1073
views
L4 balancing using ipvs: drop RST packets - failover
I have an L4 IPVS load balancer with L7 Envoy balancers set up. Let's say one of my L4 balancers goes down; thanks to consistent hashing, the traffic which is now handled (thanks to BGP) by another L4 balancer is proxied to the same L7 node.
This should work without any problems and I would think is a common setup.
The problem is with long-running connections. When the new L4 node receives the traffic (just data: ACK/PSH packets) and no SYN packet has been received by the node, the node just sends an RST packet to the client, which terminates the connection. The picture below illustrates this.
This should not be happening, and my question is: is there something (a sysctl config or similar) that is the reason for this? I know I can perhaps drop RST packets using iptables, but that doesn't sound right.
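For completeness, the iptables workaround I mean (and would rather avoid) is bluntly suppressing the director's outgoing resets, roughly:

```shell
# Blunt workaround sketch: stop this node from sending TCP RSTs
# for mid-stream packets it has no state for.
# Caution: this drops ALL outgoing RSTs, including legitimate ones.
iptables -A OUTPUT -p tcp --tcp-flags RST RST -j DROP
```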

Diavel
(61 rep)
Mar 11, 2020, 06:27 AM
• Last activity: Mar 30, 2020, 04:46 PM
0
votes
1
answers
115
views
HTTPD load balancer shows duplicated information about nodes
I have enabled the HTTPD LoadBalancer Manager as:
SetHandler balancer-manager
allow from all
But when I access the HTTP interface, I can see that the nodes are duplicated. I mean by this that if the balancer has 3 nodes, I see 6 entries... why could this be happening?
> Load Balancer Manager for 172.29.164.174
Server Version: Apache/2.2.15 (Unix) DAV/2
Server Built: Mar 3 2015 12:06:14
LoadBalancer Status for balancer://rws
StickySession Timeout FailoverAttempts Method
ROUTEID 0 5 byrequests
Worker URL Route RouteRedir Factor Set Status Elected To From
http://172.29.164.172:8080 RWS_Node_Ol_1 1 0 Init Ok 52476 73M 393M
http://172.29.164.173:8080 RWS_Node_Ol_2 1 0 Init Ok 52476 74M 391M
http://172.29.164.174:8080 RWS_Node_Ol_3 1 0 Init Ok 52476 74M 409M
http://172.29.164.172:8080 RWS_Node_Ol_1 1 0 Init Ok 52476 73M 393M
http://172.29.164.173:8080 RWS_Node_Ol_2 1 0 Init Ok 52476 74M 391M
http://172.29.164.174:8080 RWS_Node_Ol_3 1 0 Init Ok 52476 74M 409M
This is my LB configuration
BalancerMember http://172.29.164.172:8080 route=RWS_Node_Ol_1
BalancerMember http://172.29.164.173:8080 route=RWS_Node_Ol_2
BalancerMember http://172.29.164.174:8080 route=RWS_Node_Ol_3
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
ProxySet stickysession=ROUTEID
ProxyPass /api balancer://rws/api
ProxyPass /internal-api balancer://rws/internal-api
Jorge Cornejo Bellido
(101 rep)
Feb 12, 2020, 01:15 AM
• Last activity: Feb 26, 2020, 10:47 PM
0
votes
0
answers
154
views
NAT: accept a connection and then forward it?
In Linux (or BSD), is it possible to accept an incoming connection, read the first payload packet, and then forward the connection to another server based on the contents of that packet? I'm not blind to the difficulties, and this probably needs to stay in the kernel for performance reasons outside of the initial connection.
Sophit
(101 rep)
Nov 9, 2019, 05:37 AM
0
votes
0
answers
395
views
Trying to pscp a pem file from windows machine to Brocade adx 1000
Brand new to Brocade or any switch, I managed to configure load balancing without SSL and am now working on enabling SSL.
I have a PEM file on my IIS server and I try to copy this to the switch using:
pscp bond.pem admin@x.x.x.x:sslcert:jackash:pem
pscp -scp bond.pem admin@x.x.x.x:sslcert:jackash:pem
I get the error -
The first key-exchange algorithm supported by the server is
diffie-hellman-group1-sha1, which is below the configured warning threshold.
Continue with connection? (y/n) y
bond.pem | 3 kB | 3.4 kB/s | ETA: 00:00:00 | 100%
FATAL ERROR: Received unexpected end-of-file from server
I have enabled SCP on the Brocade. What am I doing wrong here?
WhatsInAName
(1 rep)
Jun 26, 2019, 06:01 PM
• Last activity: Aug 8, 2019, 11:57 AM
0
votes
0
answers
109
views
haproxy setup is not working when resource utilization become high
We are using HAProxy v2.3 in our organization to load balance our Apache application servers. Three web application servers are configured in HAProxy.
When any one of the web application servers' resource utilization becomes high, load balancing stops working: all web requests pile up on that heavily loaded server and incoming requests are no longer distributed equally to the other servers, so we cannot access the web page. At that point we have to edit haproxy.cfg, comment out the overloaded server, and reload the haproxy service; only then does the application come back up.
Is there any option in haproxy.cfg to overcome this situation? If a server's load becomes high, that node should be skipped and requests passed to the other nodes.
The configuration is as follows:
    global
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        maxconn 4096
        maxpipes 1024
        nogetaddrinfo
        user haproxy
        group haproxy
        daemon
        tune.ssl.default-dh-param 2048
        ssl-default-bind-ciphers HIGH:!aNULL:!MD5:!eNULL:!EXPORT:!DES:!RC4:!3DES:!PSK

    defaults
        log global
        mode http
        retries 3
        option httplog
        option dontlognull
        option forwardfor
        option http-server-close
        stats enable
        stats auth arun:arun@123
        stats uri /haproxy
        timeout server 1200s
        timeout connect 20s
        timeout client 60s
        log 127.0.0.1:514 local0 notice
        default-server rise 1
        default-server fall 20

    frontend app_nodes
        bind *:443 ssl crt no-sslv3 no-tlsv10 no-tlsv11
        reqadd X-Forwarded-Proto:\ https
        default_backend Application-nodes

    backend Application-nodes
        balance roundrobin
        server :80 weight 1 maxconn 2500 check
        server :80 weight 1 maxconn 2500 check
        server :80 weight 1 maxconn 2500 check
        log 127.0.0.1:514 local3 alert
        log 127.0.0.1:514 local2 info
Arun Thampi
(1 rep)
Aug 1, 2019, 01:12 PM
• Last activity: Aug 2, 2019, 01:45 PM
0
votes
1
answers
2033
views
distribute the JVM across a cluster of machines
*Grasping at straws for search parameters and terminology.*
The key attribute to the `JVM` is the `V` for virtual (at least within the context of this question). How do you span a `JVM` across a cluster of machines with load balancing so that the `JVM` itself is distributed?
sorta kinda like this:


...so that the application only sees **a single** `JVM`?
Thufir
(1970 rep)
Aug 20, 2017, 01:37 PM
• Last activity: Jun 14, 2019, 04:36 AM
1
votes
1
answers
296
views
HAProxy ignores nbproc
In our test environment we've identified a strange HAProxy behavior. We're using the standard RHEL 7 provided `haproxy-1.5.18-8.el7.x86_64` RPM.
According to our understanding, the total number of accepted parallel connections is defined as `maxconn * nbproc`, with both values taken from the `global` section of `haproxy.cfg`.
However, if we define:

    maxconn 5
    nbproc 2

we'd expect the total number of parallel connections to be 10, but we can't get over the 5 defined by `maxconn`. Why is `nbproc` being ignored?
Here is the complete haproxy.cfg:
    # Global settings
    global
        log 127.0.0.1 local2 warning
        log 10.229.253.86 local2 warning
        chroot /var/lib/haproxy
        pidfile /var/run/haproxy.pid
        maxconn 5
        user haproxy
        group haproxy
        daemon
        nbproc 2
        # turn on stats unix socket
        stats socket /var/lib/haproxy/stats
        stats socket /var/run/haproxy.sock mode 600 level admin
        stats socket /var/run/haproxy_hamonit.sock uid 2033 gid 2033 mode 600 level admin
        stats timeout 2m

    defaults
        mode tcp
        log global
        option tcplog
        option dontlognull
        option redispatch
        retries 3
        timeout http-request 10s
        timeout queue 1m
        timeout connect 10s
        timeout client 30s
        timeout server 30s
        timeout http-keep-alive 10s
        timeout check 10s
        bind-process all

    frontend ha01
        bind 10.229.253.89:80
        mode http
        option httplog
        option http-server-close
        option forwardfor except 127.0.0.0/8
        default_backend ha01

    backend ha01
        balance roundrobin
        mode http
        option httplog
        option http-server-close
        option forwardfor except 127.0.0.0/8
        server server1 10.230.11.252:4240 check
        server server2 10.230.11.252:4242 check

    listen stats 10.229.253.89:1936
        mode http
        stats enable
        stats hide-version
        stats realm Haproxy\ Statistics
        stats uri /
        stats auth admin:foo
Jaroslav Kucera
(10882 rep)
Jun 3, 2019, 11:53 AM
• Last activity: Jun 6, 2019, 02:24 PM
Showing page 1 of 20 total questions