Dynamically create backend section in HAProxy configuration

I have a use case where I need to use HAProxy as a proxy rather than a load balancer. So in my case I need many backend sections that need to be updated in the config when the proxy is started.
But is there a way I can create a new backend section dynamically?
global
    log stdout format raw daemon
    stats socket ipv4@127.0.0.1:9999 level admin
    stats socket /var/run/hapee-lb.sock mode 666 level admin
    stats timeout 2m

defaults
    log global
    timeout client 50s
    timeout client-fin 50s
    timeout connect 5s
    timeout server 10s
    timeout tunnel 50s

frontend tcp-0_0_0_0-443
    bind 135.27.110.163:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend %[req.ssl_sni,regsub(.com,.com443,g),lower,map_dom(/usr/local/etc/sample.map,bk_default)]
    default_backend example_com_be

frontend tcp-0_0_0_0-5061
    bind 135.27.110.163:5061
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend %[req.ssl_sni,regsub(.com,.com5061,g),lower,map_dom(/usr/local/etc/sample.map,bk_default)]
    default_backend absanity_5061

backend example_com_be
    mode tcp
    server name1 x.x.x.x:443

backend absanity_5061
    mode tcp
    server name1 y.y.y.y:5061
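For reference, the map file consulted by map_dom is a plain two-column list of key/value pairs. Hypothetical contents of /usr/local/etc/sample.map matching the backends above:

example.com443      example_com_be
absanity.com5061    absanity_5061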
I am using the Runtime API via socat for updating maps. But suppose I want to insert a new backend section with new server details into the config: how can we achieve that?

I don't think you can create new backends at runtime with the socket API. This article gives a good overview of what you can modify at runtime: https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/.
However, you can add new backends without using the socket API by creating a new config with the new backends and reloading HAProxy. This article gives a good overview of how to reload HAProxy without losing connections:
https://www.haproxy.com/blog/truly-seamless-reloads-with-haproxy-no-more-hacks/
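The map half can still happen at runtime; only the new backend itself needs the reload. A rough sketch of the generate-and-reload flow, assuming HAProxy 1.8+ for the -x socket handover, the socket and map paths from the question's config, and placeholder names for the config path, backend, and server (haproxy.cfg, example_org_be, z.z.z.z):

# runtime: point a new domain key at the backend (no reload needed)
echo "add map /usr/local/etc/sample.map example.org443 example_org_be" | \
    socat stdio /var/run/hapee-lb.sock

# config change: append the new backend, validate, then reload seamlessly;
# -x fetches the listening sockets from the old process, -sf asks it to
# finish its connections and exit
cat >> /usr/local/etc/haproxy.cfg <<'EOF'

backend example_org_be
    mode tcp
    server name1 z.z.z.z:443
EOF
haproxy -c -f /usr/local/etc/haproxy.cfg && \
    haproxy -f /usr/local/etc/haproxy.cfg \
        -x /var/run/hapee-lb.sock \
        -sf $(pidof haproxy)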

Related

HAProxy with 3 Spring Boot applications

I have 3 Spring Boot applications, each running on a different port. Can someone guide me on how to set up HAProxy to demonstrate load balancing between the 3 applications (I can create multiple instances)? Is there any feature in Spring Boot that integrates with HAProxy? What are the things I have to change in the HAProxy config file?
Actually, there are several ways one can achieve this. But I don't think there is anything in Spring Boot that integrates with HAProxy: the two are different processes that work independently and are not linked to each other. HAProxy is a load balancer, and also a proxy server for TCP and HTTP applications distributed across multiple servers.
That explains the first part of your question.
Now, how you achieve this depends entirely on how you want to set it up.
Run the individual applications as services, as you did, and route traffic to each of them based on the URL.
Alternatively, deploy the individual applications on a single Tomcat and, with the help of the context path in your application properties, route traffic from the outside world to Tomcat while Tomcat takes care of everything.
There might be other ways to do this; someone can add to this answer in the future. But either way you need a proxy server, which could be HAProxy, Nginx, or anything that fits the purpose.
So, taking your approach, let's assume you are running your applications on ports 8081, 8082, and 8083. Your HAProxy config should look something like this:
frontend www_http
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/mycompany.pem
    # tell the application that the browser is using https
    reqadd X-Forwarded-Proto:\ https
    # for clickjacking protection
    rspadd X-Frame-Options:\ SAMEORIGIN
    # prevent the browser from using non-secure connections
    rspadd Strict-Transport-Security:\ max-age=15768000
    redirect scheme https code 301 if !{ ssl_fc }
    stats enable
    stats refresh 30s
    stats show-node
    stats realm Haproxy\ Statistics
    stats uri /haproxy?stats
    acl app1 hdr(host) -i app1.mycompany.com
    acl app2 hdr(host) -i app2.mycompany.com
    acl app3 hdr(host) -i app3.mycompany.com
    # In case you are using paths instead of subdomains (commented out):
    # acl app1 url_beg /app1
    # acl app2 url_beg /app2
    # acl app3 url_beg /app3
    use_backend app_1_backend if app1
    use_backend app_2_backend if app2
    use_backend app_3_backend if app3

# backend for app 1
backend app_1_backend
    # note: timeout client is only valid in defaults/frontend/listen,
    # so only the server-side timeout is set here
    timeout server 300000
    redirect scheme https if !{ ssl_fc }
    server app-1 127.0.0.1:8081 check
    http-response set-header X-TS-Server-ID %s

# backend for app 2
backend app_2_backend
    timeout server 300000
    redirect scheme https if !{ ssl_fc }
    server app-2 127.0.0.1:8082 check
    http-response set-header X-TS-Server-ID %s

# backend for app 3
backend app_3_backend
    timeout server 300000
    redirect scheme https if !{ ssl_fc }
    server app-3 127.0.0.1:8083 check
    http-response set-header X-TS-Server-ID %s
This is a basic setup, but you can add your own options and change anything as you like.
Hope this helps.
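To sanity-check the host-based routing without setting up DNS, you can send test requests with an explicit Host header; the hostnames are the hypothetical ones from the config above, and -k is needed because the certificate will not match localhost:

# should land on app_1_backend (the app on port 8081)
curl -k -H "Host: app1.mycompany.com" https://localhost/
# should land on app_3_backend (the app on port 8083)
curl -k -H "Host: app3.mycompany.com" https://localhost/
# HAProxy's own stats page
curl -k "https://localhost/haproxy?stats"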

Dropped WebSocket connection in Rancher's Load Balancer

I have a simple WebSocket connection from my browser to a service inside Rancher.
I tried to connect to the service in 2 ways:
1) directly to the service:
browser ---> service
2) via Rancher's Load Balancer:
browser ---> Load Balancer ---> service
In the first case everything is fine: the connection is established and the messages are sent through it.
In the 2nd case the connection is dropped after ~50 s. Until then, messages are sent through the connection correctly in both directions.
What is the reason?
EDIT: I tested it with both the ws and wss protocols. The issue is the same in both cases.
Rancher Load Balancer internally uses HAProxy, which can be customized to your needs.
Here is an example HAProxy config for websockets:
global
    maxconn 4096
    ssl-server-verify none

defaults
    mode http
    balance roundrobin
    option redispatch
    option forwardfor
    timeout connect 5s
    timeout queue 5s
    timeout client 36000s
    timeout server 36000s

frontend http-in
    mode http
    bind *:443 ssl crt /etc/haproxy/certificate.pem
    default_backend rancher_servers
    # Add headers for SSL offloading
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Ssl on if { ssl_fc }
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend rancher_servers if is_websocket

backend rancher_servers
    server websrv1 <rancher_server_1_IP>:8080 weight 1 maxconn 1024
    server websrv2 <rancher_server_2_IP>:8080 weight 1 maxconn 1024
    server websrv3 <rancher_server_3_IP>:8080 weight 1 maxconn 1024
Reference: https://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/basic-ssl-config/#example-haproxy-configuration
Only the relevant config needs to go in the "Custom haproxy.cfg" section of the LB.
Here is the link for more documentation for custom haproxy in Rancher: https://rancher.com/docs/rancher/v1.6/en/cattle/adding-load-balancers/#custom-haproxy-configuration
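As a quick way to verify that the Upgrade header reaches HAProxy and that the tunnel survives past the ~50 s mark, you can drive a raw WebSocket handshake with curl (the URL is a placeholder, and the Sec-WebSocket-Key is just an arbitrary base64 string):

curl -ikN https://<rancher_lb>/ws \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ=="

A 101 Switching Protocols response means the is_websocket ACL matched; if the connection then stays open well past 50 s, the long timeout client / timeout server values are doing their job.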

Socket.io behind HAProxy behind Google Cloud load balancer giving connection errors

We are trying to configure our Socket.io socket servers behind HAProxy, with Google Cloud Load Balancer in front of HAProxy so that HAProxy is not a single point of failure, as described in this post: https://medium.com/google-cloud/highly-available-websockets-on-google-cloud-c74b35ee20bc#.o6xxj5br8.
At the Google Cloud load balancer we are using TCP load balancing with SSL proxy, with the PROXY protocol turned on.
HAProxy is configured to use cookies so that a client always connects to the same server. However, since cookies might not be available on all our clients' systems, we decided to use the source load-balancing algorithm in HAProxy. Here is the HAProxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    maxconn 16384
    tune.ssl.default-dh-param 2048
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    mode http
    log global
    option httplog
    option http-server-close
    option dontlognull
    option redispatch
    option contstats
    retries 3
    backlog 10000
    timeout client 25s
    timeout connect 5s
    timeout server 25s
    timeout tunnel 3600s
    timeout http-keep-alive 1s
    timeout http-request 15s
    timeout queue 30s
    timeout tarpit 60s
    default-server inter 3s rise 2 fall 3
    option forwardfor

frontend public
    bind *:443 ssl crt /etc/ssl/private/key.pem ca-file /etc/ssl/private/cert.crt accept-proxy
    maxconn 50000
    default_backend ws

backend ws
    timeout check 5000
    option tcp-check
    option log-health-checks
    balance source
    cookie QUIZIZZ_WS_COOKIE insert indirect nocache
    server ws1 socket-server-1:4000 maxconn 4096 weight 10 check rise 1 fall 3 cookie ws1 port 4000
    server ws2 socket-server-1:4001 maxconn 4096 weight 10 check rise 1 fall 3 cookie ws2 port 4001
    server ws3 socket-server-2:4000 maxconn 4096 weight 10 check rise 1 fall 3 cookie ws3 port 4000
    server ws4 socket-server-2:4001 maxconn 4096 weight 10 check rise 1 fall 3 cookie ws4 port 4001
This is, however, giving connection errors for around 5% of our clients compared to our old single-server system. Any suggestions?
Edit: "Connection errors" means the client was not able to connect to the socket server and the Socket.io client was throwing connection errors.
Thanks in advance.
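One way to narrow down errors like this is to watch per-server session counts and health state through the Runtime API while clients connect; the socket path is the one from the config above, and columns 1, 2, 5, and 18 of show stat are pxname, svname, scur, and status:

watch -n1 'echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,5,18'

If the errors line up with a server flapping between UP and DOWN, the rise 1 threshold may be putting servers back into rotation too eagerly.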

HAProxy 1.5 performance issues

I need to improve the performance of an HAProxy 1.5 instance running as a load balancer on Ubuntu 14.04. We have analytics-like code on many sites, and for every pageview the client requests between 2 and 5 different scripts of ours. The other day we received more than 1k requests per second on the load balancer and it started to run really slow: it reached the active sessions limit of 2000 at a rate of 1000 per second. In the configuration we use timeout http-keep-alive 100 to keep the connection open for 100 ms until it is closed. How can we improve this? What is the best config for this use case? I may be leaving out many details here; please ask if any info is missing.
EDIT
Here are some details:
I'm running an Ubuntu 14.04 server on an AWS c3.xlarge virtual machine. There we use HAProxy 1.5 to load-balance web traffic between several app instances. (Every app has its own HAProxy to load-balance between its own app instances, deployed one per core.)
The server has only HAProxy and no other software installed.
The bottleneck, per the HAProxy stats page, is the front-end load balancer: at that moment it had a session rate of 258 and 2000 current sessions (2000 being the max), while all the apps had a session rate of 96 and 0/1 current sessions. I would post an image, but because of my reputation points I can't do that.
This was the configuration at that point in time:
global
    maxconn 18000
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    mode http
    retries 2
    option redispatch
    timeout connect 5s
    timeout client 15s
    timeout server 15s
    timeout http-keep-alive 1

frontend public
    log 127.0.0.1 local0 notice
    option dontlognull
    option httplog
    bind *:80
    bind *:443 ssl crt /etc/ssl/private/server.pem
    default_backend rely_apps

frontend private
    bind 127.0.0.1:80
    stats enable
    stats auth xxx:xxx
    stats admin if LOCALHOST
    stats uri /haproxy?stats
    stats show-legends
    stats realm Haproxy\ Statistics

backend rely_apps
    option forwardfor
    balance roundrobin
    option httpchk
    server app1 10.0.0.1:80 check
    server app2 10.0.0.2:80 check
    server app3 10.0.0.3:80 check
The number of connections was very high; it seems connections were not being closed (or were being closed at a really low rate).
CPU and memory usage were really low.
Now we have changed that config to the following one, and it's working without problems:
global
    maxconn 64000
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    tune.bufsize 16384
    tune.maxrewrite 1024
    nbproc 4
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    mode http
    retries 2
    option redispatch
    option forceclose
    option http-pretend-keepalive
    timeout connect 5s
    timeout client 15s
    timeout server 15s
    timeout http-keep-alive 1s

frontend public
    log 127.0.0.1 local0 notice
    option dontlognull
    option httplog
    maxconn 18000
    bind *:80
    bind *:443 ssl crt /etc/ssl/private/server.pem
    default_backend rely_apps

#frontend private
#    bind 127.0.0.1:80
    stats enable
    stats auth xxx:xxx
    stats admin if LOCALHOST
    stats uri /haproxy?stats
    stats show-legends
    stats realm Haproxy\ Statistics

backend rely_apps
    option forwardfor
    balance leastconn
    option httpchk
    server app1 10.0.0.1:80 check maxconn 100
    server app2 10.0.0.2:80 check maxconn 100
    server app3 10.0.0.3:80 check maxconn 100
However, all connections are now being closed on return (and we have the same rate of sessions and requests).
This is not good either, because we have to open a new connection for every client request (and we have 3-4 requests per client).
How can we achieve a good keep-alive (I think something like 100 ms could work) without hitting the max connections limit?
Thanks.
The numbers you give are very, very low.
Please give more details about your architecture, the type of server, and any third-party software running on it (such as iptables), and also share your configuration.
Baptiste
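On the keep-alive question above: a sketch of the direction the asker describes, which is to stop force-closing connections and instead bound their idle time, using directives that exist in HAProxy 1.5 (the 100 ms value is the asker's own guess, not a recommendation):

defaults
    mode http
    option http-keep-alive        # replaces option forceclose / http-pretend-keepalive
    timeout http-keep-alive 100ms # idle window between requests on a kept-alive connection
    timeout client 15s
    timeout server 15s

Whether 100 ms is enough depends on how quickly clients fire their 2-5 follow-up script requests; the stats page's session counts will show whether connections pile up again.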

HAProxy load balancing MySQL servers

I have a database cluster of 3 nodes using Percona XtraDB. The three nodes are configured on three different systems. I have used HAProxy load balancer to pass requests to these nodes.
Two of the 3 nodes are configured as backup in HAProxy. When I fire a request to the load balancer connection URL, I can see the request go to node A by default. If node A is down and I request a new database connection, I see the request being routed to node B. This is as per the desired design.
However, if a connection request is sent to HAProxy from a Java program (via the JDBC URL), the request is routed to node A; if node A goes down after serving a few requests, I want node B or node C to serve subsequent requests. In the current scenario I see "Connection Failed".
Is there any configuration which will ensure that in case of failure of a node, the database connection will not fail and future requests will be routed to the next available node?
My current HAProxy configuration file is as follows:
global
    stats socket /var/run/haproxy.sock mode 0600 level admin
    log 127.0.0.1 local2 debug
    #chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    daemon

defaults
    mode tcp
    log global
    option tcplog
    timeout connect 10000 # default 10 second timeout if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn 20000

# For the admin GUI
listen stats
    bind :8080
    mode http
    stats enable
    stats uri /stats

listen mysql *:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxyUser
    option log-health-checks
    server MySQL-NodeA <ip-address>:3306 check
    server MySQL-NodeB <ip-address>:3306 check backup
    server MySQL-NodeC <ip-address>:3306 check backup
mode tcp under listen *:3306 cannot be used. Check your config before posting here, using this command:
haproxy -c -V -f /etc/haproxy.cfg
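On the failover question above: already-established MySQL connections to node A cannot be migrated; the client has to reconnect. What HAProxy can do is kill the remaining sessions the moment a node is marked down, so the very next JDBC connection attempt lands on a backup instead of failing against the dead node. A sketch of the relevant server lines, keeping the question's placeholders:

listen mysql *:3306
    mode tcp
    option mysql-check user haproxyUser
    option log-health-checks
    server MySQL-NodeA <ip-address>:3306 check on-marked-down shutdown-sessions
    server MySQL-NodeB <ip-address>:3306 check backup on-marked-down shutdown-sessions
    server MySQL-NodeC <ip-address>:3306 check backup on-marked-down shutdown-sessions

The JDBC side still needs to handle the reconnect (e.g. a connection pool with connection validation), since a TCP proxy cannot preserve an in-flight MySQL session.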
