Dropped WebSocket connection in Rancher's Load Balancer

I have a simple WebSocket connection from my browser to a service inside Rancher.
I tried to connect to the service in 2 ways:
1) directly to the service:
browser ---> service
2) via Rancher's Load Balancer:
browser ---> Load Balancer ---> service
In the first case everything is fine: the connection is established and the messages are sent through it.
In the second case the connection is dropped after ~50s. Until then, messages are sent through the connection correctly in both directions.
What is the reason?
EDIT: I tested it over both ws and wss. The issue is the same in both cases.

Rancher Load Balancer internally uses HAProxy, which can be customized to your needs.
Here is an example HAProxy config for websockets:
global
    maxconn 4096
    ssl-server-verify none

defaults
    mode http
    balance roundrobin
    option redispatch
    option forwardfor
    timeout connect 5s
    timeout queue 5s
    timeout client 36000s
    timeout server 36000s

frontend http-in
    mode http
    bind *:443 ssl crt /etc/haproxy/certificate.pem
    default_backend rancher_servers
    # Add headers for SSL offloading
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Ssl on if { ssl_fc }
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend rancher_servers if is_websocket

backend rancher_servers
    server websrv1 <rancher_server_1_IP>:8080 weight 1 maxconn 1024
    server websrv2 <rancher_server_2_IP>:8080 weight 1 maxconn 1024
    server websrv3 <rancher_server_3_IP>:8080 weight 1 maxconn 1024
Reference: https://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/basic-ssl-config/#example-haproxy-configuration
Only the relevant parts of this config need to go into the "Custom haproxy.cfg" section of the load balancer.
Here is the link for more documentation for custom haproxy in Rancher: https://rancher.com/docs/rancher/v1.6/en/cattle/adding-load-balancers/#custom-haproxy-configuration
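As for the ~50s drop itself: the load balancer applies HAProxy's idle timeouts, which commonly default to around 50000 ms for client/server, so an idle WebSocket gets cut after roughly 50 seconds. A minimal sketch for the "Custom haproxy.cfg" box, assuming you only want to raise the relevant timeout rather than replace the whole config (the 3600s value is an arbitrary example):
defaults
    # keep upgraded (WebSocket) connections open while idle;
    # without this the ~50s client/server defaults close them
    timeout tunnel 3600s
The timeout client/server 36000s lines in the full config above achieve a similar effect.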

Related

502 Bad Gateway from HAProxy on the same server

We have a server running on port 8080; when posting a request directly to the server, it returns a response.
On the same instance HAProxy is running on 443 (with SSL); when I post the same request to HAProxy (IP:443) it throws a "502 Bad Gateway" error.
May I know what could be the problem?
Below is the HAProxy config:
global
    maxconn 2048
    tune.ssl.default-dh-param 2048
    daemon

defaults
    mode http
    option forwardfor
    option http-server-close
    retries 3
    timeout http-request 5000s
    timeout queue 3m
    timeout connect 5000s
    timeout client 3m
    timeout server 3m
    timeout http-keep-alive 5000s
    timeout check 4000s
    maxconn 2048

frontend www-https
    bind *:443 ssl crt /etc/ssl/haproxy/app-ssl.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    redirect scheme https if !{ ssl_fc }
    server www-1 localhost:8080 check

listen stats
    bind *:28080
    mode http
    stats enable
    stats uri /haproxy?stats
Adding the global value tune.maxrewrite 4096 fixed it.
The HAProxy config should be as below:
global
    maxconn 2048
    tune.ssl.default-dh-param 2048
    tune.maxrewrite 4096
    daemon

defaults
    mode http
    option forwardfor
    option http-server-close
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 2048

frontend www-https
    bind *:443 ssl crt /etc/ssl/haproxy/ede-ssl.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    redirect scheme https if !{ ssl_fc }
    server www-1 localhost:8080 check

listen stats
    bind *:28080
    mode http
    stats enable
    stats uri /haproxy?stats
Here is the relevant description from the HAProxy documentation:
tune.bufsize
Sets the buffer size to this size (in bytes). Lower values allow more
sessions to coexist in the same amount of RAM, and higher values allow some
applications with very large cookies to work. The default value is 16384 and
can be changed at build time. It is strongly recommended not to change this
from the default value, as very low values will break some services such as
statistics, and values larger than default size will increase memory usage,
possibly causing the system to run out of memory. At least the global maxconn
parameter should be decreased by the same factor as this one is increased.
If HTTP request is larger than (tune.bufsize - tune.maxrewrite), haproxy will
return HTTP 400 (Bad Request) error. Similarly if an HTTP response is larger
than this size, haproxy will return HTTP 502 (Bad Gateway)
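Based on that description, an alternative (or complement) to raising tune.maxrewrite is to raise tune.bufsize itself when the backend sends very large headers. This is a sketch, not part of the original answer; the 32768 value is an arbitrary example:
global
    maxconn 2048
    tune.ssl.default-dh-param 2048
    # assumption: response headers exceed the default 16384-byte buffer;
    # larger buffers use more memory per connection, so consider lowering maxconn
    tune.bufsize 32768
    daemon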

HAProxy as HTTP proxy with a list of underlying proxies

Is it possible to configure haproxy as a real http proxy which can forward requests to other proxies?
What I want to do: I have a list of working proxies. I want to configure haproxy to proxy via these proxies.
I came up with the following configuration:
frontend proxy
    bind *:80
    default_backend proxyBackend
    option http_proxy

backend proxyBackend
    option http_proxy
    server server1 35.199.76.79:80
    server server2 198.1.122.29:80
    balance roundrobin
Example:
curl --proxy localhost:80 http://check-host.net/ip
I expected the request to go through proxy server1 or server2, but it fails.
Is this possible? Or can someone recommend a good solution?
I found a solution:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen stats
    bind *:9999
    stats enable
    stats hide-version
    stats uri /stats

frontend proxy
    bind *:80
    default_backend proxyBackend
    option http_proxy
    option http-use-proxy-header

backend proxyBackend
    server serverName1 35.199.76.79:80
    server serverName2 198.1.122.29:80
    server serverName3 129.213.76.9:3128
    balance roundrobin
With this configuration HAProxy rotates through the proxy list, which is exactly what was needed.
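One possible refinement, not part of the original answer: adding health checks so that a dead upstream proxy is taken out of the rotation automatically (the inter/fall/rise values are arbitrary examples):
backend proxyBackend
    balance roundrobin
    # plain TCP connect checks; mark a proxy down after 3 failures, up again after 2 successes
    server serverName1 35.199.76.79:80 check inter 5s fall 3 rise 2
    server serverName2 198.1.122.29:80 check inter 5s fall 3 rise 2
    server serverName3 129.213.76.9:3128 check inter 5s fall 3 rise 2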

Socket.io behind HAProxy behind Google Cloud load balancer giving connection errors

We are trying to configure our Socket.io socket servers behind HAProxy, with a Google Cloud Load Balancer in front of HAProxy so that HAProxy is not a single point of failure, as described in this post: https://medium.com/google-cloud/highly-available-websockets-on-google-cloud-c74b35ee20bc#.o6xxj5br8
At the Google Cloud load balancer we are using TCP load balancing with SSL proxy and the PROXY protocol turned on.
HAProxy is configured to use cookies so that a client always connects to the same server. However, since cookies might not be available on all our clients' systems, we decided to use source as the load-balancing algorithm. Here is the HAProxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    maxconn 16384
    tune.ssl.default-dh-param 2048
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    mode http
    log global
    option httplog
    option http-server-close
    option dontlognull
    option redispatch
    option contstats
    retries 3
    backlog 10000
    timeout client 25s
    timeout connect 5s
    timeout server 25s
    timeout tunnel 3600s
    timeout http-keep-alive 1s
    timeout http-request 15s
    timeout queue 30s
    timeout tarpit 60s
    default-server inter 3s rise 2 fall 3
    option forwardfor

frontend public
    bind *:443 ssl crt /etc/ssl/private/key.pem ca-file /etc/ssl/private/cert.crt accept-proxy
    maxconn 50000
    default_backend ws

backend ws
    timeout check 5000
    option tcp-check
    option log-health-checks
    balance source
    cookie QUIZIZZ_WS_COOKIE insert indirect nocache
    server ws1 socket-server-1:4000 maxconn 4096 weight 10 check rise 1 fall 3 check cookie ws1 port 4000
    server ws2 socket-server-1:4001 maxconn 4096 weight 10 check rise 1 fall 3 check cookie ws2 port 4001
    server ws3 socket-server-2:4000 maxconn 4096 weight 10 check rise 1 fall 3 check cookie ws3 port 4000
    server ws4 socket-server-2:4001 maxconn 4096 weight 10 check rise 1 fall 3 check cookie ws4 port 4001
This setup, however, is giving connection errors for around 5% of our clients compared to our old single-server system. Any suggestions?
Edit: by "connection errors" I mean that the client was not able to connect to the socket server and the socket.io client threw connection errors.
Thanks in advance.
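Not an answer, just a debugging sketch: since option httplog is already enabled, capturing the Upgrade header and the stickiness cookie in the frontend makes it easier to see whether the failing 5% were WebSocket upgrades or polling requests, and which server they were pinned to (the capture lengths are arbitrary):
frontend public
    # record in the access log whether the request asked for a WebSocket upgrade
    capture request header Upgrade len 16
    # record which stickiness cookie (and therefore server) the client presented
    capture cookie QUIZIZZ_WS_COOKIE= len 32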

Appending Path to Host in HAProxy

I am new to HAProxy (and to proxying in general) and I can't figure out how to add a path to my backend. My backend is defined as:
server server1 ns.foo.com:7170 check
I want to add /web such that the request is directed to https://ns.foo.com:7170/web.
Thanks,
Mark
What you need is HTTP rewriting
https://www.haproxy.com/doc/aloha/7.0/haproxy/http_rewriting.html#rewriting-http-urls
Adding this to your backend should solve your problem:
acl p_root path -i /
http-request set-path /web if p_root
If you would like to send requests arriving on a given port to a specific path,
you can modify the request in either the frontend or the backend configuration with an http-request rule using the set-path action.
For example, if you would like to send every request to /web, then you should write
http-request set-path /web
into your backend configuration.
Otherwise, if you would like to prefix the incoming request path with /web
(so that, for example,
localhost:[port]/somepath
goes to
serverhost:[serverport]/web/somepath, as Mawardy asked),
then you should also use the %[path] variable like this:
http-request set-path /web/%[path]
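One detail worth noting, not mentioned in the answer: %[path] already starts with a slash, so /web/%[path] can produce a double slash such as /web//somepath. A variant that avoids this:
http-request set-path /web%[path]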
I have created a proof of concept: a Spring server running as two instances in Docker,
load-balanced by an HAProxy container that also modifies the path depending on which server won the load balancing.
For this, HAProxy is configured to load-balance between its own frontends, each of which has its own backend with its modified path.
The configuration looks like this:
defaults
    retries 3
    maxconn 20
    timeout connect 5s
    timeout client 6s
    timeout server 6s

frontend http-in
    bind *:9002
    mode http
    use_backend proxy-backend

backend proxy-backend
    balance roundrobin
    mode http
    option forwardfor
    http-response set-header X-Forwarded-Port %[dst_port]
    http-response set-header X-ProxyServer %s
    server proxy-server-1 localhost:9000
    server proxy-server-2 localhost:9001

frontend proxy-in1
    bind *:9000
    mode http
    use_backend poc-server2

frontend http-in2
    bind *:9001
    mode http
    use_backend poc-server1

backend poc-server1
    mode http
    http-response set-header X-Server %s
    http-request set-path /api/one/%[path]
    server poc-server-1 proxypochost1:9000

backend poc-server2
    mode http
    http-response set-header X-Server %s
    http-request set-path /api/two/%[path]
    server poc-server-2 proxypochost2:9001
For more information you can check the whole project with some additional information in its readme here: ha-proxy-poc

In HAProxy, how do I redirect all to HTTPS except for certain domains?

I have HAProxy in front of all my frontend servers working as a load balancer. It redirects all incoming requests to https:
frontend front_http
    mode http
    redirect scheme https if !{ ssl_fc }
    maxconn 10000
    bind 0.0.0.0:80
    reqadd X-Forwarded-Proto:\ http
    default_backend back_easycreadoc

frontend front_https
    mode http
    maxconn 10000
    bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl.crt
    reqadd X-Forwarded-Proto:\ https
    default_backend back_easycreadoc
We are going to add a few domains for which we do not have a certificate (we do not own those domains, our clients own them). How do I let connections go through on port 80 without redirecting them to https, but only for these domains?
frontend front_http
    mode http
    acl host_one hdr(host) -i www.one.com
    acl host_two hdr(host) -i www.two.com
    redirect scheme https if !host_one !host_two
    maxconn 10000
    bind 0.0.0.0:80
    reqadd X-Forwarded-Proto:\ http
    default_backend back_easycreadoc
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#redirect
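If the list of exempt domains grows, a variant (not from the original answer) keeps the hosts in a file instead of one acl line per domain; the file path is an assumption:
frontend front_http
    # one hostname per line in the file; matching hosts stay on plain HTTP
    acl no_redirect hdr(host) -i -f /etc/haproxy/no-redirect-hosts.lst
    redirect scheme https if !no_redirect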
