Trivial HAProxy UrlRewrite - url-rewriting

I would like all traffic to an arbitrary domain, let's call it www.example.com, to be URL-rewritten to www.google.com, so that hitting http://www.example.com/search?q=haproxy doesn't result in a 404. Is this even possible?
Here is my .cfg file's content:
global

defaults
    mode http
    timeout connect 5000ms
    timeout client 5000ms
    timeout server 5000ms

frontend http-in
    bind *:80
    default_backend my_backend
    reqrep ^([^\ :]*)\ /\d+/(.+)/? \1\ /\2

backend my_backend
    server forwardsvr www.google.com:80 check
Thanks in advance for any help!
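For reference, a minimal sketch of the approach usually suggested for this kind of forwarding (not a verified answer): point the backend at www.google.com and rewrite the Host header so the upstream accepts the request. On recent HAProxy versions the reqrep directive has been removed, so an http-request rule is used instead:

backend my_backend
    # assumption: the upstream only answers correctly when the Host header matches its own name
    http-request set-header Host www.google.com
    server forwardsvr www.google.com:80 check

Whether www.google.com will actually serve requests proxied this way is a separate question; it may redirect or reject them.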

Related

Ha Proxy with 3 springboot applications

I have 3 Spring Boot applications, each running on a different port. Can someone guide me on how to set up HAProxy to demonstrate load balancing between the 3 applications (I can run multiple instances)? Is there any feature in Spring Boot which integrates with HAProxy? What are the things I have to change in the HAProxy config file?
There are several ways to achieve this, but I don't think there is anything in Spring Boot that integrates with HAProxy directly: they are separate processes that run independently of each other. HAProxy is a load balancer and a proxy server for TCP and HTTP traffic distributed across multiple servers.
That covers the first part of your question.
How you achieve the load balancing depends entirely on how you want to set it up.
One option is to run the individual applications as services, as you already do, and route traffic to each of them based on the URL or hostname.
Another is to deploy the applications on a single Tomcat and use the context path in your application properties, so HAProxy routes traffic from the outside world to Tomcat and Tomcat takes care of the rest.
There may be other ways as well, but either way you need a proxy server in front: HAProxy, Nginx, or anything else that fits the purpose.
Taking your approach, let's assume you are running your applications on ports 8081, 8082, and 8083. Your HAProxy configuration would look something like this:
frontend www_http
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/mycompany.pem
    # pass on that the browser is using https
    reqadd X-Forwarded-Proto:\ https
    # Clickjacking protection
    rspadd X-Frame-Options:\ SAMEORIGIN
    # prevent the browser from using non-secure connections
    rspadd Strict-Transport-Security:\ max-age=15768000
    redirect scheme https code 301 if !{ ssl_fc }
    stats enable
    stats refresh 30s
    stats show-node
    stats realm Haproxy\ Statistics
    stats uri /haproxy?stats
    acl app1 hdr(host) -i app1.mycompany.com
    acl app2 hdr(host) -i app2.mycompany.com
    acl app3 hdr(host) -i app3.mycompany.com
    # In case you are routing by path instead of subdomain (left commented out):
    # acl app1 url_beg /app1
    # acl app2 url_beg /app2
    # acl app3 url_beg /app3
    use_backend app_1_backend if app1
    use_backend app_2_backend if app2
    use_backend app_3_backend if app3

# backend for app 1
backend app_1_backend
    timeout client 300000
    timeout server 300000
    redirect scheme https if !{ ssl_fc }
    server app-1 127.0.0.1:8081 check
    http-response set-header X-TS-Server-ID %s

# backend for app 2
backend app_2_backend
    timeout client 300000
    timeout server 300000
    redirect scheme https if !{ ssl_fc }
    server app-2 127.0.0.1:8082 check
    http-response set-header X-TS-Server-ID %s

# backend for app 3
backend app_3_backend
    timeout client 300000
    timeout server 300000
    redirect scheme https if !{ ssl_fc }
    server app-3 127.0.0.1:8083 check
    http-response set-header X-TS-Server-ID %s
This is a basic setup, but you can add your own options and change anything as you like.
Hope this helps.
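To sanity-check the host-based routing, you can send requests with an explicit Host header. The app1.mycompany.com names are just the placeholders used in the config above, and <haproxy-host> stands for wherever HAProxy is listening:

# -k skips certificate verification for the placeholder certificate
curl -k -H "Host: app1.mycompany.com" https://<haproxy-host>/
curl -k -H "Host: app2.mycompany.com" https://<haproxy-host>/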

Dropped WebSocket connection in Rancher's LoadBalancer

I have simple WebSocket connection from my browser to a service inside Rancher.
I tried to connect to the service in 2 ways:
1) directly to the service:
browser ---> service
2) via Rancher's Load Balancer:
browser ---> Load Balancer ---> service
In the first case everything is fine: the connection is established and the messages are sent through it.
In the second case the connection is dropped after about 50 seconds. Until it drops, messages are sent through the connection correctly in both directions.
What is the reason?
EDIT: I tested it with both the ws and wss protocols. The issue is the same in both cases.
Rancher's Load Balancer internally uses HAProxy, which can be customized to your needs. The drop after roughly 50 seconds is most likely an idle connection hitting the load balancer's default client/server timeouts, so the fix is to raise those timeouts for long-lived WebSocket connections.
Here is an example HAProxy config for WebSockets:
global
    maxconn 4096
    ssl-server-verify none

defaults
    mode http
    balance roundrobin
    option redispatch
    option forwardfor
    timeout connect 5s
    timeout queue 5s
    timeout client 36000s
    timeout server 36000s

frontend http-in
    mode http
    bind *:443 ssl crt /etc/haproxy/certificate.pem
    default_backend rancher_servers
    # Add headers for SSL offloading
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Ssl on if { ssl_fc }
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend rancher_servers if is_websocket

backend rancher_servers
    server websrv1 <rancher_server_1_IP>:8080 weight 1 maxconn 1024
    server websrv2 <rancher_server_2_IP>:8080 weight 1 maxconn 1024
    server websrv3 <rancher_server_3_IP>:8080 weight 1 maxconn 1024
Reference: https://rancher.com/docs/rancher/v1.6/en/installing-rancher/installing-server/basic-ssl-config/#example-haproxy-configuration
Only the relevant parts of the config need to go into the "Custom haproxy.cfg" section of the LB.
Here is the link for more documentation for custom haproxy in Rancher: https://rancher.com/docs/rancher/v1.6/en/cattle/adding-load-balancers/#custom-haproxy-configuration
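As an alternative worth noting: HAProxy also provides a timeout tunnel setting that takes over once a connection has been upgraded (as with WebSockets), so a sketch like the following keeps ordinary request timeouts short while still allowing long-lived WebSocket connections:

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    # applies to upgraded/tunnelled connections such as WebSockets
    timeout tunnel 1h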

HaProxy as HttpProxy with list of underlying proxies

Is it possible to configure HAProxy as a real HTTP proxy which can forward requests to other proxies?
What I want to do: I have a list of working proxies, and I want to configure HAProxy to proxy via these proxies.
I thought about something like this:
frontend proxy
    bind *:80
    default_backend proxyBackend
    option http_proxy

backend proxyBackend
    option http_proxy
    server server1 35.199.76.79:80
    server server2 198.1.122.29:80
    balance roundrobin
Example:
curl --proxy localhost:80 http://check-host.net/ip
I thought the request would go through proxy server1 or server2, but it fails.
Is this possible? Or can someone recommend a good solution?
I found a solution:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen stats
    bind *:9999
    stats enable
    stats hide-version
    stats uri /stats

frontend proxy
    bind *:80
    default_backend proxyBackend
    option http_proxy
    option http-use-proxy-header

backend proxyBackend
    server serverName1 35.199.76.79:80
    server serverName2 198.1.122.29:80
    server serverName3 129.213.76.9:3128
    balance roundrobin
With this configuration we get proxy list rotation through HAProxy. Works great.
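To see the rotation in action, repeating the earlier curl request a few times should report a different exit IP on each run, assuming the listed upstream proxies are still reachable:

for i in 1 2 3; do curl --proxy localhost:80 http://check-host.net/ip; done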

Appending Path to Host HAPROXY

I am new to HAProxy (and to proxying in general) and I can't figure out how to add a path to my backend. I have my backend defined as:
server server1 ns.foo.com:7170 check
I want to add /web such that the request is directed to https://ns.foo.com:7170/web.
Thanks,
Mark
What you need is HTTP rewriting
https://www.haproxy.com/doc/aloha/7.0/haproxy/http_rewriting.html#rewriting-http-urls
Adding this to your backend should solve your problem:
acl p_root path -i /
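# matches only when the request path is exactly "/"; other paths pass through unchanged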
http-request set-path /web if p_root
If you would like to send a request coming in on a given port to a specific path,
you can modify the request in either the frontend or the backend configuration by specifying an http-request rule with the set-path action.
For example, if you would like to send every request to /web, you would write
http-request set-path /web
in your backend configuration.
Otherwise, if you would like to prepend the incoming request path with /web
(so that, for example,
localhost:[port]/somepath
goes to
serverhost:[serverport]/web/somepath), as Mawardy asked,
then you should also use the %[path] fetch (which already contains the leading slash), like this:
http-request set-path /web%[path]
I have created a proof of concept: a Spring server running with 2 instances in Docker,
load balanced by an HAProxy (also in Docker) that additionally modifies the path depending on which server won the load balancing.
For this, HAProxy is configured to load balance between its own frontends, each of which has its own backend with its own modified path.
The configuration looks like this:
defaults
    retries 3
    maxconn 20
    timeout connect 5s
    timeout client 6s
    timeout server 6s

frontend http-in
    bind *:9002
    mode http
    use_backend proxy-backend

backend proxy-backend
    balance roundrobin
    mode http
    option forwardfor
    http-response set-header X-Forwarded-Port %[dst_port]
    http-response set-header X-ProxyServer %s
    server proxy-server-1 localhost:9000
    server proxy-server-2 localhost:9001

frontend proxy-in1
    bind *:9000
    mode http
    use_backend poc-server2

frontend http-in2
    bind *:9001
    mode http
    use_backend poc-server1

backend poc-server1
    mode http
    http-response set-header X-Server %s
    http-request set-path /api/one%[path]
    server poc-server-1 proxypochost1:9000

backend poc-server2
    mode http
    http-response set-header X-Server %s
    http-request set-path /api/two%[path]
    server poc-server-2 proxypochost2:9001
For more information you can check out the whole project, with some additional notes in its readme, here: ha-proxy-poc
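Assuming the ports from the config above (HAProxy on 9002, its internal frontends on 9000/9001), a quick way to watch the round-robin and the path rewriting is to repeat a request and inspect the response headers; /hello is just a placeholder endpoint:

# the X-ProxyServer and X-Server headers show which proxy frontend and which backend handled the request
curl -i http://localhost:9002/hello
curl -i http://localhost:9002/hello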

haproxy keep session after server fail?

I have 2 HAProxys that are load balancing user requests with roundrobin algorithm successfully to my 2 WebServers.
When a webserver fails, HAProxy sends the request to the next available server, but for some reason I'm not able to keep the user session, and therefore information isn't displayed properly.
How can I make it so the session is saved during a failover?
Here's my haproxy.cfg:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 2000
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 10000
    timeout server 1000

listen app 192.168.1.100:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth admin:admin
    balance roundrobin
    cookie LSW_WEB insert
    option httpclose
    option forwardfor
    option httpchk HEAD / HTTP/1.0
    server server1 192.168.1.93:80 cookie LSW_WEB01 check
    server server2 192.168.1.94:80 cookie LSW_WEB02 check
This has to do with how you share sessions between your servers, not with HAProxy itself. The cookie directive only gives you stickiness: when the server that owns a session fails, HAProxy redispatches the request to the other server, and that server knows nothing about the session unless it is stored somewhere shared (a database, Redis, memcached, etc.) rather than in each server's own memory.
So the problem lies in your application and in how you are storing sessions.
