HAProxy Domain / Subdomain ACL rule - proxy

I'm trying to set up HAProxy for one domain and one subdomain.
The current setup is:
Internet -> pfSense Firewall -> HAProxy -> SRV1 (192.168.100.1) domain.com
-> SRV2 (192.168.100.2) srv2.domain.com
It basically works: I can access SRV1 if I type domain.com, and I can access SRV2 if I type srv2.domain.com. BUT:
If I open my browser, go to domain.com, and then open a new tab with srv2.domain.com, the second tab (srv2) sends me to domain.com instead of srv2.domain.com. The same happens the other way around: if I first browse to srv2.domain.com and then to domain.com, I end up on srv2. So I always have to restart my browser whenever I want to switch between SRV1 and SRV2.
Here's my HAProxy config:
acl host_srv2 hdr_dom(host) -i srv2.domain.com
acl host_domain hdr_dom(host) -i domain.com
use_backend srv2 if host_srv2
use_backend domain if host_domain
backend srv2
balance roundrobin
option httpclose
option forwardfor
cookie JSESSIONID prefix
server srv2 192.168.100.2:80 check
backend domain
balance roundrobin
option httpclose
option forwardfor
cookie JSESSIONID prefix
server domain 192.168.100.1:80 check
Do you have any ideas?

Change your ACLs to:
acl host_domain hdr_dom(host) -i domain.com
acl host_server hdr_dom(host) -i srv2.domain.com
Once the first match hits, the remaining ACL rules are skipped.
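For reference, use_backend rules are evaluated in the order they appear and the first matching rule wins, so a frontend along these lines keeps the more specific subdomain rule ahead of the parent domain (a sketch only; the frontend name and bind line are placeholders, the ACL and backend names are taken from the question):
frontend http-in
bind *:80
acl host_srv2 hdr_dom(host) -i srv2.domain.com
acl host_domain hdr_dom(host) -i domain.com
# srv2.domain.com is checked first, so it never falls through to the domain backend
use_backend srv2 if host_srv2
use_backend domain if host_domain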

Related

Experiencing latency with haproxy load balancer

I'm experiencing high latency with the haproxy load balancer when the backend configuration uses private network IP addresses.
But when I replace the backend server addresses with public IP addresses or reverse DNS names, I experience no latency.
What is causing the latency?
If one uses a public IP, FQDN, or reverse DNS name, does network traffic bypass haproxy?
Is it allowed to use a public IP, FQDN, or reverse DNS name for backend servers in the haproxy conf?
Configuration With Private Network IP Addresses
global
log /dev/log local0
log 127.0.0.1 local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
maxconn 18000
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
defaults
log global
mode http
option httplog
timeout client 30s
timeout connect 4s
timeout server 30s
frontend www
bind *:80
default_backend webservers
backend webservers
mode http
balance roundrobin
server server1 10.0.0.20:80
server server2 10.0.0.30:80
Configuration With Reverse DNS or Public IP Addresses
global
log /dev/log local0
log 127.0.0.1 local1 notice
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
maxconn 18000
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
defaults
log global
mode http
option httplog
timeout client 30s
timeout connect 4s
timeout server 30s
frontend www
bind *:80
default_backend webservers
backend webservers
mode http
balance roundrobin
server server1 90-80-70-40.aws.com:80
server server2 90-80-70-50.aws.com:80
It looks like it was a DNS issue on the host Windows machines. Adding entries to the hosts file made it work as expected with the private network IP settings.
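For illustration only, hosts-file entries of this general form could be added on the Windows machines (the hostnames are made up; the addresses are the backend IPs from the private-network configuration above):
# C:\Windows\System32\drivers\etc\hosts
10.0.0.20 server1.example.internal
10.0.0.30 server2.example.internal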

How to set up haproxy sessions and static pages?

I have these two problems:
HAProxy session
When logging into the Jira/Confluence administrator management page through the load balancer, I can't log in.
If I log into each server directly, I can reach the administrator management page.
HAProxy Static page
Through the Jira load balancer IP, the regular CSS does not load, but the pages look fine when I access the individual servers directly.
So the cause is probably the static-content configuration in HAProxy.
Here is my /etc/haproxy/haproxy.cfg:
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend jira
bind :::8080 v4v6
# acl url_static path_beg -i /static /images /javascript /stylesheets
# acl url_static path_end -i .jpg .gif .png .css .js
# use_backend static if url_static
default_backend jira
frontend confluence
bind :::8090 v4v6
# acl url_static path_beg -i /static /images /javascript /stylesheets
# acl url_static path_end -i .jpg .gif .png .css .js
# use_backend static if url_static
default_backend confluence
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
# backend static
# balance roundrobin
# server static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend jira
balance roundrobin
cookie JSESSIONID prefix nocache
server jira1 [IP1]:8080 check cookie jira1
server jira2 [IP2]:8080 check cookie jira2
backend confluence
balance roundrobin
cookie JSESSIONID prefix nocache
server confluence1 [IP3]:8090 check cookie confluence1
server confluence2 [IP4]:8090 check cookie confluence2

DNS with Heroku and Google Domains - SSL forwarding

I am hosting a website at example.herokuapp.com. I own the domain example.com on Google Domains (Registrar). I also have GSuite setup for this domain (email).
I'd like to forward everything to either https://www.example.com or the bare https://example.com.
So far, I can get:
[Failure] https://example.com/ -> "This site can’t be reached" Chrome error page
[Success] http://example.com/ -> https://www.example.com/
[Success] https://www.example.com/ -> https://www.example.com/
[Failure] http://www.example.com/ -> http://www.example.com/
In Google Registrar, I have:
Name Servers
Use the Google Domains name servers
Synthetic records
example.com -> https://www.example.com (302)
Custom resource records
NAME | TYPE | TTL | DATA
-----+-------+-----+------------------------------
# | MX | 1h | 10 aspmx.l.google.com. (and others) <- mail
www | CNAME | 1h | www.example.com.herokudns.com.
Running heroku domains gives me
www.example.com www.example.com.herokudns.com
example.com example.com.herokudns.com
Running heroku certs:auto gives me
www.example.com DNS Verified
example.com Failing
How do I get the two failing urls to work?
My hypothesis is that I need another CNAME pointing to example.com.herokudns.com, but I can't add one for # (the root).
I have gotten this to work via the following steps:
In Heroku, Settings > Domains and Certificates: Configure SSL, Select Automatically - Automated Certificate Management
Copy the DNS Target that looks something like: www.sitename.com.herokudns.com
In Google DNS, My Domains > Configure DNS Synthetic Records. In the subdomain field, enter the actual site address again. It sounds like this shouldn't work, but typing the domain name as the subdomain forces Google DNS to apply the synthetic record to the standard raw domain. It ends up looking like website.com .website.com --> https://www.website.com. Check Temporary redirect, Do not forward path, and Enable SSL.
Custom resource records -- Name: www, Type CNAME, TTL: 1h, Data: DNS Target copied from step 2.
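With those steps in place, the Google Domains entries end up looking roughly like the following (website.com and www.sitename.com.herokudns.com are the placeholder names from the steps above, not real values):
Synthetic records
website.com -> https://www.website.com (302)
Custom resource records
NAME | TYPE  | TTL | DATA
-----+-------+-----+--------------------------------
www  | CNAME | 1h  | www.sitename.com.herokudns.com.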
If the user types in website.com or http://website.com, it will properly redirect to https://www.website.com. I can't seem to get www.website.com to redirect and have resorted to using the options here.
Hope this helps!

Haproxy redirect configuration for plex?

Hope someone can help me :)
I'm trying to configure HAProxy redirection for Plex but haven't found the solution yet.
So basically, to reach the Plex home page you go to => IPADDRESS:PORT/web, which redirects to IPADDRESS:PORT/web/index.html.
I made this kind of redirect:
use_backend plex if { hdr_beg(Host) -i plex. }
backend plex
server plex localhost:32400 check
This is OK, I can reach Plex at => plex.mydomain.tld/web
But I would like to be able to reach Plex with this URL => plex.mydomain.tld
I tried to add this line:
reqrep ^([^\ :]*)\ /(.*) \1\ /web\2
The rewrite works: my URL switches to => plex.mydomain.tld/web/index.html
But I get a 404 error...
What kind of trick do I need to access Plex from plex.mydomain.tld?
Thanks !
Found some info that helped me figure it out:
global
log 127.0.0.1 syslog
maxconn 1000
user haproxy
group haproxy
daemon
tune.ssl.default-dh-param 4096
ssl-default-bind-options no-sslv3 no-tls-tickets
ssl-default-bind-ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
option contstats
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
listen stats
bind *:9090
mode http
maxconn 10
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth admin:admin
frontend ALL
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem
mode http
# Define path for lets encrypt
acl is_letsencrypt path_beg -i /.well-known/acme-challenge/
use_backend letsencrypt if is_letsencrypt
# Define hosts
acl host_website hdr(host) -i nomercy.myqnapcloud.com
# Direct hosts to backend
use_backend website if host_website
# Redirect port 80 to 443
# But do not redirect letsencrypt since it checks port 80 and not 443
redirect scheme https code 301 if !{ ssl_fc } !is_letsencrypt
backend letsencrypt
server letsencrypt 127.0.0.1:8888
backend website
balance roundrobin
option httpchk GET /check
cookie SERVERID insert indirect nocache
http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 192.168.2.151:8888 check
server server2 192.168.2.152:8888 check
server server3 192.168.2.153:8888 check
listen plex
bind *:32400 ssl crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem
balance roundrobin
option httpchk GET /check
http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 192.168.2.149:32400 check port 8888
server server2 192.168.2.148:32400 check port 8888
server server3 192.168.2.147:32400 check port 8888
You can remove the SSL certificate settings if you don't have certificates installed.
The problem here unfortunately has nothing to do with your HAProxy configuration. Instead it is Plex that is causing the issue.
Example
With your configuration, when you go to plex.mydomain.tld, HAProxy adds in the /web and as a result Plex gets the following url: plex.mydomain.tld/web. This is correct; however, Plex will then want to forward the browser on to plex.mydomain.tld/web/index.html. But when the browser sends a request for that url, HAProxy steps in and adds that extra /web again, and the resulting url sent to Plex is plex.mydomain.tld/web/web/index.html, which doesn't exist, hence the 404 error you got.
While going to plex.mydomain.tld/index.html may work, I assume all links from that page to any other page won't work due to the same issue.
To solve this you could either:
Look through Plex's configuration to see if it is possible to run it without the /web.
Or, taking inspiration from here, configure HAProxy something like this:
frontend http
mode http
bind *:80
acl plex hdr_beg(Host) -i plex.
acl root_dir path_reg ^$|^/$
acl no_plex_header req.hdr_cnt(X-Plex-Device-Name) -i 0
redirect location http://plex.mydomain.tld/web/index.html 301 if no_plex_header root_dir plex
use_backend plex if plex
backend plex
server plex localhost:32400 check
The key difference being the redirect location line which will redirect from / to /web/index.html if the header X-Plex-Device-Name isn't set. The reason you have to check for the header is that it appears that plex uses / for something else.
Note: This config is an example and I haven't tested this at all
Hope that helps.
I want to echo that I used the solution provided by JamesStewy and it worked, with one minor correction:
redirect location http://plex.mydomain.tld/web/index.html code 301 if no_plex_header root_dir plex
At least, that was necessary for me (running haproxy 1.7.2).

Forcing youtube to be cached through cache_peer in squid

I would like to force a squid proxy to use a proxy cache peer for everything related to YouTube.
My main squid server is on server A; it is a really simple setup with nearly no rules, and everything is authorized.
Server A is linked to server B via OpenVPN. Server B also has a squid proxy set up and works well (tested with an ssh tunnel, no problem).
On server A I have added these rules:
acl youtube_videos_regx url_regex -i ^http://[^/]+\.youtube\.com/videoplayback\?
acl youtube_videos_regx url_regex ^http://(.*?)/get_video\?
acl youtube_videos_regx url_regex ^http://(.*?)/videodownload\?
acl youtube_videos_regx url_regex ^http://(.*?)/videoplayback\?
acl youtube_videos dstdomain .youtube.com
acl youtube_videos dstdomain .youtube-nocookie.com
acl youtube_videos dstdomain .googlevideo.com
acl youtube_videos dstdomain .ytimg.com
cache_peer 10.4.0.1 parent 3128 0 proxy-only no-query connect-timeout=5
cache_peer_access 10.4.0.1 allow youtube_videos
cache_peer_access 10.4.0.1 allow youtube_videos_regx
cache_peer_access 10.4.0.1 deny all
But this doesn't seem to work:
1383861430.377 578 192.168.0.103 TCP_MISS/200 192976 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
1383861430.636 935 192.168.0.103 TCP_MISS/200 238032 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
1383861430.642 2353 192.168.0.103 TCP_MISS/200 238032 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
1383861432.467 617 192.168.0.103 TCP_MISS/200 192976 GET http://r9---sn-5hn7ym7e.googlevideo.com/videoplayback? - HIER_DIRECT/208.117.250.14 application/octet-stream
Sometimes it works:
1383860987.725 125 192.168.0.103 TCP_MISS/204 353 GET http://r20---sn-5hn7ym7r.googlevideo.com/generate_204 - FIRSTUP_PARENT/10.4.0.1 text/html
Could it be because of the data type?
If so, I don't know what kind of rule to add.
Thanks in advance.
Ok, I found how to solve my problem.
I just have to add this to my config file:
never_direct allow youtube_videos
never_direct allow youtube_videos_regx
These two lines force squid not to use a direct connection for requests matching those two ACLs.
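In other words, the relevant part of squid.conf on server A ends up looking roughly like this (a sketch reusing the ACL names and peer address from above; the point is simply that the never_direct lines sit alongside the cache_peer_access rules):
# youtube_videos and youtube_videos_regx ACLs defined as shown earlier
cache_peer 10.4.0.1 parent 3128 0 proxy-only no-query connect-timeout=5
cache_peer_access 10.4.0.1 allow youtube_videos
cache_peer_access 10.4.0.1 allow youtube_videos_regx
cache_peer_access 10.4.0.1 deny all
# without these, squid may still pick a direct (HIER_DIRECT) path for matching requests
never_direct allow youtube_videos
never_direct allow youtube_videos_regx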
