Squid4 forward proxy to upgrade from ws to wss - websocket

Squid 4.6 is used as a forward proxy to convert all traffic to secure traffic.
The Squid configuration is very simple: it allows all traffic and uses urlrewrite.pl to replace "http" with "https" (SSL-Bump is NOT used). The proxy has tls_outgoing_options set, so the following works:
client(http) -----> Squid ------> Server(https)
Now, I am trying to replicate the same with websockets.
There are 3 test cases:
1.
client(ws)------> Squid -----> Server(ws)
2.
client(wss) ------> Squid -----> Server(wss)
3.
client(ws) ------> Squid -----> Server(wss)
The first two cases work with Squid, but the third one does not. And the third is the only one I need.
I added debug logging to urlrewrite.pl to show the exact request received for a websocket connection; this is the log (port 8080 is the server, port 3128 is Squid):
DEBUG:root:localhost:8080 127.0.0.1/localhost - CONNECT myip=127.0.0.1 myport=3128
Wireshark shows the same sequence:
1. CONNECT HTTP/1.1
2. GET
3. Upgrade protocol
Questions:
1. Is there any way to upgrade a websocket connection to a secure websocket using Squid 4.6?
2. Alternatively, if I use a wss client (without a certificate) and a wss server (with certificates), is there a way to tell Squid to use its own certificates (the ones mentioned in "tls_outgoing_options") to establish the connection?
REQUIRED:
The client will always send unsecured traffic (HTTP/WS), and Squid should upgrade it to HTTPS/WSS.
In our application setup, we use our own OpenSSL libraries to create certificates, which cannot be loaded by the Go TLS package used in client.go, so we rely on the Squid proxy to use the certificates generated by our own OpenSSL libraries.
The client and the forward proxy (Squid) are both in our controlled environment, so squid.conf is very simple and allows all traffic.
We also need mutual certificate authentication.
SQUID CONF CODE
#
# Recommended minimum configuration:
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localhost src 127.0.0.1
acl SSL_ports port 443
acl Safe_ports port 443 # https
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
# Squid normally listens to port 3128
http_port 3128
url_rewrite_program /etc/squid/urlrewrite.pl
url_rewrite_access allow all
tls_outgoing_options cert=/etc/squid/proxy.crt
tls_outgoing_options key=/etc/squid/proxy.key
tls_outgoing_options cafile=/etc/squid/serverauth.crt
urlrewrite CODE
#!/usr/bin/perl
# url_rewrite_program helper: reads one request per line on STDIN and
# answers with a rewritten https:// URL (or ERR) on STDOUT.
select(STDOUT);
$| = 1;    # unbuffered output, required for Squid helpers
while (<>) {
    # Input format: [channel-ID] scheme://host[:port][/path] [metadata...]
    if (/^(|\d+\s+)((\w+):\/+)([^\/:]+)(|:(\d+))(|\/\S*)(|\s.*)$/) {
        my $channel       = $1;   # concurrency channel-ID, if any
        my $protocolClean = $3;   # scheme without "://"
        my $domain        = $4;
        my $port          = $5;   # ":8080" (with leading colon) or empty
        my $portClean     = $6;   # "8080" or empty
        my $urlPath       = $7;
        if ($protocolClean eq 'http') { # && ($port eq '' || $portClean eq '80')
            print STDOUT "${channel}OK rewrite-url=\"https://${domain}${port}${urlPath}\"\n";
        } else {
            print STDOUT "${channel}ERR\n";
        }
    }
}
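For quick offline testing, the helper's matching and branching can be mirrored in a short Python sketch (hypothetical; example.com and the channel-ID are placeholders, and this is not part of the Squid setup):

```python
import re

# Mirror of the helper's regex: optional channel-ID, then
# scheme://host[:port][/path], then trailing request metadata.
PATTERN = re.compile(r'^(|\d+\s+)((\w+):/+)([^/:]+)(|:(\d+))(|/\S*)(|\s.*)$')

def rewrite(line):
    m = PATTERN.match(line)
    if not m:
        return ""  # no match: a CONNECT line carries only "host:port", no scheme
    channel, scheme = m.group(1), m.group(3)
    domain, port, path = m.group(4), m.group(5), m.group(7)
    if scheme == "http":
        return '%sOK rewrite-url="https://%s%s%s"' % (channel, domain, port, path)
    return "%sERR" % channel

print(rewrite("0 http://example.com:8080/chat 127.0.0.1/- - GET"))
# -> 0 OK rewrite-url="https://example.com:8080/chat"
print(repr(rewrite("localhost:8080 127.0.0.1/localhost - CONNECT")))
# -> '' (the CONNECT form from the log above never reaches the http branch)
```

Note how the second input, shaped like the CONNECT log line from the question, never matches the scheme-based regex at all, which is consistent with the third test case not being rewritten.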

Related

Squid proxy: how to use ipv6 address to send requests

I have installed Squid on a DigitalOcean Ubuntu machine.
What I want to do is: I send a request for an IPv4 address to this proxy server, and Squid should pick up and use an IPv6 address configured in squid.conf.
I have added 2 IPv6 addresses in the conf file.
If I try the following command, specifying the target as an IPv6 address, it works fine:
curl -H 'Cache-Control: no-cache' --proxy localhost:3128 [2400:xxx:x:xx::xxx:a001]
i.e., it picks a random IPv6 address from the conf file and sends the request through that address.
If I try the following command, specifying the target as an IPv4 address, it does not work:
curl -H 'Cache-Control: no-cache' --proxy localhost:3128 34.xxx.xxx.148
i.e., it does not pick the IPv6 address specified in the conf file; instead it uses the server's public IP.
My /etc/squid/squid.conf file content is something like this now.
acl one_fourth_3 random 1/2
acl one_fourth_4 random 1/1
tcp_outgoing_address 2604:xxxx:xxx:xxx::xx:a001 one_fourth_3
tcp_outgoing_address 2604:xxxx:xxx:xxx::xx:a002 one_fourth_4
http_access deny !safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow all
http_access deny all
http_port 3128
My squid version is
Squid Cache: Version 3.5.12
Service Name: squid
Squid implicitly matches the IP version; you cannot force IPv6 for an IPv4 destination.
Squid adds an implicit IP-version test to each tcp_outgoing_address line:
requests going to IPv4 websites will use the outgoing 10.1.0.* addresses, and
requests going to IPv6 websites will use the outgoing 2001:db8:* addresses.
http://www.squid-cache.org/Doc/config/tcp_outgoing_address/
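The example from the linked tcp_outgoing_address documentation boils down to listing one outgoing address per family and letting Squid's implicit family test choose between them (the addresses below are placeholders). Note that a request to an IPv4-only destination can only leave from an IPv4 source address; Squid cannot translate it to IPv6 on its own:

```
# One outgoing address per family; Squid picks by destination family.
tcp_outgoing_address 2001:db8::a001
tcp_outgoing_address 10.1.0.1
```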

Haproxy redirect configuration for plex?

Hope someone can help me :)
I am trying to configure HAProxy redirection for Plex but have not found the solution yet.
Basically, to reach the Plex home page you go to IPADDRESS:PORT/web, which redirects to IPADDRESS:PORT/web/index.html.
I made this kind of redirect:
use_backend plex if { hdr_beg(Host) -i plex. }
backend plex
    server plex localhost:32400 check
This is OK, I can reach Plex at plex.mydomain.tld/web.
But I would like to be able to reach Plex at plex.mydomain.tld.
I tried to add this line:
reqrep ^([^\ :]*)\ /(.*) \1\ /web\2
The rewrite itself works; my URL switches to plex.mydomain.tld/web/index.html.
But I get a 404 error...
What kind of trick should I use to access Plex from plex.mydomain.tld?
Thanks!
Found some info that helped me figure it out:
global
    log 127.0.0.1 syslog
    maxconn 1000
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 4096
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-default-bind-ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    option contstats
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

listen stats
    bind *:9090
    mode http
    maxconn 10
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:admin

frontend ALL
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem
    mode http
    # Define path for lets encrypt
    acl is_letsencrypt path_beg -i /.well-known/acme-challenge/
    use_backend letsencrypt if is_letsencrypt
    # Define hosts
    acl host_website hdr(host) -i nomercy.myqnapcloud.com
    # Direct hosts to backend
    use_backend website if host_website
    # Redirect port 80 to 443
    # But do not redirect letsencrypt since it checks port 80 and not 443
    redirect scheme https code 301 if !{ ssl_fc } !is_letsencrypt

backend letsencrypt
    server letsencrypt 127.0.0.1:8888

backend website
    balance roundrobin
    option httpchk GET /check
    cookie SERVERID insert indirect nocache
    http-check expect rstring ^UP$
    default-server inter 3s fall 3 rise 2
    server server1 192.168.2.151:8888 check
    server server2 192.168.2.152:8888 check
    server server3 192.168.2.153:8888 check

listen plex
    bind *:32400 ssl crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem
    balance roundrobin
    option httpchk GET /check
    http-check expect rstring ^UP$
    default-server inter 3s fall 3 rise 2
    server server1 192.168.2.149:32400 check port 8888
    server server2 192.168.2.148:32400 check port 8888
    server server3 192.168.2.147:32400 check port 8888
You can remove the SSL credentials if you don't have certificates installed.
The problem here unfortunately has nothing to do with your HAProxy configuration. Instead it is Plex that is causing the issue.
Example
With your configuration, when you go to plex.mydomain.tld, HAProxy adds the /web prefix, so Plex receives the URL plex.mydomain.tld/web. That is correct; however, Plex then forwards the browser to plex.mydomain.tld/web/index.html. When the browser sends a request for that URL, HAProxy steps in and adds the extra /web again, so the URL sent to Plex is plex.mydomain.tld/web/web/index.html, which doesn't exist; hence the 404 error you got.
While going to plex.mydomain.tld/index.html may work, I assume all links from that page to any other page won't work due to the same issue.
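The loop described above can be traced with a tiny hypothetical sketch (plain Python, not HAProxy code; it only models the path rewrite):

```python
def haproxy_rewrite(path):
    # Mimics reqrep ^([^\ :]*)\ /(.*) \1\ /web\2: prefix every path with /web.
    return "/web" + path

# First request: the browser asks for "/", HAProxy forwards "/web/" to Plex,
# and Plex answers with a redirect to /web/index.html.
first_hop = haproxy_rewrite("/")
# The browser follows the redirect, and HAProxy prefixes /web a second time.
second_hop = haproxy_rewrite("/web/index.html")
print(first_hop)    # /web/
print(second_hop)   # /web/web/index.html
```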
To solve this you could:
1. Look through Plex's configuration to see if it is possible to run it without the /web.
2. Taking inspiration from here, configure HAProxy something like this:
frontend http
    mode http
    bind *:80
    acl plex hdr_beg(Host) -i plex.
    acl root_dir path_reg ^$|^/$
    acl no_plex_header req.hdr_cnt(X-Plex-Device-Name) -i 0
    redirect location http://plex.mydomain.tld/web/index.html 301 if no_plex_header root_dir plex
    use_backend plex if plex

backend plex
    server plex localhost:32400 check
The key difference is the redirect location line, which redirects from / to /web/index.html if the X-Plex-Device-Name header isn't set. You have to check for the header because Plex appears to use / for something else.
Note: this config is an example and I haven't tested it at all.
Hope that helps.
I want to echo that I used the solution provided by JamesStewy and it worked, with one minor correction (adding the code keyword):
redirect location http://plex.mydomain.tld/web/index.html code 301 if no_plex_header root_dir plex
At least, that was necessary for me (running HAProxy 1.7.2).
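A side note for anyone on a newer HAProxy: reqrep was deprecated in HAProxy 2.0 and removed in 2.1, so on modern versions the rewrite approach from the question would need http-request replace-path instead. An untested sketch, reusing the ACL names from the answer above:

```
frontend http
    mode http
    bind *:80
    acl plex hdr_beg(Host) -i plex.
    acl no_plex_header req.hdr_cnt(X-Plex-Device-Name) -i 0
    # Equivalent of the old reqrep line: rewrite "/" to "/web/index.html"
    http-request replace-path ^/$ /web/index.html if no_plex_header plex
    use_backend plex if plex
```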

Blocking HTTPS sites through Squid

Can I block https://www.facebook.com through Squid?
It is working very nicely except for HTTPS sites.
I am using Squid 3.1 in Debian 7
Yes, you can. You should also deny the CONNECT method for that domain:
acl fb dstdomain .facebook.com
http_reply_access deny fb
http_access deny CONNECT fb
then reload the configuration:
squid3 -k reconfigure
or
squid -k reconfigure
Now it is done!

Allow squid to permit skype with other restrictions for website

I've been trying to configure Squid proxy in my local network.
Here is a snippet of my squid.conf file:
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
## Custom rules for allowing just the websites
acl AllowedSites dstdomain "c:/squid/etc/allowed.site"
#
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 2367 # Skype
#acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
#acl Safe_ports port 70 # gopher
#acl Safe_ports port 210 # wais
#acl Safe_ports port 1025-65535 # unregistered ports
#acl Safe_ports port 280 # http-mgmt
#acl Safe_ports port 488 # gss-http
#acl Safe_ports port 591 # filemaker
#acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# TAG: http_access
# Allowing or Denying access based on defined access lists
#
# Access to the HTTP port:
# http_access allow|deny [!]aclname ...
#
# NOTE on default values:
#
# If there are no "access" lines present, the default is to deny
# the request.
#
# If none of the "access" lines cause a match, the default is the
# opposite of the last line in the list. If the last line was
# deny, the default is allow. Conversely, if the last line
# is allow, the default will be deny. For these reasons, it is a
# good idea to have an "deny all" or "allow all" entry at the end
# of your access lists to avoid potential confusion.
#
#Default:
# http_access deny all
#
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
acl numeric_IPs dstdom_regex ^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype^
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow AllowedSites
http_access allow CONNECT localnet numeric_IPs Skype_UA
http_access deny !AllowedSites
# And finally deny all other access to this proxy
http_access deny all
Now, the problem is that when I allow Skype, Squid starts allowing ALL websites.
I need a way to restrict browsing to just the domains in the allowed.site file, which contains the list of allowed sites.
Also, I need to block port 443 in general but allow that same port for Skype.
Please guide me on how this can be done.
Thanks,
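For what it's worth, a common pattern for this kind of requirement is to put the Skype exception on one line (ACLs on the same http_access line are ANDed) and drop the blanket allow localnet so the whitelist actually applies. An untested sketch using the ACL names from the question:

```
# Skype exception: CONNECT to numeric IPs on 443, from the LAN, Skype UA only
http_access allow CONNECT localnet numeric_IPs Skype_UA
# Whitelisted sites only for everyone else on the LAN
http_access allow localnet AllowedSites
# Everything else (including other CONNECT/443) is denied
http_access deny all
```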

Squid Transparent + HTTPS

I enabled transparent Squid on Windows using this method (YouTube link). But with it enabled, HTTPS pages show an error.
In Chrome the message is: SSL Connection Error (ERR_SSL_PROTOCOL_ERROR)
In Firefox: Secure Connection Failed (Error code: ssl_error_rx_record_too_long)
In IE: Check that the TLS and SSL protocols are enabled.
Is it possible to disable SSL/HTTPS handling in Squid?
Or to solve this problem in another way?
Thanks.
Yes. I didn't watch the video clip, but simply tell your browser not to use the proxy for HTTPS or port 443.
Alternatively, if you're using the transparent firewall method, you can either tell the firewall to skip port 443, or redirect ONLY port 80 through the proxy, e.g.:
iptables -t nat -I PREROUTING -p tcp --dport 443 -j ACCEPT
The above will just accept HTTPS-port traffic and ignore all the other firewall rules for it,
or
iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to 3128
which will only redirect port 80 to your transparent squid.
PS: it's a really bad idea to try to proxy SSL; it completely defeats the purpose of SSL.
According to this link: http://wiki.squid-cache.org/KnowledgeBase/Windows
Squid features not operational on Windows:
DISKD: still needs to be ported
Transparent Proxy: missing Windows non-commercial interception driver
SMP support: the Windows equivalent of UDS sockets has not been implemented
So it might not be possible to use Squid as a transparent proxy on Windows.
