Blocking HTTPS sites through Squid

Can I block https://www.facebook.com through Squid?
It is working very nicely, except for HTTPS sites.
I am using Squid 3.1 on Debian 7.

Yes, you can. HTTPS requests reach the proxy as CONNECT requests, so you should also block the domain for the CONNECT method:
acl fb dstdomain .facebook.com
http_reply_access deny fb
http_access deny CONNECT fb
then reload the configuration:
squid3 -k reconfigure
or
squid -k reconfigure
depending on how the Squid binary is named on your system.
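Keep in mind that http_access rules are evaluated top to bottom and the first match wins, so the deny lines must come before any rule that would allow the traffic. A minimal sketch of the placement in squid.conf (the allow rules shown stand in for whatever your config already has):
acl fb dstdomain .facebook.com
http_access deny CONNECT fb
http_reply_access deny fb
# existing allow rules come after the deny
http_access allow localnet
http_access deny all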

Related

Squid4 forward proxy to upgrade from ws to wss

Squid 4.6 is used as a forward proxy to convert all traffic to secure traffic.
The Squid configuration is very simple: it allows all traffic and uses urlrewrite.pl to replace "http" with "https" (SSL-Bump is NOT used). The proxy has tls_outgoing_options set, so the following works:
client(http) -----> Squid ------> Server(https)
Now I am trying to replicate the same with websockets.
There are 3 test cases:
1. client(ws) ------> Squid -----> Server(ws)
2. client(wss) ------> Squid -----> Server(wss)
3. client(ws) ------> Squid -----> Server(wss)
The first two cases work with Squid, but the third one does not, and the third is the only one I need.
I added debug logging to urlrewrite.pl to show the exact request received for a websocket connection; here is the log (port 8080 is the server, port 3128 is Squid):
DEBUG:root:localhost:8080 127.0.0.1/localhost - CONNECT myip=127.0.0.1 myport=3128
Wireshark shows the same sequence:
1. CONNECT HTTP/1.1
2. GET
3. Upgrade protocol
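For reference, a proxied WebSocket handshake of this shape looks roughly like the following on the wire (a sketch based on the log above, not an actual capture): the client first opens a tunnel with CONNECT, then performs the HTTP upgrade inside it.
CONNECT localhost:8080 HTTP/1.1
Host: localhost:8080

GET / HTTP/1.1
Host: localhost:8080
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: <base64 nonce>
Sec-WebSocket-Version: 13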
Questions:
1. Is there any way to upgrade a websocket connection to a secure websocket using Squid 4.6?
2. Or, say I use a wss client (without a certificate) and a wss server (with certificates): is there a way to tell Squid to use its own certificates, as mentioned in "tls_outgoing_options", to establish the connection?
REQUIRED:
The client will always send unsecured traffic (HTTP/WS), and Squid should upgrade it to HTTPS/WSS.
In our application setup we use our own openssl libraries to create certificates, which cannot be included in the (client.go) go-tls package, so we use the Squid proxy to apply the certificates generated by our own openssl libraries.
Client and forward proxy (Squid) are both in our specific environment, so squid.conf is very simple and allows all traffic.
We also need mutual certificate authentication.
squid.conf:
#
# Recommended minimum configuration:
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localhost src 127.0.0.1
acl SSL_ports port 443
acl Safe_ports port 443 # https
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
# Squid normally listens to port 3128
http_port 3128
url_rewrite_program /etc/squid/urlrewrite.pl
url_rewrite_access allow all
tls_outgoing_options cert=/etc/squid/proxy.crt
tls_outgoing_options key=/etc/squid/proxy.key
tls_outgoing_options cafile=/etc/squid/serverauth.crt
urlrewrite.pl:
#!/usr/bin/perl
# Flush STDOUT after every print so Squid receives each helper reply immediately.
select(STDOUT);
$| = 1;
while (<>) {
    # Helper input: [channel-ID] URL [extras...]
    # Captures: 1=channel (with trailing space), 3=scheme, 4=host, 5=:port, 6=port, 7=path
    if (/^(|\d+\s+)((\w+):\/+)([^\/:]+)(|:(\d+))(|\/\S*)(|\s.*)$/) {
        my $channel       = $1;
        my $protocolClean = $3;
        my $domain        = $4;
        my $port          = $5;
        my $portClean     = $6;
        my $urlPath       = $7;
        if ($protocolClean eq 'http') { # && ($port eq '' || $portClean eq '80')
            print STDOUT "${channel}OK rewrite-url=\"https://${domain}${port}${urlPath}\"\n";
        } else {
            print STDOUT "${channel}ERR\n";
        }
    }
}
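The helper can be sanity-checked outside Squid by feeding it one line in the rewriter input format (the URL and key-value fields below are made up for the test):
echo 'http://example.com/path 127.0.0.1/- - GET myip=127.0.0.1 myport=3128' | perl urlrewrite.pl
which should print:
OK rewrite-url="https://example.com/path"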

Squid proxy: how to use ipv6 address to send requests

I have installed Squid on a DigitalOcean Ubuntu machine.
What I want to do is: I will send a request for an IPv4 address to this proxy server, and Squid should pick up and use an IPv6 address configured in squid.conf as the outgoing address.
I have added 2 IPv6 addresses in the conf file.
If I try the following command, specifying the destination as an IPv6 address, it works fine:
curl -H 'Cache-Control: no-cache' --proxy localhost:3128 [2400:xxx:x:xx::xxx:a001]
i.e., Squid picks a random IPv6 address from the conf file and sends the request through that address.
If I try the following command, specifying the destination as an IPv4 address, it does not work:
curl -H 'Cache-Control: no-cache' --proxy localhost:3128 34.xxx.xxx.148
i.e., it does not pick the IPv6 address specified in the conf file; instead it uses the server's public IP.
My /etc/squid/squid.conf currently looks something like this:
acl one_fourth_3 random 1/2
acl one_fourth_4 random 1/1
tcp_outgoing_address 2604:xxxx:xxx:xxx::xx:a001 one_fourth_3
tcp_outgoing_address 2604:xxxx:xxx:xxx::xx:a002 one_fourth_4
http_access deny !safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow all
http_access deny all
http_port 3128
My Squid version is:
Squid Cache: Version 3.5.12
Service Name: squid
Squid implicitly matches the IP version: it adds an implicit IP-version test to each tcp_outgoing_address line, so an outgoing address is only eligible when its address family matches the destination. In the documentation's example, requests going to IPv4 websites use the outgoing 10.1.0.* addresses, and requests going to IPv6 websites use the outgoing 2001:db8:* addresses. Both of your tcp_outgoing_address lines are IPv6, so requests to IPv4 destinations skip them and fall back to the default source address (the server's public IP). You would have to force the traffic to IPv6 for those lines to apply.
http://www.squid-cache.org/Doc/config/tcp_outgoing_address/
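A sketch of the documented pattern for pairing the two address families explicitly (addresses are the placeholders from the docs; substitute your own):
acl to_ipv6 dst ipv6
tcp_outgoing_address 2001:db8::a001 to_ipv6
tcp_outgoing_address 10.1.0.1 !to_ipv6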

squid remove leading slash from target url

I am configuring Squid as a proxy to forward requests, and I intend to allow requests only to example.com. This is the request as made from my browser: http://111.222.333.444/http://example.com, where 111.222.333.444 is the IP of my proxy server, where Squid is installed.
But I am getting an Invalid URL error, because the server is trying to fetch /http://example.com (note the leading slash). Here is the access log record:
1505858815.396 0 12.34.56.78 NONE/400 3687 GET /http://example.com - NONE/- text/html
Here is the configuration that I am using:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src all # Allow access from everywhere
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
acl safe_url url_regex example\.com.*
http_access allow safe_url
http_access deny all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 80
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
What am I doing wrong?
Got it! Squid is a proxy, so the browser needs to make a proxy connection to the server rather than embedding the target URL in the path. In Firefox this is done by configuring a proxy in the connection settings and entering the proxy server's IP and port number. Once that is done, URLs can be used directly in the navigation bar; in this case, use this directly: http://example.com
More on how to set up a proxy in Firefox and Chrome
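For a quick test without changing browser settings, the same proxy-style request can be made with curl (using the proxy IP and port from the question):
curl --proxy http://111.222.333.444:80 http://example.com/
Unlike pasting the target into the path, this makes curl send an absolute-form request line (GET http://example.com/ HTTP/1.1) to the proxy, which is what Squid expects.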

Haproxy redirect configuration for plex?

Hope someone can help me :)
I am trying to configure HAProxy for Plex redirection but haven't found the solution yet.
Basically, to reach the Plex home page you go to => IPADDRESS:PORT/web, which redirects to IPADDRESS:PORT/web/index.html
I made this kind of redirect:
use_backend plex if { hdr_beg(Host) -i plex. }
backend plex
    server plex localhost:32400 check
This is OK, I can reach Plex at => plex.mydomain.tld/web
But I would like to be able to reach Plex with this URL => plex.mydomain.tld
I tried to add this line:
reqrep ^([^\ :]*)\ /(.*) \1\ /web\2
The rewrite works; my URL switches to => plex.mydomain.tld/web/index.html
But I get a 404 error...
What kind of trick should I use to access Plex from plex.mydomain.tld?
Thanks!
Found some info that helped me figure it out:
global
    log 127.0.0.1 syslog
    maxconn 1000
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 4096
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-default-bind-ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    option contstats
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

listen stats
    bind *:9090
    mode http
    maxconn 10
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:admin

frontend ALL
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem
    mode http
    # Define path for lets encrypt
    acl is_letsencrypt path_beg -i /.well-known/acme-challenge/
    use_backend letsencrypt if is_letsencrypt
    # Define hosts
    acl host_website hdr(host) -i nomercy.myqnapcloud.com
    # Direct hosts to backend
    use_backend website if host_website
    # Redirect port 80 to 443
    # But do not redirect letsencrypt since it checks port 80 and not 443
    redirect scheme https code 301 if !{ ssl_fc } !is_letsencrypt

backend letsencrypt
    server letsencrypt 127.0.0.1:8888

backend website
    balance roundrobin
    option httpchk GET /check
    cookie SERVERID insert indirect nocache
    http-check expect rstring ^UP$
    default-server inter 3s fall 3 rise 2
    server server1 192.168.2.151:8888 check
    server server2 192.168.2.152:8888 check
    server server3 192.168.2.153:8888 check

listen plex
    bind *:32400 ssl crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem crt /etc/haproxy/certs/nomercy.myqnapcloud.com.pem
    balance roundrobin
    option httpchk GET /check
    http-check expect rstring ^UP$
    default-server inter 3s fall 3 rise 2
    server server1 192.168.2.149:32400 check port 8888
    server server2 192.168.2.148:32400 check port 8888
    server server3 192.168.2.147:32400 check port 8888
You can remove the SSL settings if you don't have certificates installed.
The problem here unfortunately has nothing to do with your HAProxy configuration. Instead it is Plex that is causing the issue.
Example
With your configuration, when you go to plex.mydomain.tld, HAProxy adds in the /web, and as a result Plex gets the following url: plex.mydomain.tld/web. This is correct; however, Plex will then want to forward the browser on to plex.mydomain.tld/web/index.html. But when the browser sends a request for that url, HAProxy steps in and adds that extra /web again, and the resulting url sent to Plex is plex.mydomain.tld/web/web/index.html, which doesn't exist, hence the 404 error you got.
While going to plex.mydomain.tld/index.html may work, I assume all links from that page to any other page won't work due to the same issue.
To solve this you could:
1. Look through Plex's configuration to see if it is possible to run it without the /web.
2. Taking inspiration from here, configure HAProxy something like this:
frontend http
    mode http
    bind *:80
    acl plex hdr_beg(Host) -i plex.
    acl root_dir path_reg ^$|^/$
    acl no_plex_header req.hdr_cnt(X-Plex-Device-Name) -i 0
    redirect location http://plex.mydomain.tld/web/index.html 301 if no_plex_header root_dir plex
    use_backend plex if plex

backend plex
    server plex localhost:32400 check
The key difference is the redirect location line, which redirects from / to /web/index.html if the header X-Plex-Device-Name isn't set. The reason you have to check for the header is that it appears Plex uses / for something else.
Note: this config is an example and I haven't tested it at all.
Hope that helps.
I want to echo that I used the solution provided by JamesStewy and it worked, with a minor correction:
redirect location http://plex.mydomain.tld/web/index.html code 301 if no_plex_header root_dir plex
At least, that was necessary for me (running HAProxy 1.7.2).
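The redirect is easy to verify from the command line, since curl does not send the X-Plex-Device-Name header (hostname as in the example above); the proxy should answer with something like:
curl -I http://plex.mydomain.tld/

HTTP/1.1 301 Moved Permanently
Location: http://plex.mydomain.tld/web/index.html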

Running Squid on localhost

I have a product from Symantec and their help is...less than helpful, including a nice message that says "Contact your reseller" in the "Contact Us" link. My reseller says to contact them. How? Anyway, it's a repackaged version of Squid for Windows. When I point IE to the proxy running locally, I get "Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect." However, when I point IE on another machine to the server running Squid, everything works fine.
I have zero experience with Squid or proxies. I tried some different configs based on searches here, but nothing worked. I'm sure it's something simple. Here is the config:
digest_generation off
hierarchy_stoplist cgi-bin ?
acl all src 0.0.0.0/0.0.0.0
cache deny all
maximum_object_size 0 KB
emulate_httpd_log on
debug_options ALL,1
cache_store_log none
access_log none
useragent_log none
auth_param ntlm program c:/clientsiteproxy/libexec/mswin_ntlm_auth.exe
auth_param ntlm children 80
auth_param ntlm keep_alive on
auth_param negotiate children 80
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
authenticate_ip_shortcircuit_ttl 30 seconds
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
read_timeout 15 minutes
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Smartconnect dstdomain ned.webscanningservice.com
acl CONNECT method CONNECT
acl authproxy proxy_auth REQUIRED
acl our_networks src 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16
acl HEAD method HEAD
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow HEAD
http_access deny !our_networks
http_access allow Smartconnect
http_access allow authproxy
http_access deny all
icp_access allow all
httpd_suppress_version_string on
visible_hostname ClientSiteProxy
forwarded_for off
header_access Via deny all
never_direct allow all
cache_dir null c:/ClientSiteProxy
coredump_dir c:/clientsiteproxy/var/cache
http_port 3128
This is most likely the culprit: http_access deny !our_networks. This statement denies access for all source IPs apart from 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8, and 169.254.0.0/16.
When you browse from the same machine, the browser connects from localhost (127.0.0.1), which is in none of those ranges, so try expanding the our_networks definition with 127.0.0.1.
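A minimal sketch of the adjusted ACL (the existing line with loopback added):
acl our_networks src 127.0.0.1 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16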
