Squid proxy: how to use an IPv6 address to send requests

I have installed Squid on a DigitalOcean Ubuntu machine.
What I want to do is: I will send an IPv4 address to this proxy server, and Squid should pick up and use one of the IPv6 addresses configured in squid.conf.
I have added 2 IPv6 addresses in the conf file.
If I run the following command with the target specified as an IPv6 address, it works fine:
curl -H 'Cache-Control: no-cache' --proxy localhost:3128 [2400:xxx:x:xx::xxx:a001]
That is, Squid picks a random IPv6 address from the conf file and sends the request through that address.
If I run the following command with the target specified as an IPv4 address, it does not work:
curl -H 'Cache-Control: no-cache' --proxy localhost:3128 34.xxx.xxx.148
That is, Squid does not pick the IPv6 address specified in the conf file; instead it uses the server's public IP.
My /etc/squid/squid.conf currently looks something like this:
acl one_fourth_3 random 1/2
acl one_fourth_4 random 1/1
tcp_outgoing_address 2604:xxxx:xxx:xxx::xx:a001 one_fourth_3
tcp_outgoing_address 2604:xxxx:xxx:xxx::xx:a002 one_fourth_4
http_access deny !safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow all
http_access deny all
http_port 3128
My Squid version is:
Squid Cache: Version 3.5.12
Service Name: squid

Squid implicitly matches on IP version, so to send a request out over IPv6 the destination must itself be reachable over IPv6. Squid adds an implicit IP-version test to each tcp_outgoing_address line: in the documentation's example, requests going to IPv4 websites use the outgoing 10.1.0.* addresses, while requests going to IPv6 websites use the outgoing 2001:db8:* addresses.
http://www.squid-cache.org/Doc/config/tcp_outgoing_address/
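In other words, the conf in the question only ever offers IPv6 source addresses, which can never match an IPv4 destination. A minimal sketch of a conf covering both families (the 10.1.0.1 and 2001:db8::* addresses are illustrative placeholders in the style of the documentation's example):

```
# IPv6 destinations: pick one of two IPv6 source addresses at random
acl half random 1/2
tcp_outgoing_address 2001:db8::a001 half
tcp_outgoing_address 2001:db8::a002
# IPv4 destinations can only match an IPv4 outgoing address
tcp_outgoing_address 10.1.0.1
```

Lines are evaluated in order and the first matching line wins, so IPv4 requests fall through to the last line.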

Related

Squid4 forward proxy to upgrade from ws to wss

Squid 4.6 is used as a forward proxy to convert all traffic to secure traffic.
The Squid configuration is very simple: it allows all traffic and uses urlrewrite.pl to replace "http" with "https" (SSL-Bump is NOT used). The Squid proxy has tls_outgoing_options set, so the following works:
client(http) -----> Squid ------> Server(https)
Now, I am trying to replicate the same with websockets.
There are 3 test cases:
1. client(ws) ------> Squid -----> Server(ws)
2. client(wss) ------> Squid -----> Server(wss)
3. client(ws) ------> Squid -----> Server(wss)
The first two cases work with Squid, but the third one does not. And I only need the third option.
I have added debug logging to urlrewrite.pl to show the exact request received for a websocket connection; the following is the log (port 8080 is the server, port 3128 is Squid):
DEBUG:root:localhost:8080 127.0.0.1/localhost - CONNECT myip=127.0.0.1 myport=3128
Wireshark shows the same sequence:
1. CONNECT HTTP/1.1
2. GET
3. Upgrade protocol
Questions:
1. Is there any way to upgrade a websocket connection to a secure websocket using Squid 4.6?
2. Or, say I use a wss client (without a certificate) and a wss server (with certificates): is there a way to tell Squid to use its own certificates, as mentioned in tls_outgoing_options, to establish the connection?
REQUIRED:
The client will always send insecure traffic (HTTP/WS),
and Squid should upgrade it to HTTPS/WSS.
In our application setup, we use our own OpenSSL libraries to create certificates, which cannot be included in the Go TLS package (client.go), so we use the Squid proxy to apply the certificates generated by our own OpenSSL libraries.
The client and the forward proxy (Squid) are both in our specific environment, so squid.conf is very simple and allows all traffic.
And we need mutual certificate authentication.
SQUID CONF CODE
#
# Recommended minimum configuration:
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localhost src 127.0.0.1
acl SSL_ports port 443
acl Safe_ports port 443 # https
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
# Squid normally listens to port 3128
http_port 3128
url_rewrite_program /etc/squid/urlrewrite.pl
url_rewrite_access allow all
tls_outgoing_options cert=/etc/squid/proxy.crt
tls_outgoing_options key=/etc/squid/proxy.key
tls_outgoing_options cafile=/etc/squid/serverauth.crt
urlrewrite CODE
#!/usr/bin/perl
# Squid url_rewrite_program helper: rewrite http:// URLs to https://.
select(STDOUT);
$| = 1;    # unbuffered output -- required for Squid helpers
while (<>) {
    if (/^(|\d+\s+)((\w+):\/+)([^\/:]+)(|:(\d+))(|\/\S*)(|\s.*)$/) {
        my $channel       = $1;  # concurrency channel ID, if any
        my $protocolClean = $3;
        my $domain        = $4;
        my $port          = $5;  # ":port", including the colon
        my $portClean     = $6;  # port digits only
        my $urlPath       = $7;
        if ($protocolClean eq 'http') { # && ($port eq '' || $portClean eq '80')
            print STDOUT "${channel}OK rewrite-url=\"https://${domain}${port}${urlPath}\"\n";
        } else {
            print STDOUT "${channel}ERR\n";
        }
    }
}
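For a quick offline sanity check of the helper's pattern without running Squid, the same regex can be exercised in Python (the sample input line below is illustrative; the exact fields Squid passes to the helper depend on its url_rewrite_extras settings):

```python
import re

# Python port of the helper's URL regex, for testing outside Squid.
PAT = re.compile(r'^(|\d+\s+)((\w+):/+)([^/:]+)(|:(\d+))(|/\S*)(|\s.*)$')

line = '0 http://example.com/ 127.0.0.1/- - GET'  # illustrative helper input
m = PAT.match(line)
channel, proto = m.group(1), m.group(3)
domain, port, path = m.group(4), m.group(5), m.group(7)
if proto == 'http':
    print(f'{channel}OK rewrite-url="https://{domain}{port}{path}"')
else:
    print(f'{channel}ERR')
# prints: 0 OK rewrite-url="https://example.com/"
```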

squid remove leading slash from target url

I am configuring Squid as a proxy to forward requests, and I intend to allow requests only to example.com. This is the request as made from my browser: http://111.222.333.444/http://example.com, where 111.222.333.444 is the IP of my proxy server, on which Squid is installed.
But I am getting an Invalid URL error, because the server is trying to fetch /http://example.com (note the leading slash). Here is the access log record:
1505858815.396 0 12.34.56.78 NONE/400 3687 GET /http://example.com - NONE/- text/html
Here is the configuration that I am using:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src all # Allow access from everywhere
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
acl safe_url url_regex example\.com.*
http_access allow safe_url
http_access deny all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 80
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
What am I doing wrong?
Got it! Squid is a proxy, so the client has to make a proper proxy connection to the server rather than prefixing the target URL to the proxy's address. In Firefox this can be done by configuring a proxy and entering the proxy server's IP and port number. Once that is done, URLs can be used directly in the navigation bar, in this case: http://example.com
More on how to set up a proxy in Firefox and Chrome
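As a side note on the original goal of allowing only example.com: a dstdomain ACL is usually a safer match than the bare url_regex above, which would also match any URL merely containing the string example.com. A minimal sketch:

```
acl safe_url dstdomain .example.com
http_access allow safe_url
http_access deny all
```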

Struggling to grant remote HTTP access on a proxy server

I've been desperately trying to set up Squid on a virtual machine (hosted by DigitalOcean) for the past day or so. I've set up the machine, which runs Ubuntu, installed Squid, and modified the config, as I'm trying to grant access to my home PC (since obviously they aren't on the same LAN network). I thought this was done by making the edits I've shown below:
acl myips src MYPUBLICIP
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow myips
http_access allow manager localhost
http_access allow manager myips
http_access allow purge localhost
http_access allow purge myips
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localhost manager
http_access deny manager
From my limited understanding I've allowed access to the top IP - should I be using my laptop's IPv4 address here (10.213.111.121)?
If someone could talk me through this I would be SO grateful, as I'm really not getting anywhere...
Thanks!
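For reference, a minimal sketch of the remote-access part of such a config (MYPUBLICIP is the placeholder from the question; src must match the public IP your home connection appears from, not a private address like 10.213.111.121, and the config also needs an http_port line and a final deny):

```
acl myips src MYPUBLICIP
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow myips
http_access allow localhost
http_access deny all
http_port 3128
```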

Blocking HTTPS sites through Squid

Can I block https://www.facebook.com through Squid?
It is working very nicely except for HTTPS sites.
I am using Squid 3.1 on Debian 7.
Yes, you can. You should also block it with CONNECT:
acl fb dstdomain .facebook.com
http_reply_access deny fb
http_access deny CONNECT fb
then:
squid3 -k reconfigure
or
squid -k reconfigure
Now, it is done!

How to configure direct http access to EC2 instance?

This is a very basic Amazon EC2 question, but I'm stumped so here goes.
I want to launch an Amazon EC2 instance and allow access to HTTP on ports 80 and 8888 from anywhere. So far I can't even connect to the instance on those ports using its own IP address (though it will connect on localhost).
I configured the "default" security group for HTTP using the standard HTTP option on the management console (and also SSH).
I launched my instance in the default security group.
I connected to the instance over SSH (port 22) twice; in one window I launched an HTTP server on port 80, and in the other window I verified that I can connect to HTTP using "localhost".
However, when I try to access HTTP from the instance (or anywhere else) using either the public DNS or the private IP address, I get "connection refused".
What am I doing wrong, please?
Below is a console fragment showing the wget that succeeds and the two that fail run from the instance itself.
--2012-03-07 15:43:31-- http://localhost/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: /__whiff_directory_listing__ [following]
--2012-03-07 15:43:31-- http://localhost/__whiff_directory_listing__
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “__whiff_directory_listing__”
[ <=> ] 7,512 --.-K/s in 0.03s
2012-03-07 15:43:31 (263 KB/s) - “__whiff_directory_listing__” saved [7512]
[ec2-user@ip-10-195-205-30 tmp]$ wget http://ec2-50-17-2-174.compute-1.amazonaws.com/
--2012-03-07 15:44:17-- http://ec2-50-17-2-174.compute-1.amazonaws.com/
Resolving ec2-50-17-2-174.compute-1.amazonaws.com... 10.195.205.30
Connecting to ec2-50-17-2-174.compute-1.amazonaws.com|10.195.205.30|:80... failed:
Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$ wget http://10.195.205.30/
--2012-03-07 15:46:08-- http://10.195.205.30/
Connecting to 10.195.205.30:80... failed: Connection refused.
[ec2-user@ip-10-195-205-30 tmp]$
The standard TCP sockets interface requires that you bind to a particular IP address when you listen. There are a couple of somewhat special addresses: localhost (which you're probably familiar with), which is 127.0.0.1, and 0.0.0.0, also known as INADDR_ANY (shorthand for ANY ADDRESS). Binding to 0.0.0.0 is a way to listen on ANY, or more commonly ALL, addresses on the host; it tells the kernel/stack that you're not interested in one particular IP address.
So, when you're setting up a server that listens to "localhost" you're telling the service that you want to use the special reserved address that can only be reached by users of this host, and while it exists on every host, making a connection to localhost will only ever reach the host you're making the request from.
When you want a service to be reachable everywhere (on a local host, on all interfaces, etc.) you can specify 0.0.0.0.
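The distinction can be seen in a few lines of Python (port 0 simply asks the OS for any free port):

```python
import socket

# Bind one listener to loopback only, and one to INADDR_ANY (0.0.0.0).
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(('127.0.0.1', 0))   # reachable only from this host
loopback.listen(1)

wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(('0.0.0.0', 0))     # reachable on every interface of the host
wildcard.listen(1)

print(loopback.getsockname()[0])  # prints: 127.0.0.1
print(wildcard.getsockname()[0])  # prints: 0.0.0.0

loopback.close()
wildcard.close()
```

A server that binds the first way answers wget on localhost but refuses connections on the instance's public or private IP, which is exactly the symptom above.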
(0) It's silly but the first thing you need to do is to make sure that your web server is running.
(1) You need to edit your Security Group to let incoming HTTP packets access your website. If your website is listening on port 80, you need to edit the Security Group to open access to port 80 as mentioned above. If your website is listening on some other port, then you need to edit the Security Group to access that other port.
(2) If you are running a Linux instance, the iptables firewall may be running by default. You can check that this firewall is active by running
sudo service iptables status
on the command line. If you get output, then the iptables firewall is running. If you get a message "Firewall not running", that's pretty self-explanatory. In general, the iptables firewall is running by default.
You have two options: knock out the firewall or edit the firewall's configuration to let HTTP traffic through. I opted to knock out the firewall as the simpler option (for me).
sudo service iptables stop
There is no real security risk in shutting down iptables because, if active, it merely duplicates the functionality of Amazon's firewall, which uses the Security Group to generate its configuration. We are assuming here that Amazon AWS doesn't misconfigure its firewalls - a very safe assumption.
(3) Now, you can access the URL from your browser.
(4) Microsoft Windows Server machines also run their own firewalls by default, and you'll need to adjust the Windows Server firewall, too.
Correction: by default, AWS does not enable host firewalls such as iptables (CentOS) or UFW (Ubuntu) when creating new EC2 instances. That's why EC2 instances in the same VPC can SSH into each other, and why you can "see" a web server fired up on one EC2 instance from another in the same VPC.
Just make sure that your RESTful API is listening on all interfaces, i.e. 0.0.0.0:port.
As you are getting connection refused (packets are being rejected) I bet it is iptables causing the problem. Try to run
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 8888 -j ACCEPT
and test the connection.
You will also need to add those rules permanently, which you can do by adding the above lines into e.g. /etc/sysconfig/iptables if you are running Red Hat.
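A sketch of what the corresponding lines look like inside /etc/sysconfig/iptables on Red Hat (the *filter/COMMIT wrapper is part of the file's standard layout):

```
*filter
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 8888 -j ACCEPT
COMMIT
```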
Apparently I was "binding to localhost" whereas I needed to bind to 0.0.0.0 to respond on port 80 for all incoming TCP interfaces. This is a subtlety of TCP/IP that I don't fully understand yet, but it fixed the problem.
I had to do the following:
1) Enable HTTP access in the instance's security configuration; it wasn't on by default, only SSH.
2) I was running a Node.js server, so port 80 needed to be mapped to 3000; I ran the following commands to fix that:
iptables -F
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo service iptables-persistent flush
Amazon support answered it and it worked instantly:
I replicated the issue on my end on a test Ubuntu instance and was able to solve it. The issue was that in order to run Tomcat on a port below 1024 in Ubuntu/Unix, the service needs root privileges, which is generally not recommended, as running a process on port 80 with root privileges is an unnecessary security risk.
What we recommend is to use port redirection via iptables:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
I hope the above information helps.
