Enable cache for SSL connections in Squid (HTTPS)

How can we enable Squid to cache web content (say, from Firefox) over SSL connections, i.e. for https URLs?

Actually, Squid can be used to intercept HTTPS traffic (it is, in essence, a man-in-the-middle attack), and there are caveats.
See: http://wiki.squid-cache.org/Features/SslBump
I have not tried caching this data yet, so I can't say with absolute certainty that it will work. If/when I do, I'll update this post.
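For reference, a minimal ssl_bump sketch using the Squid 4+ directive names (on Squid 3.5 the port option is cert= and the helper is ssl_crtd); all paths here are assumptions, and you must generate the CA certificate yourself and import it into the browsers:
# listen for proxy traffic and bump (decrypt) CONNECT tunnels
http_port 3128 ssl-bump tls-cert=/etc/squid/ssl/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
# helper that mints per-host certificates signed by the CA above
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
Once a tunnel is bumped, the decrypted responses pass through Squid's normal caching rules.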

SSL encrypts the traffic between server and client so that it cannot be read by a man in the middle. When Squid is used as a plain proxy, it simply cannot see the actual content of the traffic and therefore has no means of caching it. The SSL traffic is just random-looking bits that differ each time, even if the same content is transferred multiple times; that is how encryption should work. It simply cannot be cached.

I have no problems getting Firefox (version 23.0.1 on Windows) to route SSL traffic via Squid. In Firefox Network Connection settings I just point SSL Proxy and HTTP Proxy to the same Squid installation.
After that I can successfully access https URLs in Firefox and in Squid's access_log I see entries like these:
1379660084.878 115367 10.0.0.205 TCP_MISS/200 6581 CONNECT www.gravatar.com:443 - DIRECT/68.232.35.121 -
Do you have any details about how it doesn't work for you? Squid has quite elaborate mechanisms for allowing and denying certain types of traffic, so it is possible there is a configuration issue in Squid. Do you get any error messages in Squid's log files?

Related

Does the SquidMan proxy server support HTTPS?

I'm trying to set up a proxy server on my local Mac.
HTTP seems to work, but Safari is not connecting via HTTPS.
Did I miss something?
No, it doesn't. You need to specify a separate https_port and an SSL certificate, as documented in the Squid config:
The socket address where Squid will listen for client requests made
over TLS or SSL connections. Commonly referred to as HTTPS.
This is most useful for situations where you are running squid in
accelerator mode and you want to do the TLS work at the accelerator
level.
You may specify multiple socket addresses on multiple lines, each
with their own certificate and/or options.
The tls-cert= option is mandatory on HTTPS ports.
See http_port for a list of modes and options.
http://www.squid-cache.org/Doc/config/https_port/
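For example, a minimal https_port line might look like this (the port number and the certificate and key paths are placeholders; you generate the pair yourself):
# accept TLS connections from clients on a dedicated port
https_port 3129 tls-cert=/etc/squid/proxy.crt tls-key=/etc/squid/proxy.key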
By design, it is quite hard to intercept https traffic:
When a browser creates a direct secure connection with an origin
server, there are no HTTP CONNECT requests. The first HTTP request
sent on such a connection is already encrypted. In most cases, Squid
is out of the loop: Squid knows nothing about that connection and
cannot block or proxy that traffic.
You also need to load the proxy settings into the browser as a PAC file; otherwise the browser won't connect, or will throw a certificate warning:
Chrome: The Chrome browser is able to connect to proxies over SSL connections if configured to use one in a PAC file or via a command line switch. GUI configuration appears not to be possible (yet). More details at http://dev.chromium.org/developers/design-documents/secure-web-proxy
Firefox: The Firefox 33.0 browser is able to connect to proxies over TLS connections if configured to use one in a PAC file. GUI configuration appears not to be possible (yet), though there is a config hack for embedding PAC logic.
There is still an important bug open, concerning client certificate authentication to a proxy:
https://bugzilla.mozilla.org/show_bug.cgi?id=209312
https://wiki.squid-cache.org/Features/HTTPS
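For reference, a minimal PAC file that points a browser at a TLS-speaking proxy could look like this (the HTTPS keyword, as opposed to PROXY, tells the browser to speak TLS to the proxy itself; the host and port are placeholders):
function FindProxyForURL(url, host) {
  // send everything to the TLS proxy; fall back to a direct connection
  return "HTTPS proxy.example.com:3129; DIRECT";
}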

How to proxy HTTPS via HTTP without CA or MITM?

HTTP proxy with SSL and DNS support.
I must be lacking some key concepts about proxying, because I cannot grasp this. I am looking to run a simple HTTP or HTTPS proxy without interfering with SSL: a fully transparent proxy that passes all traffic through to the browser connected via the HTTP or HTTPS proxy setting, without modifying or intercepting any packets. I am not able to find any code online, or I'm not using the right keywords.
For example: in the browser you add server.someVPN.com:80 in the HTTP proxy field, and as soon as you try to visit a website it prompts for authentication. Then it works perfectly with any domain, any security, any SSL, no further steps needed. Most VPN providers have this.
How is this possible? It even resolves DNS itself; I thought that with a transparent proxy the DNS resolution relies on the client. Preferably looking for a Node.js solution, but any language works.
Please don't propose solutions such as SOCKS5, socket forwarding, DNS overriding, or CA-based MITM. HTTP/1.1 supports CONNECT, so this should be easy.
I'm not looking to proxy specific domains; I'm looking for an all-inclusive solution, just like most VPN providers offer.
---- Found the answer too quickly; feel free to delete this question, admins.
The way it works is that the browser knows it is talking to a proxy server. For example, if the browser wants to connect to https://www.example.com, it sends CONNECT www.example.com:443 HTTP/1.1 to the proxy server; the proxy server resolves www.example.com via DNS, opens a TCP connection to www.example.com port 443, and proxies the TCP stream transparently to the client, as sketched below.
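A minimal sketch of that CONNECT tunneling in Ruby (the port, the regex, and the error handling are illustrative only, and half-close handling is omitted):
require 'socket'

server = TCPServer.new(8080)
loop do
  Thread.start(server.accept) do |client|
    begin
      request_line = client.readline
      if request_line =~ %r{\ACONNECT (\S+):(\d+) HTTP/1\.[01]}
        host, port = $1, $2.to_i
        nil until client.readline.strip.empty?   # discard the request headers
        upstream = TCPSocket.new(host, port)     # DNS is resolved here, by the proxy
        client.write("HTTP/1.1 200 Connection Established\r\n\r\n")
        # copy bytes in both directions until either side closes
        [Thread.new { IO.copy_stream(client, upstream) },
         Thread.new { IO.copy_stream(upstream, client) }].each(&:join)
        upstream.close
      end
    rescue StandardError
      # a per-connection failure just tears down this tunnel
    ensure
      client.close
    end
  end
end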
I don't know of a solution for Node.js. Common proxy servers include Squid, Privoxy, and Apache Traffic Server.
See also: https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT
Found the solution right after I asked...
This module works perfectly https://github.com/mpangrazzi/harrier
Does exactly what I was asking for.

How can a web page send a message to the local network

Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far it was easy: The button triggered an AJAX POST request to http://printerserver/print.php with a token, that page connected to the web application to verify the token and get the data to print and then printed.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox no longer make the request to the HTTP address; they don't even send the preflight request to check the CORS headers.
Now, what is a modern alternative to cross-protocol XHR? Do WebSockets suffer from the same problem? (A Google search did not make the current state of things clear.) Can I use TCP sockets already? I would rather not switch to GET requests either, because the action is not idempotent, and that might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print.
If that isn't possible, I would set up a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server side instead of from the client. If you use curl, there are flags to ignore invalid SSL certificates, etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking).
If the webserver can make an SSH connection to something on the network the printserver is on, you could do something like: ssh <options> user@host '<some curl command here>'.
A third option I can think of: if the printserver can bind to, for example, a subdomain of the webserver's domain, like print.somedomain.com, you may be able to make it trusted via the somedomain.com certificate. IIRC you have to create a CSR (Certificate Signing Request) on the printserver and sign it with the somedomain.com certificate. Perhaps it doesn't even need to be a subdomain for this per se, but maybe that's a requirement for the browser to trust it client-side.
The easiest way is to add a route to the web app that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that route makes a request to http://printerserver/print.php with the exact same POST content it received. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
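A minimal sketch of such a relay route using only Ruby's standard library (the /print path and the printerserver host come from the question; the port and everything else are assumptions):
require 'webrick'
require 'net/http'

server = WEBrick::HTTPServer.new(Port: 8080)
# relay the POST body to the internal print server and echo back its reply
server.mount_proc('/print') do |req, res|
  upstream = Net::HTTP.post(URI('http://printerserver/print.php'),
                            req.body,
                            'Content-Type' => req['Content-Type'].to_s)
  res.status = upstream.code.to_i
  res.body   = upstream.body
end
trap('INT') { server.shutdown }
server.start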
Alternatively, you could switch the printserver to https and directly call it from the client.
Note that an insecure (http) WebSocket connection from a secure (https) page probably won't work either, and for good reason: it's generally a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the HTTPS web app can reverse-proxy the print server, but since the printer is local to the user, this may not work.
The print server should respond with the correct CORS headers:
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However there are pitfalls with using the wildcard.
From what I understand from the question, printserver is not accessible from the web application so the reverse proxy solution won't work here.
You are restricted from making requests from the browser to the printserver by cross-origin-policy.
If you wish to communicate with the printserver from an HTTPS page, you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record as a subdomain of your web application that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers, which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs. HTTP), or to internal domains without a TLD.

AWS ELB Server-Side HTTPS

I have been working my way through setting up HTTPS using AWS. I have been attempting this with a self-signed certificate and am finding the process a bit problematic.
One question that has come up along the way is this business of server-side HTTPS. The client I am working with requests that when a user hits the server, the URL changes to HTTPS. I am wondering if "server-side HTTPS" means that the protocol is transparent to the end user?
Will they still see HTTP in the browser?
Thanks.
I don't know if this is an exact answer to your question, but perhaps a piece of advice: when using ELB, I have found it MUCH easier to install the SSL certificate on the ELB and use SSL offloading to forward requests from port 443 on the ELB to port 80 on the EC2 instances.
The pros of this:
There is only one place where you need to install the cert, rather than having to install it across a number of instances (or update the AMI and relaunch instances), making certificate updates much easier to perform.
You get better performance on your web servers as they don't have to deal with SSL encryption.
Some cons:
The communication is not encrypted end-to-end so there is the technical (albeit unlikely) chance that the communication could be intercepted between ELB and servers. If you are dealing with something like PCI compliance this might matter to you.
If you needed to directly access one of the instances over HTTPS that would not be possible.
You may need to make sure your application is aware of the HTTPS-related headers (e.g. X-Forwarded-Proto) that the ELB injects into the request, if your application needs to check whether the request came in over HTTPS.
There is no reason this configuration would prevent you from redirecting incoming HTTP requests to HTTPS. You might, however, need to look at the X-Forwarded-Proto header to do any web-server-level or application-level redirects to HTTPS. The end user has no way of knowing that the HTTPS wrapper around their request was offloaded at the ELB.
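A sketch of such a redirect as Rack middleware, assuming a Ruby app behind the ELB (any equivalent web-server rule works just as well; X-Forwarded-Proto is the header the ELB actually injects):
# Redirect to HTTPS when the ELB reports the client connected over plain HTTP.
class ForceHttps
  def initialize(app)
    @app = app
  end

  def call(env)
    if env['HTTP_X_FORWARDED_PROTO'] == 'http'
      # query string handling omitted in this sketch
      location = "https://#{env['HTTP_HOST']}#{env['PATH_INFO']}"
      [301, { 'Location' => location }, []]
    else
      @app.call(env)
    end
  end
end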

How do I write a simple HTTPS proxy server in Ruby?

I've seen several examples of writing an HTTP proxy in Ruby, e.g. this gist by Torsten Becker, but how would I extend it to handle HTTPS, i.e. to act as a "man in the middle" SSL proxy?
I'm looking for a simple source code framework which I can extend for my own logging and testing needs.
update
I already use Charles, a nifty HTTPS proxying app similar to Fiddler, and it is essentially what I want, except that it's packaged up as an app. I want to write my own because I have specific needs for filtering and presentation.
update II
Having poked around, I understand the terminology a little better. I'm NOT after a full "man in the middle" SSL proxy. Instead, it will run locally on my machine, so I can honor whatever SSL cert it offers. However, I need to see the decrypted contents of the packets of my requests and the decrypted contents of the responses.
Just for background information: a normal HTTP proxy handles HTTPS requests via the CONNECT method. It reads the host name and port, establishes a TCP connection to that target server on that port, returns 200 OK, and then merely tunnels the TCP connection to the initial client (the fact that SSL/TLS is exchanged on top of that TCP connection is barely relevant).
This is what the do_CONNECT method of WEBrick::HTTPProxyServer does.
If you want a MITM proxy, i.e. if you want to be able to look inside the SSL/TLS traffic, you can certainly use WEBrick::HTTPProxyServer, but you'll need to change do_CONNECT completely:
Firstly, your proxy server will need to embed a mini CA, capable of generating certificates on the fly (failing that, you might be able to use self-signed certificates, if you're willing to bypass warning messages in the browser). You would then import that CA certificate into the browser.
When you get the CONNECT request, you'll need to generate a certificate valid for that host name (preferably with a Subject Alternative Name for that host name, or with the host name in the Subject DN's Common Name), and upgrade the socket into an SSL/TLS server socket using that certificate. If the browser accepts and trusts that certificate, what you get from there on, on this SSL/TLS socket, is the plaintext traffic.
You would then have to handle the requests (read the request line, headers, and entity) and replay them via a normal HTTPS client library. You might be able to send that traffic to a second instance of WEBrick::HTTPProxyServer, but it would have to be tweaked to make outgoing HTTPS requests instead of plain HTTP requests.
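A sketch of the socket-upgrade step using Ruby's OpenSSL bindings (client_socket, host.crt, and host.key are placeholders; a real MITM proxy would mint a certificate per requested host, as described above):
require 'openssl'

ctx = OpenSSL::SSL::SSLContext.new
ctx.cert = OpenSSL::X509::Certificate.new(File.read('host.crt'))
ctx.key  = OpenSSL::PKey::RSA.new(File.read('host.key'))

# upgrade the already-accepted plain TCP socket into a TLS server socket
tls = OpenSSL::SSL::SSLSocket.new(client_socket, ctx)
tls.accept              # performs the TLS handshake in the server role
request_line = tls.gets # from here on, reads and writes are plaintext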
WEBrick can proxy SSL:
require 'webrick'
require 'webrick/httpproxy'

# CONNECT (HTTPS) requests are tunneled as-is; nothing is decrypted.
WEBrick::HTTPProxyServer.new(:Port => 8080).start
From my experience, HTTPS is nowhere near "simple". Do you need a proxy that catches traffic from your own machine? There are several applications for that, like Fiddler (or google for alternatives); they come with everything you need to debug web traffic.
That gist is not the way to write a proxy. It is actually very easy: you accept a connection, read one line that tells you what to connect to, and attempt the upstream connection. If it fails, send the appropriate response and close the socket; otherwise, just start copying bytes in both directions, simultaneously, until EOS has occurred in both directions. The only difference HTTPS makes is that you have to speak SSL instead of plaintext.
