I want to take a look at the raw encrypted HTTPS traffic of my requests. I have a server and two clients. With client 1 there are no problems; client 2 runs into problems once the request exceeds a certain text size. I was able to decrypt the traffic and found that client 1 breaks up my SOAP request every 3996 characters with an extra "f9c" pattern. Client 2 does not do this, which is probably causing the problem on the server side. But that is not all: if I use Fiddler to decrypt the HTTPS request, the server also throws an exception. So my guess is that the client is probably adding something at the HTTPS layer too. That is why I want to take a look at it, but I cannot figure out how to force Fiddler to show this to me. If I disable HTTPS decryption I only get the HTTP traffic, which shows me the handshake between client and server. So what can I do here?
Problem solved.
The "f9c" pattern is just the chunk size in hex: 0xF9C is 3996 bytes. The problem was that client 1 was correctly chunking the HTTP request (Transfer-Encoding: chunked), whereas client 2 was sending a single block and just setting the Content-Length of the HTTP request. Therefore I no longer need to look at the encrypted request.
I've read a lot of related topics on the net, but I still don't have an answer to my question.
Is it possible to implement the flow described below?
The proxy receives a request.
If the request is encrypted and the proxy cert is trusted, then intercept.
If the request is not encrypted, then intercept.
If the request is encrypted and the proxy cert is NOT trusted, then pass it through without interception.
This behaviour should be the default for all traffic going through the proxy.
It would also be really nice to get all available info for passed-through encrypted requests (src and dst IP addresses etc.), basically the same info I can get with Fiddler.
Not really. The main problem is that mitmproxy cannot know whether the proxy cert is trusted by the client or not.
In the SSL/TLS protocol the client starts with a CLIENT_HELLO message, and in response the server (in this case mitmproxy) sends back a SERVER_HELLO message containing the generated server certificate.
The client now checks whether the received server certificate is trusted. If not, the connection is terminated. As far as I know the SSL/TLS spec does not define how to do so: some clients send back an SSL_ALERT message, others simply drop the connection, and a third group continues the SSL/TLS handshake but with certain internal values set in a way that always lets the handshake fail.
There is a mitmproxy script that tries to identify connections that were not successful and then, if the client asks for the same domain a second time, bypasses interception.
Of course this requires that the client resend its requests, which is not always the case.
https://github.com/sociam/x-ray/blob/master/mitmproxy/examples/tls_passthrough.py
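The linked script targets an older mitmproxy release. As a rough illustration, here is a minimal sketch of the same idea against the newer mitmproxy addon API; the hook names and attributes are taken from recent releases, so treat them as assumptions and check the docs for your version:

import logging
from mitmproxy import tls

# SNI hostnames whose clients have already refused our certificate.
failed_hosts = set()

def tls_clienthello(data: tls.ClientHelloData):
    # If interception failed for this host before, tunnel the raw TLS
    # bytes instead of presenting our own certificate.
    if data.client_hello.sni in failed_hosts:
        data.ignore_connection = True

def tls_failed_client(data: tls.TlsData):
    # The client aborted the handshake, most likely because it does not
    # trust our certificate; remember the host for next time.
    if data.conn.sni:
        failed_hosts.add(data.conn.sni)
        logging.info("TLS to %s failed; passing through next time", data.conn.sni)

Run it with mitmdump -s <script>.py; as noted above, the approach only helps when the client retries the request.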
Our web application has a button that is supposed to send data to a server on the local network that in turn prints something on a printer.
So far it was easy: the button triggered an AJAX POST request to http://printerserver/print.php with a token; that page connected to the web application to verify the token and get the data to print, and then printed it.
However, we are now delivering our web application via HTTPS (and I would rather not go back to HTTP for this), and newer versions of Chrome and Firefox no longer make the request to the HTTP address; they don't even send the preflight request to check CORS headers.
Now, what is a modern alternative to cross-protocol XHR? Do WebSockets suffer from the same problem? (A Google search did not make clear what the current state is here.) Can I use TCP sockets already? I would rather not switch to GET requests either, because the action is not idempotent and it might have practical implications with preloading and caching.
I can change the application on the printerserver in any way (so I could replace it with NodeJS or something) but I cannot change the users' browsers (to trust a self-signed certificate for printerserver for example).
You could store the print requests on the webserver in a queue and make the printserver periodically poll for requests to print; a sketch of the polling side follows below.
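A minimal sketch of that polling loop, in Python, assuming a hypothetical /print-queue endpoint on the web application that returns a JSON list of pending jobs:

import time
import requests

QUEUE_URL = "https://myapp.example.com/print-queue"  # hypothetical endpoint

def send_to_printer(data: str) -> None:
    # Stand-in for whatever print.php currently does with the data.
    print("printing:", data)

while True:
    jobs = requests.get(QUEUE_URL, timeout=10).json()  # assumed: JSON list of jobs
    for job in jobs:
        send_to_printer(job["data"])
    time.sleep(5)  # poll every few seconds

Since the printserver initiates the (outbound, HTTPS) connection, the browser never has to talk to the printserver at all.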
If that isn't possible I would set up a tunnel or VPN between the webserver and printserver networks. That way you can make the print request from the webserver on the server side instead of from the client. If you use curl, there are flags to ignore invalid SSL certificates etc. (I still suspect it's nicer to introduce a queue anyway, so the print requests aren't blocking).
If the webserver can make an SSH connection to something on the network the printserver is on, you could do something like: ssh <params> user@host <some curl command here>.
A third option I can think of: if the printserver can bind to, for example, a subdomain of the webserver domain, like print.somedomain.com, you may be able to make it trusted under the somedomain.com certificate. IIRC you have to create a CSR (Certificate Signing Request) on the printserver and have it signed against the somedomain.com certificate. Perhaps it doesn't even need to be a subdomain for this per se, but maybe that's a requirement for the browser to accept it client-side.
The easiest way is to add a route to the webapp that does nothing more than relay the request to the print server. So make your AJAX POST request to https://myapp.com/print, and the server-side code powering that route makes a request to http://printerserver/print.php with the exact same POST content it received itself. As @dnozay said, this is commonly called a reverse proxy. Yes, to do that you'll have to reconfigure your printserver to accept (authenticated) requests from the webserver.
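For illustration, a minimal relay route in Python with Flask; the /print route and printerserver URL mirror the ones above, and any other server-side stack (Node, PHP, ...) works the same way:

import requests
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/print", methods=["POST"])
def relay_print():
    # Forward the POST body unchanged to the internal print server over
    # plain HTTP; the browser only ever talks HTTPS to this app.
    upstream = requests.post(
        "http://printerserver/print.php",
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)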
Alternatively, you could switch the printserver to HTTPS and call it directly from the client.
Note that an insecure (http) WebSocket connection from a secure (https) page probably won't work either. And for good reason: generally it's a bad idea to mislead people by making insecure connections from what appears to them to be a secure page.
The server hosting the HTTPS webapp can reverse proxy the print server, but since the printer is local to the user, this may not work.
The print server should have the correct CORS headers
Access-Control-Allow-Origin: *
or:
Access-Control-Allow-Origin: https://www.example.com
However, there are pitfalls with using the wildcard (for example, it cannot be combined with credentialed requests).
From what I understand from the question, printserver is not accessible from the web application so the reverse proxy solution won't work here.
You are restricted from making requests from the browser to the printserver by the cross-origin policy.
If you wish to communicate with the printserver from an HTTPS page you will need the printserver to expose print.php over HTTPS too.
You could create a DNS A record as a subdomain of your web application that resolves to the internal address of your printserver.
With those steps in place you should be able to update your printserver page to respond with permissive CORS headers, which the browser should then respect. I don't think the browser will even issue CORS requests across different protocol schemes (HTTPS vs HTTP) or to internal domains without a TLD.
I've seen several examples of writing an HTTP proxy in Ruby, e.g. this gist by Torsten Becker, but how would I extend it to handle HTTPS, aka for a "man in the middle" SSL proxy?
I'm looking for a simple source code framework which I can extend for my own logging and testing needs.
update
I already use Charles, a nifty HTTPS proxy app similar to Fiddler; it is essentially what I want, except that it's packaged up as an app. I want to write my own because I have specific needs for filtering and presentation.
update II
Having poked around, I understand the terminology a little better. I'm NOT after a full "man in the middle" SSL proxy. Instead, it will run locally on my machine, so I can honor whatever SSL cert it offers. However, I need to see the decrypted contents of my request packets and the decrypted contents of the responses.
Just for background information, a normal HTTP proxy handles HTTPS requests via the CONNECT method: it reads the host name and port, establishes a TCP connection to this target server on this port, returns 200 OK and then merely tunnels that TCP connection to the initial client (the fact that SSL/TLS is exchanged on top of that TCP connection is barely relevant).
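On the wire that exchange looks roughly like this (host and port are illustrative); everything after the 200 response is the opaque TLS byte stream:

CONNECT example.com:443 HTTP/1.1
Host: example.com:443

HTTP/1.1 200 Connection established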
That tunneling is what the do_CONNECT method of WEBrick::HTTPProxyServer implements.
If you want a MITM proxy, i.e. if you want to be able to look inside the SSL/TLS traffic, you can certainly use WEBrick::HTTPProxyServer, but you'll need to change do_CONNECT completely:
Firstly, your proxy server will need to embed a mini CA, capable of generating certificates on the fly (failing that, you might be able to use self-signed certificates, if you're willing to bypass warning messages in the browser). You would then import that CA certificate into the browser.
When you get the CONNECT request, you'll need to generate a certificate valid for that host name (preferably with a Subject Alternative Name for that host name, or with it in the Subject DN's Common Name), and upgrade the socket into an SSL/TLS server socket (using that certificate). If the browser agrees to trust that certificate, what you get from there on on this SSL/TLS socket is the plain-text traffic.
You would then have to handle the requests (read the request line, headers and entity) and replay them via a normal HTTPS client library. You might be able to send that traffic to a second instance of WEBrick::HTTPProxyServer, but it would have to be tweaked to make outgoing HTTPS requests instead of plain HTTP requests. A sketch of the certificate-generation step follows below.
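For that certificate-generation step, here is an illustrative sketch in Python with the cryptography package (the question is about Ruby, where the OpenSSL standard library offers the same primitives); ca_cert and ca_key stand for the mini CA you imported into the browser:

from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_leaf_cert(hostname, ca_cert, ca_key):
    # Fresh key pair for the forged leaf certificate.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    builder = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow() - timedelta(days=1))
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        # Browsers match against the Subject Alternative Name.
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
    )
    return builder.sign(ca_key, hashes.SHA256()), key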
WEBrick can proxy SSL:

require 'webrick'
require 'webrick/httpproxy'

# Handles CONNECT and tunnels the HTTPS traffic; it does not decrypt it.
WEBrick::HTTPProxyServer.new(:Port => 8080).start
From my experience, HTTPS is nowhere near "simple". Do you need a proxy that catches traffic from your own machine? There are several applications for that, like Fiddler (or google for alternatives); they come with everything you need to debug web traffic.
That blog is no way to write a proxy. It's very easy: you accept a connection, read one line which tells you what to connect to, and attempt the upstream connection; if it fails, send the appropriate response and close the socket, otherwise just start copying bytes in both directions, simultaneously, until EOS has occurred in both directions. The only difference HTTPS makes is that you have to speak SSL instead of plaintext.
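A bare-bones sketch of that tunnel loop, in Python rather than Ruby for brevity; the port is arbitrary and error handling is deliberately minimal:

import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until EOF, then half-close the other side.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

listener = socket.create_server(("127.0.0.1", 8080))
while True:
    client, _ = listener.accept()
    # First line of a CONNECT request: "CONNECT host:port HTTP/1.1"
    first_line = client.recv(4096).split(b"\r\n")[0].decode()
    host, port = first_line.split()[1].rsplit(":", 1)
    try:
        upstream = socket.create_connection((host, int(port)))
    except OSError:
        client.sendall(b"HTTP/1.1 502 Bad Gateway\r\n\r\n")
        client.close()
        continue
    client.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")
    # Shuttle bytes both ways until both directions hit EOF.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()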
What additional changes are required to make this simple HTTP request speak to an HTTPS-enabled server?
GET /index.php HTTP/1.1
Host: localhost
[CR]
[CR]
EDIT
To add some context: all I'm trying to do is open a TCP connection to port 443 and read the index page, but the server returns a 400 Bad Request along with the message "You're speaking plain HTTP to an SSL-enabled server port." I thought this probably meant altering the header in some fashion.
HTTP runs on top of the secured channel, so no adjustments are needed at all at the HTTP level. You need to encrypt the whole traffic going into the socket (after it leaves the HTTP client code) and decrypt the traffic coming from the socket before it reaches the HTTP client.
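In practice that means wrapping the TCP socket in TLS before writing the request. A minimal Python sketch of the same GET request as above (for a self-signed certificate on localhost you would also have to relax verification):

import socket
import ssl

ctx = ssl.create_default_context()
# Uncomment for a self-signed certificate on localhost:
# ctx.check_hostname = False
# ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection(("localhost", 443)) as raw:
    # The TLS handshake happens here, before any HTTP is sent.
    with ctx.wrap_socket(raw, server_hostname="localhost") as tls:
        tls.sendall(b"GET /index.php HTTP/1.1\r\nHost: localhost\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))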
You encrypt the payload with key material negotiated with the server during the handshake. This happens on a server-by-server basis, so you can't just fake it once and have it work everywhere.
The payload includes the query string, cookies, form data, etc.
When you GET
https://encrypted.google.com/search?q=%s
Is the %s query encrypted? Or just the response?
If it is not, why would Google also serve its public content with encryption?
The entire request is encrypted, including the URL, and even the command (GET). The only thing an intervening party such as a proxy server can glean is the destination address and port.
Note, however, that the Client Hello packet of a TLS handshake can advertise the fully qualified domain name in plaintext via the SNI extension (thanks @hafichuk), which is used by all modern mainstream browsers, though some only on newer OSes.
EDIT: (Since this just got me a "Good Answer" badge, I guess I should answer the entire question…)
The entire response is also encrypted; proxies cannot intercept any part of it.
Google serves searches and other content over https because not all of it is public, and you might also want to hide some of the public content from a MITM. In any event, it's best to let Google answer for themselves.
The URL itself is encrypted, so the parameters in the query string do not travel in plain across the wire.
However, keep in mind that URLs including the GET data are often logged by the webserver, whereas POST data seldom is. So if you're planning to do something like /login/?username=john&password=doe, then don't; use a POST instead.
HTTPS establishes an underlying SSL connection before any HTTP data is transferred. This ensures that all URL data (with the exception of hostname, which is used to establish the connection) is carried solely within this encrypted connection and is protected from man-in-the-middle attacks in the same way that any HTTPS data is.
The above is a part of a VERY comprehensive answer from Google Answers located here:
http://answers.google.com/answers/threadview/id/758002.html#answer
The portion of the URL after the host name is sent securely.
For example,
https://somewhere.com/index.php?NAME=FIELD
The /index.php?NAME=FIELD part is encrypted. The somewhere.com is not.
Everything is encrypted, but you need to remember that your query will stay in the server's logs and will be accessible to various log analysers etc. (which is usually not the case with POST requests).
The connection gets encrypted before the request is transmitted. So yes, the request is encrypted as well, including the query string.
I just connected via HTTPS to a website and passed a bunch of GET parameters. I then used wireshark to sniff the network. Using HTTP, the URL is sent unencrypted, which means I can easily see all the GET parameters in the URL. Using HTTPS, everything is encrypted and I can't even see which packet is the GET command, let alone its contents!
Yes, it is secure. SSL encrypts everything.
Excerpt from POST request:
POST /foo HTTP/1.1
... some other headers
Excerpt from GET request:
GET /foo?a=b HTTP/1.1
... some other headers
In both cases whatever is sent on the socket is encrypted. The fact that the client sees parameters in his browser during a GET request doesn't mean that a man in the middle would see the same.
The SSL handshake takes place before any header parsing, which means:
Client creates Request
Request gets encrypted
Encrypted request gets transmitted to the Server
Server decrypts the Request
Request gets parsed
A request looks something like this (I can't remember the exact syntax, but this should be close enough):
GET /search?q=qwerty HTTP/1.1
Host: www.google.de
This is also why having different SSL certificates for several hosts on the same IP is problematic: the requested hostname is not known until decryption.
The GET request is encrypted when using HTTPS - in fact this is why secured websites need to have a unique IP address - there's no way to get the intended hostname (or virtual directory) from the request until after it's been decrypted.
There is a little confusion above:
despite what is said above, it is NOT required anymore to have a unique IP address, since the SNI field tells the server which cert to use;
everything is encrypted EXCEPT the SNI request, which happens VERY early. It tells the server which name, and therefore which certificate, to use. That is, the Server Name Indication IS sent unencrypted so that the server and the client can establish and then use a secure connection for everything else.
So, in answer to the original question: everything EXCEPT the host name (and, I guess, the port) is secured in BOTH directions.
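To see that in practice, here is a small illustrative Python snippet; the server_hostname value below becomes the cleartext SNI field of the ClientHello, while everything after the handshake, including the GET line, is encrypted:

import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("www.google.de", 443)) as raw:
    # server_hostname is sent in the clear as SNI so the server can
    # pick the right certificate; the request itself is encrypted.
    with ctx.wrap_socket(raw, server_hostname="www.google.de") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.google.de\r\n\r\n")
        print(tls.version())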