How do I go about looking at an HTTPS header?
I would like to look inside the HTTPS traffic and extract the headers and the full URL path.
Any thoughts on what it takes to do this?
In general, your question doesn't make much sense, because SSL was designed to be an opaque, secure transport and to prevent peeping into it.
In reality, if you control one side of the communication, you can take some action. But what you can do depends on many details, such as whether you have access to the client or the server, and what software runs on each side.
Finally, I'd say that this question doesn't look to be programming-related unless you describe your task in more detail.
I am trying to find (or write) a caching proxy tool that accepts all traffic from a specific container on my localhost (using iptables). What I want to do with this traffic is save it and cache the response, and later, if I see that a request was already sent to a server, return the cached response to the requesting party (without sending the request to the server again, because a previous similar request was already sent).
Here's a diagram to demonstrate what I'm trying to do:
I'm not sure exactly how big the problem I'm trying to deal with here is. I want to do it for all traffic, including HTTP, TLS, and other TCP-based traffic (database connections and such). I tried mitmproxy, and it seems to deal pretty well with HTTP and the TLS part, but intercepting raw TCP traffic (for databases etc.) is not possible.
Any advice or resources I can use to accomplish this? (Not necessarily in Python.) How complex do you think this problem is? Do you think I can find a generic solution?
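The caching behaviour I'm after, independent of the interception mechanism, could be sketched like this (a minimal sketch; `upstream` stands in for whatever forwards the request to the real server, and the iptables redirection and any TLS unwrapping are out of scope here):

```python
# Minimal sketch of a response cache keyed on raw request bytes.
# Interception (iptables REDIRECT, TLS unwrapping) is assumed to have
# already happened; `upstream` is any callable that reaches the real server.

class ResponseCache:
    def __init__(self):
        self._cache = {}

    def fetch(self, request: bytes, upstream) -> bytes:
        """Return the cached response for this request, or forward it
        to upstream once and cache the result."""
        if request not in self._cache:
            self._cache[request] = upstream(request)
        return self._cache[request]
```

A second identical request is then served from the cache without touching the server. Part of why a fully generic solution is hard: for non-idempotent traffic (database sessions, POSTs), replaying cached responses like this is generally unsafe.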
Thanks in advance!
How can I detect, server side, if the client supports SPDY?
I want my website to be as fast as possible. Here's my thinking (note: my website does not need to transmit sensitive data): if a browser connects to my website with HTTPS but doesn't support SPDY, it'll be a waste, unnecessary overhead due to HTTPS, right? On the other hand, if the browser connects via HTTP and does support SPDY, that's a missed opportunity.
It looks like NPN is the technology that the client and server use to negotiate on SPDY or not. That happens in the web server, before it ever hits my application code, right? I suppose then what I'd really need is a modified version of NPN (not even sure if that's really its own thing outside of SPDY) or mod_spdy. Ideally such a version would have an option called use_spdy_if_available_otherwise_redirect_to_http. :-)
Oh, and if all this isn't complicated enough, I'm currently using Cloudflare's CDN service. I'm pretty sure I have no recourse to modify how they operate in this regard, and thus have no chance, right?
All data is sensitive: the sites you visit, the pages you've viewed, etc. By aggregating this data across many pages, you can infer a lot about the user: their intent, interests, and so on. Hence, we need HTTPS everywhere. For more, see our Google I/O talk [1] on the subject.
In terms of detecting SPDY support, yes, you want to use NPN/ALPN (ALPN is its successor [2]). The client sends a ProtocolNameList in its handshake, which advertises which protocols it supports. Most servers will use this to auto-negotiate SPDY, but if you want to control this decision yourself, you'd have to modify your server implementation to invoke some sort of callback when the secure handshake is being performed.
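For instance, Python's ssl module exposes the ALPN half of this negotiation directly. A sketch of checking what a server picks (the hostname is just a placeholder, and the protocol list is an assumption showing what the client advertises):

```python
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443):
    # Advertise the protocols we support via ALPN (NPN's successor);
    # the server chooses one of them during the TLS handshake.
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "spdy/3.1", "http/1.1"])
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Returns e.g. "h2", or None if the server did not use ALPN.
            return tls.selected_alpn_protocol()

if __name__ == "__main__":
    print(negotiated_protocol("example.com"))  # placeholder host
```

Note that the negotiation completes inside the handshake itself, before any application bytes flow, which is why your application code never sees it unless the web server passes it along (as the nginx answer below does with a variable).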
That said, given what I said earlier about HTTPS everywhere, I would advise you against this altogether. Use HTTPS everywhere and let the browser and server auto-negotiate SPDY if it's supported.
[1] https://www.youtube.com/watch?v=cBhZ6S0PFCY
[2] http://chimera.labs.oreilly.com/books/1230000000545/ch04.html#ALPN
I agree with igrigorik's advice: do not redirect users from HTTPS to HTTP. That's just not cool. Regardless, I had this detection problem today and my answer's below.
In NGINX (I'm running 1.7.7), the $spdy variable will be set if the client connects over a SPDY connection; otherwise, $spdy will not have a value. For example, I'm passing a custom URL parameter to a PHP script:
server {
    listen 443 ssl spdy;
    ...
    ...

    # add SPDY rewrite param
    if ($spdy) {
        rewrite ^/detect-spdy.js /detect-spdy.js.php?spdy=$spdy last;
    }

    # fallback to non-SPDY rewrite
    rewrite ^/detect-spdy.js /detect-spdy.js.php last;

    # add response header if needed later
    add_header x-spdy $spdy;
}
I have an HTTPS connection from the client to the server, and malware on the client. The malware modifies the message and compromises its integrity. I am using a proxy to check the integrity of the message after the malware has changed it and before sending it over the internet to the server.
Now, how can I check the integrity of the message (i.e., be sure that it has not been modified by any man-in-the-middle) for the second half of my communication channel (from the client to the server over the internet)?
I see that a few conventional approaches, such as CRC or checksums, would help, but I am looking for some non-traditional or upcoming approaches. I am new to this area and want expert advice on the direction I should search in for an answer to my question.
Any pointers would be of great help.
Thanks,
As I mentioned in your other question, if you have an https session, you can't do this.
If you could do it, it's possible your proxy could be the "man-in-the-middle", which is exactly what SSL is designed to prevent.
Also, it's not clear how the malware on the client side is changing the message: your software can always validate the message before it is sent via SSL, and after it's sent, the only thing that should be able to decode it is the server.
I strongly recommend spending some time learning about specific, well-known client-server security patterns rather than trying to invent your own (or trying to hack apart SSL). A great starting point would be just picking through some questions on http://security.stackexchange.com. (A personal favorite there is this question about how to do password security.) There are likely some questions/links you can follow through there to learn more about client-server security (and eventually understand why I'm confused about what it is you're trying to do).
If you are required to make up your own for some reason, a possible (but still hackable, with enough determination) way of doing validation is to include a checksum/hashcode based on all the values, and make sure the same checksum can be generated server side from the values. You don't need a "middle" that somehow cracks the SSL to do this, though; just do the validation on the server side.
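As a concrete sketch of that checksum idea, here it is upgraded to a keyed MAC, which is harder to forge than a plain CRC (the shared key is an assumption; it would have to be provisioned to both sides):

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # assumption: known to both client and server

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Server-side check: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(payload), tag)
```

A tampered payload fails verification, but the earlier caveat still applies: malware that fully controls the client can compute valid tags too, if it can read the key.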
If a program sends a http request, is there a way to spoof the data returned by the request?
For example:
Program that sends name to server to check for permission: http://example.com/test.php?name=Stackoverflow
Actual Response: HI
Response I want to spoof: HELLO
Also, are there good forms of authentication to protect against this (if it is possible)?
This question is pretty open-ended, so it's hard to answer it with something terribly specific. Depending on exactly what you're trying to do, a simple proxy like Fiddler (Windows-only), Burp, etc. might do the trick. You could also play tricks with hosts files, iptables (see Otto's comment), etc. It's definitely possible, but depending on exactly what you're trying to do, some methods may be more suitable than others.
As for the second part of your question (authentication to ensure this doesn't happen), this is one of the primary purposes of HTTPS.
In its popular deployment on the internet, HTTPS provides authentication of the web site and associated web server that one is communicating with, which protects against Man-in-the-middle attacks. Additionally, it provides bidirectional encryption of communications between a client and server, which protects against eavesdropping and tampering with and/or forging the contents of the communication. In practice, this provides a reasonable guarantee that one is communicating with precisely the web site that one intended to communicate with (as opposed to an impostor), as well as ensuring that the contents of communications between the user and site cannot be read or forged by any third party.
http://en.wikipedia.org/wiki/HTTP_Secure
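Concretely, that protection comes from certificate and hostname verification, which TLS clients enable by default. In Python's ssl module, for instance:

```python
import ssl

# These defaults are what defeat the spoofing scenario above: the server
# must present a certificate for the expected hostname, signed by a CA the
# client trusts, or the TLS handshake fails before any response is read.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

So tricks like hosts-file edits or iptables redirection still work against plain HTTP, but against HTTPS the impostor would also need a valid certificate for the target hostname.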
This might sound really naive but I would really find a descriptive answer helpful.
So, my question is this:
I can use Firebug to look at AJAX requests made from any website I visit. So, am I right in saying that I wouldn't be able to examine the same communication between the client and the server if the website choses to use Websockets? In other words, does this make it more secure?
No. Not at all. Just because the browser does not (yet) have a tool to show WebSocket traffic, doesn't make it any more secure. You can always run a packet sniffer to monitor the traffic, for example.
No, because there are other ways besides the browser's built-in tools to read your traffic.
Have a try: install and run Wireshark and you will be able to see all packets you send and receive via WebSockets.
It depends on the application. If you are fully Ajax, without reloading the document for data, then I would think WebSockets provide better authentication for data requests than a cookie session, with regard to connection hijacking. Given that you are using SSL, of course.
Never rely on the secrecy of an algorithm, because it only gives you a false sense of security. Wiki: Security through obscurity.
Remember that the browser is a program on my computer, and I am the one who has full control over what is sent to you, not my browser.
I guess it's only a matter of time (up to a few months, IMO) before developer tools such as Firebug provide some fancy tool for browsing data sent/received over WebSockets.
WebSockets have both an unencrypted mode (ws://) and an encrypted mode (wss://). This is analogous to HTTP and HTTPS. The WebSocket protocol's text payload is simply UTF-8 encoded. From a network-sniffing perspective there is no advantage to using WebSockets (use wss and HTTPS for everything at all sensitive). From the browser perspective there is no security benefit to using WebSockets. Anything running in the browser can be examined (and modified) by a sufficiently knowledgeable user. The tools for examining HTTP/AJAX requests just happen to be better right now.
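To underline that nothing in the plain WebSocket exchange is secret: the Sec-WebSocket-Accept handshake value is derived from the client's key with a fixed, public GUID (RFC 6455), so any observer or tool can follow along. A sketch:

```python
import base64
import hashlib

# Fixed, publicly specified GUID from RFC 6455 (not a secret).
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header value for a handshake."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Using the example client key from RFC 6455:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

The handshake exists to prove both ends speak WebSockets, not to authenticate or hide anything; as above, wss:// is what actually protects the traffic on the wire.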