How can I detect, server side, if the client supports SPDY?

I want my website to be as fast as possible. Here's my thinking (note: my website does not need to transmit sensitive data): if a browser connects to my website with HTTPS but it doesn't support SPDY, that's a waste. Unnecessary overhead due to HTTPS, right? On the other hand, if the browser connects via HTTP and does support SPDY, that's a missed opportunity.
It looks like NPN is the technology that the client and server use to negotiate whether or not to use SPDY. That happens in the web server, before it ever hits my application code, right? I suppose what I'd really need, then, is a modified version of NPN (not even sure if that's really its own thing outside of SPDY) or of mod_spdy. Ideally such a version would have an option called use_spdy_if_available_otherwise_redirect_to_http. :-)
Oh, and if all this isn't complicated enough, I'm currently using Cloudflare's CDN service. I'm pretty sure I have no recourse to modify how they operate in this regard, and thus have no chance, right?

All data is sensitive: the sites you visit, the pages you've viewed, etc. By aggregating this data across many pages, you can infer a lot about the user: their intent, interests, and so on. Hence, we need HTTPS everywhere. For more, see our Google I/O talk [1] on the subject.
In terms of detecting SPDY support, yes, you want to use NPN/ALPN (ALPN is its successor [2]). The client sends a ProtocolNameList in its handshake, which advertises the protocols it supports. Most servers will use this to auto-negotiate SPDY, but if you want to control this decision yourself, you'd have to modify your server implementation to invoke some sort of callback while the secure handshake is being performed.
That said, given what I said earlier about HTTPS everywhere, I would advise you against this altogether. Use HTTPS everywhere and let the browser and server auto-negotiate SPDY if it's supported.
[1] https://www.youtube.com/watch?v=cBhZ6S0PFCY
[2] http://chimera.labs.oreilly.com/books/1230000000545/ch04.html#ALPN
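
For illustration, here is a minimal Python sketch of what that handshake-time detection could look like on the server side, using the standard library's ssl module (the certificate paths, port, and advertised protocol list are placeholders, not a recommendation):

import socket
import ssl

# Advertise the protocols this server is willing to speak, in preference order.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # placeholder cert/key
ctx.set_alpn_protocols(["spdy/3.1", "http/1.1"])

with socket.create_server(("0.0.0.0", 8443)) as srv:
    conn, addr = srv.accept()
    # The TLS handshake (and thus the ALPN negotiation) happens here.
    with ctx.wrap_socket(conn, server_side=True) as tls:
        # Returns e.g. "spdy/3.1" if the client offered it, else the fallback.
        print("negotiated:", tls.selected_alpn_protocol())

At this point the server code knows what was negotiated and could, in principle, make the redirect decision the question asks about.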

I agree with igrigorik's advice: do not redirect users from HTTPS to HTTP. That's just not cool. Regardless, I had this detection problem today and my answer's below.
In NGINX (I'm running 1.7.7), the $spdy variable will be set if the client connects with a SPDY connection; otherwise $spdy will have no value. For example, here I'm passing a custom URL parameter to a PHP script:
server {
    listen 443 ssl spdy;
    ...
    ...
    # add SPDY rewrite param
    if ($spdy) {
        rewrite ^/detect-spdy.js /detect-spdy.js.php?spdy=$spdy last;
    }
    # fallback to non-SPDY rewrite
    rewrite ^/detect-spdy.js /detect-spdy.js.php last;
    # add response header if needed later
    add_header x-spdy $spdy;
}
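
For completeness, here is a rough stand-in for the detect-spdy.js.php endpoint, written as a small Python Flask app rather than PHP (hypothetical; the original script is not shown in the answer). It turns the spdy parameter that the rewrite appends into a value page scripts can read:

from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/detect-spdy.js.php")
def detect_spdy():
    # nginx appends ?spdy=<version> only when the client negotiated SPDY.
    spdy = request.args.get("spdy")
    body = "window.SPDY = %s;" % ("true" if spdy else "false")
    return Response(body, mimetype="application/javascript")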

Related

Caching proxy for all traffic

I am trying to find (or write) a caching proxy tool that accepts all traffic from a specific container on my localhost (using iptables). What I want to do with this traffic is save it and cache the response; later, if I see that a request was already sent to a server, I return the cached response to the requesting party (without sending the request to the server again, because a previous identical request was already sent).
I'm not sure exactly how big the problem I'm trying to deal with here is. I want to do this for all traffic, including HTTP, TLS, and other TCP-based traffic (database connections and such). I tried mitmproxy, and it seems to deal pretty well with HTTP and the TLS part, but intercepting raw TCP traffic (for databases, etc.) is not possible.
Any advice or resources I can use to accomplish this (not necessarily in Python)? How complex do you think this problem is? Do you think I can find a generic solution?
Thanks in advance!
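
For what it's worth, the core request/response caching idea can be sketched in a few lines of Python asyncio. This toy version handles a single request/response turn over plain TCP and ignores exactly the hard parts the question raises (TLS interception and protocol-aware framing); the backend address and ports are placeholders:

import asyncio
import hashlib

BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080  # placeholder backend
CACHE = {}  # hash of request bytes -> response bytes

async def handle(client_reader, client_writer):
    # Naive framing: read one chunk and treat it as "the request".
    # Real protocols (HTTP, database wire formats) need proper parsing.
    request = await client_reader.read(65536)
    key = hashlib.sha256(request).hexdigest()
    if key not in CACHE:
        reader, writer = await asyncio.open_connection(BACKEND_HOST, BACKEND_PORT)
        writer.write(request)
        await writer.drain()
        CACHE[key] = await reader.read(65536)
        writer.close()
        await writer.wait_closed()
    client_writer.write(CACHE[key])  # cached responses never touch the backend
    await client_writer.drain()
    client_writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9999)
    async with server:
        await server.serve_forever()

asyncio.run(main())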

Why do web browsers not support h2c (HTTP/2 without TLS)?

I have really searched the web, and I cannot find the reason why web browsers do not support h2c (HTTP/2 with no TLS). Any ideas appreciated.
A bit of clarification:
HTTP/2 over HTTPS uses ALPN (this is called h2).
HTTP/2 over plain HTTP does not need ALPN (this is called h2c), but almost no web browser supports it. Why is that?
I feel that for many resources there is no need for confidentiality, though authenticity is always good (digital signatures over the HTTP body are not widely supported, though there are some private implementations). Given that confidentiality is not needed, h2c would be a really good thing to have.
Technically
There are several technical reasons why HTTP/2 is much better and easier to handle over HTTPS:
Doing HTTP/2 negotiation in TLS with ALPN is much easier and doesn't lose round-trips the way Upgrade: in plain HTTP does (see the example exchange after this list). It also doesn't suffer from the upgrade problem on POST that you get with plain-text HTTP/2.
N% of the web doesn't support unsolicited Upgrade: h2c headers in requests and instead responds with 400 errors.
Doing something other than HTTP/1.1 over TCP port 80 breaks in Y% of cases, since the world is full of middle-boxes that "help" out and replace/add things in-stream on such connections. If that traffic then isn't HTTP/1.1, things break (this is also why brotli, for example, requires HTTPS).
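For reference, this is what the round-trip-costing h2c upgrade that ALPN avoids looks like on the wire (the exchange is taken from RFC 7540, section 3.2). The client has to send a full HTTP/1.1 request and wait for the response before it knows whether the server will speak HTTP/2:

GET / HTTP/1.1
Host: server.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

[ HTTP/2 connection begins ]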
Ideologically
There's a push for more HTTPS on the web, shared by and worked on in part by some of the larger web-browser developer teams. HTTPS-only features are therefore considered a bonus, since they serve as yet another motivation for sites and services to move over to HTTPS. Thus, some teams never tried very hard (if at all) to make HTTP/2 work without TLS.
Practically
At least one browser vendor expressed its intention early on to implement and ship HTTP/2 for users over plain-text HTTP (h2c). They ended up never doing so, because of the technical obstacles mentioned above.

HTTPS header and full URL path

How do I go about looking at HTTPS headers?
We would like to look inside the HTTPS traffic and extract the headers and the full URL path.
Any thoughts on what it takes to do this?
In general your question doesn't make much sense, because SSL was designed to be an opaque, secure transport and to prevent peeping into it.
In reality, if you control one of the sides of the communication, you can take some actions. But those actions depend on many details, such as whether you have access to the client or the server, and what software is used on each side.
Finally, I'd say that this question doesn't look to be programming-related unless you describe your task in more detail.
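
For instance, if you control the client side (you can install software and a trusted CA certificate on it), a small mitmproxy addon script can log exactly the headers and full URL path asked about. Run it with mitmdump -s log_headers.py and point the client at the proxy (the script name is a placeholder):

from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Called for every intercepted request once the client trusts mitmproxy's CA.
    print(flow.request.pretty_url)  # full URL, including path and query string
    for name, value in flow.request.headers.items():
        print("  %s: %s" % (name, value))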

Are Websockets more secure for communication between web pages?

This might sound really naive but I would really find a descriptive answer helpful.
So, my question is this:
I can use Firebug to look at AJAX requests made from any website I visit. So, am I right in saying that I wouldn't be able to examine the same communication between the client and the server if the website chooses to use WebSockets? In other words, does this make it more secure?
No. Not at all. Just because the browser does not (yet) have a tool to show WebSocket traffic, doesn't make it any more secure. You can always run a packet sniffer to monitor the traffic, for example.
No, because there will be other ways besides the browser's built-in tools to read your traffic.
Have a try: install and run Wireshark and you will be able to see all the packets you send and receive via WebSockets.
It depends on the application. If you are fully AJAX, without reloading the document for data, then I would think WebSockets would provide better authentication for data requests than a cookie session, with regard to connection hijacking. Given that you are using SSL, of course.
Never rely on the secrecy of an algorithm; it only gives you a false sense of security. See Wikipedia: Security through obscurity.
Remember that the browser is a program on my computer, and I am the one who has full control over what is sent to you, not my browser.
I guess it's only a matter of time (up to a few months, IMO) until developer tools such as Firebug provide some fancy tool for browsing the data sent/received over WebSockets.
WebSockets have both an unencrypted mode (ws://) and an encrypted mode (wss://). This is analogous to HTTP and HTTPS. WebSocket text frames are simply UTF-8 encoded. From a network-sniffing perspective there is no advantage to using WebSockets (use wss and HTTPS for anything at all sensitive). From the browser's perspective there is no security benefit to using WebSockets either: anything running in the browser can be examined (and modified) by a sufficiently knowledgeable user. The tools for examining HTTP/AJAX requests just happen to be better right now.
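
To make the ws:// vs. wss:// distinction concrete, here is a minimal sketch using the third-party Python websockets library (the hostnames are placeholders):

import asyncio
import ssl
import websockets  # pip install websockets

async def main():
    # Plain-text WebSocket: anyone on the network path can read the frames
    # with a sniffer such as Wireshark.
    async with websockets.connect("ws://echo.example.com/") as ws:
        await ws.send("hello")
    # TLS-wrapped WebSocket: same protocol, but inside an encrypted channel.
    ctx = ssl.create_default_context()
    async with websockets.connect("wss://echo.example.com/", ssl=ctx) as ws:
        await ws.send("hello")

asyncio.run(main())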

Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app-server logic is done in 5 ms. I am looking for an HTTP server, balancer, or proxy server that supports the following:
1. A request arrives at the proxy. The proxy starts buffering the request, in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
2. The proxy server stops buffering the request when:
   - a size limit has been reached (say, 4 KB), or
   - the request has been received completely, headers and body.
3. Only now, with (part of) the request in memory, is a connection opened to the backend and the request relayed.
4. The backend sends back the response. Again, the proxy server starts buffering it immediately (up to a more generous size, say 64 KB).
5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
6. The proxy sends the response back to the mobile client, as fast or as slow as the client is capable of receiving it, without a connection to the backend tying up resources.
I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?
(Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
What about using both nginx and Squid (client — Squid — nginx — backend)? When returning data from the backend, Squid converts it from Transfer-Encoding: chunked to a regular stream with Content-Length set, so maybe it can normalize POSTs as well.
Nginx can do everything you want. The configuration parameters you are looking for are
http://wiki.codemongers.com/NginxHttpCoreModule#client_body_buffer_size
and
http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffer_size
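
Put together, a minimal sketch of such a configuration might look like this (the buffer sizes and backend address are illustrative placeholders, not recommendations):

http {
    upstream backend {
        server 127.0.0.1:8080;  # placeholder app server
    }
    server {
        listen 80;
        location / {
            # steps 1-3: buffer the incoming request before relaying it
            client_body_buffer_size 4k;
            # steps 4-6: buffer the backend response so the upstream
            # connection can be released quickly (8 x 8k = 64k total)
            proxy_buffering   on;
            proxy_buffer_size 8k;
            proxy_buffers     8 8k;
            proxy_pass http://backend;
        }
    }
}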
Fiddler, a free tool from Telerik, does at least some of the things you're looking for.
Specifically, go to Rules | Custom Rules... and you can add arbitrary Javascript code at all points during the connection. You could simulate some of the things you need with sleep() calls.
I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing?
Squid 2.7 can support 1-3 with a patch:
http://www.squid-cache.org/Versions/v2/HEAD/changesets/12402.patch
I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want this), so you need to run it on a box that's appropriately provisioned for your workload.
Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need to support them? Usually clients should retry the request when they get a 411.
Unfortunately, I'm not aware of a ready-made solution for this. In the worst case scenario, consider developing it yourself, say, using Java NIO -- it shouldn't take more than a week.
