Does Google QUIC have substantially better performance than TCP?

Google QUIC is a new transport protocol. It runs over UDP and provides a very nice set of features:
It doesn't need an initial handshake (0-RTT connection establishment)
It has security by design (it combines the roles of TLS and TCP in a single protocol)
It brings the concept of streams, which is great for avoiding the head-of-line blocking problem and a perfect fit for HTTP/2 (https://community.akamai.com/community/web-performance/blog/2017/08/10/how-does-http2-solve-the-head-of-line-blocking-hol-issue)
Its congestion control algorithm lives in user space and can be replaced easily
In their SIGCOMM 2017 publication, they discussed some performance benefits of QUIC, but my question is:
Do we have a real need to abandon traditional TCP-based technologies and move to QUIC? What is the killer application for QUIC? Is there anyone apart from Google who uses QUIC, or at least feels they should?
My feeling is that we had opportunities to achieve most of those promised benefits by using existing mechanisms like TCP Fast Open or Multipath TCP.
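For reference, TCP Fast Open is already usable from user space on Linux; a minimal server-side sketch (assuming a Linux kernel with TFO support; the function name is mine):

```python
import socket

def make_tfo_listener(port: int, qlen: int = 5) -> socket.socket:
    """Create a listening TCP socket with TCP Fast Open enabled (Linux only)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # TCP_FASTOPEN takes the maximum queue length of pending TFO requests.
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, qlen)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    return srv

# A client can then send data in the SYN with:
#   sock.sendto(payload, socket.MSG_FASTOPEN, (host, port))
```

Whether the 0-RTT path is actually taken also depends on the net.ipv4.tcp_fastopen sysctl and on middleboxes along the path.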

QUIC is a good alternative for HTTP transport when fetching small objects, where TCP's handshake overhead doesn't really pay off. Additionally, it may have an advantage when TCP stumbles because of high packet loss.
TCP still pays off when transferring substantial amounts of data, as it handles packet loss, congestion control, etc. by itself (which QUIC also does, but in a less well-known/accepted way).
Time will tell if this approach catches on.

Microservices and low latency transport

I know of only two popular transport protocols in the microservices world: REST/HTTP and AMQP.
And I sense two problems with that:
1)
Don't you think they are pretty slow? If you disagree with that claim (granted, I have no benchmarks for AMQP, though HTTP is widely considered slow; you can find articles about that online without my help), then consider that with a scarce choice of two you can always imagine a lot of faster protocols that are not represented. Two is a very small number; in practice, it means no choice.
2)
HTTP does not look like it was intended to be a server-to-server protocol, yet it is widely used in that role.
What do you think about that, and can you suggest some alternative (one supported by frameworks; I mean something I would not need to write from scratch myself)?
It all depends on your domain scenario, its requirements and how much you can invest into the development for a lower latency, smaller bandwidth, etc.
Today there is a whole spectrum of options for server communication. HTTP(S) just happens to be the most common one and good enough for a lot of applications.
Given you have both ends of the communication under control, nothing prevents you from investing more effort and building your own binary protocol on top of a UDP socket, or going even lower in the OSI layers. For example, Google is using QUIC and has proposed it as the successor to HTTP/2, so HTTP/3 may actually become a lot more efficient.
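If you do build your own binary protocol on top of UDP, the framing can be as simple as a fixed header; a minimal sketch in Python (the header layout here is an assumption for illustration, not any standard):

```python
import struct

# Hypothetical fixed header: 4-byte message id + 4-byte payload length,
# both little-endian unsigned ints.
HEADER = struct.Struct("<II")

def encode(msg_id: int, payload: bytes) -> bytes:
    """Prefix the payload with the fixed binary header."""
    return HEADER.pack(msg_id, len(payload)) + payload

def decode(datagram: bytes):
    """Split one datagram back into (message id, payload)."""
    msg_id, length = HEADER.unpack_from(datagram)
    return msg_id, datagram[HEADER.size:HEADER.size + length]
```

The hard parts a real protocol must add on top of this are retransmission, ordering and flow control, which is exactly the work QUIC already does for you.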
Or you can try to implement existing standards that are more optimized for latency and even real-time applications. One example from the industrial domain is PROFINET.
A lot of the time, though, it is the payloads that create slow connections. JSON is a good example of a format that takes a lot of time to de-/serialize in large quantities. To improve that you can use a different transport format, for example FlatBuffers (another Google invention), which comes from the gaming domain.
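To get a feel for how much the payload format matters, here is a rough size comparison of JSON against a fixed binary layout using only the Python standard library (struct stands in for a real FlatBuffers schema, which requires generated code):

```python
import json
import struct

# 10,000 sample records of (int, float)
records = [(i, i * 0.5) for i in range(10_000)]

def to_json(rs) -> bytes:
    return json.dumps(rs).encode()

def to_binary(rs) -> bytes:
    # fixed layout: one 32-bit int + one 64-bit float per record, no padding
    return b"".join(struct.pack("<id", i, x) for i, x in rs)
```

On this layout each record is a fixed 12 bytes, while the JSON encoding is larger and also slower to parse; FlatBuffers additionally lets you read fields without a deserialization pass at all.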
In general, if you do some research about how networking is done in gaming, you will find a lot of interesting technologies.
First, please separate architectural topics from implementation topics. One side is architecture and the other is implementation. Microservices architecture is a new paradigm within SOA. In the implementation phase you can use several protocols to implement your microservice-sized services: UDP, TCP, HTTP, etc.
HTTP is widely used in microservices because of concerns like statelessness, but this does not necessarily mean that all microservices need to use HTTP in the implementation phase. They may use HTTP or any other transport protocol, such as UDP or even CoAP.
The following articles about microservices were published on CodeProject; you can read them and comment with your questions if you like.
https://www.codeproject.com/Articles/1264113/Dive-into-Microservices-Architecture-Part-I
https://www.codeproject.com/Articles/1264113/Dive-into-Microservices-Architecture-Part-II
https://www.codeproject.com/Articles/1264113/Dive-into-Microservices-Architecture-Part-III

Should we prefer SSE + REST over websocket when using HTTP/2?

When using WebSocket, we need a dedicated connection for bidirectional communication. If we use HTTP/2, we have a second connection maintained by the server.
In that case, using WebSocket seems to introduce unnecessary overhead, because with SSE and regular HTTP requests we can get the advantage of bidirectional communication over a single HTTP/2 connection.
What do you think?
Using 2 streams in one multiplexed HTTP/2 TCP connection (one stream for server-to-client communication - Server Sent Events (SSE), and one stream for client-to-server communication and normal HTTP communication) versus using 2 TCP connections (one for normal HTTP communication and one for WebSocket) is not easy to compare.
Your mileage will probably vary depending on the application.
Overhead? Well, certainly the number of connections doubles.
However, WebSocket can compress messages, while SSE cannot.
Flexibility? If the connections are separate, they can use different encryption. HTTP/2 typically requires very strong encryption, which may limit performance.
On the other hand, WebSocket does not require TLS.
Does clear-text WebSocket work on mobile networks? In my experience, it depends. Antiviruses, application firewalls and mobile operators may limit WebSocket traffic, or make it less reliable, depending on the country you operate in.
API availability? WebSocket is a more widely deployed and recognized standard; for example, in Java there is an official API (javax.websocket) and another is coming up (java.net.websocket).
I think SSE is a technically inferior solution for bidirectional web communication and as a technology it did not become very popular (no standard APIs, no books, etc - in comparison with WebSocket).
I would not be surprised if it gets dropped from HTML5, and I would not miss it, despite being one of the first to implement it in Jetty.
Depending on what you are interested in, you have to do your benchmarks or evaluate the technology for your particular case.
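For concreteness, SSE is just a long-lived HTTP response in the text/event-stream format; a minimal event formatter might look like this (a sketch, not a complete SSE implementation):

```python
from typing import Optional

def sse_event(data: str, event: Optional[str] = None,
              event_id: Optional[str] = None) -> str:
    """Format one frame of a text/event-stream response.

    Multi-line data becomes multiple `data:` lines; a blank line ends the frame.
    """
    lines = []
    if event_id is not None:
        lines.append("id: " + event_id)       # lets the client resume via Last-Event-ID
    if event is not None:
        lines.append("event: " + event)       # named event type for addEventListener
    lines.extend("data: " + chunk for chunk in (data.splitlines() or [""]))
    return "\n".join(lines) + "\n\n"
```

The server writes such frames to a response with Content-Type: text/event-stream and never closes it; the browser's EventSource handles parsing and automatic reconnection.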
From the perspective of a web developer, the difference between Websockets and a REST interface is semantics. REST uses a request/response model where every message from the server is the response to a message from the client. WebSockets, on the other hand, allow both the server and the client to push messages at any time without any relation to a previous request.
Which technique to use depends on what makes more sense in the context of your application. Sure, you can use some tricks to simulate the behavior of one technology with the other, but it is usually preferable to use the one which fits your communication model better when used by-the-book.
Server-sent events are a rather new technology which isn't yet supported by all major browsers, so it is not yet an option for a serious web application.
It depends a lot on what kind of application you want to implement. WebSocket is more suitable if you really need bidirectional communication between server and client, but you will have to implement the whole communication protocol yourself, and it might not be well supported by all IT infrastructures (some firewalls, proxies or load balancers may not support WebSocket). So if you do not need a 100% bidirectional link, I would advise using SSE with REST requests for additional information from client to server.
But on the other hand, SSE comes with certain caveats: for instance, in the JavaScript implementation you cannot set custom headers. The only workaround is to pass query parameters, but then you can face an issue with the query-string size limit.
So, again, choosing between SSE and WebSockets really depends on the kind of application you need to implement.
A few months ago, I wrote a blog post that may give you some information: http://streamdata.io/blog/push-sse-vs-websockets/. Although at that time we didn't consider HTTP/2, it can help you figure out what questions you need to ask yourself.

Bypassing the TCP/IP stack

I realise this is a somewhat open ended question...
In the context of low-latency applications, I've heard references to bypassing the TCP/IP stack.
What does this really mean, and assuming you have two processes on a network that need to exchange messages, what are the various options (and associated trade-offs) for doing so?
Typically the first steps are using a TCP offload engine (TOE) or a user-space TCP/IP stack such as OpenOnload.
Completely skipping TCP/IP usually means looking at InfiniBand and using RDMA verbs, or even implementing custom protocols on top of raw Ethernet.
Generally you pay latency for anything that goes through the kernel, so user-space mechanisms are ideal; beyond that, the TCP/IP stack is an overhead in itself, considering all of the layers and the complexity with which they can be arranged: IP families, subnetting, VLANs, IPsec, etc.
This is not a direct answer to your question, but I thought it might give you another view on this topic.
Before trying to bypass the TCP/IP stack, I would suggest researching proven real-time communication middleware.
One good solution for real-time communication is the Data Distribution Service (DDS) from the OMG (Object Management Group).
DDS offers a dozen or so quality-of-service attributes and has bindings for various languages.
It has LATENCY_BUDGET, TRANSPORT_PRIORITY and many other quality-of-service attributes that make data distribution very easy and fast.
Check out PrismTech's implementation of the DDS standard, called OpenSplice; it works well at LAN scale.
It depends on the nature of your protocol, really.
If by low-latency applications you mean electronic trading systems, then they normally use IP or UDP multicast for market data, such as Pragmatic General Multicast (PGM), mostly because there is one sender and many receivers of the data: using TCP would require sending a copy of the data to each recipient individually, requiring more bandwidth and increasing latency.
Trading connections traditionally use TCP with application-level heartbeats, because the connection needs to be reliable and connection loss must be detected promptly.
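The application-level heartbeat idea is simple to sketch: the receiving side records when the last heartbeat arrived and declares the peer dead after a timeout (a minimal sketch; a real system also needs a sender loop and reconnect logic):

```python
import time

class HeartbeatMonitor:
    """Declare the peer dead when no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout: float):
        self.timeout = timeout
        # use a monotonic clock so wall-clock adjustments don't cause false alarms
        self.last_seen = time.monotonic()

    def beat(self) -> None:
        """Call whenever a heartbeat (or any message) arrives from the peer."""
        self.last_seen = time.monotonic()

    def alive(self) -> bool:
        return time.monotonic() - self.last_seen < self.timeout
```

This detects loss much faster than TCP's own keepalive, whose default timers are measured in minutes rather than milliseconds.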

Resources and tools for TCP tuning

I'm developing an application where, to satisfy the performance requirements, tuning of low-level network stuff (such as TCP window size etc) seems to be required.
I found that my knowledge goes only a bit beyond "there's TCP and there's UDP", which is far from enough for this task.
What resources might I study to get a better knowledge of which aspects of TCP influence which performance characteristics in which usage scenarios (for example, how to decrease latency while transmitting 100kb packets to 1000 clients simultaneously on a 10gbit LAN), etc.?
What tools might help me? (I already know about Wireshark, but most probably I am not using it to its full potential)
First understand what you're doing:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol
http://www.tcpipguide.com/free/t_TCPWindowSizeAdjustmentandFlowControl.htm
Then understand how to look at what you're about to change:
http://www.wireshark.org/
http://wiki.wireshark.org/TCP_Analyze_Sequence_Numbers
http://www.microsoft.com/downloads/details.aspx?familyid=983b941d-06cb-4658-b7f6-3088333d062f&displaylang=en
Then understand how to change things:
http://msdn.microsoft.com/en-us/library/ms819736.aspx
http://blogs.msdn.com/b/wndp/archive/2007/07/05/receive-window-auto-tuning-on-vista.aspx
Since you're currently on a Windows platform and you mention that you're "developing an application"... you probably also want to know about I/O Completion Ports and how to maximise data flow whilst conserving server resources using write-completion-driven flow control. I've written about these quite a bit, and the last link below is to my free high-performance C++ client/server framework, which may give you some pointers on how to use IOCP efficiently.
http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html
http://www.lenholgate.com/blog/2008/07/write-completion-flow-control.html
http://www.serverframework.com/products---the-free-framework.html
And Google is your friend...
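As a concrete starting point, the classic per-socket knobs look like this in Python (illustrative only; the right buffer size depends on your bandwidth-delay product, and the OS may clamp or round what you ask for):

```python
import socket

def tune(sock: socket.socket, bufsize: int) -> None:
    """Apply common latency/throughput tuning knobs to a TCP socket."""
    # Larger socket buffers help throughput on high bandwidth-delay paths;
    # the kernel may clamp these to its configured maximums.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    # Disabling Nagle's algorithm trades bandwidth for latency on small writes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

Measure before and after with Wireshark's TCP stream graphs; blindly enlarging buffers can add queuing latency rather than remove it.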

Why don't browsers let you open a regular connection instead of Ajax or Comet?

If you want to open a two-way connection between the browser and server, the only choice is to poll (hammer the server) or use Comet (crufty and prone to disconnects).
Why don't browsers just let you open up a plain TCP connection? Is there any practical benefit to not having this ability?
The underlying protocol, HTTP, is basically a half-duplex, stateless communication protocol and does not support full-duplex communication. However, with HTML5 WebSockets things are going to change. WebSocket is a new standard being considered as part of the HTML5 effort. Once the specification has been finalized and all the browser vendors have adopted it, you can possibly use WebSockets to establish a dedicated TCP connection through the browser itself.
We must also keep in mind that HTTP was basically designed to deliver documents and share information between geographically distributed teams; it was not intended to be a communication protocol as such.
Having said that, there are already companies which have built some messaging gateways to enable you to implement full duplex communication.
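For the curious, the WebSocket standard that eventually filled this gap (RFC 6455) upgrades a plain HTTP connection; the server proves it understood the upgrade by deriving Sec-WebSocket-Accept from the client's key:

```python
import base64
import hashlib

# GUID fixed by RFC 6455 for the WebSocket opening handshake
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header from the client's Sec-WebSocket-Key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```

After this handshake the same TCP connection carries framed full-duplex messages, which is exactly the "plain connection" the question asks for, wrapped in enough HTTP to pass through intermediaries.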
Given that this functionality is effectively available through Flash, there's no real security rationale - but these days no browser wants to be the first to implement a non-standard extension like that. Moreover, there's no easy way to do threads in the browser, which could make using a socket rather awkward.
Over the years many aspects or elements of the web have been hijacked in order to deliver richer experiences. Comet is but one example, where long-lived connections were exploited to allow server-side push. Originally web pages were just meant to be hyperlinked documents of text, not the rich applications we often see today. Hacks and abuses of the original intent will continue until one day these things become more standardised.
The answer to your question is essentially no, there is no tangible advantage to not being able to open a two-way connection between client and server in a browser. The reason it can't be done is simply that this was not the intention of web browsers, which were developed to poll/retrieve documents. With the advent of Rich Internet Applications, it has become desirable to have such functionality, but previously this had never been the goal of a browser. Currently there is a void to be filled by an eventual protocol or implementation of an existing protocol which will govern two-way communication between a browser and the server. There are existing techniques used to simulate this behavior to different degrees (AJAX, Comet, etc.) or it can be accomplished with embedded objects (Java, Flash, ActiveX Controls in IE) but these are simply paths around the void, not bridges over it.
We will simply have to wait (or act) for the standard to be written and the implementation to follow. More than likely, the implementation will actually come first, and we will have a fistful of new cross-browser compatibility issues to enjoy :) Oh, the bleeding edge!
Firewalls. Non-HTTP traffic is often blocked by firewalls, so opening up a random TCP port for communication will often fail.