I realise this is a somewhat open ended question...
In the context of low-latency applications I've heard references to bypassing the TCP/IP stack.
What does this really mean, and assuming you have two processes on a network that need to exchange messages, what are the various options (and associated trade-offs) for doing so?
Typically the first steps are using a TCP offload engine (TOE) or a user-space TCP/IP stack such as OpenOnload.
Completely skipping TCP/IP usually means looking at InfiniBand and using RDMA verbs, or even implementing custom protocols on top of raw Ethernet.
Generally, anything that goes through the kernel adds latency, so user-space mechanisms are preferable. Beyond that, the TCP/IP stack is an overhead in itself; consider all of the layers and the configurations it can be arranged in: IP families, subnetting, VLANs, IPsec, etc.
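To make "raw Ethernet" concrete, here is a minimal sketch in Python (Linux only, needs root; the interface name "eth0" and the experimental EtherType are placeholders of my own, not tied to any particular product) of sending a frame over a packet socket with no IP or TCP involved at all:

```python
# Minimal raw-Ethernet sketch (Linux only, requires root).
# "eth0" and the 0x88B5 experimental EtherType are placeholders.
import socket

ETH_P_EXPERIMENTAL = 0x88B5   # EtherType reserved for local experimentation

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_EXPERIMENTAL))
s.bind(("eth0", 0))                        # attach to a specific NIC

dst_mac = bytes.fromhex("ffffffffffff")    # broadcast, just for the example
src_mac = s.getsockname()[4]               # our own MAC from the bound socket
payload = b"hello, no TCP/IP involved"

frame = dst_mac + src_mac + ETH_P_EXPERIMENTAL.to_bytes(2, "big") + payload
s.send(frame)
```

Everything TCP normally gives you (framing, sequencing, retransmission, flow control) now has to be reimplemented on top, which is exactly the trade-off of going below the stack.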
This is not a direct answer to your question, but I thought it might give you another view on the topic.
Before trying to bypass the TCP/IP stack I would suggest researching proven real-time communication middleware.
One good solution for real-time communication is the Data Distribution Service (DDS) from the OMG (Object Management Group).
DDS offers a dozen or so quality-of-service policies and has bindings for various languages.
It has LATENCY_BUDGET, TRANSPORT_PRIORITY and many other quality-of-service attributes that make data distribution easy and fast.
Check out PrismTech's implementation of the DDS standard, called OpenSplice; it works well at LAN scale.
Depends on the nature of your protocol really.
If by low-latency applications you mean electronic trading systems, then they normally use IP or UDP multicast for market data, for example via Pragmatic General Multicast (PGM). This is mostly because there is one sender and many receivers of the data, so using TCP would require sending a copy of the data to each recipient individually, requiring more bandwidth and increasing latency.
Trading connections traditionally use TCP with application-level heartbeats because the connection needs to be reliable and connection loss must be detected promptly.
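To illustrate the market-data side, here is a minimal multicast receiver sketch in Python; the group address 239.1.1.1 and port 5007 are arbitrary placeholders, not any real feed:

```python
# Minimal market-data-style multicast receiver sketch.
# Group 239.1.1.1 and port 5007 are arbitrary placeholders.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on all interfaces.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(65535)
    print(f"{len(data)} bytes from {sender}")  # decode the feed's wire format here
```

The sender simply calls sock.sendto(payload, (GROUP, PORT)) once per update and every joined receiver gets a copy, which is why one-to-many market data favours multicast over per-client TCP connections.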
Google QUIC is a new transport protocol. It uses UDP and provides a very nice set of features:
It doesn't need an initial handshake (0 round trips to establish a connection)
It has security features by design (combining the roles of TLS and TCP)
It brings the concept of streams, which is great for avoiding the head-of-line blocking problem and is perfect for HTTP/2 (https://community.akamai.com/community/web-performance/blog/2017/08/10/how-does-http2-solve-the-head-of-line-blocking-hol-issue)
The congestion control algorithm is in user space and can be replaced easily
In their SIGCOMM17 publication, they've discussed some performance benefits of QUIC, but my question is:
Do we have a real need to abandon traditional TCP-based technologies and move to QUIC? What is the killer application for QUIC? Is there anyone apart from Google who uses QUIC, or at least feels they should?
My feeling is that we had opportunities to achieve most of those promised benefits by using existing systems like TCP fast open or Multipath TCP.
QUIC is a good alternative for HTTP transport when fetching small objects and TCP's handshake overhead doesn't really pay. Additionally, it may have an advantage when TCP stumbles because of high packet loss.
TCP still pays off when transferring substantial amounts of data, as it handles packet loss, congestion control and so on by itself (which QUIC also does, but in a less well-known/accepted way).
Time will tell whether this approach catches on.
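To put that handshake overhead into numbers, here is a rough sketch that times a plain TCP connect and the TLS handshake layered on top of it (example.com is a placeholder host; results depend entirely on your RTT). Those extra round trips before the first byte of payload are what QUIC's 0-RTT/1-RTT setup tries to eliminate:

```python
# Rough timing of connection setup cost over TCP and TCP+TLS.
# example.com is a placeholder host; numbers vary with RTT.
import socket
import ssl
import time

HOST, PORT = "example.com", 443

t0 = time.perf_counter()
raw = socket.create_connection((HOST, PORT))
t_tcp = time.perf_counter() - t0          # one round trip for the TCP handshake

ctx = ssl.create_default_context()
t1 = time.perf_counter()
tls = ctx.wrap_socket(raw, server_hostname=HOST)   # TLS handshake on top of TCP
t_tls = time.perf_counter() - t1

print(f"TCP handshake ~{t_tcp*1000:.1f} ms, TLS handshake ~{t_tls*1000:.1f} ms")
tls.close()
```

For large transfers this setup cost is amortized, which is consistent with the point above that TCP still pays off for substantial amounts of data.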
When using WebSocket, we need a dedicated connection for bidirectional communication. If we also use HTTP/2, the server ends up maintaining a second connection.
In that case, using WebSocket seems to introduce unnecessary overhead, because with SSE and regular HTTP requests we can get bidirectional communication over a single HTTP/2 connection.
What do you think?
Using 2 streams in one multiplexed HTTP/2 TCP connection (one stream for server-to-client communication via Server-Sent Events (SSE), and one stream for client-to-server and normal HTTP communication) versus using 2 TCP connections (one for normal HTTP communication and one for WebSocket) is not an easy comparison to make.
Mileage will probably vary depending on the application.
Overhead? Well, the number of connections certainly doubles.
However, WebSocket can compress messages, while SSE cannot.
Flexibility? If the connections are separate, they can use different encryption. HTTP/2 typically requires very strong encryption, which may limit performance.
On the other hand, WebSocket does not require TLS.
Does clear-text WebSocket work in mobile networks? In my experience it depends: antivirus software, application firewalls and mobile operators may limit WebSocket traffic or make it less reliable, depending on the country you operate in.
API availability? WebSocket is a more widely deployed and recognized standard; for example, in Java there is an official API (javax.websocket) and another is on its way (java.net.websocket).
I think SSE is a technically inferior solution for bidirectional web communication and as a technology it did not become very popular (no standard APIs, no books, etc - in comparison with WebSocket).
I would not be surprised if it gets dropped from HTML5, and I would not miss it, despite being one of the first to implement it in Jetty.
Depending on what you are interested in, you will have to run your own benchmarks or evaluate the technologies for your particular case.
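For a quick hands-on evaluation, here is a minimal WebSocket echo sketch, assuming the third-party Python websockets package (a recent version, where the handler takes a single connection argument); the host and port are placeholders:

```python
# Minimal WebSocket echo/push sketch using the third-party "websockets"
# package (pip install websockets; recent versions, where the handler
# takes a single connection argument).
import asyncio
import websockets

async def handler(ws):
    # Bidirectional: we can receive from the client and push to it at any time.
    async for message in ws:
        await ws.send(f"echo: {message}")

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()   # run forever

asyncio.run(main())
```

The point of comparison with SSE is that this single connection can carry traffic in both directions at any time, whereas SSE only pushes server-to-client and relies on ordinary HTTP requests for the other direction.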
From the perspective of a web developer, the difference between Websockets and a REST interface is semantics. REST uses a request/response model where every message from the server is the response to a message from the client. WebSockets, on the other hand, allow both the server and the client to push messages at any time without any relation to a previous request.
Which technique to use depends on what makes more sense in the context of your application. Sure, you can use some tricks to simulate the behavior of one technology with the other, but it is usually preferable to use the one which fits your communication model better when used by the book.
Server-sent events are a rather new technology which isn't yet supported by all major browsers, so they are not yet an option for a serious web application.
It depends a lot on what kind of application you want to implement. WebSocket is more suitable if you really need bidirectional communication between server and client, but you will have to implement the whole communication protocol yourself, and it might not be well supported by all IT infrastructures (some firewalls, proxies or load balancers may not support WebSocket). So if you do not need a 100% bidirectional link, I would advise using SSE with REST requests for the additional client-to-server information.
On the other hand, SSE comes with certain caveats; for instance, in the JavaScript implementation you cannot set custom headers. The only workaround is to pass query parameters, but then you can run into the query-string size limit.
So, again, choosing between SSE and WebSockets really depends on the kind of application you need to implement.
A few months ago, I had written a blog post that may give you some information: http://streamdata.io/blog/push-sse-vs-websockets/. Although at that time we didn't consider HTTP2, this can help know what question you need to ask yourself.
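To illustrate the SSE-plus-REST approach suggested above, here is a minimal server sketch using only the Python standard library (the port and the one-second "tick" payload are placeholders):

```python
# Minimal Server-Sent Events sketch using only the standard library.
# Port 8000 and the one-second tick payload are placeholders.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        try:
            for i in range(10):                      # stream a few events, then stop
                self.wfile.write(f"data: tick {i}\n\n".encode())
                self.wfile.flush()
                time.sleep(1)
        except BrokenPipeError:
            pass                                     # client went away

HTTPServer(("localhost", 8000), SSEHandler).serve_forever()
```

A browser consumes this with new EventSource("http://localhost:8000") and sends anything client-to-server through ordinary fetch/XHR requests; note this toy server handles one client at a time, so it is only meant to show the text/event-stream wire format.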
I am building a set of programs that consist of multiple clients and a single server.
The clients are frequently pushing small packets of data to the server, which will validate the information (returning an error if the data is invalid) and process it. The information may then trigger the firing of events, which clients will be subscribed to, allowing clients to be notified instantly (or as close to instantly as possible) along with a small amount of data.
I have some ideas about how to do this, but I am trying to avoid creating a protocol of my own, mainly as I'm sure it would take forever and I would probably make a few errors. So I was wondering if there are any existing protocols that I could implement into my system that would provide such functionality.
The number of clients will initially be quite small, but it will grow over time to potentially include thousands of clients (each with their own subscriptions) and several front-end servers (each handling a subset of subscriptions) passing the information back and forth with back-end servers for improved capacity.
So, if anyone knows of any existing protocols that implement these requirements and functionality, that would be fantastic.
EDIT
I am currently looking at the XMPP protocol and the JXTA protocol suite (for reference, and to implement with another language). Both seem quite good and provide the necessary connectivity, but I have not had the opportunity to test either of them in my environment, or to check whether they are even suitable for what I am attempting.
Additionally, some of the network clients will be outside of the local network and operating over a WAN. Security is not so much of an issue, but I need to take into account the increased latency this brings, and firewall rules (both local to the connection hosting the application and ISP firewalls) that could be blocking certain ports or transport protocols. (I have read that some ISPs were blocking UDP packets, but I am not sure how widespread this is; UDP works for me at home, at the office, on mobile, at friends' houses, etc., so I have yet to experience it myself.)
I'm sorry if the following is not exactly what you're after, but I am slightly confused by your use of the word 'protocol'. I understand a protocol to be a 'communication specification' only, where the implementation is left entirely to you. If that is the case, I always find the following graphic useful: link.
If, on the other hand, you are looking for a solution which lets you easily implement the networking side of your application and save time, then check out the following network libraries, each of which implements its own custom protocol:
NetworkComms.Net
Lidgren
ZeroMQ
Disclaimer: I'm a developer for NetworkComms.Net
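Of those, ZeroMQ maps most directly onto the publish/subscribe pattern you describe. A minimal sketch, assuming the pyzmq binding (the tcp://*:5556 endpoint and the "events" topic prefix are placeholders):

```python
# Minimal publish/subscribe sketch with ZeroMQ (pip install pyzmq).
# Endpoint tcp://*:5556 and the "events" topic prefix are placeholders.
import time
import zmq

def server():
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    while True:                                      # push an event every second
        pub.send_string("events something-happened")
        time.sleep(1)

def client():
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "events")   # filter by topic prefix
    while True:
        print(sub.recv_string())

# Run server() in one process and client() in as many other processes as you like.
```

The validate-and-reply part of your workflow would typically go over a separate request/reply socket pair (REQ/REP, or ROUTER/DEALER once you add front-end servers), keeping the event fan-out on PUB/SUB.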
I'm developing an application where, to satisfy the performance requirements, tuning of low-level network stuff (such as TCP window size etc) seems to be required.
I found that my knowledge goes only a little beyond "there's TCP and there's UDP", which is far from enough for this task.
What resources might I study to get a better knowledge of which aspects of TCP influence which performance characteristics in which usage scenarios (for example, how to decrease latency while transmitting 100kb packets to 1000 clients simultaneously on a 10gbit LAN), etc.?
What tools might help me? (I already know about Wireshark, but most probably I am not using it to its full potential)
First understand what you're doing:
http://en.wikipedia.org/wiki/Transmission_Control_Protocol
http://www.tcpipguide.com/free/t_TCPWindowSizeAdjustmentandFlowControl.htm
Then understand how to look at what you're about to change:
http://www.wireshark.org/
http://wiki.wireshark.org/TCP_Analyze_Sequence_Numbers
http://www.microsoft.com/downloads/details.aspx?familyid=983b941d-06cb-4658-b7f6-3088333d062f&displaylang=en
Then understand how to change things:
http://msdn.microsoft.com/en-us/library/ms819736.aspx
http://blogs.msdn.com/b/wndp/archive/2007/07/05/receive-window-auto-tuning-on-vista.aspx
Since you're currently on a Windows platform and you mention that you're "developing an application"... you probably also want to know about I/O completion ports (IOCP) and how to maximise data flow whilst conserving server resources by using write-completion-driven flow control. I've written about these quite a bit, and the last link below points to my free high-performance C++ client/server framework, which may give you some pointers on how to use IOCP efficiently.
http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html
http://www.lenholgate.com/blog/2008/07/write-completion-flow-control.html
http://www.serverframework.com/products---the-free-framework.html
And google is your friend...
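Finally, at the socket-API level most of the knobs discussed in those links surface as options you can set from your own code. A minimal Python sketch (the buffer sizes and the address are placeholders, and the OS may clamp the values you ask for):

```python
# Sketch of the per-socket knobs that matter for latency vs. throughput.
# The buffer sizes and address below are placeholders; the OS may clamp them.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm: send small writes immediately instead of
# coalescing them (lower latency, more packets).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Ask for larger send/receive buffers; the receive buffer bounds the
# advertised TCP window, so it caps throughput on high bandwidth-delay paths.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)

sock.connect(("192.0.2.10", 9000))   # placeholder address

# Read back what the OS actually granted (Linux typically doubles the value).
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```

Measuring with Wireshark before and after each change, as suggested above, is the only way to know whether a given knob actually helps your workload.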
I understand that IPX and SPX both provide connection services similar to TCP/IP: IPX is similar to IP and SPX is similar to TCP, hence I am eager to learn more about this.
How does the performance of IPX/SPX exceed that of TCP on a LAN?
Why is IPX/SPX not used on LANs, at least, if its performance there is superior to that of TCP?
I searched the internet and landed on a few links, but they did not seem to give any clear reasons for this - http://en.wikipedia.org/wiki/IPX/SPX . Any ideas?
IPX was optimized for LANs. For one thing, IPX addresses are formed from an Ethernet MAC address and a 32-bit network ID. This design allowed "zero configuration" of IPX nodes in most cases - just plug the computer in and it's on the network. IPv6 with stateless autoconfiguration has the same property, by the way.
SPX (the analogue of TCP) was also highly optimized for LANs. For example, it had per-packet NAKs instead of TCP's per-octet ACKs, without any explicit window-management functions. That allowed file servers to be very simple - just spew file contents onto the Ethernet at top speed. If a client misses a packet, you can re-read it from disk/cache and re-send it.
In contrast, with TCP you have to buffer all the unacknowledged data and, after a lost packet, re-send all of the data in the send buffer (if you don't use the selective acknowledgment feature).
However, IPX was not suitable for WANs at all. For example, it couldn't cope with different frame sizes: two networks with different frames (say, Ethernet and Ethernet with jumbo frames) couldn't interoperate without a proxy server or some form of encapsulation.
Additionally, packet reordering on WANs is ubiquitous but it plays hell with SPX (at least with Novell's implementation) causing a lot of spurious NAKs.
And of course, IPX addresses were not hierarchical, so they were not well suited for routing. The network ID could in theory have been used for this, but even large IPX/SPX deployments were not complex enough to develop a rich routing infrastructure.
Right now, IPX is interesting only as a historical curiosity and in maintenance of a small number of VERY legacy systems.
You're missing a critical distinction between SPX/IPX and TCP/IP. TCP/IP is the basis of the Internet. SPX/IPX is not.
SPX/IPX was an interesting protocol, but is now of interest only within a given corporation.
It's often the case in the real world that something technically superior loses due to business reasons. Consider Betamax video tape format vs. VHS. Betamax was considered technically superior, yet you can't buy a Betamax recorder today except maybe on eBay. One may argue that Windows won over Macintosh, despite the fact that the MacOS user interface was much nicer, due entirely to business decisions (mainly the decision by Apple not to permit clones).
Similarly, issues far beyond the control of Xerox destroyed SPX/IPX as a viable protocol - HTTP runs over TCP/IP, not over SPX/IPX. HTTP rules the world, therefore TCP/IP rules the world.
SPX/IPX has been left as an exercise for the reader.
By the way, I've been talking about SPX/IPX as though they were a Xerox protocol - not quite. They are a Novell protocol, but based on the Xerox Network Systems protocols. Interestingly, I found nothing about this protocol on the web sites of either Xerox or Novell.
Also, see the Wikipedia article on IPX/SPX.
The disadvantage of the TCP/IP protocol stack is that it is slower than IPX/SPX. However, the TCP/IP stack is now also used in local networks, to simplify interworking between local and wide-area networks, and it is currently considered the main stack in the most common operating systems.
IPX/SPX can coexist on a LAN with TCP/IP. PCs that wish to be isolated from the web can still share files and printers by using IPX and not loading TCP. This is more secure than any firewall and second only to cutting wires.
IPX/SPX performed better than TCP/IP back in the day, on systems where you could compare the two. That is no longer true since TCP got all the developer effort from about 1993 onwards because of HTTP.
Essentially, IPX/SPX was obsoleted by TCP/IP, and so it is no longer relevant. Maintaining two sets of protocols is too much effort for network operators, so the less capable one dies out. Eventually this will happen to IPv4.