When does server-side push technology become necessary? - websocket

Right now, I'm pulling some 10 kB sensor data from a server via a single plain old HTTP request every 5 minutes. In the future, I might want to increase the frequency to make one request every 30 seconds.
When does server-side push technology become necessary?
Obviously, the precise answer depends on the server - but what's the general approach to the issue? Using push technology definitely seems advantageous. However, there would have to be some major code rewriting. Additionally, I feel like 30 seconds is still a long enough interval that the overhead (e.g. cookies in HTTP headers, ...) shouldn't cause too much surplus traffic.

Push technology is useful for any of the following situations:
You need the client to have low latency when there is new data (e.g. not waiting for the next polling interval, but finding out within seconds or even milliseconds when there is new data).
When you're interested in minimizing overhead on your server or bandwidth usage or power consumption on the client, yet the client needs to know in a fairly timely fashion when new data is available. Doing frequent polling from a mobile client chews up bandwidth and battery.
When data availability is unpredictable and regular polling would usually result in no data being available. If every poll results in data being collected and the timeliness of a moderate polling interval is sufficient for your application, then there are no big gains from switching to a push notification mechanism. When the poll usually comes back empty, that's when polling becomes very inefficient.
When you're sending data regularly and trying to minimize bandwidth. A single webSocket packet is way more efficient than an HTTP request, as the HTTP request includes headers, cookies, etc... that need not be sent with a single webSocket packet once the webSocket connection has already been established (see the sketch below).
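To make that last point concrete, here is a minimal browser-side sketch of the two approaches; the /sensor HTTP endpoint and the wss://example.com/sensor WebSocket endpoint are placeholders, not anything from the question.

// Minimal browser-side sketch; the endpoints below are assumptions.

// Polling: a full HTTP request (headers, cookies, connection setup if not
// reused) every 30 seconds, whether or not anything has changed.
setInterval(async () => {
  const res = await fetch('/sensor');                   // hypothetical endpoint
  render(await res.json());
}, 30000);

// Push: one handshake up front, then only data frames when the server
// actually has something new to report.
const ws = new WebSocket('wss://example.com/sensor');   // hypothetical endpoint
ws.onmessage = (event) => render(JSON.parse(event.data));

function render(reading) {
  console.log('latest reading:', reading);
}

With the polling version, every interval costs a full request/response whether or not anything changed; with the push version, the only recurring traffic is the frames the server actually sends.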
Some other references on the topic:
What are the pitfalls of using Websockets in place of RESTful HTTP?
Ajax vs Socket.io
websocket vs rest API for real time data?
Cordova: Sockets, PushNotifications, or repeatedly polling server?
HTML5 WebSocket: A Quantum Leap in Scalability for the Web

Related

At what point are WebSockets less efficient than Polling?

While I understand that the answer to the above question is somewhat determined by your application's architecture, I'm interested mostly in very simple scenarios.
Essentially, if my app is pinging for changes every 5 seconds, or every minute, around when does the data sent to maintain the open WebSocket connection end up being more than the amount you would waste by simple polling?
Basically, I'm interested in if there's a way of quantifying how much inefficiency you incur by using frameworks like Meteor if an application doesn't necessarily need real-time updates, but only periodic checks.
Note that my focus here is on bandwidth utilization, not necessarily database access times, since frameworks like Meteor have highly optimized methods of requesting only updates to the database.
The whole point of a websocket connection is that you don't ever have to ping the app for changes. Instead, the client just connects once and then the server can just directly send the client changes whenever they are available. The client never has to ask. The server just sends data when it's available.
For any type of server-initiated data, this is far more efficient with bandwidth than HTTP polling, besides giving you much more timely results (the results are delivered immediately rather than discovered by the client only on the next polling interval).
For pure bandwidth usage, the details would depend upon the exact circumstances. An http polling request has to set up a TCP connection and confirm that connection (even more data if it's an SSL connection), then it has to send the http request, including any relevant cookies that belong to that host and including relevant headers and the GET URL. Then, the server has to send a response. And, most of the time, all of this polling overhead will be completely wasted bandwidth because there's nothing new to report.
A webSocket starts with a simple http request, then upgrades the protocol to the webSocket protocol. The webSocket connection itself need not send any data at all until the server has something to send to the client in which case the server just sends the packet. Sending the data itself has far less overhead too. There are no cookies, no headers, etc... just the data. Even if you use some keep-alives on the webSocket, that amount of data is incredibly tiny compared to the overhead of an HTTP request.
So, exactly how much you would save in bandwidth depends upon the details of the circumstances. If it takes 50 polling requests before it finds any useful data, then every one of those http requests is entirely wasted compared to the webSocket scenario. The difference in bandwidth could be enormous.
You asked about an application that only needs periodic checks. As soon as you have a periodic check that results in no data being retrieved, that's wasted bandwidth. That's the whole idea of a webSocket. You consume no bandwidth (or close to no bandwidth) when there's no data to send.
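As a rough illustration of that point, here is a back-of-envelope comparison; every byte count below is an assumption chosen for the example, not a measurement.

// All numbers are assumptions; real values depend on cookies, headers, TLS, etc.
const pollRequestBytes  = 800;        // assumed request line + headers + cookies
const pollResponseBytes = 300;        // assumed status line + headers, empty body
const wsFrameOverhead   = 6;          // assumed per-message framing overhead
const pollsPerHour      = 3600 / 30;  // polling every 30 seconds
const updatesPerHour    = 4;          // assumed: data actually changes 4 times an hour

const pollingBytes = pollsPerHour * (pollRequestBytes + pollResponseBytes);
const pushBytes    = updatesPerHour * wsFrameOverhead;  // payloads excluded from both

console.log(pollingBytes, pushBytes); // 132000 vs 24 under these assumptions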
I believe @jfriend00 answered the question very clearly. However, I do want to add a thought.
By throwing in a worst case (and improbable) scenario for Websockets vs. HTTP, you can clearly see that a Websocket connection will always have an advantage with regard to bandwidth (and probably all-round performance).
This is the worst case scenario for Websockets vs. HTTP:
your code uses Websocket connections the exact same way it uses HTTP requests, for polling.
(which isn't something you would do, I know, but it is a worst case scenario).
Every polling event is answered positively - meaning that no HTTP requests were performed in vain.
This is the worst possible situation for Websockets, which are designed for pushing data rather than polling... even in this situation Websockets will save you both bandwidth and CPU cycles.
Seriously, even ignoring the DNS query (performed by the client, so you might not care about it) and the TCP/IP handshake (which is expensive for both the client and the server), a Websocket connection is still more performant and cost-effective.
I'll explain:
Each HTTP request includes a lot of data, such as cookies and other headers. In many cases, each HTTP request is also subject to client authentication... rarely is data given away to anybody.
This means that HTTP connections pass all this data (and possibly perform client authentication) once per request.[Stateless]
However, Websocket connections are stateful. The data is sent only once (instead of each time a request is made). Client authentication occurs only during the Websocket connection negotiation.
This means that Websocket connections pass the same data (and possibly perform client authentication) once per connection (once for all polls).
So even in this worst case scenario, where polling is always positive and Websockets are used for polling instead of pushing data, Websockets will still save your server both bandwidth and other resources (i.e. CPU time).
I think the answer to your question, simply put, is "never". Websockets are never less efficient than polling.

Google C2DM server side performance

My application sends notifications to the customers in bulk. For example, at 8am every day a back-end system generates notifications for 50K customers, and those notifications should be delivered in a reasonable time.
During performance testing I've discovered that sending a single push request to the C2DM server takes about 400 millis, which is far too long. I was told that a production quota may provide better performance, but will it reduce to 10 millis?
Besides, I need C2DM performance benchmarks before going to production because they may affect the implementation - sending the requests from multiple threads, using an asynchronous http client, etc.
Does anyone know of any C2DM server benchmarks or performance-related server implementation guidelines?
Thanks,
Artem
I use appengine, which makes the delivery of a newsletter to 1 million users a very painful task, mostly because the c2dm API doesn't support delivery to multiple users, which makes it necessary to create one http request per user.
The time for the c2dm server to respond will depend on your latency to the Google servers. In my case (appengine) it's very small.
My advice to you is to create as many threads as possible, since the threads will mostly be waiting on IO over the network. If you ever pass the quota, you can always ask permission for more traffic.
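The answer above talks about threads because it comes from a Java/App Engine context; as a rough sketch of the same idea in Node.js terms (overlapping the network round-trips instead of sending one request at a time), something like the following could work. PUSH_ENDPOINT, AUTH_TOKEN and the request body fields are placeholders, not the real C2DM API.

// Sketch only: the endpoint, token and body shape are assumptions.
const PUSH_ENDPOINT = 'https://push.example.com/send';  // hypothetical
const AUTH_TOKEN = 'replace-me';
const CONCURRENCY = 50;  // tune to your quota and observed latency

async function sendOne(registrationId) {
  await fetch(PUSH_ENDPOINT, {
    method: 'POST',
    headers: { Authorization: 'Bearer ' + AUTH_TOKEN },
    body: new URLSearchParams({ registration_id: registrationId, msg: 'hello' }),
  });
}

async function sendAll(registrationIds) {
  // Work through the list in batches of CONCURRENCY parallel requests,
  // so the waiting happens on the network, not one request at a time.
  for (let i = 0; i < registrationIds.length; i += CONCURRENCY) {
    const batch = registrationIds.slice(i, i + CONCURRENCY);
    await Promise.all(batch.map(sendOne));
  }
}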

What do you get with HTML5 web sockets that you can't have with AJAX?

Ian Hickson says:
I expect the iframe sandboxing feature will be a big boon to developers if it takes off. My own personal favorite feature is probably the Web Sockets API, which allows two-way communication with a server so that you can implement games, chatting, remote controls, and so forth.
What can you get with web sockets that you can't get with AJAX? Is it just convenience, or is it somehow more efficient? Is it that the server can send data to the client, without having to wait for a message so it can respond?
Yes, it's all about the server being able to push data to the client. Currently, simulating bi-directional communication without Flash/Silverlight/Java/ActiveX takes the form of one of two workarounds:
Traditional polling: Clients make small requests to the server frequently, checking for updates. Even if no update has occurred, the client doesn't know that and must continuously poll for updates. Though each request may be lightweight, constant polling by many clients can add up quickly.
Long polling: Clients make periodic requests for updates, like regular polling, but if there are no updates yet available then the server does not respond immediately and holds the connection open. When an update is finally available, the server pushes that down to the client, which acts on it and then repeats that process. Long polling offers push-like update resolution, but is basically a self-inflicted DDoS attack and can be very resource intensive for many types of web servers.
With WebSockets, you get all of the responsiveness advantages of long polling, with dramatically less server-side overhead.
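For comparison, a minimal sketch of the long-polling loop described above; the /updates endpoint (which is assumed to hold the request open until there is news) is a placeholder.

// Long-polling sketch; the endpoint is an assumption.
async function longPoll() {
  while (true) {
    try {
      const res = await fetch('/updates');  // server holds this open until data arrives
      handleUpdate(await res.json());       // act on the update, then immediately re-poll
    } catch (err) {
      await new Promise((resolve) => setTimeout(resolve, 1000));  // brief back-off on errors
    }
  }
}

function handleUpdate(update) {
  console.log('update:', update);
}

// The WebSocket equivalent: one connection, the server pushes when ready.
// const ws = new WebSocket('wss://example.com/updates');
// ws.onmessage = (event) => handleUpdate(JSON.parse(event.data));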
WebSockets are more efficient (and "more real-time") than AJAX calls because you keep the connection open and don't send extra protocol headers and other overhead with each request and response. Look at this article:
During making connection with WebSocket, client and server exchange data per frame which is 2 bytes each, compared to 8 kilo bytes of http header when you do continuous polling.

Web sockets make ajax/CORS obsolete?

Will web sockets when used in all web browsers make ajax obsolete?
Cause if I could use web sockets to fetch data and update data in realtime, why would I need ajax? Even if I use ajax to just fetch data once when the application started I still might want to see if this data has changed after a while.
And will web sockets be possible in cross-domains or only to the same origin?
WebSockets will not make AJAX entirely obsolete and WebSockets can do cross-domain.
AJAX
AJAX mechanisms can be used with plain web servers. At its most basic level, AJAX is just a way for a web page to make an HTTP request. WebSockets is a much lower level protocol and requires a WebSockets server (either built into the webserver, standalone, or proxied from the webserver to a standalone server).
With WebSockets, the framing and payload are determined by the application. You could send HTML/XML/JSON back and forth between client and server, but you aren't forced to. AJAX is HTTP. WebSockets has an HTTP-friendly handshake, but WebSockets is not HTTP. WebSockets is a bi-directional protocol that is closer to raw sockets (intentionally so) than it is to HTTP. The WebSockets payload data is UTF-8 encoded in the current version of the standard, but this is likely to be changed/extended in future versions.
So there will probably always be a place for AJAX-type requests, even in a world where all clients support WebSockets natively. WebSockets is trying to solve situations where AJAX is not capable or only marginally capable (because WebSockets is bi-directional and has much lower overhead). But WebSockets does not replace everything AJAX is used for.
Cross-Domain
Yes, WebSockets supports cross-domain. The initial handshake to setup the connection communicates origin policy information. The wikipedia page shows an example of a typical handshake: http://en.wikipedia.org/wiki/WebSockets
I'll try to break this down into questions:
Will web sockets when used in all web browsers make ajax obsolete?
Absolutely not. WebSockets are raw socket connections to the server. This comes with its own security concerns. AJAX calls are simply async HTTP requests that can follow the same validation procedures as the rest of the pages.
Cause if I could use web sockets to fetch data and update data in realtime, why would I need ajax?
You would use AJAX for simpler, more manageable tasks. Not everyone wants the overhead of securing a socket connection simply to allow async requests. That can be handled simply enough.
Even if I use ajax to just fetch data once when the application started I still might want to see if this data has changed after a while.
Sure, if that data is changing. You may not have the data changing or constantly refreshing. Again, this is code overhead that you have to account for.
And will web sockets be possible in cross-domains or only to the same origin?
You can have cross-domain WebSockets, but you have to code your WS server to accept them. You have access to the handshake headers (Origin, Host), which you can then use to accept or deny requests. This can, however, be spoofed by something as simple as nc. In order to truly secure the connection you will need to authenticate it by other means.
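As a sketch of that origin check on the server side, assuming the Node.js "ws" package (the answer doesn't name a particular server):

// Sketch only; the allowed origin and port are assumptions.
const { WebSocketServer } = require('ws');

const ALLOWED_ORIGINS = new Set(['https://app.example.com']);  // hypothetical

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket, request) => {
  if (!ALLOWED_ORIGINS.has(request.headers.origin)) {
    socket.close(1008, 'origin not allowed');  // 1008 = policy violation
    return;
  }
  // Non-browser clients (e.g. nc) can fake this header, so still verify a
  // token or session in the first message before trusting the connection.
  socket.on('message', (data) => { /* handle authenticated traffic here */ });
});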
Websockets have a couple of big downsides in terms of scalability that ajax avoids. Since ajax sends a request/response and closes the connection (or closes it shortly after), if someone stays on the web page it doesn't use server resources while idling. Websockets are meant to stream data back to the browser, and they tie up server resources to do so. Servers have a limit on how many simultaneous connections they can keep open at one time. Not to mention that, depending on your server-side technology, they may tie up a thread to handle the socket. So websockets have more resource-intensive requirements for both sides per connection. You could easily exhaust all of your threads servicing clients, and then no new clients could come in if lots of users are just sitting on the page. This is where nodejs, vertx, and netty can really help out, but even those have upper limits as well.
Also there is the issue of the state of the underlying socket, and of writing the code on both sides that carries on the stateful conversation, which isn't something you have to do with the ajax style because it's stateless. Websockets require you to create a low-level protocol, which is solved for you with ajax. Things like heartbeating, closing idle connections, reconnection on errors, etc. are vitally important now. These are things you didn't have to solve when using AJAX because it was stateless. State is very important to the stability of your app and, more importantly, to the health of your server. It's not trivial. Pre-HTTP we built a lot of stateful TCP protocols (FTP, telnet, SSH), and then HTTP happened. And no one did that stuff much anymore because, even with its limitations, HTTP was surprisingly easier and more robust. Websockets bring back the good and the bad of stateful protocols. You'll learn soon enough if you didn't get a dose of that last go around.
If you need streaming of realtime data, this extra overhead is warranted because polling the server to get streamed data is worse, but if all you are doing is user interaction -> request -> response -> update UI, then ajax is easier and will use fewer resources, because once the response is sent the conversation is over and no additional server resources are used. So I think it's a tradeoff and the architect has to decide which tool fits their problem. AJAX has its place, and websockets have their place.
Update
So the architecture of your server is what matters when we are talking about threads. If you are using a traditional multi-threaded server (or processes), where each socket connection gets its own thread to respond to requests, then websockets matter a lot to you. For each connection we have a socket, and eventually the OS will fall over if you have too many of these; the same goes for threads (more so for processes). Threads are heavier than sockets (in terms of resources), so we try to conserve how many threads we have running simultaneously. That means creating a thread pool, which is just a fixed number of threads shared among all sockets. But once a socket is opened, the thread is used for the entire conversation. The length of those conversations governs how quickly you can repurpose those threads for new sockets coming in. The length of your conversations governs how much you can scale. However, if you are streaming, this model doesn't work well for scaling. You have to break the thread/socket design.
HTTP's request/response model makes it very efficient at turning over threads for new sockets. If you are just going to use request/response, use HTTP; it's already built and much easier than reimplementing something like it in websockets.
Since websockets don't have to be request/response like HTTP and can stream data, if your server has a fixed number of threads in its thread pool and you have the same number of websockets tying up all of your threads with active conversations, you can't service new clients coming in! You've reached your maximum capacity. That's where protocol design is important too with websockets and threads. Your protocol might allow you to loosen the thread-per-socket-per-conversation model, so that people just sitting there don't use a thread on your server.
That's where asynchronous single-threaded servers come in. In Java we often call this NIO, for non-blocking IO. That means it's a different API for sockets where sending and receiving data doesn't block the thread performing the call.
So traditionally, with blocking sockets, when you call socket.read() or socket.write() they wait until the data is received or sent before returning control to your program. That means your program is stuck waiting for the socket data to come in or go out before you can do anything else. That's why we have threads: so we can do work concurrently (at the same time). Send this data to client X while I wait on data from client Y. Concurrency is the name of the game when we talk about servers.
In a NIO server we use a single thread to handle all clients and register callbacks to be notified when data arrives. For example
socket.read( function( data ) {
  // data is here! Now you can process it very quickly without waiting!
});
The socket.read() call will return immediately without reading any data, but the function we provided will be called when data comes in. This design radically changes how you build and architect your code, because if you get hung up waiting on something you can't accept any new clients. You have a single thread; you can't really do two things at once! You have to keep that one thread moving.
NIO, asynchronous IO, event-based programming, as this is all known, is a much more complicated system design, and I wouldn't suggest you try to write this if you are starting out. Even very senior programmers find it very hard to build robust systems. Since you are asynchronous, you can't call APIs that block. Things like reading data from the DB or sending messages to other servers have to be performed asynchronously. Even reading/writing from the file system can slow your single thread down, lowering your scalability. Once you go asynchronous, it's all asynchronous all the time if you want to keep the single thread moving. That's where it gets challenging, because eventually you'll run into an API, like DBs, that is not asynchronous, and you'll have to adopt more threads at some level. So hybrid approaches are common even in the asynchronous world.
The good news is there are other solutions already built on this lower-level API that you can use: NodeJS, Vertx, Netty, Apache Mina, Play Framework, Twisted Python, Stackless Python, etc. There might be some obscure library for C++, but honestly I wouldn't bother. Server technology doesn't require the very fastest languages because it's IO bound more than CPU bound. If you are a die-hard performance nut, use Java. It has a huge community of code to pull from and its speed is very close to (and sometimes better than) C++'s. If you just hate it, go with Node or Python.
Yes, yes it does. :D
The earlier answers lack imagination. I see no more reason to use AJAX if websockets are available to you.

Push or Pull for a near real time automation server?

We are currently developing a server whereby a client registers interest in changes to specific data elements, and when that data changes the server pushes the data back to the client. There has been vigorous debate at work about whether or not it would be better for the client to poll for this data.
What is considered to be the ideal method, in terms of performance, scalability and network load, of data transfer in a near real time environment?
Update:
Here's a Link that gives some food for thought with regards to UI updates.
There's probably no ideal method for every situation, but push is usually better and used more often. It allows you to optimize server caching and data transfers, which helps performance and scalability, and cuts network traffic a bit by avoiding client requests and empty responses. It can be an important advantage for a server to operate at its own pace and supply clients with data when it is ready.
Industry standards - such as OPC and GID - support both. The server pushes updates to subscribed clients, but a client can pull some rarely used data without bothering with a subscription.
As long as the client initiates the connection (to get past firewall and NAT problems), either way is fine.
If there are several different types of data you need to send, you might want to have the client specify which type it wants, but this is only needed once per connection. Then you can have the server continue to send updates as it has them.
It would be less network traffic to have the server send updates without the client continually asking for updates.
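A minimal sketch of that "state interest once, then just receive" pattern over a WebSocket; the endpoint and the { type, topics } message shapes are assumptions, not part of the question.

// Sketch only; endpoint and message format are assumed for illustration.
const ws = new WebSocket('wss://automation.example.com/feed');  // hypothetical

ws.onopen = () => {
  // The client specifies what it wants exactly once per connection.
  ws.send(JSON.stringify({ type: 'subscribe', topics: ['temperature', 'pressure'] }));
};

ws.onmessage = (event) => {
  // From here on the server pushes changes as they happen; the client never asks again.
  const update = JSON.parse(event.data);
  console.log(update.topic + ' changed to ' + update.value);
};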
What do you have on the client's side? Many firewalls allow outgoing requests but block incoming requests. In other words, pull may be your only option if you are crossing the Internet unless you are sending out e-mails.
