I have read many articles on real-time push notifications, and the overall takeaway is that websockets are generally the preferred technique as long as you are not concerned about 100% browser compatibility. And yet, one article states that:
Long polling - potentially when you are exchanging single call with
server, and server is doing some work in background.
This is exactly my case. The user presses a button which initiates some complex calculations on the server side, and as soon as the answer is ready, the server sends a push notification to the client. The question is, can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Or, unless we are concerned about supporting obsolete browsers, and if I am going to start the project from scratch, should websockets ALWAYS be preferred to long-polling when it comes to a push protocol?
The question is, can we say that for the case of one-time responses,
long-polling is a better choice than websockets?
Not really. Long polling is inefficient (multiple incoming requests, multiple times your server has to check on the state of the long running job), particularly if the usual time period is long enough that you're going to have to poll many times.
If a given client page is only likely to do this operation once, then you can really go either way. There are some advantages and disadvantages to each mechanism.
At a response time of 5-10 minutes you cannot assume that a single http request will stay alive that long awaiting a response, even if you make sure the server side will stay open that long. Clients or intermediate network equipment (proxies, etc.) may just not keep the initial http connection open that long. That would have been the most efficient mechanism if you could have done it, but I don't think you can count on it for a random network configuration and client configuration that you do not control.
So, that leaves you with several options, which I think you already know, but I will describe them here for completeness for others.
Option 1:
Establish a webSocket connection to the server over which you can receive the push response.
Make an http request to initiate the long-running operation. Return a response that the operation has been successfully initiated.
Receive the webSocket push response some time later.
Close the webSocket (assuming this page won't be doing this again).
Option 2:
Make an http request to initiate the long-running operation. Return a response that the operation has been successfully initiated, along with some sort of taskID that can be used for future querying.
Use http "long polling" to "wait" for the answer. Since these requests will likely "time out" before the response is received, you will have to long poll repeatedly until the response arrives (a rough sketch follows this list).
Option 3:
Establish a webSocket connection.
Send a message over the webSocket connection to initiate the operation.
Receive a response some time later that the operation is complete.
Close the webSocket connection (assuming this page won't be using it any more).
Option 4:
Same as option 3, but using socket.io instead of plain webSocket to give you heartbeat and auto-reconnect logic to make sure the webSocket connection stays alive.
If you're looking at things purely from the networking and server efficiency point of view, then options 3 or 4 are likely to be the most efficient. You only have the overhead of one TCP connection between client and server, that one connection is used for all traffic, and the traffic on that connection is quite efficient and supports actual push, so the client gets notified as soon as possible.
From an architecture point of view, I'm not a fan of option 1 because it just seems a bit convoluted when you initiate the request using one technology and then send the response via another and it requires you to create a correlation between the client that initiated an incoming http request and a connected webSocket. That can be done, but it's extra bookkeeping on the server. Option 2 is simple architecturally, but inefficient (regularly polling the server) so it's not my favorite either.
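For concreteness, here is a minimal client-side sketch of option 3 (the plain-webSocket flow). The URL, message shapes, and the showResult callback are illustrative assumptions.

```js
// Option 3: one webSocket carries both the "start" message and the eventual "done" push.
const ws = new WebSocket('wss://example.com/jobs');   // illustrative URL

ws.onopen = () => {
  // Initiate the long-running operation over the webSocket.
  ws.send(JSON.stringify({ type: 'start', params: {} }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'done') {
    showResult(msg.result);   // hypothetical UI callback
    ws.close();               // this page won't use the connection again
  }
};
```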
There is an alternative that doesn't require polling or keeping a socket connection open all the time.
It's called web push.
The Push API gives web applications the ability to receive messages pushed to them from a server, whether or not the web app is in the foreground, or even currently loaded, on a user agent. This lets developers deliver asynchronous notifications and updates to users that opt in, resulting in better engagement with timely new content.
Some requirements to be aware of (a minimal sketch follows the list):
You need to ask for notification permission.
Your site needs to register a service worker (which handles push events even when the page is not open).
Having a service worker also means the site must be served over SSL / HTTPS.
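The sketch below assumes you already have a VAPID key pair and a hypothetical /save-subscription endpoint on your server; it is only meant to show the moving parts.

```js
// Page script: register the service worker and subscribe to push.
async function subscribeToPush() {
  const registration = await navigator.serviceWorker.register('/sw.js');
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: VAPID_PUBLIC_KEY,   // your server's VAPID public key (assumption)
  });
  // Hand the subscription to the server so it can push to this browser later.
  await fetch('/save-subscription', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription),
  });
}

// sw.js: the service worker shows a notification when the server pushes.
self.addEventListener('push', (event) => {
  const data = event.data ? event.data.json() : {};
  event.waitUntil(
    self.registration.showNotification(data.title || 'Update', {
      body: data.body || 'Something new arrived.',
    })
  );
});
```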
Related
I am learning to use SignalR and so far I have had success in doing so. I can implement Hubs, I can implement business logic, I can invoke client-side functions from the server for whichever clients I want, I can invoke server-side methods from the client side; this stuff is great. The thing puzzling me is the theory.
In fact, I gathered info from this video. SignalR uses WebSockets, which provide a full-duplex channel over a single TCP connection. If WebSockets are not available, then the fallback protocol will be EventSource. If that is unavailable, then a Forever Frame will be used. If that is unavailable, then Long Polling will be used. It seems rather strange to me that a very hacky solution like a forever frame is preferred over the older convention, and I am interested in the rationale behind SignalR's decision to have forever frames as the third option and long polling as the fourth option.
I have tried to find out the answer to this question, and I found that it is rumored that long polling has up to 3x the maximum latency compared to forever frames. Is this a fact, and if so, is it a fact for all browsers or only for a subset?
foreverFrame works a bit like serverSentEvents: there is one long-running http request which the server never terminates but uses to push data to the client. longPolling works differently: there is a poll http request which is closed by the server each time there is data for the client to read (or a timeout expires). If the client wants to read more data, it needs to open a new http request, which again will be closed by the server as soon as there is any data for the client. In other words, in the case of foreverFrame the server is pushing data to the client over an already established channel, while in the case of longPolling the client is continuously creating http requests to get data from the server.
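To make the forever-frame mechanism concrete, here is a minimal Node sketch (plain http module; the payload and the parent.onPush callback name are illustrative assumptions). The page embeds the endpoint in a hidden iframe, and the server streams script chunks into a response that it never ends:

```js
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write('<html><body>' + ' '.repeat(1024));   // padding so browsers start executing chunks early
  // Stream a <script> chunk every 2 seconds; the response is never ended,
  // so the hidden iframe executes each pushed message as it arrives.
  const timer = setInterval(() => {
    res.write(`<script>parent.onPush(${JSON.stringify({ now: Date.now() })});</script>`);
  }, 2000);
  req.on('close', () => clearInterval(timer));    // stop pushing when the client disconnects
}).listen(8080);

// On a page served from the same origin:
//   <iframe src="/forever-frame" style="display:none"></iframe>
//   window.onPush = (msg) => console.log('pushed', msg);
```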
I have an API running on a server, which handles user connections and a messaging system.
Besides that, I launched a websocket server on that same server, waiting for connections and so on.
And let's say we can access all of this from an Android app.
I'm having trouble figuring out what I should do now; here are my thoughts:
1 - When a user connects to the app, the API connects to the websocket. We allow the Android app only to listen on this socket to get new messages. When the user wants to answer, the Android app sends a message to the API. The API itself writes the received message to the socket, which will then be read by the Android app used by another user.
This way, the API can store the message in the database before writing it to the socket.
2 - The API does not connect to the websocket in any way. The Android app listens and writes to the websocket when needed and, when writing to the websocket, should also send a request to the API so it can store the message in the DB.
Maybe none of the above is correct; please let me know.
EDIT
I already understand why I should use a websocket; it seems like the best way to have this "real time" system (when getting a new message, for example) instead of forcing the client to make an HTTP request every x seconds to check if there are new messages.
What I still don't understand is how it is supposed to communicate with my database. Sorry if my example is not clear, but I'll try to keep going with it:
My messaging system needs to store all messages in my API's database, to have some kind of history of the conversation.
But it seems like a websocket server must run separately from the API; I mean, it's another program, right? Because it's not for HTTP requests.
So should the API also listen to this websocket to catch new messages and store them?
You really have not described what the requirements are for your application so it's hard for us to directly advise what your app should do. You really shouldn't start out your analysis by saying that you have a webSocket and you're trying to figure out what to do with it. Instead, lay out the requirements of your app and figure out what technology will best meet those requirements.
Since your requirements are not clear, I'll talk about what a webSocket is best used for and what more traditional http requests are best used for.
Here are some characteristics of a webSocket:
It's designed to be continuously connected over some longer duration of time (much longer than the duration of one exchange between client and server).
The connection is typically made from a client to a server.
Once the connection is established, then data can be sent in either direction from client to server or from server to client at any time. This is a huge difference from a typical http request where data can only be requested by the client - with an http request the server can not initiate the sending of data to the client.
A webSocket is not a request/response architecture by default. In fact, making it work like request/response requires building a layer on top of the webSocket protocol so that you can tell which response goes with which request; http is natively request/response.
Because a webSocket is designed to be continuously connected (or at least connected for some duration of time), it works very well (and with lower overhead) for situations where there is frequent communication between the two endpoints. The connection is already established and data can just be sent without any connection establishment overhead. In addition, the overhead per message is typically smaller with a webSocket than with http.
So, here are a couple typical reasons why you might choose one over the other.
If you need to be able to send data from server to client without having the client regularly poll for new data, then a webSocket is very well designed for that and http cannot do that.
If you are frequently sending lots of small bits of data (for example, a temperature probe sending the current temperature every 10 seconds), then a webSocket will incur less network and server overhead than initiating a new http request for every new piece of data.
If you don't have either of the above situations, then you may not have any real need for a webSocket and an http request/response model may just be simpler.
If you really need request/response where a specific response is tied to a specific request, then that is built into http and is not a built-in feature of webSockets (see the sketch after this list).
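As a tiny sketch of such a layer, each outgoing message carries a correlation id, and the matching response is resolved by that id (the URL and message shapes are assumptions; the server is expected to echo the id back with its response):

```js
const ws = new WebSocket('wss://example.com/api');   // illustrative URL
const pending = new Map();                           // id -> resolve callback
let nextId = 1;

// Send a request over the webSocket and get a Promise for its response.
// (Call this only after the socket has opened.)
function wsRequest(payload) {
  return new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, resolve);
    ws.send(JSON.stringify({ id, payload }));
  });
}

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  const resolve = pending.get(msg.id);
  if (resolve) {
    pending.delete(msg.id);
    resolve(msg.payload);   // match the response to its request by id
  }
};
```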
You may also find these other posts useful:
What are the pitfalls of using Websockets in place of RESTful HTTP?
What's the difference between WebSocket and plain socket communication?
Push notification | is websocket mandatory?
How does WebSockets server architecture work?
Response to Your Edit
But it seems like a websocket server must run separately from the API; I
mean, it's another program, right? Because it's not for HTTP requests.
The same process that supports your API can also serve the webSocket connections. Thus, when you get incoming data on the webSocket, you can just write it directly to the database the same way the API would access the database. So, no, the webSocket server does not have to be a separate program or process.
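As a sketch of that idea in Node (using the ws package; saveMessage is a hypothetical helper that writes to the same database your API uses):

```js
const http = require('http');
const WebSocket = require('ws');

// One process (and one port) serves both the HTTP API and the webSocket connections.
const server = http.createServer((req, res) => {
  // ... your existing HTTP API routes go here ...
  res.end('ok');
});

const wss = new WebSocket.Server({ server });   // piggy-backs on the same http server

wss.on('connection', (socket) => {
  socket.on('message', async (data) => {
    const msg = JSON.parse(data);
    await saveMessage(msg);                     // hypothetical DB helper shared with the API
    socket.send(JSON.stringify({ ack: true }));
  });
});

server.listen(3000);
```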
So should the API also listen to this websocket to catch new messages
and store them?
No, I don't think so. Only one process can be listening to a set of incoming webSocket connections.
I was trying to understand the difference in Websocket and Comet model. As per my understanding,
In the comet model, the connection remains open until the server has something to push to the client. Once the server pushes the data to the client, the connection is closed and a new connection is established for the next request. It is not considered a good approach, as the connection may remain open for a long time (causing intensive use of server resources) or may time out.
On the other hand, websockets start with a handshake connection and once both the client and server agree to exchange data, the connection remains open.
So in both cases the connection remains open for a long time (especially with websockets). Isn't keeping the connection open a drawback of websockets? I would like to use SignalR in ASP.NET as a reference for discussing this concept.
First, let's clarify that Comet comes in two flavors: HTTP Streaming and HTTP Long Polling. You were referring to Long Polling. (See this other answer for terminology).
In all three cases (WebSocket, HTTP Streaming, and HTTP Long Polling) the underlying TCP socket is kept open. That's actually the main feature of these techniques, not a side effect. You want the socket to stay permanently open (I'm oversimplifying now) so that data can be pushed asynchronously and with low latency.
As you correctly said, this implies that the server must be able to handle a large number of open sockets without wasting resources. And that's one of the key elements in the choice of a good Comet/WebSocket server.
I've developed a web-based application in which a signed-in user sends a message to the server every 3 seconds to say he is still online. The message is then processed by the server, and a stored procedure is called in MySQL to set the user's status to online.
I've looked into similar questions in which Comet and Ajax are compared (here or here), but considering that a 3-second delay is acceptable and a maximum of 1000 users are online in the system, is using Ajax a wise choice, or should Comet be used?
For this kind of feature, comet is more appropriate:
Your clients send messages ("I'm online").
Your server broadcasts the processed message ("user X is still online").
With Ajax alone you are only sending messages to the server.
In order to get the "broadcast effect" the Ajax way, you will end up doing something similar to comet, but with less efficient use of bandwidth.
Ajax:
Client sends to the server: "I'm in".
Server processes the message.
Server sends back to the client the list of users who are in.
In this case, every client queries the database every 3 seconds for the COMPLETE "in" list.
In comet:
Client X sends to the server: "I'm in".
Server processes the message.
Server sends back to each connected client that user X is still online.
In this case, every client tells the server every 3 seconds that he is in.
The server sends back to every connected client ONLY that X is still in (a sketch of this broadcast follows).
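A minimal Node sketch of that comet broadcast: each client parks a long-poll request, and when user X reports "I'm in", every parked request is answered with only "X is still in". The paths and message shapes are illustrative assumptions.

```js
const http = require('http');

let waiting = [];                               // parked long-poll responses

http.createServer((req, res) => {
  if (req.url === '/poll') {
    waiting.push(res);                          // hold the request open until there is news
    req.on('close', () => { waiting = waiting.filter(r => r !== res); });
  } else if (req.url.startsWith('/imin/')) {
    const userId = req.url.split('/')[2];       // "I'm in" from user X
    const notice = JSON.stringify({ online: userId });
    for (const r of waiting) {
      r.writeHead(200, { 'Content-Type': 'application/json' });
      r.end(notice);                            // push ONLY "X is still in"
    }
    waiting = [];                               // answered clients immediately re-poll
    res.end('ok');
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```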
Comet is just the technique for broadcasting and pushing messages back to the clients.
Ajax is the technique for pushing client information to the server without having to refresh the whole page.
Quoting wikipedia:
http://en.wikipedia.org/wiki/Comet_%28programming%29
Comet is known by several other names, including Ajax Push, Reverse Ajax, Two-way-web, HTTP Streaming, and HTTP server push, among others.
So go comet :)
If you do not broadcast anything, then simple Ajax is the best option
In this particular case, since you do not need to send any information from the server to the client(s), I believe Ajax is the more appropriate solution. Every three seconds, the client tells the server it is connected, the database is updated, and you're done.
It could certainly be done using Comet, in which case you would basically ping each registered client to see if it is still connected. But you would still need to run a query on the database for each client that responds, plus you would still need the client to notify the server on its initial connection. So, it seems to me that Comet would be more trouble than it's worth. The only thing that might make sense is if you could ping each registered client and store the responses in memory; then, once all clients have been pinged, you could run one single query to update all of their statuses. This would give you the added bonus of knowing as soon as a client disconnects, as opposed to waiting for a timeout. Unfortunately, that is beyond the scope of my expertise with Comet so, at this point, I can't help you actually implement it.
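A minimal browser sketch of the Ajax approach described above (the /heartbeat endpoint and the userId value are assumptions):

```js
const userId = 42;   // illustrative: the signed-in user's id

// Every three seconds, tell the server this user is still online;
// the server then calls the stored procedure that updates the status in MySQL.
setInterval(() => {
  fetch('/heartbeat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ userId }),
  }).catch(() => { /* ignore transient network errors; the next tick retries */ });
}, 3000);
```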
Is this chat using "long polling" or "http streaming"?
http://go-mono.com/moonlight/chat.aspx
It's not anything that simple. It uses http://www.mibbit.com/chat, which is a full IRC client written in Javascript and Java. Blog at http://blog.mibbit.com/.
Edit: Here's your answer.
The first part I got working was the communications between browser and server. That’s done using 2 XMLHttpRequests. The first one is simply to send data from browser to server. It utilizes keep-alive, to minimise new connections.
The second XHR is the ‘receive lazy polling’ one. It connects to the server, and the server holds it open until there are messages available, or a timeout expires. This one is also keep-alive, so the next request goes down the same connection.
What you end up with is 2 connections held open to the server, with packets (json in this case), and some http headers from time to time.
To make sure the server would scale, I wrote a custom webserver in java using nio. It handles all of the connections in a single thread and as I say, scales to tens of thousands of connections.
If the client requests a new connection, it sends a request to the webserver, which then connects out, and starts proxying etc. It also runs an ident server in the case of irc connections so that an irc server can identify individual browsers. I looked at existing frameworks etc to do this sort of thing, but I valued learning how it all works, and thought that my use case may be specific enough to be able to optimise more than general frameworks can.