I've not been able to find a clear answer as to whether or not CometD's long polling mechanism uses a persistent connection, or disconnects and then reconnects after a message is pushed to it.
The reason this is important to me is that I am currently using a long polling push client which disconnects and reconnects after every message (or batch of messages) is sent from the server, and the reconnect time introduces random latency which I am looking to get rid of. I am assuming it does this for compatibility's sake, as it makes every "push" just look like a really long request/response, which should work on any and every browser.
So, does CometD's long polling use a persistent, long-lived http connection? If the answer is yes, is it conditional? That is, are there cases/browsers where it falls back to a "request/response/reconnect" per message sent?
CometD long polling uses HTTP/1.1, and therefore persistent connections.
When CometD is used from a browser, the browser manages the connection pool and the HTTP protocol version, and CometD does not add any Connection header to close the connection after every message: it is all left to the browser, and my experience is that the long poll always stays on the same connection.
When the CometD Java client library is used, the same applies: Jetty's HTTP client manages the connection pool, defaults to HTTP/1.1 and keeps the connections open.
The main difference from browsers is that Jetty's HTTP client allows more than the few (usually 6) connections per domain that browsers do, so it is appropriate for load testing simulations.
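For concreteness, here is a minimal sketch of the Java client setup, assuming the CometD 3.x/4.x client API on top of Jetty 9's HttpClient (class names differ in newer CometD releases, and the URL and connection limit below are placeholders). The Jetty client keeps the connection open between long polls, and its per-destination limit can be raised well beyond a browser's ~6:

```java
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.ClientTransport;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class LongPollExample {
    public static void main(String[] args) throws Exception {
        // Jetty's HttpClient manages the connection pool; connections stay open
        // across long polls because HTTP/1.1 connections are persistent by default.
        HttpClient httpClient = new HttpClient();
        // For load testing, raise the per-destination limit that browsers cap at ~6.
        httpClient.setMaxConnectionsPerDestination(64);
        httpClient.start();

        ClientTransport transport = new LongPollingTransport(null, httpClient);
        BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd", transport);

        client.handshake();
        client.waitFor(5000, BayeuxClient.State.CONNECTED);

        // ... subscribe / publish as needed ...

        client.disconnect();
        client.waitFor(5000, BayeuxClient.State.DISCONNECTED);
        httpClient.stop();
    }
}
```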
Check out the CometD performance report.
The updated CometD documentation can be found at http://docs.cometd.org.
It is wrong to say that "long polling by definition does not use a persistent connection but reconnects". HTTP/1.1 is perfectly capable of sending multiple long polls over the same connection, and CometD does exactly that.
I am not aware of cases where clients like browsers fall back to open/request/response/close behaviour when using HTTP/1.1, unless this is explicitly requested by the application adding a Connection: close header to HTTP requests or responses (CometD does not do this).
With WebSocket, CometD opens a single persistent connection, and all messages are exchanged over that connection until the application decides to close it by disconnecting the CometD client.
Related
I'm just starting to use Spring WebFlux WebClient, and I wanted to know what the default keep-alive time for the HTTP connection is. Is there a way to increase the keep-alive time? In our REST service we get a request roughly every five minutes, and the request takes a long time to process: somewhere between 500 seconds and 10 seconds. However, in a load test, if I send frequent requests the processing time is less than 250 ms.
Spring WebFlux WebClient is an HTTP client API that wraps actual HTTP libraries, so configuration like connection management, timeouts, etc. is done at the library level directly, and behavior might change depending on the chosen library.
The default library with WebClient is Reactor Netty.
Many HTTP clients (and this is the case with Reactor Netty) are maintaining HTTP connections in a connection pool to reuse them. Clients usually acquire a new connection to a remote host, use it to send/receive information and then put it back in the connection pool. This is very useful since sometimes acquiring a new connection can be costly. This seems to be really costly in your case.
HTTP clients leave those unused connections in the pool, but what about keepAlive time?
Most clients leave those connections in the pool as long as possible, and either test them before acquiring them to see if they're still valid, or listen to server events asynchronously to remove them from the pool (I believe Reactor Netty does that). So ultimately, the server is in control and decides when to close connections if they're inactive.
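If you do want to bound idle time on the client side, Reactor Netty lets you build the WebClient on a custom ConnectionProvider. A minimal sketch, assuming Reactor Netty 1.x (the pool name, sizes and timeouts are illustrative, not defaults):

```java
import java.time.Duration;

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class WebClientConfig {
    public static WebClient webClient() {
        // Custom pool: idle connections are evicted client-side after 30 seconds,
        // regardless of when the server would close them.
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool")
                .maxConnections(50)
                .maxIdleTime(Duration.ofSeconds(30))
                .maxLifeTime(Duration.ofMinutes(5))
                .build();

        HttpClient httpClient = HttpClient.create(provider);

        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .baseUrl("https://example.org") // placeholder
                .build();
    }
}
```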
Now, your problem description might suggest that connecting to that remote host is very costly, but it could also be the remote host taking a long time to respond to your requests (for example, it might be operating on an empty cache and need to calculate a lot of things).
I have read many articles on real-time push notifications. The summary is that WebSocket is generally the preferred technique as long as you are not concerned about 100% browser compatibility. And yet, one article states that:
Long polling - potentially when you are exchanging single call with server, and server is doing some work in background.
This is exactly my case. The user presses a button which initiates some complex calculations on the server side, and as soon as the answer is ready, the server sends a push notification to the client. The question is: for the case of one-time responses, can we say that long polling is a better choice than WebSockets?
Or, unless we are concerned about supporting obsolete browsers, should WebSockets ALWAYS be preferred to long polling for a push protocol if I am starting the project from scratch?
The question is: for the case of one-time responses, can we say that long polling is a better choice than WebSockets?
Not really. Long polling is inefficient (multiple incoming requests, multiple times your server has to check on the state of the long running job), particularly if the usual time period is long enough that you're going to have to poll many times.
If a given client page is only likely to do this operation once, then you can really go either way. There are some advantages and disadvantages to each mechanism.
At a response time of 5-10 minutes you cannot assume that a single HTTP request will stay alive that long awaiting a response, even if you make sure the server side will stay open that long. Clients or intermediate network equipment (proxies, etc.) may simply not keep the initial HTTP connection open that long. That would have been the most efficient mechanism if you could have counted on it, but I don't think you can for a random network configuration and client configuration that you do not control.
So, that leaves you with several options which I think you already know, but I will describe here for completeness for others.
Option 1:
Establish websocket connection to the server by which you can receive push response.
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated.
Receive websocket push response some time later.
Close webSocket (assuming this page won't be doing this again).
Option 2:
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated and probably some sort of taskID that can be used for future querying.
Using http "long polling" to "wait" for the answer. Since these requests will likely "time out" before the response is received, you will have to regularly long poll until the response is received.
Option 3:
Establish webSocket connection.
Send message over webSocket connection to initiate the operation.
Receive response some time later that the operation is complete.
Close webSocket connection (assuming this page won't be using it any more).
Option 4:
Same as option 3, but using socket.io instead of plain webSocket to give you heartbeat and auto-reconnect logic to make sure the webSocket connection stays alive.
If you're looking at things purely from a networking and server-efficiency point of view, then options 3 or 4 are likely to be the most efficient. You only have the overhead of one TCP connection between client and server, that one connection is used for all traffic, and the traffic on it is pretty efficient and supports actual push, so the client gets notified as soon as possible.
From an architecture point of view, I'm not a fan of option 1 because it seems a bit convoluted to initiate the request using one technology and then send the response via another, and it requires you to correlate the client that initiated an incoming HTTP request with a connected webSocket. That can be done, but it's extra bookkeeping on the server. Option 2 is simple architecturally, but inefficient (regularly polling the server), so it's not my favorite either.
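To make option 3 concrete, here is a rough sketch using Java's built-in java.net.http.WebSocket client (Java 11+); the endpoint URL and message format are invented for the example, and a browser client would follow the same shape in JavaScript:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CountDownLatch;

public class Option3Sketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("wss://example.org/tasks"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        // The server pushes this message when the long-running job completes.
                        System.out.println("Result: " + data);
                        done.countDown();
                        webSocket.request(1);
                        return null;
                    }
                })
                .join();

        // Initiate the long-running operation over the same connection (option 3).
        ws.sendText("{\"action\":\"startCalculation\"}", true);

        done.await();
        // This page will not use the connection again, so close it.
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}
```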
There is an alternative that doesn't require polling or keeping a socket connection open all the time.
It's called web push.
The Push API gives web applications the ability to receive messages pushed to them from a server, whether or not the web app is in the foreground, or even currently loaded, on a user agent. This lets developers deliver asynchronous notifications and updates to users that opt in, resulting in better engagement with timely new content.
Some things to be aware of:
You need to ask for notification permission.
Your site needs to register a service worker.
Having a service worker also means you need SSL / HTTPS.
How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
A persistent connection is needed to avoid rapid connection and disconnection:
APNs treats rapid connection and disconnection as a denial-of-service attack
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another Stack Exchange site, please tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where connections are persistent by default, and HTTP/2 (which is also multiplexed) continues with that model of connections being persistent by default.
Independently of the language you are using to develop your applications, any HTTP/2 compliant client will, by default, use persistent connections.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
Typically these libraries employ a connection pool that keeps the connections open, typically until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
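As an illustration only (writing a C client against nghttp2 is certainly not required), Java 11's built-in java.net.http client does this out of the box: create one HttpClient, keep it around, and send every notification through it, and it will reuse a single persistent HTTP/2 connection. The device token, JWT and topic below are placeholders you must supply:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApnsHttp2Example {
    // One shared client: it keeps the HTTP/2 connection to APNs open and
    // multiplexes all requests over it, avoiding rapid connect/disconnect.
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_2)
            .build();

    public static void send(String deviceToken, String bearerJwt, String payload) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.push.apple.com/3/device/" + deviceToken))
                .header("authorization", "bearer " + bearerJwt)
                .header("apns-topic", "com.example.myapp") // placeholder bundle id
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```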
I also ran into this problem!
If the connection has been idle for a long time (about 1 hour), then poll() catches no socket status change. It always returns 0, even though on_frame_send_callback was invoked.
Can anyone figure out the problem?
I was trying to understand the difference between the WebSocket and Comet models. As per my understanding:
In the Comet model, the connection remains open until the server has something to push to the client. Once the server pushes the data to the client, the connection is closed and a new connection is established for the next request. This is not considered a good approach, as the connection may remain open for a long time (causing intensive use of server resources) or may time out.
WebSockets, on the other hand, start with a handshake and, once both the client and server agree to exchange data, the connection remains open.
So in both cases the connection remains open for a long time (especially with WebSocket). Isn't keeping the connection open a drawback of WebSocket? I would like to use SignalR in ASP.NET as a reference to discuss this concept.
First, let's clarify that Comet comes in two flavors: HTTP Streaming and HTTP Long Polling. You were referring to Long Polling. (See this other answer for terminology).
In all three cases (WebSocket, HTTP Streaming, and HTTP Long Polling) the underlying TCP socket is kept open. That's actually the main feature of these techniques, not a side effect. You want the socket to stay permanently open (I'm oversimplifying now), so that data can be pushed asynchronously and with low latency.
As you correctly said, this implies that the server must be able to handle a large number of open sockets without wasting resources. And that's one of the key elements in the choice of a good Comet/WebSocket server.
I'm using uWSGI's WebSocket support, and so far it's looking great: the server detects when the client disconnects, and the client likewise detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uWSGI sends PING/PONG frames (i.e. control frames, not regular messages) according to the WebSocket spec, so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send pings/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pong.
For example, you can have a message "{token:'whatever'}" that the server and the client just echo back to each other with a 5 second delay. The client should have a timer with a 10 second timeout that it stops every time that message is received and starts every time the message is echoed; if the timer fires, the connection can be considered dropped.
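Here is a sketch of that keep-alive, written with Java's built-in java.net.http.WebSocket client (Java 11+) purely for illustration; the endpoint URL is made up, and a browser client would implement the same timer logic in JavaScript:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class HeartbeatClient implements WebSocket.Listener {
    private static final String HEARTBEAT = "{\"token\":\"whatever\"}";

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile ScheduledFuture<?> timeout;

    public static void main(String[] args) {
        HeartbeatClient listener = new HeartbeatClient();
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("wss://example.org/ws"), listener) // placeholder endpoint
                .join();
        listener.sendHeartbeat(ws);
    }

    private void sendHeartbeat(WebSocket ws) {
        ws.sendText(HEARTBEAT, true);
        // If the echo does not come back within 10 seconds, consider the connection dropped.
        timeout = scheduler.schedule(() -> {
            System.err.println("No echo received: connection considered dropped");
            ws.abort();
        }, 10, TimeUnit.SECONDS);
    }

    @Override
    public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
        if (HEARTBEAT.contentEquals(data)) {
            // Echo received: cancel the timeout and send the next heartbeat after 5 seconds.
            timeout.cancel(false);
            scheduler.schedule(() -> sendHeartbeat(ws), 5, TimeUnit.SECONDS);
        } else {
            // ... handle application messages here ...
        }
        ws.request(1);
        return null;
    }
}
```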
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.