Does an Ajax connection disconnect at some point?

My co-worker told me that an AJAX connection stays alive until the user closes his/her browser. As far as I know, an AJAX connection is closed once its request has completed. I tested it with Firebug and an HTTP monitoring tool, and I saw the AJAX connection close itself.
Is he correct?

An Ajax request is just like any other HTTP request: when it completes, the connection is closed. Your colleague is wrong.
Note: there are techniques (long polling, for example) that allow you to keep the connection open indefinitely.

Break down what AJAX is -- an XMLHttpRequest. It's a connection to a URI endpoint for some resource (an image, text, whatever). Your browser closes the HTTP connection as soon as it's done.
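To make that concrete, here is a minimal sketch (the URL is hypothetical): the request runs to completion, and after that nothing on the page holds the connection open; whether it is closed or reused for the next request is up to the browser.

var xhr = new XMLHttpRequest();
xhr.open("GET", "/some/resource");
xhr.onload = function () {
    // The request has completed; the page holds nothing open.
    console.log("Done, status " + xhr.status);
};
xhr.send();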

Ajax connections are closed after the data is received; if you close the tab, the connection is force-closed.
The Ajax life cycle is described here.

Late to the party here, but this regards an issue I'm actually dealing with right now...
I have a web server running on a severely resource constrained platform (128k Flash, 48k RAM) so it can only handle one connection at a time. Further connections are not handled until the current one is closed. Also, it does not force Connection: close on certain URLs because of a low latency requirement. In general, only one thing talks to the device at a time.
AJAX connections follow whatever rules the browser sets for other connections. In my case, I'm testing a web page that uses AJAX to read one of the keep-alive-allowed URLs once per second, and the browser does not close the connection until several seconds after the window is closed. As a result, other clients wait indefinitely.
So don't assume that XHR connections are closed when complete. They might not be; Firefox 21 sure isn't closing them.
My current problem is that I want to force my AJAX requests to close the socket on completion, and I'm using jQuery's .ajaxSend() pre-send hook to set the Connection: close header. The AJAX seems to be working, but when another client tries to connect, it gets "connection reset by peer". So I'm wondering whether Firefox doesn't notice the Connection: close header on the XHR request and keeps its end of the socket open (until it times out after approximately three seconds) even after the server has closed its side.
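For reference, the pre-send hook described above looks roughly like this (a sketch of the described setup, not a guaranteed fix; browsers may silently ignore a Connection header set on an XHR, which would be consistent with the Firefox behaviour above):

// Ask jQuery to add a Connection: close header to every AJAX request this page makes.
$(document).ajaxSend(function (event, jqXHR, settings) {
    jqXHR.setRequestHeader("Connection", "close");
});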

Using jQuery to make AJAX requests, some connections are maintained until the following AJAX call. This can become a real problem when the server holds streams open until the response's close event fires.

Related

when should a web server do accept to create a new client, or reuse the same client?

In a non-blocking, event-driven web server for a basic static website, I don't understand the mechanics I should implement for a "new client".
When a browser connects to my socket, I get the client fd from accept and answer with an HTTP response, but when the browser is reloaded, should it create a new connection and answer, or should it reuse the same connection and just send the new response?
I use poll to handle multiple fds, but when I reload the page it's the same connection (to me this makes sense). But then I open a new tab, and it's still the same connection (accept is only called once). I'm not finding any documentation on this, and I don't have a way to test with multiple clients whether it reuses the same one every time.
You can't reuse a connection from another client, new connections must always be accepted as new connections. It doesn't matter what kind of server application you're writing.
However, if the client passes the header Connection: keep-alive you should not close the connection once the response is finished, but keep the connection open for future requests from the same client.
I hope I understand correctly, but anyway, what I personally do is create a map of sockets, where each socket is a client.
Every time a socket disconnects, it is removed from that map... and so on.
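A minimal sketch of that bookkeeping (Node.js here purely for illustration; the question is about a raw poll/accept loop, but the idea is the same): every accepted connection gets an entry in a map, and the entry is dropped when the socket closes.

var net = require("net");

var clients = new Map();    // id -> socket, one entry per connected client
var nextId = 0;

var server = net.createServer(function (socket) {
    var id = nextId++;
    clients.set(id, socket);          // each accepted connection is a new client
    socket.on("close", function () {
        clients.delete(id);           // forget the client when its socket goes away
    });
});

server.listen(8080);                  // port chosen arbitrarily for the sketch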
Whether to use a new connection is the browser's choice. You don't get much of a choice.
However, you can tell the browser that you don't allow it to reuse a connection, if you send Connection: close in the response. In this case, the browser is forced to open a new connection for the next request. This is the only control you have.
If you want to test several connections at the same time, you could open several different browsers, or you could use a different program, such as some HTTP load testing tool (there are many). You could also send it a web page with many images; browsers should try to download all the images using several connections at the same time.
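A minimal sketch of forcing new connections (Node.js used for illustration; the raw bytes of the response are what matter and would be identical coming from a C poll/accept server): answer every request with Connection: close, so the browser must open a fresh connection for its next request.

var net = require("net");

var server = net.createServer(function (socket) {
    // Naive: assumes the whole request arrives in one chunk.
    socket.on("data", function () {
        var body = "hello";
        socket.write(
            "HTTP/1.1 200 OK\r\n" +
            "Content-Type: text/plain\r\n" +
            "Content-Length: " + body.length + "\r\n" +
            "Connection: close\r\n" +     // forbid reuse of this connection
            "\r\n" +
            body
        );
        socket.end();                     // close our side after the response
    });
});

server.listen(8080);                      // port chosen arbitrarily for the sketch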
A web server doesn't create clients. A web server has clients -- new clients trying to connect, and existing clients communicating on the sockets that it has already opened.
To handle new clients, a web server should pretty much be calling accept all the time, unless it's already handling the maximum number of clients that it's configured to handle.
As soon as you get a new connection from accept, hand it off to other threads to process and call accept again.

How can I make socketio close connection immediately on page refresh?

I have implemented a chat service, but it seems that if a user keeps refreshing the page fast enough, it can connect to itself. It is not exactly itself, but the id of its previous session. The issue happens because, on fast refreshes, the browser does not trigger io.disconnect. I have tried to solve it by attaching disconnect code to the onbeforeunload event, but it doesn't make much difference. I don't want to fiddle with pingTimeout and pingInterval because those might interfere with reconnection abilities. Any ideas?
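For reference, the onbeforeunload attempt described above looks roughly like this (a sketch, assuming the default socket.io client is loaded on the page); on very fast refreshes the page can still be torn down before the disconnect packet actually goes out, which matches the behaviour described:

var socket = io();    // default socket.io client connection

window.addEventListener("beforeunload", function () {
    if (socket.connected) {
        socket.disconnect();   // try to close cleanly instead of waiting for the ping timeout
    }
});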
I believe you are mistaken. Those duplicate connections will be disconnected eventually. For connections that are established with the polling protocol it takes about a minute to detect a disconnected client.

Long-polling vs websocket when expecting one-time response from server-side

I have read many articles on real-time push notifications, and the summary is that websockets are generally the preferred technique as long as you are not concerned about 100% browser compatibility. And yet, one article states that
Long polling - potentially when you are exchanging single call with
server, and server is doing some work in background.
This is exactly my case. The user presses a button which initiates some complex calculations on the server side, and as soon as the answer is ready, the server sends a push notification to the client. The question is, can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Or, unless we are concerned about supporting obsolete browsers, and given that I am starting the project from scratch, should websockets ALWAYS be preferred to long-polling when it comes to push protocols?
The question is, can we say that for the case of one-time responses,
long-polling is a better choice than websockets?
Not really. Long polling is inefficient (multiple incoming requests, multiple times your server has to check on the state of the long running job), particularly if the usual time period is long enough that you're going to have to poll many times.
If a given client page is only likely to do this operation once, then you can really go either way. There are some advantages and disadvantages to each mechanism.
At a response time of 5-10 minutes you cannot assume that a single http request will stay alive that long awaiting a response, even if you make sure the server side will stay open that long. Clients or intermediate network equipment (proxies, etc.) may just not keep the initial http connection open that long. That would have been the most efficient mechanism if you could have done that, but I don't think you can count on it for a random network configuration and client configuration that you do not control.
So, that leaves you with several options which I think you already know, but I will describe here for completeness for others.
Option 1:
Establish websocket connection to the server by which you can receive push response.
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated.
Receive websocket push response some time later.
Close webSocket (assuming this page won't be doing this again).
Option 2:
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated and probably some sort of taskID that can be used for future querying.
Using http "long polling" to "wait" for the answer. Since these requests will likely "time out" before the response is received, you will have to regularly long poll until the response is received.
Option 3:
Establish webSocket connection.
Send message over webSocket connection to initiate the operation.
Receive response some time later that the operation is complete.
Close webSocket connection (assuming this page won't be using it any more).
Option 4:
Same as option 3, but using socket.io instead of plain webSocket to give you heartbeat and auto-reconnect logic to make sure the webSocket connection stays alive.
If you're looking at things purely from the networking and server efficiency point of view, then options 3 or 4 are likely to be the most efficient. You only have the overhead of one TCP connection between client and server, that one connection is used for all traffic, and the traffic on it is pretty efficient and supports actual push, so the client gets notified as soon as possible.
From an architecture point of view, I'm not a fan of option 1 because it just seems a bit convoluted when you initiate the request using one technology and then send the response via another and it requires you to create a correlation between the client that initiated an incoming http request and a connected webSocket. That can be done, but it's extra bookkeeping on the server. Option 2 is simple architecturally, but inefficient (regularly polling the server) so it's not my favorite either.
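For concreteness, here is a minimal client-side sketch of option 3 (plain webSocket; the endpoint URL and message shapes are made up for the example): one connection carries both the request that starts the job and the push that delivers the result.

var ws = new WebSocket("wss://example.com/jobs");

ws.onopen = function () {
    // Initiate the long-running operation over the same socket.
    ws.send(JSON.stringify({ type: "start" }));
};

ws.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    if (msg.type === "done") {
        console.log("result:", msg.result);
        ws.close();   // this page won't be using the socket any more
    }
};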
There is an alternative that doesn't require polling or keeping a socket connection open all the time.
It's called web push.
The Push API gives web applications the ability to receive messages pushed to them from a server, whether or not the web app is in the foreground, or even currently loaded, on a user agent. This lets developers deliver asynchronous notifications and updates to users that opt in, resulting in better engagement with timely new content.
Some requirements:
You need to ask for notification permission.
Your site needs to have a service worker registered.
Having a service worker also means you need to have SSL / HTTPS.
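A minimal subscription sketch (the worker file name is hypothetical, and it assumes you are using VAPID application server keys; error handling is omitted):

navigator.serviceWorker.register("/sw.js");     // hypothetical service worker file

navigator.serviceWorker.ready
    .then(function (registration) {
        return Notification.requestPermission().then(function (permission) {
            if (permission !== "granted") throw new Error("notifications denied");
            return registration.pushManager.subscribe({
                userVisibleOnly: true,
                applicationServerKey: "<your VAPID public key>"   // assumption: VAPID is used
            });
        });
    })
    .then(function (subscription) {
        // Send the subscription to your server so it can push to this client later.
        console.log(JSON.stringify(subscription));
    });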

server-sent events Golang

I would like to do some one-way streaming of data and am experimenting with SSE vs Websockets.
Using SSE from a Golang server, I'm finding it confusing how to notify the client when a session is finished (e.g. the server has finished sending the events, the server suddenly goes offline, or the client loses connectivity).
One thing I need is to reliably detect these disconnect situations, without using timeouts etc.
In my experiments so far, when I take the server offline the client gets EOF, but I'm having trouble figuring out how to signal from the server to the client that a connection is closed / finished, and then how to handle / read it. Is EOF a reliable way to determine a closed / error / finished state?
Many of the SSE examples fail to show good client-side connection handling.
Would this be easier with Websockets?
Any experience or suggestions most appreciated.
Thanks
The SSE standard requires that the browser reconnect, automatically, after N seconds, if the connection is lost or if the server deliberately closes the socket. (N defaults to 5 in Firefox, 3 in Chrome and Safari, last time I checked.) So, if that is desirable, you don't need to do anything. (In WebSockets you would have to implement this kind of reconnect for yourself.)
If that kind of reconnect is not desirable, you should instead send a message back to the client, saying "the show is over, go away". E.g. if you are streaming financial data, you might send that on a Friday evening, when the markets shut. The client should then intercept this message and close the connection from its side. (The socket will then disappear, so the server-side handler for that connection will get closed automatically.)
In JavaScript, and assuming you are using JSON to send data, that would look something like:
var es = new EventSource("/datasource");
es.addEventListener("message", function(e){
    var d = JSON.parse(e.data);
    if(d.shutdownRequest){
        es.close();
        es = null;
        //Tell the user what just happened.
    }
    else{
        //Normal processing here
    }
}, false);
UPDATE:
You can find out when the reconnects are happening by listening for the "error" event, then looking at e.target.readyState:
es.addEventListener("error", handleError, false);

function handleError(e){
    if(e.target.readyState == 0) console.log("Reconnecting...");  // EventSource.CONNECTING
    if(e.target.readyState == 2) console.log("Giving up.");       // EventSource.CLOSED
}
No other information is available, but more importantly it cannot tell the difference between your server process deliberately closing the connection, your web server crashing, or your client's internet connection going down.
One other thing you can customize is the retry time, by having the server send a retry: NN message. So if you don't want quick reconnections, but instead want at least 60 seconds between any reconnect attempts, have your server send retry: 60000 (the value is in milliseconds).
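Although the question is about Go, the SSE wire format is language-agnostic, so here is a minimal sketch of a server sending the retry hint and the shutdown message discussed above (written in Node.js purely for illustration; the field names and values are assumptions):

var http = require("http");

http.createServer(function (req, res) {
    res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive"
    });

    // Ask the browser to wait at least 60 seconds between reconnect attempts.
    res.write("retry: 60000\n\n");

    // Normal data events...
    res.write("data: " + JSON.stringify({ price: 42 }) + "\n\n");

    // ...and later, the "show is over" message the client checks for above.
    res.write("data: " + JSON.stringify({ shutdownRequest: true }) + "\n\n");
    res.end();
}).listen(8080);    // port chosen arbitrarily for the sketch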

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client likewise detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the websockets spec, so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pongs.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is stopped every time that message is received and started every time the message is echoed; if the timer fires, the connection can be considered dropped.
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.
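A minimal client-side sketch of that application-level keep-alive (the URL and message shape are assumptions, and it assumes the server implements the matching echo on its side): each token message resets a 10-second watchdog and is echoed back after 5 seconds; if no token arrives in time, the connection is treated as dead.

var ws = new WebSocket("wss://example.com/socket");
var watchdog = null;

function armWatchdog() {
    clearTimeout(watchdog);
    watchdog = setTimeout(function () {
        console.log("No keep-alive message within 10s; assuming the connection is dead.");
        ws.close();
    }, 10000);
}

ws.onopen = function () {
    ws.send(JSON.stringify({ token: "whatever" }));   // kick off the ping-pong
    armWatchdog();
};

ws.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    if (msg.token) {
        armWatchdog();                                // keep-alive received in time
        setTimeout(function () {
            ws.send(JSON.stringify({ token: msg.token }));   // echo it back after 5 seconds
        }, 5000);
    } else {
        // ...normal message handling here...
    }
};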

Resources