Pardon me if the question is simple, but I don't know what will happen in this general scenario: if the browser crashes due to some issue or system failure (any kind of failure), what happens to a websocket connection that was running? Will it be closed, or what will its state be?
I am using the Netty websocket implementation. Specifically with Netty, what would happen in the above scenario? (When I tried it, closing the browser manually closes the connection, but killing the browser process does not.) If the connection is not closed, how can I detect and handle that while sending messages? Otherwise a ClosedChannelException occurs when a message is sent.
EDIT: Apart from checking for the above-mentioned exception, is there a way of knowing that the websocket connection has been closed when using the Netty implementation?
I think you'll have to catch ClosedChannelException and handle that in the same way you would for a connection that was cleanly closed.
If your websocket sessions are expensive and you want to spot killed connections more quickly, you could periodically send a ping request to each client. Netty has a PingWebSocketFrame class which looks like it'd support this for you.
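For illustration, here is a minimal sketch of both ideas, assuming Netty 4.x; cleanUpSession() and the 30-second ping interval are hypothetical placeholders, not part of any API:

import java.nio.channels.ClosedChannelException;
import java.util.concurrent.TimeUnit;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.handler.codec.http.websocketx.PingWebSocketFrame;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;

public class WebSocketSender {

    public void send(Channel channel, String message) {
        channel.writeAndFlush(new TextWebSocketFrame(message))
               .addListener((ChannelFutureListener) future -> {
                   if (!future.isSuccess()
                           && future.cause() instanceof ClosedChannelException) {
                       // Treat it exactly like a clean close.
                       cleanUpSession(future.channel());
                   }
               });
    }

    public void schedulePings(Channel channel) {
        // A killed browser never answers; once the TCP stack gives up, the
        // channel goes inactive and the next write fails as above.
        channel.eventLoop().scheduleAtFixedRate(() -> {
            if (channel.isActive()) {
                channel.writeAndFlush(new PingWebSocketFrame());
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    private void cleanUpSession(Channel channel) {
        // Hypothetical: remove the channel from your session registry, etc.
    }
}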
What reasons can cause a socket.io session to crash so that the server returns "invalid session" or "session is disconnected"?
There is a specific situation that causes these problems with the session: when a client fails to send its pings at the expected interval, the server declares the client gone and deletes the session. If such a client later tries to send a ping or another request using the now-invalidated session id, it will receive one of these errors.
Another possible problem with the same outcome is when the client does send the pings at the correct intervals, but the server is blocked or too busy to process these pings in time.
So to summarize, if you think your clients are well behaved, I would look at potential blocking tasks in your server.
OK, I'll illustrate my problem with this figure of the project's architecture.
In fact, I have a websocket between the React app and Rasa (a tool for creating chatbots), which is based on Flask. The bot response needs to access an external API to retrieve some data, and this is where things go wrong: sometimes those requests take too long to return a response, and that's when the websocket misbehaves.
I am trying to connect to Elasticsearch via the Jest client.
Sometimes the client is not able to connect to the Elasticsearch cluster.
Stack trace:
org.apache.http.NoHttpResponseException: search-xxx-yyy.ap-southeast-1.es.amazonaws.com:443 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
The Elasticsearch cluster is on a public domain, so I don't understand why the client is unable to connect.
Also, the issue happens intermittently; if I retry the request, it sometimes connects.
Any help is appreciated. Thanks
When JestClient initiates the HTTP request, it calls read() on the socket and blocks. When this read returns -1, it means that the server closed the connection before or while the client was waiting for the response.
Why it happens
There are two main causes of NoHttpResponseException:
1. The server end of the connection was closed before the client attempts to send a request down it.
2. The server end closes the connection in the middle of a request.
Stale Connection (connection closed before request)
Most often this is a stale connection. When using persistent connections, you may have a connection sit around in the connection pool not being used for a while. If it is idle for longer than the server or load balancer's HTTP keep alive timeout, then the server or load balancer will close the connection due to its idleness. The Jakarta client isn't structured to receive a notification of this happening (it doesn't use NIO), so the connection sits around in a half-closed state. The only way the client can detect this state is by reading from the socket. So when you send a request, the write is successful because the socket is only half closed (writes succeed until you close your end) but then the read indicates the socket was closed. This causes the request to fail.
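One hedged mitigation is to stop pooled connections from going stale in the first place. A minimal sketch, assuming Apache HttpClient 4.5.x (the client underneath Jest, per the stack trace above); the timeout values are illustrative, and how you hand these settings to Jest depends on the Jest version:

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class StaleConnectionConfig {

    public static CloseableHttpClient build() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        // Stale-check a pooled connection before reuse if it has sat idle
        // for more than 5 seconds.
        cm.setValidateAfterInactivity(5_000);

        return HttpClients.custom()
                .setConnectionManager(cm)
                // Proactively close connections idle longer than the load
                // balancer's keep-alive timeout (assumed to be ~60s here).
                .evictIdleConnections(55, TimeUnit.SECONDS)
                .build();
    }
}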
Connection Closed Mid-Request
The other reason this might occur is that the connection was actually closed while the service was working on it. Anything between your client and the service may close the connection, including load balancers, proxies, or the HTTP endpoint fronting your service. If your activities are quite long-running or you're transferring a lot of data, the window for something to go wrong is larger and the connection is more likely to be lost in the middle of the request. An example of this happening is a Java server process exiting after an OutOfMemoryError occurs due to trying to return a large amount of data. You can verify whether this is the problem by looking at TCP dumps to see whether the connection is closed while the request is in flight. Also, failures of this type usually occur some time after sending the request, whereas stale connection failures always occur immediately when the request is made.
Diagnosing The Cause
NoHttpResponseException is usually caused by a stale connection (based on the problems I've observed and helped people with).
When the failure always occurs immediately after submitting the request, a stale connection is almost certainly the problem.
When failures occur some non-trivial amount of time after waiting for the response, the connection wasn't stale when the request was made and is instead being closed in the middle of the request.
TCP dumps can be more conclusive: you can see whether the connection is closed before or during the request.
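For example, a capture along these lines (hostname and port taken from the stack trace above; the output file name is arbitrary) lets you see exactly when the FIN or RST arrives relative to your request:

tcpdump -i any -w es-capture.pcap host search-xxx-yyy.ap-southeast-1.es.amazonaws.com and port 443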
What can be done about it
Use a better client
Nonblocking HTTP clients exist that allow the caller to know when a connection is closed without having to try to read from the connection.
Retry failed requests
If your call is safe to retry (e.g. it's idempotent), this is a good option. It also covers all sorts of transient failures besides stale connection failures. NoHttpResponseException isn't necessarily a stale connection and it's possible that the service received the request, so you should take care to retry only when safe.
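A minimal sketch of that retry approach with the Jest client, assuming a read-only (and therefore idempotent) search; the index name and retry count are illustrative:

import java.io.IOException;
import org.apache.http.NoHttpResponseException;
import io.searchbox.client.JestClient;
import io.searchbox.core.Search;
import io.searchbox.core.SearchResult;

public class RetryingSearch {

    public static SearchResult searchWithRetry(JestClient client, String query) throws IOException {
        Search search = new Search.Builder(query).addIndex("my-index").build();
        IOException last = null;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                return client.execute(search);
            } catch (NoHttpResponseException e) {
                // Likely a stale connection; a search is safe to retry.
                last = e;
            }
        }
        throw last;
    }
}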
I have a Spring Integration server running a tcp-inbound-gateway and a client that connects to the server using regular Java sockets.
The client connects to the server, the server processes the request and then sends the response. The client reads in the response, then closes the connection using socket.close().
On the server side I have a tcp-connection-event-inbound-channel-adapter configured and I see this:
TcpConnectionExceptionEvent [source=org.springframework.integration.ip.tcp.connection.TcpNetConnection#2294e71d, cause=org.springframework.integration.ip.tcp.serializer.SoftEndOfStreamException: Stream closed between payloads], [factory=crLfServer, connectionId=127.0.0.1:52292:5556:add2ff2a-b4ff-410d-8e60-d6b1a388044e]
Is this normal behavior?
Yes; it's normal. We emit application events whenever an exception occurs on a socket, and the SoftEndOfStreamException occurs when the socket is closed "normally" (i.e. between messages).
We could, I suppose, suppress that particular event, but some find it useful.
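If the noise bothers you, you can filter it on your side instead. A minimal sketch, assuming the event exposes the underlying exception via getCause() (check your Spring Integration version):

import org.springframework.context.ApplicationListener;
import org.springframework.integration.ip.tcp.connection.TcpConnectionExceptionEvent;
import org.springframework.integration.ip.tcp.serializer.SoftEndOfStreamException;

public class TcpEventLogger implements ApplicationListener<TcpConnectionExceptionEvent> {

    @Override
    public void onApplicationEvent(TcpConnectionExceptionEvent event) {
        if (event.getCause() instanceof SoftEndOfStreamException) {
            return; // clean close between payloads: nothing to do
        }
        System.err.println("TCP connection problem: " + event);
    }
}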
I would like to do some one-way streaming of data and am experimenting with SSE vs WebSockets.
Using SSE from a Go server, I'm finding it confusing how to notify the client when a session is finished (e.g. the server has finished sending events, the server suddenly goes offline, or the client loses connectivity).
One thing I need is to reliably know when these disconnect situations occur, without using timeouts etc.
In my experiments so far, when I take the server offline the client gets EOF, but I'm having trouble figuring out how to signal from the server to the client that a connection is closed/finished, and then how to handle/read that. Is EOF a reliable way to determine a closed/error/finished state?
Many SSE examples fail to show good client-side connection handling.
Would this be easier with Websockets?
Any experiences suggestions most appreciated.
Thanks
The SSE standard requires that the browser reconnect, automatically, after N seconds, if the connection is lost or if the server deliberately closes the socket. (N defaults to 5 in Firefox, 3 in Chrome and Safari, last time I checked.) So, if that is desirable, you don't need to do anything. (In WebSockets you would have to implement this kind of reconnect for yourself.)
If that kind of reconnect is not desirable, you should instead send a message back to the client, saying "the show is over, go away". E.g. if you are streaming financial data, you might send that on a Friday evening, when the markets shut. The client should then intercept this message and close the connection from its side. (The socket will then disappear, so the server process will automatically get closed.)
In JavaScript, and assuming you are using JSON to send data, that would look something like:
var es = new EventSource("/datasource");
es.addEventListener("message", function(e){
    var d = JSON.parse(e.data);
    if(d.shutdownRequest){
        es.close();
        es = null;
        //Tell the user what just happened.
    }
    else{
        //Normal processing here
    }
}, false);
UPDATE:
You can find out when the reconnects are happening by listening for the "error" event, then looking at e.target.readyState:
es.addEventListener("error", handleError, false);
function handleError(e){
if(e.target.readyState == 0)console.log("Reconnecting...");
if(e.target.readyState == 2)console.log("Giving up.");
}
No other information is available, but more importantly it cannot tell the difference between your server process deliberately closing the connection, your web server crashing, or your client's internet connection going down.
One other thing you can customize is the retry time, by having the server send a retry: NN message, where NN is in milliseconds. So if you don't want quick reconnections, but instead want at least 60 seconds between reconnect attempts, have your server send retry: 60000.
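For reference, this is roughly what the raw event stream from the server would look like, combining the retry hint with the shutdown message assumed in the JavaScript example above (a blank line terminates each event):

retry: 60000

data: {"shutdownRequest": true}
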
I'm using uWSGI's websockets support and so far it's looking great: the server detects when the client disconnects, and the client likewise detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uWSGI sends PING/PONG frames (i.e. control frames, not regular messages) according to the WebSocket spec, so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send pings/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not acknowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer as the ping/pongs are.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is stopped every time that message is received and restarted every time the message is echoed; if the timer fires, the connection can be considered dropped.
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.