Why do client websocket close codes not match the server's close code?

I have a Spring Boot Tomcat server that is handling websocket connections from clients that are using:
SocketRocket
Tyrus
I find that the close code provided by the server is often not the close code read by the client.
For SocketRocket, I close the websocket at the server with code 1000, and the client often reads 1001.
For Tyrus, I close the websocket with code 1011, and the client reads either 1006 or 1011.
Descriptions of close codes from RFC 6455:
1000 indicates a normal closure, meaning that the purpose for
which the connection was established has been fulfilled.
1001 indicates that an endpoint is "going away", such as a server
going down or a browser having navigated away from a page.
1006 is a reserved value and MUST NOT be set as a status code in a
Close control frame by an endpoint. It is designated for use in
applications expecting a status code to indicate that the
connection was closed abnormally, e.g., without sending or
receiving a Close control frame.
1011 indicates that a server is terminating the connection because
it encountered an unexpected condition that prevented it from
fulfilling the request.
I have verified the outgoing close codes using Wireshark at the server.
Is the close code just unreliable as a means of passing information from the server to the client? Do I need to implement something at the application layer that passes this information before closing websockets?
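For reference, the close on the Spring side is issued roughly like this (simplified; the handler and names below are illustrative, not my actual code):

import org.springframework.web.socket.CloseStatus;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;

public class ExampleHandler extends TextWebSocketHandler {

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        // ... application work ...

        // Normal shutdown of this connection: sends a close frame with code 1000.
        session.close(CloseStatus.NORMAL);

        // Error path (used with the Tyrus client): sends a close frame with code 1011.
        // session.close(CloseStatus.SERVER_ERROR);
    }
}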

This is just a guess, but the WebSocket clients you listed may not implement the closing handshake correctly.
Why don't you try nv-websocket-client to see what's happening? The onDisconnected method of the library's listener interface (WebSocketListener) is defined as below:
void onDisconnected(
    WebSocket websocket,
    WebSocketFrame serverCloseFrame,
    WebSocketFrame clientCloseFrame,
    boolean closedByServer);
The second argument serverCloseFrame is the close frame which the server sent to the client, and the third argument clientCloseFrame is the close frame which the client sent to the server. In normal cases, as required by the specification, payloads of the two close frames are identical.
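For example, a minimal client along these lines (the URL is just a placeholder) would print both close codes so you can compare them with what Wireshark shows at the server:

import com.neovisionaries.ws.client.WebSocket;
import com.neovisionaries.ws.client.WebSocketAdapter;
import com.neovisionaries.ws.client.WebSocketFactory;
import com.neovisionaries.ws.client.WebSocketFrame;

public class CloseCodeCheck {
    public static void main(String[] args) throws Exception {
        WebSocket ws = new WebSocketFactory()
            .createSocket("wss://example.com/ws");   // placeholder URL
        ws.addListener(new WebSocketAdapter() {
            @Override
            public void onDisconnected(WebSocket websocket,
                                       WebSocketFrame serverCloseFrame,
                                       WebSocketFrame clientCloseFrame,
                                       boolean closedByServer) {
                // Compare the code the server sent with the one the client echoed back.
                // Either frame may be null if that side never sent a close frame.
                System.out.println("server close code = "
                    + (serverCloseFrame != null ? serverCloseFrame.getCloseCode() : "none"));
                System.out.println("client close code = "
                    + (clientCloseFrame != null ? clientCloseFrame.getCloseCode() : "none"));
            }
        });
        ws.connect();
    }
}

If this client reports 1000 in serverCloseFrame while SocketRocket reports 1001 against the same server, that points at the client library rather than the server.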

Related

How to keep a long connection in HTTP/2?

I am reading the documentation of the Alexa Voice Service capabilities and came across the part on managing the HTTP/2 connection. I don't really understand how this down channel works behind the scenes. Is it using server push? Well, could server push be used to keep a long connection? Or is it just using some tricks to keep the connection alive for a very long time?
As stated on the documentation, the client needs to establish a down channel stream with the server.
Based on what I read in https://www.rfc-editor.org/rfc/rfc7540, from its stream state diagram:
once the client sends a HEADERS frame with the END_STREAM flag set, the stream becomes half-closed (local) from the client's point of view. So this is how the half-closed state for the device comes about, as described in the AVS documentation. Correct me if I am wrong.
As for managing the HTTP connection, the documentation gives recommendations about client-side idle timeouts.
Based on my understanding: the client sets a timeout of 60 minutes for the GET request. After the request is sent, the server will not send any response, so the connection will remain open for 60 minutes. But once a response is sent from the server, the connection should be closed. Isn't that what is supposed to happen? Or is it that, when the server sends a response through the down channel stream, it does not set an END_STREAM flag, so the stream is not closed?
But once a response is sent from the server, the connection should be closed.
HTTP/1.1 and HTTP/2 use persistent connections, which means that a single connection can be used not just for one request/response, but for several request/response cycles.
Only HTTP/1.0 closed the connection after the response, so for HTTP/2 this is not the case: the connection will remain open until either peer decides to explicitly close it.
The recommendations about idle timeouts are there precisely to prevent the client from explicitly closing the connection too early when it sees no network traffic, independently of requests or responses.
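As a rough sketch (not the actual AVS client; the URL is a placeholder and HTTP/2 support depends on the server), a long-lived GET with Java's java.net.http.HttpClient illustrates this: the send() call returns as soon as the response headers arrive, and the body can keep delivering data for as long as the server keeps the stream open.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.stream.Stream;

public class DownChannelSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_2)
            .build();

        // A GET carries no body, so the client ends its side of the stream,
        // leaving it half-closed (local) from the client's point of view.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/downchannel"))
            .GET()
            .build();

        // Returns once the response headers arrive; the connection and the
        // stream stay open while the server keeps writing body data.
        HttpResponse<Stream<String>> response =
            client.send(request, HttpResponse.BodyHandlers.ofLines());
        response.body().forEach(System.out::println);
    }
}

As long as the server never sets END_STREAM on its side, the stream stays open and the loop above simply keeps printing whatever the server writes.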

WebSocket When To Use Close Handshake

I'm experimenting with the Gorilla WebSocket package and I would like to know whether there is a way, based on the error returned by .ReadMessage, to determine whether to start the closing handshake (e.g. 1000, normal closure) or drop the connection immediately (e.g. 1006, abnormal closure).
Currently what I'm doing is storing the list of error codes that I might use to close the websocket connection, and if an error code equals one of the codes in my list, I do the closing handshake. However, I'm not sure whether this complies with the WebSocket spec.
Is there another way to do this, or is this how it is supposed to be done?
Applications should only send close frames when the application decides to close the connection. The Gorilla package handles all other cases.
The Gorilla package sends the closing handshake on read errors. The Gorilla internal method handleProtocolError starts the closing handshake.
Gorilla replies to closing handshakes initiated by the peer application.

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely sockjs, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the websockets spec, and so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (e.g. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not acknowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pongs.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is stopped every time that message is received and started every time the message is sent; if the timer fires, the connection can be considered dropped.
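The question is about a browser client, but to keep the examples in one language here is the same echo-and-timer idea sketched with the javax.websocket client API (Tyrus is one implementation of it). The endpoint URL, message format, and delays are arbitrary, and it assumes the server echoes the token back:

import java.net.URI;
import java.util.Timer;
import java.util.TimerTask;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;

// Application-level keep-alive: send a token every 5 seconds, expect the
// peer to echo it back, and treat 10 seconds of silence as a dropped link.
@ClientEndpoint
public class HeartbeatClient {

    private static volatile long lastEchoMillis = System.currentTimeMillis();

    @OnMessage
    public void onMessage(String message, Session session) {
        if (message.contains("token")) {
            lastEchoMillis = System.currentTimeMillis();  // echo received, reset the clock
        }
    }

    public static void main(String[] args) throws Exception {
        Session session = ContainerProvider.getWebSocketContainer()
            .connectToServer(HeartbeatClient.class, URI.create("wss://example.com/ws")); // placeholder

        Timer timer = new Timer(true);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                try {
                    session.getBasicRemote().sendText("{token:'whatever'}");
                } catch (Exception e) {
                    System.out.println("write failed, connection is gone");
                }
                if (System.currentTimeMillis() - lastEchoMillis > 10_000) {
                    System.out.println("no echo for 10 seconds, consider the connection dropped");
                }
            }
        }, 0, 5_000);

        Thread.sleep(Long.MAX_VALUE);  // keep the demo alive
    }
}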
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.

How do I know if the connection is alive with websockets?

I have a webapp, which is running in a browser. That webapp is connected to a server, which uses websockets. So the communication between the server and my client/browser is based on websockets. If some magic event occurs on the server, some webservice sends a new XML / JSON to my webapp and the new data gets displayed.
But how do I, as the client/browser, know if the connection is still alive? Let's say I do not get any new XML for about 30 seconds. How would I know whether the connection is closed/broken/the server is offline, or whether everything is fine but no new magic event occurred on the server itself?
A websocket connection object has a readyState field which will tell you if the connection is still active (from the dart documentation). The readyState can be either
0 - connection not yet established
1 - connection established
2 - in closing handshake
3 - connection closed or could not open
You can also define an event handler for the websocket close event if this is something you'd like to handle (try to reconnect, etc).
3 ways:
rely on TCP to detect loss of connectivity, which will ultimately pop up in the JS onclose event
send WebSocket pings from the server; browsers will reply with WS pongs, and loss of connectivity is probably also detected more robustly on the client side
send app-level heartbeats from the browser to the server; the server needs logic to reply. You can't trigger WS pings from browsers (in JS)

Why might an EventMachine outbound data buffer stop sending and just fill up forever (while other connections can still send)

I have an EventMachine server sending TCP data down to a Mac client (via GCDAsyncSocket). It always works flawlessly for a while, but inevitably the server suddenly stops sending data on a connection-by-connection basis. The connection is still maintained, and the server still receives data from the client, but it doesn't go the other way.
When this happens, I've discovered via connection#get_outbound_data_size that the connection send buffer is filling up infinitely (via #send_data) and not being sent to the client.
Are there specific (and hopefully fixable) reasons why this might occur? The reactor keeps humming along, and other active connections to the server continue working fine (though they sometimes fall into buffer hell as well).
I see at least one reason: when the remote client no longer reads data from its side of the TCP connection (with a recv() call or whatever).
Then the scenario is: the receiving TCP buffer on the client side becomes full, and the OS can no longer accept TCP packets from its peer, since it cannot queue them. As a consequence, the sending TCP buffer on the server side becomes full too as your application continues to send packets on the socket! Soon your server is no longer able to write to the socket, since the send() system call will either:
block indefinitely (waiting for the buffer to empty enough for the new packet), or
return with an EWOULDBLOCK error (if you configured your socket as non-blocking).
I usually run into that kind of situation in a test environment when I put a breakpoint in my code on the client side.
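The question is about EventMachine, but the underlying TCP behavior can be reproduced with any plain non-blocking socket; here is a rough Java sketch (host and port are placeholders) of what happens when the peer stops reading:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Keep writing to a peer that never reads. Once the peer's receive buffer and
// the local send buffer are full, the non-blocking write() starts returning 0
// and the data has nowhere to go until the peer reads again.
public class BackpressureDemo {
    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9999)); // placeholder
        channel.configureBlocking(false);

        ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
        long total = 0;
        while (true) {  // demo loop, runs until interrupted
            chunk.clear();
            int written = channel.write(chunk);
            if (written == 0) {
                // Equivalent of send() returning EWOULDBLOCK: the kernel buffers
                // are full because the other side has stopped reading.
                System.out.println("socket not writable, " + total + " bytes sent so far");
                Thread.sleep(1000);
            } else {
                total += written;
            }
        }
    }
}

EventMachine's outbound buffer sits on top of exactly this: when the socket stops being writable, #send_data keeps queueing and get_outbound_data_size keeps growing.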
There was a patch applied to GCDAsyncSocket on March 23 that prevents the reads from stopping. Did this patch solve your problem?
