WebSocket onclose/onerror events do not fire if server crashes

I have observed the following behavior in Firefox 4 and Chrome 7:
If the server running the websocket daemon crashes, reboots, loses network connectivity, etc., then the 'onclose' or 'onerror' events are not fired on the client side. I would expect one of those events to be fired when the connection is broken for any reason.
If, however, the daemon is shut down cleanly first, then the 'onclose' event is fired (as expected).
Why do the clients perceive the websocket connection as open when the daemon is not shut down properly?
I want to rely on the expected behavior to inform the user that the server has become unavailable or that the client's internet connection has suffered a disruption.

TCP is like that. The most recent WebSockets standard draft (v76) has a clean shutdown message mechanism. But without that (or if it doesn't have a chance to be sent) you are relying on normal TCP socket cleanup, which may take several minutes (or hours).
I would suggest adding some sort of signal handler/exit trap to the server so that when the server is killed/shut down, a clean shutdown message is sent to all connected clients.
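For instance, a hedged sketch of such an exit trap using the Node.js `ws` package (the package choice, port, and close reason are assumptions, not from the question):

```js
// Sketch: on SIGTERM, send a proper close frame to every client before
// exiting, so their onclose handlers fire immediately instead of waiting
// on TCP timeouts.
const { WebSocketServer } = require('ws');
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // ...normal message handling...
});

process.on('SIGTERM', () => {
  for (const client of wss.clients) {
    client.close(1001, 'server shutting down'); // 1001 = "going away"
  }
  wss.close(() => process.exit(0));
});
```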
You could also add a heartbeat mechanism (à la TCP keepalive) to your application to detect when the other side goes away.
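A minimal client-side sketch of such a heartbeat, assuming the server is written to answer a {type: 'ping'} message with {type: 'pong'} (that echo behavior is an application convention, not part of the WebSocket protocol; URL and timings are placeholders):

```js
const ws = new WebSocket('ws://example.com/socket'); // placeholder URL
let pongTimer = null;

const heartbeat = setInterval(() => {
  if (ws.readyState !== WebSocket.OPEN) return;
  ws.send(JSON.stringify({ type: 'ping' }));
  // No pong within 5 s: treat the connection as broken.
  pongTimer = setTimeout(() => {
    clearInterval(heartbeat);
    ws.close(); // fires onclose, so the normal handlers can run
  }, 5000);
}, 15000);

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'pong') clearTimeout(pongTimer);
  // ...handle application messages...
};
```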

Related

Reconnect websocket/mqtt after client internet disconnection

I am setting up an MQTT/Websockets server. My client is a Flutter app, which connects to the broker on the main screen, and on other screens it sends and receives messages from the broker. My understanding of keepAlive is that it is how often the client and server should exchange ping/pong to make sure the connection is still alive. That being said, if my Flutter app connects to the broker on the main screen with a keepAlive of 3600 seconds (1 hour) and is supposed to send and receive messages on other screens, and I disconnect the client from the internet for 2 minutes and reconnect after that, it will not send/receive messages. Maybe my understanding of keepAlive is not correct. How would I structure my app/server to reconnect automatically as soon as the internet connection is back up again?
I have also tried the On.Disconnect method, which I noticed never gets called, and the app still thinks it is connected to the broker.
I mentioned websockets in the tags as I could do MQTT over websockets.
I see that no-one else has responded, so I'll try (however, I'm new to this also).
Also, have you looked at the Flutter connectivity package?
From my reading of the MQTT specification, it seems the MQTT client **should** disconnect the TCP/IP connection if it doesn't receive a PINGRESP to its PINGREQ in the keepalive period (i.e. it's not required to disconnect).
My Flutter + MQTT app checks the connection state, and reconnects if needed, every time it sends a message. I haven't needed to check for internet dropouts, but I have noticed the connection is lost on some application state changes. The main app widget is notified of these using didChangeAppLifecycleState() and sends a dummy message if needed.
So this doesn't answer exactly what you asked, but I hope it's useful anyway.
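To make that reconnect-on-send pattern concrete, here is a hedged sketch using the JavaScript mqtt.js client (the same idea applies to the Dart mqtt_client package; broker URL, topic, and timing values are placeholders):

```js
const mqtt = require('mqtt');

const client = mqtt.connect('ws://broker.example.com:9001', {
  keepalive: 30,         // ping the broker every 30 s instead of every hour
  reconnectPeriod: 2000, // after a drop, retry the connection every 2 s
});

client.on('connect', () => {
  // (Re)subscribe on every connect: without a persistent session,
  // subscriptions do not survive a reconnect.
  client.subscribe('app/updates');
});

client.on('offline', () => {
  console.log('link lost; mqtt.js will keep trying to reconnect');
});

// Reconnect-on-send, as in the answer above: check state before publishing.
function safePublish(topic, payload) {
  if (!client.connected) client.reconnect();
  client.publish(topic, payload);
}
```

A short keepalive plus automatic reconnection recovers from brief outages; with a 1-hour keepAlive, a dead link can go unnoticed for up to an hour, which matches the behavior described in the question.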

What if server didn't receive FD_CLOSE

I have a high-performance client/server system programmed from scratch, and I am still improving it. The server uses overlapped I/O to handle connections, and it correctly handles disconnections and resource deallocation. At the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no data to receive after the final send from the client. This works well, and the server detects it as a graceful disconnection. Rarely, I have observed that when the connection is very slow, the server doesn't detect this. I feel that the shutdown partial closure doesn't reach the server. How can I handle this? It is important that the server not retain such connections; if it does, the server cannot be stopped, and I do not want to close all such connections by force.
At the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no data to receive after the final send from the client.
It doesn't do that.
This works well
It doesn't work at all. The shutdown command with SD_RECEIVE that you're using is completely pointless. A close, or a shutdown with SD_SEND or SD_BOTH, sends a FIN: shutdown with SD_RECEIVE does exactly nothing on the wire, and specifically it does not 'notify the server' of anything.
I feel that the shutdown partial closure doesn't reach the server.
It never reaches the server. Your code doesn't work the way you think it does. What reaches the server is the FIN, which in turn is the result of the close, not of the shutdown with SD_RECEIVE.
What you need here is a read timeout at the server end. As you're using select() or whatever is delivering you the events, you will have to implement the timeout yourself.
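The question is about Winsock overlapped I/O, but the idea is the same everywhere; here is a hedged Node.js sketch of a server-side read timeout (port and timeout are arbitrary example values):

```js
const net = require('net');

const server = net.createServer((socket) => {
  // If no data arrives for 30 s, assume the peer (or its FIN) is lost
  // and reclaim the connection.
  socket.setTimeout(30000);
  socket.on('timeout', () => socket.destroy()); // force-close, free resources
  socket.on('data', (chunk) => {
    // ...handle request data; any activity resets the idle timer...
  });
  socket.on('end', () => {
    // FIN received: the normal, graceful close path.
  });
  socket.on('error', () => { /* e.g. RST from the peer */ });
});

server.listen(9000);
```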

Socket.io data loss when Internet speed drops

I am using socket.io 1.4 and I want to know what happens in this scenario:
The client emits like this:
Socket.emit('test',data);
The client does 3 emits to the server, but suddenly the Internet speed drops and those emits may not reach the server.
After a while the Internet speed rises again, but what will happen to the previous failed emits?
Will they be emitted again automatically?
How should I handle that?
Websockets use TCP, which is in general a reliable protocol. There is not exactly such a thing as "The internet speed dropped and I lost some messages." If some messages are lost they will be automatically retransmitted at the TCP level. If retransmission fails completely, the connection will be reset.
So what you are really asking is how socket.io handles this. The answer is that it has some reconnection logic of its own, and you may also want to monitor the connection in case it resets (hook up a listener for the disconnect event on the socket) if you want to take some extra action (like notifying the user).
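One hedged way to get delivery guarantees on top of that, using socket.io's acknowledgement callbacks (the pending-queue bookkeeping below is illustrative application code, not a socket.io feature; the URL is a placeholder):

```js
const socket = io('http://example.com');
const pending = [];

function reliableEmit(event, data) {
  const msg = { event, data };
  pending.push(msg);
  // Ask for an acknowledgement so the message is only dropped from the
  // queue once the server confirms it actually arrived.
  socket.emit(event, data, () => {
    pending.splice(pending.indexOf(msg), 1);
  });
}

socket.on('disconnect', () => {
  // Good place to notify the user that the connection was lost.
});

socket.on('reconnect', () => {
  // Resend whatever was never acknowledged before the reset.
  const unacked = pending.splice(0, pending.length);
  unacked.forEach((m) => reliableEmit(m.event, m.data));
});
```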

How to drop inactive/disconnected peers in ZMQ

I have a client/server setup in which clients send a single request message to the server and gets a bunch of data messages back.
The server is implemented using a ROUTER socket and the clients using a DEALER. The communication is asynchronous.
The clients are typically iPads/iPhones and they connect over wifi so the connection is not 100% reliable.
The issue I'm concerned about is if the client connects to the server and sends a request for data, but before the response messages are delivered back the communication goes down (e.g. out of wifi coverage).
In this case the messages will be queued up on the server side waiting for the client to reconnect. That is fine for a short time but eventually I would like to drop the messages and the connection to release resources.
By checking activity/timeouts it would be possible in both the server and the client applications to identify that the connection is gone. The client can shut down the socket and in this way free resources, but how can it be done in the server?
Per the ZMQ FAQ:
How can I flush all messages that are in the ZeroMQ socket queue?
There is no explicit command for flushing a specific message or all messages from the message queue. You may set ZMQ_LINGER to 0 and close the socket to discard any unsent messages.
Per this mailing list discussion from 2013:
There is no option to drop old messages [from an outgoing message queue].
Your best bet is to implement heartbeating and, when one client stops responding without explicitly disconnecting, restart your ROUTER socket. Messy, I know; this is really something that should have a companion option to HWM. Pieter Hintjens is clearly on board (he created ZMQ) - but that was from 2011, so it looks like nothing ever came of it.
This is a bit late, but setting TCP keepalive to a reasonable value will cause dead sockets to close after the timeouts have expired.
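For example, a heavily hedged sketch with the zeromq.js (v6) Node binding, where the libzmq ZMQ_TCP_KEEPALIVE* and ZMQ_LINGER options surface as camelCase socket properties (verify the exact property names against your binding's documentation; the endpoint is a placeholder):

```js
const { Router } = require('zeromq');

const sock = new Router();
sock.tcpKeepalive = 1;          // enable SO_KEEPALIVE on the underlying TCP
sock.tcpKeepaliveIdle = 60;     // start probing after 60 s of silence
sock.tcpKeepaliveInterval = 10; // then probe every 10 s
sock.tcpKeepaliveCount = 5;     // give up (and close) after 5 missed probes
sock.linger = 0;                // ZMQ_LINGER=0: drop unsent messages on close

sock.bind('tcp://*:5555');      // returns a Promise in this binding
```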
Heartbeating is necessary for either side to determine the other side is still responding.
The only thing I'm not sure about is how to go about heartbeating many thousands of clients without spending all available CPU just on dealing with the heartbeats.
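One hedged, transport-agnostic way to keep that cheap: record a last-seen timestamp per client identity and expire stale ones with a single periodic sweep, rather than one timer per client. Here sendHeartbeat() and dropClientState() are hypothetical stubs standing in for your ROUTER-side logic:

```js
const lastSeen = new Map(); // client identity -> last activity (ms)

function onClientMessage(identity) {
  lastSeen.set(identity, Date.now()); // any traffic counts as a heartbeat
}

function sendHeartbeat(identity) { /* write one small PING frame */ }
function dropClientState(identity) { /* free queues and per-client state */ }

setInterval(() => {
  const cutoff = Date.now() - 30000; // 30 s of silence = presumed dead
  for (const [identity, ts] of lastSeen) {
    if (ts < cutoff) {
      lastSeen.delete(identity); // deleting during iteration is safe on a Map
      dropClientState(identity);
    } else {
      sendHeartbeat(identity);
    }
  }
}, 10000); // a single O(n) sweep every 10 s stays cheap for thousands of clients
```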

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely sockjs, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the websockets spec, and so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keepalive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keepalive system would be part of your application layer, rather than part of the transport layer like the ping/pongs.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that stops every time that message is received and starts every time the message is echoed; if the timer fires, the connection can be considered dropped.
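A direct, hedged sketch of that echo scheme on the client side (browser WebSocket API; the URL is a placeholder, and the server is assumed to echo the token message back after roughly 5 s):

```js
const ws = new WebSocket('ws://example.com/socket');
let deadTimer = null;

function armTimer() {
  deadTimer = setTimeout(() => {
    // Nothing echoed back within 10 s: consider the connection dropped.
    ws.close();
  }, 10000);
}

ws.onopen = () => {
  ws.send(JSON.stringify({ token: 'whatever' })); // kick off the exchange
  armTimer();
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.token === 'whatever') {
    clearTimeout(deadTimer);  // stop: the echo arrived in time
    setTimeout(() => {
      ws.send(event.data);    // echo it back after the 5 s delay...
      armTimer();             // ...and restart the 10 s watchdog
    }, 5000);
  }
};
```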
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.

Resources