Determining if a remote subscriber is temporarily disconnected - opentok

This is in the context of reconnection. As a moderator I want to know when a remote subscriber (a client subscribing to the stream that I publish) temporarily drops its connection.
First, the subscriber events disconnect, reconnecting and reconnected are dispatched locally on the remote side, that is, on the client that loses its connection. The publisher receives no events about the remote connection that gets lost.
Second, I tried to use the 'signal' API of the JS web SDK as well as the server SDK. The idea was to get an error when sending a signal to a 'temporarily' disconnected client, but all I got was successful responses from both; from the server I get a 204 status code. So I understand the signals are queued and sent when the remote client gets reconnected.
So far I have found no way to determine, from a connected client, when another client temporarily loses its connection and enters the 'reconnecting' state.
The signal API only returns an error when the client gets completely disconnected from the session, not temporarily disconnected.
I know I can mimic this with signals and timeouts, but that way I need the server to trust my client, and I really need to validate this on the server side as well, otherwise it would be 'hackable'.
Thanks.

TokBox Developer Evangelist here.
When any client loses its connection, a connectionDestroyed event is dispatched to all clients, including the client that lost the connection. The event object dispatched with it includes a reason property and a connection property.
The reason for the connectionDestroyed event varies depending on whether the client called the disconnect method on the Session object, was force disconnected, or disconnected due to a network condition. In the context of a client temporarily losing connection, the connectionDestroyed event will fire with the reason property of the event data set to networkDisconnected.
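For example, a moderator's client could watch for this along these lines (a minimal sketch, assuming session is an already-connected Session object from OpenTok.js):

    // Listen for remote clients dropping off the session.
    session.on('connectionDestroyed', function (event) {
      if (event.reason === 'networkDisconnected') {
        // The remote client lost its connection because of a network problem.
        console.log('Connection ' + event.connection.connectionId +
          ' dropped due to a network issue.');
      } else {
        // Other documented reasons: 'clientDisconnected', 'forceDisconnected'.
        console.log('Connection ' + event.connection.connectionId +
          ' left the session (' + event.reason + ').');
      }
    });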
As for the signal queue, by default, any signals you send while the client is temporarily disconnected from a session are queued and sent when (and if) it successfully reconnects. You can set the retryAfterReconnect property to false in the options you pass into the Session.signal() method to prevent signals from being queued while the client is disconnected.
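A sketch of that option, assuming remoteConnection is a Connection object you already hold for the other client (the signal type and data here are arbitrary):

    // Send a signal that is NOT queued for a temporarily disconnected client.
    session.signal(
      {
        to: remoteConnection,       // Connection of the target client (assumed in scope)
        type: 'ping',               // arbitrary signal type for illustration
        data: 'are-you-there',
        retryAfterReconnect: false  // do not queue while the client is disconnected
      },
      function (error) {
        if (error) {
          console.log('Signal failed: ' + error.name + ' - ' + error.message);
        }
      }
    );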
For more information on Automatic Reconnection, please visit: https://tokbox.com/developer/guides/connect-session/js/#automatic_reconnection

Related

What if server didn't receive FD_CLOSE

I have a high-performance client/server system programmed from scratch, and I am still improving it. The server uses overlapped I/O to handle connections, and it correctly handles disconnections and resource deallocation. On the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no data to receive after the final send from the client. This works well, and the server detects it as a graceful disconnection. Rarely, when the connection is very slow, I have observed that the server doesn't detect this. I feel that the shutdown partial closure doesn't reach the server. How can I handle this? This is important: the server shouldn't keep this kind of connection, because if it does, the server cannot be stopped, and I do not want to close all such connections by force.
On the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no data to receive after the final send from the client.
It doesn't do that.
This works well.
It doesn't work at all. The shutdown command with SD_RECEIVE that you're using is completely pointless. A close, or a shutdown with SD_SEND or SD_BOTH, sends a FIN: shutdown with SD_RECEIVE does exactly nothing on the wire, and specifically it does not 'notify the server' of anything.
I feel that the shutdown partial closure doesn't reach the server.
It never reaches the server. Your code doesn't work the way you think it does. What reaches the server is the FIN, which in turn is the result of the close, not of the shutdown with SD_RECEIVE.
What you need here is a read timeout at the server end. Since you're using select() (or whatever else is delivering the events to you), you will have to implement the timeout yourself.
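The idea of a server-side read timeout, sketched here in Node.js rather than Winsock overlapped I/O purely to illustrate the concept (the port and timeout values are made up):

    // Drop any connection that stays idle longer than the allowed read timeout.
    const net = require('net');

    const server = net.createServer(function (socket) {
      socket.setTimeout(30000); // 30 s without any incoming data counts as dead

      socket.on('timeout', function () {
        // No data (and no FIN) arrived in time: treat the peer as gone.
        socket.destroy();
      });

      socket.on('data', function (chunk) {
        // ... handle the request; the idle timer restarts on each chunk
      });

      socket.on('end', function () {
        // FIN received: the normal, graceful shutdown path.
        socket.end();
      });
    });

    server.listen(9000);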

WebRTC issue on closing peer connection in have-local-offer state

The scenario I have been testing is trying to cancel a requested session initiation before an answer from the remote side. I am using a setup between two Nexus 7 devices running Android 6.0.
I introduced a session initiation cancel option which is available to the user during the period when a session initiation request has been made and the remote user has not yet answered (e.g., in a user alerting state). The RTC peer connection signaling state on the initiating side is "have-local-offer" when the user requests the session cancel. On initiation of the cancel, I invoke a close on the peer connection and see that a signaling state change occurs on the initiating side as the RTC signaling state goes (as I would expect) to "closed".
On the side receiving the session initiation request, however, the RTC signaling state goes to the state "have-remote-offer" as expected, but the close event from the initiating side does not appear to propagate to the receiving side, and the receiving side remains in the "have-remote-offer" state.
I have reviewed the standards and it appears that the event should be generated on the receiving side (i.e., the close should be sent when the connection is closed in the "have-local-offer" state).
Obviously, I can build around this case and generate a terminate over the session server signaling channel - but I would like to understand whether I am doing something wrong or misinterpreting the spec?
Thanks.
You haven't successfully negotiated a connection yet, so RTCPeerConnection's only means of communication to the other side is through the signaling channel you provide.
For it to communicate something on close, that something would have to be surfaced in the API akin to onicecandidate, and no such thing exists.
Remember, your code is on both ends, so you can easily signal this yourself.
One minor mistake in the specification aside (which will hopefully go away soon), a peer connection does not close itself.
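A minimal sketch of signaling the cancel yourself, assuming signalingChannel is whatever transport you already use for exchanging offers and answers, and the 'cancel' message type is an application-defined convention:

    // Caller side: cancel an outstanding offer while in "have-local-offer".
    function cancelCall(pc, signalingChannel) {
      signalingChannel.send(JSON.stringify({ type: 'cancel' })); // app-defined message
      pc.close(); // local signalingState goes to "closed"; nothing is sent to the peer
    }

    // Callee side: tear down the pending offer when the cancel arrives.
    signalingChannel.onmessage = function (event) {
      const msg = JSON.parse(event.data);
      if (msg.type === 'cancel' && pc.signalingState === 'have-remote-offer') {
        pc.close();
        // ... also dismiss the incoming-call UI here
      }
    };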

How to drop inactive/disconnected peers in ZMQ

I have a client/server setup in which clients send a single request message to the server and gets a bunch of data messages back.
The server is implemented using a ROUTER socket and the clients using a DEALER. The communication is asynchronous.
The clients are typically iPads/iPhones and they connect over wifi so the connection is not 100% reliable.
The issue I'm concerned about is if the client connects to the server and sends a request for data, but before the response messages are delivered back the communication goes down (e.g. out of wifi coverage).
In this case the messages will be queued up on the server side waiting for the client to reconnect. That is fine for a short time but eventually I would like to drop the messages and the connection to release resources.
By checking activity/timeouts it would be possible in the server and the client applications to identify that the connection is gone. The client can shut down the socket and in this way free resources, but how can this be done on the server?
Per the ZMQ FAQ:
How can I flush all messages that are in the ZeroMQ socket queue?
There is no explicit command for flushing a specific message or all messages from the message queue. You may set ZMQ_LINGER to 0 and close the socket to discard any unsent messages.
Per this mailing list discussion from 2013:
There is no option to drop old messages [from an outgoing message queue].
Your best bet is to implement heartbeating and, when one client stops responding without explicitly disconnecting, restart your ROUTER socket. Messy, I know; this is really something that should have a companion option to HWM. Pieter Hintjens (who created ZMQ) is clearly on board - but that was from 2011, so it looks like nothing ever came of it.
This is a bit late, but setting TCP keepalive to a reasonable value will cause dead sockets to close after the timeouts have expired.
Heartbeating is necessary for either side to determine the other side is still responding.
The only thing I'm not sure about is how to go about heartbeating many thousands of clients without spending all available cpu just on dealing with the heartbeats.
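To make the bookkeeping concrete, here is a rough sketch of heartbeat tracking on the ROUTER side, written against the legacy Node.js zeromq (v5) callback API; the heartbeat payload, intervals, and timeout are assumptions, and a real setup also needs the matching pings from the DEALER clients:

    // Track the last time each DEALER identity was heard from and forget
    // clients that stop heartbeating.
    const zmq = require('zeromq');

    const router = zmq.socket('router');
    const lastSeen = new Map();             // identity (hex string) -> timestamp
    const CLIENT_TIMEOUT_MS = 15000;

    router.bindSync('tcp://*:5555');

    router.on('message', function (identity, payload) {
      lastSeen.set(identity.toString('hex'), Date.now());
      if (payload.toString() === 'HEARTBEAT') {
        return;                             // timestamp already refreshed, nothing else to do
      }
      // ... handle a real request here and reply with
      // router.send([identity, someDataMessage]);
    });

    // A single periodic sweep, instead of one timer per client, keeps the CPU
    // cost low even with thousands of connections.
    setInterval(function () {
      const now = Date.now();
      for (const [id, seen] of lastSeen) {
        if (now - seen > CLIENT_TIMEOUT_MS) {
          lastSeen.delete(id);              // stop producing work for this client
          // Messages already queued for this peer are only discarded when the
          // socket is closed with linger set to 0, as the FAQ quote above notes.
        }
      }
    }, 5000);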

Indy FTP Client OnStatus not getting Disconnect events

I have set up an Indy IdFTP client to a FileZilla FTP server. The client tries to connect on startup of my app and, if it fails, keeps retrying every few seconds for the lifetime of the app. In addition, I need to detect if I lose the connection and, again, keep trying to re-establish the connection. This is where I am having a problem.
I have added an OnStatus event handler which seems to fire for all the event types except hsDisconnecting and hsDisconnected.
I also have an OnDisconnected event handler which only fires when I have locked the server; in this case, when I try to connect, it fires OnConnected and then immediately fires OnDisconnected. However, if I set the server as not active after the initial successful connection, the server tells me it has disconnected me, but I do not get an event in my code, so I don't know that I need to start trying to connect again. Am I wrong in expecting these events in this scenario? Is there something else I should be listening for?
Thank you in advance for your help.

Websocket onclose/onerror events does not fire if server crashes

I have observed the following behavior in Firefox 4 and Chrome 7:
If the server running the websocket daemon crashes, reboots, loses network connectivity, etc then the 'onclose' or 'onerror' events are not fired on the client-side. I would expect one of those events to be fired when the connection is broken for any reason.
If however the daemon is shutdown cleanly first, then the 'onclose' event is fired (as expected).
Why do the clients perceive the websocket connection as open when the daemon is not shutdown properly?
I want to rely on the expected behavior to inform the user that the server has become unavailable or that the client's internet connection has suffered a disruption.
TCP is like that. The most recent WebSockets standard draft (v76) has a clean shutdown message mechanism, but without that (or if it doesn't have a chance to be sent) you are relying on normal TCP socket cleanup, which may take several minutes (or hours).
I would suggest adding some sort of signal handler/exit trap to the server so that when the server is killed/shutdown, a clean shutdown message is sent to all connected clients.
You could also add a heartbeat mechanism (a la TCP keepalive) to your application to detect when the other side goes away.
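A minimal client-side sketch of such a heartbeat, assuming the server is set up to answer an application-level 'ping' text message with 'pong' (the message names, intervals, and UI hook are made up):

    // Declare the connection dead if no 'pong' arrives in time, since onclose
    // may never fire when the server vanishes abruptly.
    const ws = new WebSocket('ws://example.com/socket');
    let pongTimer = null;

    function heartbeat() {
      if (ws.readyState !== WebSocket.OPEN) return;
      ws.send('ping');                      // server is assumed to reply with 'pong'
      pongTimer = setTimeout(function () {
        ws.close();                         // force the close path locally
        notifyUserServerUnavailable();      // hypothetical UI hook
      }, 5000);
    }

    ws.onmessage = function (event) {
      if (event.data === 'pong') {
        clearTimeout(pongTimer);            // server is still alive
      }
    };

    ws.onopen = function () {
      setInterval(heartbeat, 15000);        // ping every 15 seconds
    };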
