WebRTC issue on closing peer connection in have-local-offer state

The scenario I have been testing is cancelling a requested session initiation before an answer arrives from the remote side. I am using a setup between two Nexus 7 devices running Android 6.0.
I introduced a session-initiation cancel option which is available to the user during the period in which a session initiation request has been made and the remote user has not yet answered (e.g., in a user-alerting state). The RTC peer connection signaling state on the initiating side is "have-local-offer" when the user requests the session cancel. On initiating the cancel, I invoke a close on the peer connection and see a signaling state change on the initiating side as the RTC signaling state goes (as I would expect) to "closed".
On the side receiving the session initiation request, however, the RTC signaling state goes to "have-remote-offer" as expected, but the close from the initiating side does not appear to propagate to the receiving side, which remains in the "have-remote-offer" state.
I have reviewed the standards, and it appears that the event should be generated on the receiving side (i.e., the close should be sent when the connection is closed in the "have-local-offer" state).
Obviously, I can build around this case and generate a terminate over the session server signaling channel, but I would like to understand whether I am doing something wrong or misinterpreting the spec.
Thanks,

You haven't successfully negotiated a connection yet, so RTCPeerConnection's only means of communication to the other side is through the signaling channel you provide.
For it to communicate something on close, that something would have to be surfaced in the API akin to onicecandidate, and no such thing exists.
Remember, your code is on both ends, so you can easily signal this yourself.
One minor mistake in the specification aside (which will hopefully go away soon), a peer connection does not close itself.
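For example, a minimal sketch of signaling the cancel yourself, assuming a WebSocket-based signaling channel (the 'cancel' message type and the stopRinging() UI hook are made up for illustration):

// Caller side: close locally, then tell the peer over the signaling
// channel; the peer connection itself sends nothing on close.
function cancelOutgoingCall(pc, signalingSocket) {
  pc.close(); // local signalingState -> "closed"
  signalingSocket.send(JSON.stringify({ type: 'cancel' }));
}

// Callee side: react to the cancel yourself.
signalingSocket.onmessage = function (e) {
  var msg = JSON.parse(e.data);
  if (msg.type === 'cancel') {
    pc.close(); // otherwise it stays in "have-remote-offer"
    stopRinging();
  }
};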

Related

What happens with WebSocket connections when a phone's screen locks?

When a phone browser has an open connection, and the user locks the screen, then at a certain point they will no longer have a WebSocket connection.
What events are fired when this happens? Is the WebSocket.onerror or WebSocket.onclose handler called, and if so, does this happen when the screen locks/the app is suspended, or when the app comes back up again?
(And bonus question: is this standardised, or do browsers behave differently, and if so, how?)
I've done some testing myself, and the answer seems to be: no events are fired. Although the connection does drop, no error or close events are fired, not even when the browser comes back up. Therefore, the main way to deal with this appears to be to periodically check the connection status, and reconnect if need be - with exponential back-off in case the connection drops server-side. (Or to have a library do this for you, though I haven't found a properly maintained client-side browser-based WebSocket library that does this yet.)
This seems corroborated by the author of this article:
Mobile devices introduce a new category of connection issues; if a mobile device is locked, goes to sleep or the application is moved to the background, an active WebSocket connection may become unresponsive and not close itself properly.
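A minimal sketch of the periodic check-and-reconnect approach described above (the URL, intervals, and back-off cap are placeholders):

var ws = null;
var retryDelay = 1000; // ms; doubled after each failed check

function connect() {
  ws = new WebSocket('wss://example.com/socket'); // placeholder URL
  ws.onopen = function () { retryDelay = 1000; }; // reset back-off
}

(function check() {
  // Note: a silently dropped connection may still report OPEN,
  // so a robust check can also send an application-level ping.
  if (!ws || ws.readyState === WebSocket.CLOSING ||
      ws.readyState === WebSocket.CLOSED) {
    connect();
    retryDelay = Math.min(retryDelay * 2, 30000); // exponential back-off
  }
  setTimeout(check, retryDelay);
})();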

Determining if a remote subscriber is temporarily disconnected

This is in the context of reconnection. As a moderator, I want to know when a remote subscriber (a client subscribing to the stream that I publish) temporarily drops its connection.
First, the subscriber events disconnect, reconnecting and reconnected are dispatched locally on the remote side, that is, on the client that loses its connection. The publisher receives no events about the remote connection that gets lost.
Second, I tried to use the signal API from the JS web SDK as well as the server SDK. The idea was to get an error when sending a signal to a temporarily disconnected client, but all I got were successful responses from both; from the server I get a 204 status code. So I understand the signals are queued and sent when the remote client reconnects.
So far I have found no way to determine, from a connected client, when another client loses its connection temporarily and enters the 'reconnecting' state.
The signal API only returns an error when the client is completely disconnected from the session, not temporarily disconnected.
I know I can mimic this with signals and timeouts, but that way the server has to trust my client; I really need to validate this on the server side as well, otherwise it would be 'hackable'.
Thanks.
TokBox Developer Evangelist here.
When any client loses its connection, a connectionDestroyed event is dispatched to all clients, including the client that lost the connection. The event object dispatched with it includes a reason property and a connection property.
The reason for the connectionDestroyed event varies depending on whether the client called the disconnect method on the Session object, was force-disconnected, or disconnected due to a network condition. In the context of a client temporarily losing its connection, the connectionDestroyed event will fire with the reason property of the event data set to networkDisconnected.
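For example, a minimal sketch of listening for this (assuming session is an already-connected OT.Session object):

session.on('connectionDestroyed', function (event) {
  if (event.reason === 'networkDisconnected') {
    // This client dropped because of a network condition,
    // not a deliberate disconnect or a force-disconnect.
    console.log('Connection lost: ' + event.connection.connectionId);
  }
});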
As for the signal queue, by default, any signals you send while the client is temporarily disconnected from a session are queued and sent when (and if) it successfully reconnects. You can set the retryAfterReconnect property to false in the options you pass into the Session.signal() method to prevent signals from being queued while the client is disconnected.
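A minimal sketch of opting out of that queueing (the type and data values are placeholders):

session.signal(
  {
    type: 'ping',
    data: 'are-you-there',
    retryAfterReconnect: false // drop instead of queueing while disconnected
  },
  function (error) {
    if (error) {
      console.log('signal failed: ' + error.message);
    }
  }
);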
For more information on Automatic Reconnection, please visit: https://tokbox.com/developer/guides/connect-session/js/#automatic_reconnection

Socket.io data loss when Internet speed drops

I am using socket.io 1.4 and I want to know what happens in this scenario:
The client emits like this:
socket.emit('test', data);
The client does three emits to the server, but suddenly the Internet speed drops and those emits may not reach the server.
After a while the Internet speed rises again, but what happens to the previously failed emits?
Will they be emitted again automatically?
How should I handle that?
Websockets use TCP, which is in general a reliable protocol. There is not exactly such a thing as "The internet speed dropped and I lost some messages." If some messages are lost they will be automatically retransmitted at the TCP level. If retransmission fails completely, the connection will be reset.
So what you are really asking is how socket.io handles this. The answer is that it has some amount of reconnection logic, and you may also want to monitor the connection in case it resets: hook up a listener for the disconnect event on the socket if you want to take some extra action (like notifying the user).
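A minimal sketch of that approach, assuming the server invokes the acknowledgement callback for each 'test' event it receives (the URL and event name are placeholders):

var socket = io('https://example.com'); // placeholder URL
var pending = []; // messages not yet acknowledged by the server

function send(data) {
  pending.push(data);
  // Messages already handed to a connection that later resets are
  // not retransmitted by socket.io, so confirm delivery with an ack.
  socket.emit('test', data, function () {
    pending.splice(pending.indexOf(data), 1); // server got it
  });
}

socket.on('disconnect', function () {
  // Connection reset; notify the user here if desired.
});

socket.on('reconnect', function () {
  var unacked = pending;
  pending = [];
  unacked.forEach(send); // resend anything never acknowledged
});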

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e., control frames, not regular messages) according to the WebSocket spec, so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e., Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. This keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pongs.
For example, you can have a message {token:'whatever'} that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that stops every time that message is received and restarts every time the message is echoed; if the timer fires, the connection can be considered dropped.
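A client-side sketch of that echo scheme (the message format and the 5 s/10 s values are illustrative, and ws is assumed to be an open WebSocket):

var aliveTimer = null;

function armTimer() {
  clearTimeout(aliveTimer);
  aliveTimer = setTimeout(function () {
    // No heartbeat within 10 s: consider the connection dropped.
    ws.close();
  }, 10000);
}

ws.onmessage = function (e) {
  var msg = JSON.parse(e.data);
  if (msg.token) {
    armTimer(); // heartbeat received, reset the watchdog
    setTimeout(function () {
      ws.send(JSON.stringify({ token: msg.token })); // echo back after 5 s
    }, 5000);
  }
};

armTimer(); // start the watchdog once the socket is open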
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism such as application-level pings.

Websocket onclose/onerror events do not fire if server crashes

I have observed the following behavior in Firefox 4 and Chrome 7:
If the server running the websocket daemon crashes, reboots, loses network connectivity, etc., then the onclose and onerror events are not fired on the client side. I would expect one of those events to fire when the connection is broken for any reason.
If however the daemon is shutdown cleanly first, then the 'onclose' event is fired (as expected).
Why do the clients perceive the websocket connection as open when the daemon is not shutdown properly?
I want to rely on the expected behavior to inform the user that the server has become unavailable or that the client's internet connection has suffered a disruption.
TCP is like that. The most recent WebSockets standard draft (v76) has a clean shutdown message mechanism, but without that (or if it doesn't have a chance to be sent) you are relying on normal TCP socket cleanup, which may take several minutes (or hours).
I would suggest adding some sort of signal handler/exit trap to the server so that when the server is killed/shutdown, a clean shutdown message is sent to all connected clients.
You could also add a heartbeat mechanism (a la TCP keep-alive) to your application to detect when the other side goes away.
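A sketch of the exit-trap idea, assuming a Node.js server built on the ws package, with wss an existing WebSocket.Server instance (not part of the original question's stack):

process.on('SIGINT', function () {
  wss.clients.forEach(function (client) {
    // A clean close frame makes browsers fire onclose immediately.
    client.close(1001, 'server shutting down'); // 1001 = going away
  });
  // Give the close frames a moment to flush before exiting.
  setTimeout(function () { process.exit(0); }, 1000);
});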
