Cowboy websocket handler: How to set timeout for first message?

I have a websocket connection between my server and a client that should be kept alive until the client closes it. When the connection opens, the client should authenticate itself by sending a first message containing a token. If the websocket handler does not receive such a message within some period after being spawned, it should terminate. How do I implement such a timeout?
NOTE: the idle_timeout option is not suitable, as I need a timeout only for the first message.

Just start a timer yourself in your websocket_init callback.
If the user authenticates before the timeout fires, just ignore it; you can achieve this by updating the handler state when the user authenticates.
erlang:start_timer documentation:
http://erlang.org/doc/man/erlang.html#start_timer-3
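
For illustration, here is a minimal sketch of the same pattern using Node's ws package (the package choice, names and timeout value are placeholders, not part of the original answer); in Cowboy you would instead call erlang:start_timer/3 in websocket_init, flip a flag in the handler state once the token checks out, and stop from websocket_info when the timeout message arrives while the state still says unauthenticated:

    // Sketch only: first-message auth timeout, shown with Node's "ws" package.
    import { WebSocketServer, WebSocket } from "ws";

    const AUTH_TIMEOUT_MS = 5000; // how long to wait for the token message (assumed value)

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket: WebSocket) => {
      let authenticated = false; // the "state" the answer says to update

      // Start the timer as soon as the handler is spawned.
      const timer = setTimeout(() => {
        if (!authenticated) socket.close(4401, "authentication timeout");
      }, AUTH_TIMEOUT_MS);

      socket.on("message", (data) => {
        if (!authenticated && isValidToken(data.toString())) {
          authenticated = true; // a timer that fires later is now ignored
          clearTimeout(timer);  // or keep it and just check the flag
        }
        // ... normal message handling ...
      });
    });

    // Hypothetical token check, stands in for your real validation.
    function isValidToken(token: string): boolean {
      return token.length > 0;
    }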

Related

In AWS WebSocket API (API Gateway), can I send a message to the client in the $connect route?

When my Lambda gets invoked via the $connect route, can I safely send a message to the connection ID at that point, or is the connection not yet fully established? Would it be a better idea to use an HTTP response header?
Yes, you should be able to, as you will have the client's connection ID. The docs say the connection is established when your integration execution completes.
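
As a hedged TypeScript sketch of what that send looks like (the SDK package is real, but the endpoint construction and payload here are illustrative assumptions, and per the docs quote above the connection only counts as fully established once the $connect integration completes):

    // Sketch only: posting to a websocket connection from a Node.js Lambda.
    import {
      ApiGatewayManagementApiClient,
      PostToConnectionCommand,
    } from "@aws-sdk/client-apigatewaymanagementapi";

    export const handler = async (event: any) => {
      const { connectionId, domainName, stage } = event.requestContext;

      // The management endpoint of this websocket API.
      const client = new ApiGatewayManagementApiClient({
        endpoint: `https://${domainName}/${stage}`,
      });

      // Send a message back to the connecting client by its connection ID.
      await client.send(
        new PostToConnectionCommand({
          ConnectionId: connectionId,
          Data: Buffer.from(JSON.stringify({ hello: "client" })),
        })
      );

      return { statusCode: 200 }; // completing the integration finalizes the connection
    };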

Determining if a remote subscriber is temporarily disconnected

This is in the context of reconnection. As a moderator, I want to know when a remote subscriber (a client subscribing to the stream that I publish) temporarily drops its connection.
First, the subscriber events disconnect, reconnecting and reconnected are dispatched locally on the remote side, that is, on the client that loses its connection. The publisher receives no events about the remote connection that gets lost.
Second, I tried to use the 'signal' JS web SDK as well as the server SDK. The idea was to get an error when sending a signal to a 'temporarily' disconnected client, but all I got was successful responses from both; from the server I get a 204 status code. So I understand the signals are queued and sent when the remote client reconnects.
So far I have found no way to determine, from a connected client, when another client loses its connection temporarily and enters the 'reconnecting' state.
The signal API only returns an error when the client gets completely disconnected from the session, not temporarily disconnected.
I know I can mimic this with signals and timeouts, but that way the server has to trust my client, and I really need to validate this on the server side as well, otherwise it would be 'hackable'.
Thanks.
TokBox Developer Evangelist here.
When any client loses connection, a connectionDestroyed event is dispatched to all clients, including the client that lost the connection. A ConnectionEvent object is dispatched with the event, which includes reason and connection properties.
The reason for the connectionDestroyed event varies depending on whether the client called the disconnect method on the Session object, was force-disconnected, or disconnected due to a network condition. In the context of a client temporarily losing connection, the connectionDestroyed event will fire with the reason property of the event data set to networkDisconnected.
As for the signal queue, by default, any signals you send while the client is temporarily disconnected from a session are queued and sent when (and if) it successfully reconnects. You can set the retryAfterReconnect property to false in the options you pass into the Session.signal() method to prevent signals from being queued while the client is disconnected.
For more information on Automatic Reconnection, please visit: https://tokbox.com/developer/guides/connect-session/js/#automatic_reconnection
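
A minimal TypeScript sketch of both points, assuming the OpenTok JS client (OT global) and an already connected session; the key, session ID and signal payload are placeholders:

    // Sketch only: watching for temporary network drops and opting out of signal queuing.
    declare const OT: any; // provided by the OpenTok/TokBox client library

    const session = OT.initSession("API_KEY", "SESSION_ID"); // placeholder credentials
    // ... session.connect(token, ...) is assumed to have succeeded ...

    // Fires for every client; reason is "networkDisconnected" on a network drop.
    session.on("connectionDestroyed", (event: any) => {
      if (event.reason === "networkDisconnected") {
        console.log("client", event.connection.connectionId, "lost its network");
      }
    });

    // retryAfterReconnect: false stops this signal from being queued for
    // temporarily disconnected clients.
    session.signal(
      { type: "ping", data: "hello", retryAfterReconnect: false },
      (error: any) => {
        if (error) console.error("signal failed:", error.name, error.message);
      }
    );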

What timeout does TIBCO EMS apply while waiting for an acknowledgement?

We are developing a solution using TIBCO-EMS, and we have a question about its behaviour.
When connecting in CLIENT_ACKNOWLEDGE mode, the client acknowledges the received message. We'd like to know how long TIBCO waits for the acknowledgement, and whether this time is configurable by the system admin.
By default, the EMS server waits forever for the acknowledgement of the message.
As long as the session is still alive, the transaction will not be discarded and the server waits for an acknowledgement or rollback.
There is, however, a server setting, disconnect_non_acking_consumers, under which the client will be disconnected if there are more pending (unacknowledged) messages than the queue limits allow it to store (maxbytes, maxmsgs). In this case, the server sends a connection reset to get rid of the client.
Sadly the documentation doesn't state this explicitly and the only public record I found was a knowledge base entry: https://support.tibco.com/s/article/Tibco-KnowledgeArticle-Article-33925
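
For reference, this would sit in the server configuration; the parameter name comes from the KB article above, while the exact file and name = value syntax shown here are assumptions:

    # tibemsd.conf (sketch; parameter name from the KB article, syntax assumed)
    disconnect_non_acking_consumers = enabled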

Understanding timeouts in websocket sessions

A websocket session is wrapped in an HTTP session, so when the HTTP session times out, the websocket session should also time out.
However, when only the first call is an HTTP call based on a session cookie, and everything after that is a directly established connection, how does the connection ever close in case of a timeout?
Scenario: we have a reverse proxy that manages the validation check on the sessions. It intercepts each call and checks the validity of the session.
In case the cookies have expired, it returns a 401.
Since I have integrated websockets into this system, the initial websocket call goes through this reverse proxy with a valid cookie, upgrades the request to websocket, and thereafter keeps sending messages directly. The reverse proxy is not aware of these direct messages sent over WS.
Now, when the HTTP session expires, the other calls made to the system get a 401. The WS connection above, however, does not know about it at all and continues to send and receive messages.
In case of a logout, invalidate is called on the HTTP session, so all the bound objects are notified and I get a SessionDisconnectEvent. In case of timeouts, however, I have no indication at all.
How should I terminate the WS connection in such cases?
Stack: Spring + SockJS + basic STOMP
My observation is that the websocket sessions bound to the HTTP session are not all terminated in case of logout; only the one that initiated the logout gets the SessionDisconnectEvent.
In case of timeouts, there are no indications at all.
To handle timeouts, I make a call to the server as soon as I receive a message on the client side and look for a 401 on this call. If it returns a 401, I initiate a session close on the client side.
To handle logout, I maintain a map from the HTTP session ID to all the websocket sessions associated with it. When I receive a disconnect on any of those websocket sessions, I terminate all the other WS sessions associated with that HTTP session.
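
A minimal TypeScript sketch of the client-side timeout probe, assuming @stomp/stompjs over a cookie-based session; the broker URL, destination and probe endpoint are made-up names:

    // Sketch only: probe session validity after each message; close the socket on 401.
    import { Client, IMessage } from "@stomp/stompjs";

    const client = new Client({ brokerURL: "wss://example.com/ws" }); // hypothetical URL

    client.onConnect = () => {
      client.subscribe("/topic/updates", async (message: IMessage) => {
        console.log("received", message.body);

        // Hit any authenticated endpoint right after each message; the reverse
        // proxy answers 401 once the session cookie has expired.
        const res = await fetch("/session-check", { credentials: "include" });
        if (res.status === 401) {
          await client.deactivate(); // close the websocket from the client side
        }
      });
    };

    client.activate();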

Indy FTP Client OnStatus not getting Disconnect events

I have set up an Indy IdFTP client to a FileZilla FTP server. The client tries to Connect on startup of my app and, if it fails, keeps retrying every few seconds for the lifetime of the app. In addition, I need to detect if I lose the connection and, again, keep trying to re-establish the connection. This is where I am having a problem.
I have added an OnStatus event handler which seems to fire for all the event types except hsDisconnecting and hsDisconnected.
I also have an OnDisconnected event handler, which only fires when I have locked the server: in that case, when I try to connect, it fires OnConnected and then immediately fires OnDisconnected. However, if I set the server as not active after the initial successful connection, the server tells me it has disconnected me, but I do not get an event in my code, so I don't know that I need to start trying to connect again. Am I wrong to expect these events in this scenario? Is there something else I should be listening for?
Thank you in advance for your help.
