I have set up an Indy IdFTP client connecting to a FileZilla FTP server. The client tries to connect on startup of my app and, if it fails, keeps retrying every few seconds for the lifetime of the app. In addition, I need to detect if I lose the connection and, again, keep trying to re-establish it. This is where I am having a problem.
I have added an OnStatus event handler which seems to fire for all the event types except hsDisconnecting and hsDisconnected.
I also have an OnDisconnected event handler, which only fires when I have locked the server: in that case, when I try to connect, OnConnected fires and then OnDisconnected fires immediately afterwards. However, if I set the server to not active after the initial successful connection, the server tells me it has disconnected me, but I do not get an event in my code, so I don't know that I need to start trying to connect again. Am I wrong to expect these events in this scenario, or is there something else I should be listening for?
Thank you in advance for your help.
I am setting up an MQTT/websockets server. My client is a Flutter app, which connects to the broker on the main screen and, on other screens, sends and receives messages from the broker. My understanding of keepAlive is that it is how often the client and server should exchange ping/pong, so they can make sure the connection is still alive. That being said: if my Flutter app connects to the broker on the main screen with a keepAlive of 3600 (1 hour) and is supposed to send and receive messages on other screens, and I disconnect the client from the internet for 2 minutes and then reconnect, it will not send/receive messages, so maybe my understanding of keepAlive is not correct. How would I structure my app/server so that it reconnects automatically as soon as the internet connection is back up again?
I have also tried the onDisconnect method, which I noticed never gets called, even though the app still thinks it is connected to the broker.
I mentioned websockets in the tags since I could do MQTT over websockets.
I see that no one else has responded, so I'll try (although I'm new to this also).
Also, have you looked at the Flutter connectivity package?
From my reading of the MQTT specification, it seems the MQTT client SHOULD disconnect the TCP/IP connection if it doesn't receive a PINGRESP to its PINGREQ within the keep-alive period (i.e. it's not required to disconnect).
My Flutter + MQTT app checks the connection state, and reconnects if needed, every time it sends a message. I haven't needed to check for internet dropouts, but I have noticed the connection is lost on some application state changes. The main app widget is notified of these using didChangeAppLifecycleState() and sends a dummy message if needed.
So this doesn't answer exactly what you asked, but I hope it's useful anyway.
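The question also mentions MQTT over websockets, so here is a minimal sketch of the check-before-send and auto-reconnect idea using the JavaScript mqtt.js client; the broker URL, port and topic are placeholders, and a Dart client would use the equivalent options of its own MQTT package:
// Sketch: auto-reconnecting MQTT-over-websockets client using the mqtt.js package.
// The broker URL, port and topic below are placeholders.
var mqtt = require('mqtt');

var client = mqtt.connect('ws://broker.example.com:9001', {
    keepalive: 60,         // seconds between PINGREQs while idle
    reconnectPeriod: 5000  // ms to wait before each reconnect attempt
});

client.on('connect', function () {
    console.log('connected, (re)subscribing');
    client.subscribe('app/messages');
});

client.on('offline', function () {
    // fired when the keepalive detects the broker is unreachable
    console.log('connection lost, mqtt.js will retry automatically');
});

client.on('error', function (err) {
    console.log('mqtt error:', err.message);
});

function safePublish(topic, payload) {
    // only publish when the client reports a live connection
    if (client.connected) {
        client.publish(topic, payload);
    }
}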
This is in the context of reconnection. As a moderator, I want to know when a remote subscriber (a client subscribed to the stream that I publish) temporarily drops its connection.
First, the subscriber events disconnect, reconnecting and reconnected are dispatched locally on the remote side, that is, on the client that loses its connection. The publisher receives no events about the remote connection that gets lost.
Second, I tried to use the signal API from both the JS (web) SDK and the server SDK. The idea was to get an error when sending a signal to a 'temporarily' disconnected client, but all I got was successful responses from both; from the server I get a 204 status code. So I understand the signals are queued and sent when the remote client gets reconnected.
So far I have found no way to determine, from a connected client, when another client temporarily loses its connection and enters the 'reconnecting' state.
The signal API only returns an error when the client gets completely disconnected from the session, not when it is temporarily disconnected.
I know I can mimic this with signals and timeouts, but that way I need the server to trust my client, and I really need to validate this on the server side as well; otherwise it would be 'hackable'.
Thanks.
TokBox Developer Evangelist here.
When any client loses its connection, a connectionDestroyed event is dispatched to all clients, including the client that lost the connection. The event object dispatched with it includes a reason property and a connection property.
The reason for the connectionDestroyed event varies depending on whether the client called the disconnect method on the Session object, was force-disconnected, or disconnected due to a network condition. In the context of a client temporarily losing its connection, the connectionDestroyed event will fire with the reason property of the event data set to networkDisconnected.
As for the signal queue, by default, any signals you send while the client is temporarily disconnected from a session are queued and sent when (and if) it successfully reconnects. You can set the retryAfterReconnect property to false in the options you pass into the Session.signal() method to prevent signals from being queued while the client is disconnected.
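As a rough sketch of how these pieces fit together in the OpenTok JS SDK (the apiKey, sessionId, token and signal type below are placeholders):
// Sketch: listening for connectionDestroyed and sending a non-queued signal.
var apiKey = 'APIKEY', sessionId = 'SESSIONID', token = 'TOKEN';   // placeholders

var session = OT.initSession(apiKey, sessionId);

session.on('connectionDestroyed', function (event) {
    // event.reason is 'clientDisconnected', 'forceDisconnected' or 'networkDisconnected'
    if (event.reason === 'networkDisconnected') {
        console.log('Client ' + event.connection.connectionId + ' dropped its network connection');
    }
});

session.connect(token, function (error) {
    if (error) { console.log(error.message); return; }

    // Do not queue this signal for clients that are temporarily disconnected
    session.signal({
        type: 'moderation',
        data: 'ping',
        retryAfterReconnect: false
    }, function (signalError) {
        if (signalError) { console.log('signal failed:', signalError.message); }
    });
});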
For more information on Automatic Reconnection, please visit: https://tokbox.com/developer/guides/connect-session/js/#automatic_reconnection
I have a high-performance client/server system programmed from scratch, and I am still improving it. The server uses overlapped I/O to handle connections, and it correctly handles disconnections and resource deallocation. At the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no data to receive after the final send from the client. This works well, and the server detects it as a graceful disconnection. Rarely, I have observed that when the connection is very slow the server doesn't detect this; I feel that the shutdown partial closure doesn't reach the server. How can I handle this? This is important: the server shouldn't keep this kind of connection around, because if it does the server cannot be stopped, and I do not want to force-close all such connections.
At the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no data to receive after the final send from the client.
It doesn't do that.
This works well.
It doesn't work at all. The shutdown command with SD_RECEIVE that you're using is completely pointless. A close, or a shutdown with SD_SEND or SD_BOTH, sends a FIN: shutdown with SD_RECEIVE does exactly nothing on the wire, and specifically it does not 'notify the server' of anything.
I feel that the shutdown partial closure doesn't reach the server.
It never reaches the server. Your code doesn't work the way you think it does. What reaches the server is the FIN, which in turn is the result of the close, not the shutdown SD_RECEIVE.
What you need here is a read timeout at the server end. As you're using select() or whatever is delivering you the events, you will have to implement the timeout manually yourself.
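Just to illustrate the bookkeeping, here is a minimal sketch of such a manual read timeout, written in Node.js terms rather than against your overlapped-I/O server; the idea is the same, keep a last-receive timestamp per connection and sweep it from a timer (the 60-second limit, sweep interval and port are arbitrary example values):
// Sketch of an application-level read timeout: remember when each connection
// last delivered data and periodically close the ones that have gone silent.
var net = require('net');

var IDLE_LIMIT_MS = 60 * 1000;
var lastActivity = new Map();   // socket -> timestamp of last received data

var server = net.createServer(function (socket) {
    lastActivity.set(socket, Date.now());

    socket.on('data', function () {
        lastActivity.set(socket, Date.now());
    });

    socket.on('close', function () {
        lastActivity.delete(socket);
    });
});

// Periodic sweep: anything idle past the limit is assumed dead and destroyed.
setInterval(function () {
    var now = Date.now();
    lastActivity.forEach(function (ts, socket) {
        if (now - ts > IDLE_LIMIT_MS) {
            socket.destroy();
        }
    });
}, 10 * 1000);

server.listen(9000);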
I start the server and the clients.
Then I close the server, and after a few minutes reopen it.
I get too many messages (from the client).
How do I reset it, to start clean?
The server code:
var io = require('socket.io').listen(3001);
var games = [];
console.log('start1 server.js');
setTimeout(clear_games, 30000);
io.sockets.on('connection', function(socket)
{
When your active socket.io clients lose the connection to the server, they will constantly try to reconnect until your server comes back online and they successfully reconnect. This is the normal and expected behavior, and it is generally a very useful feature, since the client will then automatically re-establish a connection after a temporary server restart or an interrupted network connection.
If you just want to start over for testing purposes, then close all the client web pages, shut down your server, then restart your server, then open the client web pages again. Because the client web pages will not be open when you start your server, it should not immediately get a bunch of reconnects.
If you never want the client to automatically reconnect, you can specify that in the client code that connects to the server by passing the option {reconnection: false} (though that is generally a bad idea, because then the client won't auto-recover if the connection is temporarily interrupted).
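For reference, assuming a typical browser client connecting to the server from the question, that option looks something like this (the URL is a placeholder):
// Sketch: socket.io client that will NOT try to reconnect after losing the server.
var socket = io('http://localhost:3001', { reconnection: false });

socket.on('disconnect', function (reason) {
    // With reconnection disabled, this socket stays disconnected for good.
    console.log('disconnected:', reason);
});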
What I did is:
In the client, I added a monitor function that is invoked (via setTimeout) every second and checks socket.connected.
If it is not connected, I don't send any messages to the server until I see that it is true (connected) again.
This solved the problem.
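A minimal sketch of that monitor, assuming a browser socket.io client (the URL, the one-second interval and the event name are just examples):
// Sketch: only emit while socket.connected is true; otherwise hold messages back.
var socket = io('http://localhost:3001');
var outbox = [];   // messages held back while the connection is down

function send(msg) {
    outbox.push(msg);
}

// Monitor runs every second, mirroring the approach described above.
setInterval(function () {
    while (socket.connected && outbox.length > 0) {
        socket.emit('message', outbox.shift());   // event name is an example
    }
}, 1000);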
I have observed the following behavior in Firefox 4 and Chrome 7:
If the server running the websocket daemon crashes, reboots, loses network connectivity, etc., then the 'onclose' or 'onerror' events are not fired on the client side. I would expect one of those events to be fired when the connection is broken for any reason.
If, however, the daemon is shut down cleanly first, then the 'onclose' event is fired (as expected).
Why do the clients perceive the websocket connection as open when the daemon is not shut down properly?
I want to rely on the expected behavior to inform the user that the server has become unavailable or that the client's internet connection has suffered a disruption.
TCP is like that. The most recent WebSockets standard draft (v76) has a clean shutdown message mechanism, but without that (or if it doesn't have a chance to be sent) you are relying on normal TCP socket cleanup, which may take several minutes (or hours).
I would suggest adding some sort of signal handler/exit trap to the server so that when the server is killed/shutdown, a clean shutdown message is sent to all connected clients.
You could also add a heartbeat mechanism (ala TCP keep alive) to your application to detect when the other side goes away.
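A minimal client-side sketch of such a heartbeat (the URL, the ping/pong message format and the timeouts are examples; the server side would need to echo the ping back):
// Sketch: application-level heartbeat over a browser WebSocket.
// If no pong arrives within PONG_TIMEOUT, assume the server is gone and close.
var PING_INTERVAL = 10000;  // ms between pings
var PONG_TIMEOUT  = 5000;   // ms to wait for a reply

var ws = new WebSocket('ws://example.com/socket');
var pongTimer = null;

ws.onmessage = function (event) {
    if (event.data === 'pong') {
        clearTimeout(pongTimer);   // server is still there
    }
};

ws.onclose = function () {
    // Whether closed by the heartbeat or by a clean server shutdown,
    // this is the point to tell the user the connection is gone.
    console.log('connection closed');
};

setInterval(function () {
    if (ws.readyState !== WebSocket.OPEN) {
        return;
    }
    ws.send('ping');
    pongTimer = setTimeout(function () {
        // No pong in time: treat the connection as dead.
        ws.close();
    }, PONG_TIMEOUT);
}, PING_INTERVAL);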