I want to make a chat app with AutobahnAndroid. My problem is that after opening a WebSocket connection, the Wi-Fi icon in the top left corner of the phone constantly shows data transfer. Is there a way to solve this problem? For example, could it show the transfer icon when a new message is received, and otherwise not use the network, like Telegram does?
WebSocket connections are persistent. They do not actually use networking resources when no messages are being sent, but there is still an open socket, and maybe this triggers the transmission icon. Also, as pietro909 stated in one of the comments, you may be sending keep-alive pings to prevent the system or intermediaries from closing the connection.
In the end, you can't have both: a push channel from the server and network activity only when there are messages.
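For illustration, here is a minimal sketch (in Python with the `websockets` library rather than AutobahnAndroid, and with a made-up endpoint URL) showing that the only idle-time traffic on such a connection is the periodic keepalive ping, whose interval you can tune:

```python
# Sketch: a WebSocket chat client whose only idle-time traffic is the
# periodic keepalive ping. A longer ping_interval means less background
# traffic, but a dead connection takes longer to notice.
import asyncio
import websockets  # pip install websockets

async def chat(uri: str) -> None:
    # ping_interval/ping_timeout control the keepalive; these are the only
    # bytes exchanged while no chat messages are flowing.
    async with websockets.connect(uri, ping_interval=60, ping_timeout=20) as ws:
        async for message in ws:  # sits silently until the server pushes
            print("received:", message)

asyncio.run(chat("wss://chat.example.com/ws"))  # hypothetical endpoint
```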
I am setting up an MQTT/WebSockets server. My client is a Flutter app, which connects to the broker on the main screen and sends and receives messages from the broker on other screens. My understanding of keepAlive is that it is how often the client and server should exchange ping/pong to make sure the connection is still alive. That being said: if my Flutter app connects to the broker on the main screen with a keepAlive of 3600 seconds (1 hour) and is supposed to send and receive messages on other screens, and I disconnect the client from the internet for 2 minutes and then reconnect, it will not send/receive messages. Maybe my understanding of keepAlive is not correct. How would I structure my app/server to reconnect automatically as soon as the internet connection is back up again?
I have also tried the On.Disconnect method, which I noticed never gets called; the app still thinks it is connected to the broker.
I mentioned websockets in the tags, as I could do MQTT over websockets.
I see that no one else has responded, so I'll try (however, I'm new to this too).
Also, have you looked at the Flutter connectivity package?
From my reading of the MQTT specification, it seems the MQTT client **should** disconnect the TCP/IP connection if it doesn't receive a PINGRESP to its PINGREQ within the keep-alive period (i.e., it's not required to disconnect).
My Flutter + MQTT app checks the connection state, and reconnects if needed, every time it sends a message. I haven't needed to check for internet dropouts, but I have noticed the connection is lost on some application state changes. The main app widget is notified of these using didChangeAppLifecycleState() and sends a dummy message if needed.
So this doesn't answer exactly what you asked, but I hope it's useful anyway.
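To make that check-before-send pattern concrete, here is a rough sketch in Python with paho-mqtt (not Flutter/Dart; the broker host and topic are made up):

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# paho-mqtt >= 2.0 also wants a CallbackAPIVersion as the first argument.
client = mqtt.Client()
client.connect("broker.example.com", 1883, keepalive=60)  # hypothetical broker
client.loop_start()  # background thread services PINGREQ/PINGRESP

def publish_safely(topic: str, payload: str) -> None:
    # Check the connection state and reconnect if needed, every time we send.
    if not client.is_connected():
        try:
            client.reconnect()
        except OSError:
            return  # still offline; queue or drop the message instead
    client.publish(topic, payload)

publish_safely("chat/room1", "hello")  # hypothetical topic
```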
I have an active WebSocket connection in each tab of the browser.
When a message is received, every tab rings the bell, which is annoying.
Is there a way I can ring the bell only once even with multiple tabs open? (Facebook does this.)
If you control the server, you can unicast the "ring" event to a specific connection (I would suggest the latest one) while broadcasting the actual message, as you do now, to all the WebSocket connections (see the sketch below).
If you don't control the server, I doubt there is an easily applicable solution.
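As a rough sketch of the first approach, assuming a Python server built on the `websockets` library (connection tracking here is a flat list; a real app would group connections per logged-in user):

```python
import asyncio
import json
import websockets  # pip install websockets

connections = []  # most recent connection is last

async def handler(ws):  # websockets >= 11 passes just the connection
    connections.append(ws)
    try:
        async for _ in ws:
            pass  # this sketch only pushes; incoming messages are ignored
    finally:
        connections.remove(ws)

async def notify(message: str) -> None:
    # Broadcast the actual message to every tab...
    for ws in list(connections):
        await ws.send(json.dumps({"type": "message", "body": message}))
    # ...but unicast the bell to the latest connection only.
    if connections:
        await connections[-1].send(json.dumps({"type": "ring"}))

async def main() -> None:
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```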
I have a client/server setup in which clients send a single request message to the server and gets a bunch of data messages back.
The server is implemented using a ROUTER socket and the clients using a DEALER. The communication is asynchronous.
The clients are typically iPads/iPhones, and they connect over Wi-Fi, so the connection is not 100% reliable.
The issue I'm concerned about is this: the client connects to the server and sends a request for data, but before the response messages are delivered back, the communication goes down (e.g., out of Wi-Fi coverage).
In this case the messages will be queued up on the server side waiting for the client to reconnect. That is fine for a short time but eventually I would like to drop the messages and the connection to release resources.
By checking activity/timeouts, it would be possible for both the server and the client applications to identify that the connection is gone. The client can shut down the socket and in this way free resources, but how can it be done on the server?
Per the ZMQ FAQ:
How can I flush all messages that are in the ZeroMQ socket queue?
There is no explicit command for flushing a specific message or all messages from the message queue. You may set ZMQ_LINGER to 0 and close the socket to discard any unsent messages.
Per this mailing list discussion from 2013:
There is no option to drop old messages [from an outgoing message queue].
Your best bet is to implement heartbeating and, when a client stops responding without explicitly disconnecting, restart your ROUTER socket. Messy, I know; this is really something that should have a companion option to HWM. Pieter Hintjens (he created ZMQ) was clearly on board, but that was from 2011, so it looks like nothing ever came of it.
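A minimal pyzmq sketch of that remedy, assuming a hypothetical endpoint: set LINGER to 0 so queued messages for dead clients are discarded when the socket closes, then rebind:

```python
import zmq  # pip install pyzmq

ENDPOINT = "tcp://*:5555"  # hypothetical endpoint
ctx = zmq.Context.instance()

def fresh_router() -> zmq.Socket:
    sock = ctx.socket(zmq.ROUTER)
    sock.setsockopt(zmq.LINGER, 0)  # discard unsent messages on close
    sock.bind(ENDPOINT)
    return sock

router = fresh_router()
# ...later, when heartbeating decides a client is gone for good:
router.close()           # LINGER=0 drops everything still queued for it
router = fresh_router()  # rebind; surviving clients will reconnect
```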
This is a bit late, but setting TCP keepalive to a reasonable value will cause dead sockets to close after the timeouts have expired.
Heartbeating is necessary for either side to determine the other side is still responding.
The only thing I'm not sure about is how to go about heartbeating many thousands of clients without spending all available cpu just on dealing with the heartbeats.
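For reference, here is a sketch of that TCP keepalive suggestion applied to a pyzmq ROUTER socket (the timeout values are illustrative, and the options map to kernel settings that vary by OS):

```python
import zmq  # pip install pyzmq

ctx = zmq.Context.instance()
router = ctx.socket(zmq.ROUTER)
router.setsockopt(zmq.TCP_KEEPALIVE, 1)         # turn TCP keepalive on
router.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 60)   # first probe after 60 s idle
router.setsockopt(zmq.TCP_KEEPALIVE_INTVL, 10)  # then probe every 10 s
router.setsockopt(zmq.TCP_KEEPALIVE_CNT, 3)     # declare dead after 3 misses
router.bind("tcp://*:5555")  # hypothetical endpoint
```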
I'm using uWSGI's WebSockets support and so far it's looking great: the server detects when the client disconnects, and the client detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uWSGI sends PING/PONG frames (i.e., control frames rather than regular messages) according to the WebSocket spec, so from the client side I have no way of knowing when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e., Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement an additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. But this keep-alive system would be part of your application layer, rather than part of the transport layer like ping/pongs.
For example, you can have a message "{token:'whatever'}" that the server and client just echo back with a 5-second delay. The client should have a timer with a 10-second timeout that stops every time the message is received and restarts every time the message is echoed; if the timer fires, the connection can be considered dropped.
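A client-side sketch of that scheme, assuming Python's `websockets` library and the token payload from the example (`handle_message` is a hypothetical application handler):

```python
import asyncio
import json
import websockets  # pip install websockets

def handle_message(msg: dict) -> None:
    print("application message:", msg)  # hypothetical handler

async def echo_later(ws, raw: str) -> None:
    await asyncio.sleep(5)  # echo the keepalive back after 5 s
    await ws.send(raw)

async def listen(uri: str) -> None:
    # ping_interval=None: rely on the application-level token rather than
    # protocol-level PING frames (which the client can't observe anyway).
    async with websockets.connect(uri, ping_interval=None) as ws:
        while True:
            try:
                # 10 s watchdog: if neither the keepalive token nor a real
                # message arrives in time, treat the connection as dropped.
                raw = await asyncio.wait_for(ws.recv(), timeout=10)
            except asyncio.TimeoutError:
                print("keepalive missed; connection considered dropped")
                break
            msg = json.loads(raw)
            if msg.get("token") == "whatever":
                asyncio.create_task(echo_later(ws, raw))
            else:
                handle_message(msg)

asyncio.run(listen("wss://example.com/ws"))  # hypothetical endpoint
```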
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.
I have observed the following behavior in Firefox 4 and Chrome 7:
If the server running the websocket daemon crashes, reboots, loses network connectivity, etc., then the 'onclose' or 'onerror' events are not fired on the client side. I would expect one of those events to be fired when the connection is broken for any reason.
If however the daemon is shutdown cleanly first, then the 'onclose' event is fired (as expected).
Why do the clients perceive the websocket connection as open when the daemon is not shut down properly?
I want to rely on the expected behavior to inform the user that the server has become unavailable or that the client's internet connection has suffered a disruption.
TCP is like that. The most recent WebSocket standard draft (v76) has a clean-shutdown message mechanism, but without it (or if it doesn't have a chance to be sent) you are relying on normal TCP socket cleanup, which may take several minutes (or hours).
I would suggest adding some sort of signal handler/exit trap to the server so that when the server is killed/shutdown, a clean shutdown message is sent to all connected clients.
You could also add a heartbeat mechanism (à la TCP keepalive) to your application to detect when the other side goes away.
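A sketch of the signal-handler idea, assuming a Python server built on the `websockets` library (Unix-only, since it uses the event loop's signal handling): on SIGTERM, every connected client receives a proper close frame, so browsers fire `onclose` immediately:

```python
import asyncio
import signal
import websockets  # pip install websockets

clients = set()

async def handler(ws):
    clients.add(ws)
    try:
        async for _ in ws:
            pass  # sketch: ignore incoming messages
    finally:
        clients.discard(ws)

async def main() -> None:
    loop = asyncio.get_running_loop()
    stop = loop.create_future()
    loop.add_signal_handler(signal.SIGTERM, stop.set_result, None)
    async with websockets.serve(handler, "localhost", 8765):
        await stop
        # Clean close: each browser receives a close frame (1001 = "going
        # away"), so its onclose handler fires right away.
        await asyncio.gather(
            *(ws.close(code=1001, reason="server shutdown")
              for ws in list(clients))
        )

asyncio.run(main())
```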