Would switching access points drop a WebSocket connection?

My question says it all. I am walking around a site with a WebSocket app that works on my phone, and there are numerous wireless access points around the area.
Still, I have a very hard time maintaining a connection. Is it possible that when I go from one area to another, switching access points (not Wi-Fi networks) would cause my WebSocket connection to break?
If it does, is there a way to immediately detect this?
Thanks

If the WebSocket library used in your app does not detect a dropped connection and retry the connection, then your app will be out of luck. That's why any good WebSocket library must be able to recover from a broken connection.
You can detect an outage in two ways: either wait for the OS to tell you the underlying TCP connection is broken (which can take many minutes), or implement an "are we still connected" protocol where you send something to the server every so often and require a response (if no response arrives, you assume you are disconnected).
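As a rough illustration of that second approach, here is a minimal sketch of an application-level heartbeat plus reconnect loop, using Python's websockets library (the library choice, URL and intervals are assumptions; the original question doesn't say what the phone app is built with):

import asyncio
import websockets

HEARTBEAT_INTERVAL = 5   # seconds between probes
HEARTBEAT_TIMEOUT = 3    # how long to wait for the reply

async def run(url):
    while True:                                   # outer loop: reconnect after a failure
        try:
            async with websockets.connect(url) as ws:
                while True:
                    await asyncio.sleep(HEARTBEAT_INTERVAL)
                    pong_waiter = await ws.ping()                     # send a ping frame
                    await asyncio.wait_for(pong_waiter, HEARTBEAT_TIMEOUT)
        except (OSError, websockets.ConnectionClosed, asyncio.TimeoutError):
            # No pong (or the socket errored): assume the access-point roam
            # broke the connection and try again shortly.
            await asyncio.sleep(1)

asyncio.run(run("wss://example.com/socket"))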

Related

ZeroMQ connect to physically non connected socket

I'm trying to understand whether ZeroMQ can connect a PUB or SUB socket to an IP address that does not exist yet. Will it automatically connect when this IP address appears in the future?
Or should I check for its existence first before connecting?
Is the behavior the same for PUB and SUB sockets?
The answer is buried somewhat in the manual, here:
for most transports and socket types the connection is not performed immediately but as needed by ØMQ. Thus a successful call to zmq_connect() does not mean that the connection was or could actually be established. Because of this, for most transports and socket types the order in which a server socket is bound and a client socket is connected to it does not matter. The ZMQ_PAIR sockets are an exception, as they do not automatically reconnect to endpoints.
As that quote says, the order of binding and connecting does not matter. This is extremely useful, as you don't then have to worry about start-up order; the client will be quite happy waiting for a server to come online, able to run other things without blocking on the connect.
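As a small sketch of that ordering (pyzmq assumed; the address and delays are just placeholders), the SUB side connects first and the message still gets through once the PUB side binds:

import time
import zmq

ctx = zmq.Context()

sub = ctx.socket(zmq.SUB)
sub.setsockopt_string(zmq.SUBSCRIBE, "")
sub.connect("tcp://127.0.0.1:5556")   # nothing is listening here yet - no error

time.sleep(2)                         # pretend the "server" starts late

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")      # the SUB socket connects in the background

time.sleep(0.5)                       # give the subscription time to propagate
pub.send_string("hello")
print(sub.recv_string())              # prints "hello"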
Other Things That Are Useful
The direction of bind/connect is independent of the pattern used on top; thus a PUB socket can be connected to a SUB socket that has been bound to an interface (whereas the other way round might feel more natural).
The other thing that I think a lot of people don't realise is that you can bind (or connect) sockets more than once, to different transports. So a PUB socket can quite happily send to SUB clients that are local in-process threads, other processes on the same machine via ipc, and clients on remote machines via tcp.
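For example, a single PUB socket bound to three transports at once could look like this (pyzmq again; the endpoint names are hypothetical, and ipc endpoints only exist on Unix-like systems):

import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://updates")        # other threads in this process
pub.bind("ipc:///tmp/updates.ipc")  # other processes on the same machine
pub.bind("tcp://*:5557")            # clients on remote machines
pub.send_string("one message, delivered over all three transports")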
There are other things that you can do. If you use the ZMQ_FD option from here, you can get ZMQ_EVENT notifications in some way or other (I can't remember the detail) which will tell you when the underlying connection has been successfully made. Using the file descriptor allows you to include that in a zmq_poll() (or some other reactor like epoll() or select()). You can also exploit the heartbeat functionality that a socket can have, which will tell you if the connection dies for some reason or other (e.g. a crashed process at the other end, or a network cable falling out). Use of a reactor like zmq_poll(), epoll() or select() means that you can have a pure actor-model, event-driven system, with no need to routinely check up on status flags, etc.
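A sketch of those last two facilities with pyzmq, using the socket monitor (pyzmq's way of surfacing the ZMQ_EVENT notifications) plus ZMTP heartbeats; the heartbeat options need libzmq >= 4.2, and the endpoint is hypothetical:

import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt_string(zmq.SUBSCRIBE, "")
sub.setsockopt(zmq.HEARTBEAT_IVL, 1000)      # probe the peer every second
sub.setsockopt(zmq.HEARTBEAT_TIMEOUT, 3000)  # declare the link dead after 3 s of silence

monitor = sub.get_monitor_socket()           # PAIR socket carrying connection events
sub.connect("tcp://127.0.0.1:5556")

poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
poller.register(monitor, zmq.POLLIN)

while True:
    for sock, _ in poller.poll():
        if sock is monitor:
            event = recv_monitor_message(monitor)
            print("link event:", event["event"])   # e.g. EVENT_CONNECTED, EVENT_DISCONNECTED
        else:
            print("data:", sub.recv_string())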
Using these facilities in ZMQ allows you to make very robust distributed applications/systems that know when various bits of themselves have died, come back to life, taken a network-out holiday, etc. For example, just knowing that a link is dead might mean that a node in your distributed app changes its behaviour somehow to adapt.

Pinging client with Websocket server

I have a WebSocket connection being served from http-kit (Clojure, and it works great). I send pings from the client to make sure we're still connected, and everything works fine there. My question is, do people bother pinging the client from the server in these cases?
I was trying to set something up to remove the channel from the server if I didn't get a response, but it's not very functional-friendly to set up timed processes and alter state to track the ping-pong cycle, so it was getting a little ugly. Then I thought: the server can handle hundreds of thousands of simultaneous connections, so should I just not worry about a few broken threads? How do people typically handle (or not handle) this?
The WebSocket protocol itself has heartbeating (ping/pong control frames) to keep the connection alive. If you wanted an additional layer on top of that, you could use the STOMP protocol, which coordinates heartbeats between client and server.
The one STOMP implementation I know of for the JVM is Stampy. There's one for JS too, stompjs. Note: the heartbeat implementation differs between these libs; I believe the Stampy one is incorrect, so you'd have to roll your own.
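If you do decide to ping from the server side, the shape of it is roughly this (sketched with Python's websockets library purely for illustration; the actual server in the question is http-kit/Clojure, and the intervals and port are assumptions):

import asyncio
import websockets

PING_INTERVAL = 10   # seconds between server-side probes
PING_TIMEOUT = 5     # how long to wait for the client's pong

async def handler(ws):                       # one handler task per connected client
    try:
        while True:
            await asyncio.sleep(PING_INTERVAL)
            pong_waiter = await ws.ping()
            await asyncio.wait_for(pong_waiter, PING_TIMEOUT)
    except (asyncio.TimeoutError, websockets.ConnectionClosed):
        await ws.close()                     # no pong: drop the dead channel

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()               # run forever

asyncio.run(main())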

Is it possible for a websocket frame to fail to arrive?

As I understand it, WebSockets use a ping to detect that they are still connected. The exception, of course, is Chrome, which leaves it to apps to do the ping themselves.
I'd like to understand whether it's possible for a connection to become unstable between pings, such that a frame of data is not received, but to stabilize again by the time the next ping is sent. In other words: is it possible to have an apparently good WebSocket connection, but for data to fail to arrive?
This question relates to "Is it possible to miss websocket events", which remains unanswered and got side-tracked into long-polling and socket.io.
Thanks!
This is heavily dependent on the client software (browser) that you use.
WebSockets depend on a TCP connection, which will make sure the message arrives at its destination (except, of course, if the network connection is down).
However, some clients (browsers) will suspend inactive tabs and will not process their events. If your page is inactive, it may fail to send data to the server because your code is not being executed at all; likewise, it may fail to receive data because the handler is not being executed at all.
Meanwhile, even if the tab is inactive, the machine will still receive the ping packets. So it really comes down to whether or not your client software hands the data back to your code.

Socket.io data loss when Internet speed drops

I am using socket.io 1.4 and I want to know what happens in this scenario:
The client emits like this:
Socket.emit('test',data);
The client does 3 emits to the server, but suddenly the Internet speed drops and those emits may not reach the server.
After a while the Internet speed recovers, but what will happen to the previously failed emits?
Will they be emitted again automatically?
How should I handle that?
Websockets use TCP, which is in general a reliable protocol. There is not exactly such a thing as "The internet speed dropped and I lost some messages." If some messages are lost they will be automatically retransmitted at the TCP level. If retransmission fails completely, the connection will be reset.
So what you really are asking is how socket.io handles this. The answer is that it has some reconnection logic, and you may also want to monitor the connection in case it resets (hook up a listener for the disconnect event on the socket) if you want to take some extra action (like notifying the user).
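A sketch of that monitor-and-buffer pattern, using the python-socketio client for illustration (the question uses the JavaScript client; the 'test' event, URL and the pending queue are assumptions):

import socketio
from socketio.exceptions import BadNamespaceError

sio = socketio.Client(reconnection=True, reconnection_delay=1)
pending = []   # app-level buffer for emits attempted while disconnected

@sio.event
def connect():
    while pending:                 # flush anything queued during the outage
        sio.emit('test', pending.pop(0))

@sio.event
def disconnect():
    print("link lost; queueing emits until we reconnect")

def send(data):
    try:
        sio.emit('test', data)
    except BadNamespaceError:      # raised when we are not currently connected
        pending.append(data)

sio.connect('http://localhost:3000')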

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A > Accept the connection attempt right away but queue the client until its turn?
B > Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons that I'm aware of for both options. Hopefully this might help you decide.
A)
Upon a new client connection, it could send tons of data, making your receive buffer fill up, which causes unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket (see the sketch after this answer).
B)
If you don't accept the connection after a certain period (I guess the default is 60 seconds), it will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if the client is also designed by you, make the socket non-blocking, try to connect, and then manage the timeout as you wish (also sketched below).
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?
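For what it's worth, here are the two suggestions above (SO_KEEPALIVE on the queued client socket, and a client-side connect with an explicit timeout) sketched with Python's socket module; the question itself is about WinSock/C, so treat this only as an illustration of the idea:

import socket

def keep_client_alive(client_sock: socket.socket) -> None:
    # Option A: stop the queued-but-idle client connection from being dropped.
    client_sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

def connect_with_timeout(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    # Option B, client side: bound the connect attempt ourselves instead of
    # relying on the stack's default (settimeout here stands in for the
    # non-blocking connect + select dance).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)       # applies to connect() as well
    try:
        sock.connect((host, port))
    except OSError:                # includes socket.timeout
        sock.close()
        raise
    sock.settimeout(None)          # back to blocking mode once connected
    return sock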
