ZeroMQ connect to physically non-connected socket

I'm trying to understand whether ZeroMQ can connect a PUB or SUB socket to an IP address that does not exist yet. Will it automatically connect when this IP address appears in the future?
Or should I check for its existence first before connecting?
Is the behavior the same for PUB and SUB sockets?

The answer is buried somewhat in the manual, here:
for most transports and socket types the connection is not performed immediately but as needed by ØMQ. Thus a successful call to zmq_connect() does not mean that the connection was or could actually be established. Because of this, for most transports and socket types the order in which a server socket is bound and a client socket is connected to it does not matter. The ZMQ_PAIR sockets are an exception, as they do not automatically reconnect to endpoints.
As that quote says, the order of binding and connecting does not matter. This is extremely useful, as you don't then have to worry about start-up order; the client will be quite happy waiting for a server to come online, able to run other things without blocking on the connect.
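For example (a minimal sketch using the libzmq C API; the address and port are just illustrations), a SUB socket can connect to an endpoint that nothing is listening on yet, and it will simply start receiving once a publisher binds there:

    /* Sketch: SUB connects before the PUB endpoint exists.
       Assumes libzmq (zmq.h); the endpoint is an arbitrary example. */
    #include <zmq.h>
    #include <stdio.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *sub = zmq_socket(ctx, ZMQ_SUB);

        /* Nothing is listening on this address yet; the call still succeeds
           and libzmq keeps retrying the TCP connection in the background. */
        zmq_connect(sub, "tcp://192.0.2.10:5556");
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);   /* subscribe to everything */

        char buf[256];
        /* Blocks until a publisher eventually binds that address and sends. */
        int n = zmq_recv(sub, buf, sizeof buf - 1, 0);
        if (n >= 0) {
            buf[n] = '\0';
            printf("received: %s\n", buf);
        }

        zmq_close(sub);
        zmq_ctx_destroy(ctx);
        return 0;
    }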
Other Things That Are Useful
The direction of bind/connect is independent of the pattern used on top; thus a PUB socket can be connected to a SUB socket that has been bound to an interface (whereas the other way round might feel more natural).
The other thing that I think a lot of people don't realise is that you can bind (or connect) sockets more than once, to different transports. So a PUB socket can quite happily send to SUB clients that are both local in-process threads, other processes on the same machine via ipc, and to clients on remote machines via tcp.
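A sketch of that (the endpoint names here are made up for illustration):

    /* Sketch: one PUB socket bound to three transports at once. */
    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *pub = zmq_socket(ctx, ZMQ_PUB);

        zmq_bind(pub, "inproc://updates");        /* threads in this process   */
        zmq_bind(pub, "ipc:///tmp/updates.ipc");  /* other processes, same box */
        zmq_bind(pub, "tcp://*:5556");            /* remote machines           */

        zmq_send(pub, "hello", 5, 0);             /* fans out to all subscribers */

        zmq_close(pub);
        zmq_ctx_destroy(ctx);
        return 0;
    }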
There are other things that you can do. If you use the ZMQ_FD option from here, you can get ZMQ_EVENT notifications in some way or other (I can't remember the detail) which will tell you when the underlying connection has been successfully made. Using the file descriptor allows you to include that in a zmq_poll() (or some other reactor like epoll() or select()). You can also exploit the heartbeat functionality that a socket can have, which will tell you if the connection dies for some reason or other (e.g. a crashed process at the other end, or a network cable falling out). Use of a reactor like zmq_poll(), epoll() or select() means that you can have a pure actor-model, event-driven system, with no need to routinely check up on status flags, etc.
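For what it's worth, here is a sketch of one way to get those notifications: zmq_socket_monitor() publishes per-socket events (connected, disconnected, etc.) on an inproc endpoint that you read with a PAIR socket. The monitor endpoint name and the target address below are arbitrary:

    /* Sketch: being told when the underlying TCP connection actually comes up.
       Assumes libzmq 4.x; the inproc monitor endpoint name is arbitrary. */
    #include <zmq.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);

        /* Ask libzmq to publish connection events for this socket, then
           read them from a PAIR socket connected to the monitor endpoint. */
        zmq_socket_monitor(sub, "inproc://sub-monitor", ZMQ_EVENT_ALL);
        void *mon = zmq_socket(ctx, ZMQ_PAIR);
        zmq_connect(mon, "inproc://sub-monitor");

        zmq_connect(sub, "tcp://192.0.2.10:5556");   /* may not exist yet */

        for (;;) {
            /* Frame 1: event number (16 bits) + event value (32 bits). */
            zmq_msg_t frame;
            zmq_msg_init(&frame);
            if (zmq_msg_recv(&frame, mon, 0) == -1)
                break;
            uint16_t event;
            memcpy(&event, zmq_msg_data(&frame), sizeof event);
            zmq_msg_close(&frame);

            /* Frame 2: the endpoint the event refers to. */
            zmq_msg_init(&frame);
            zmq_msg_recv(&frame, mon, 0);
            zmq_msg_close(&frame);

            if (event == ZMQ_EVENT_CONNECTED)
                printf("connection established\n");
            else if (event == ZMQ_EVENT_DISCONNECTED)
                printf("connection lost\n");
        }

        zmq_close(mon);
        zmq_close(sub);
        zmq_ctx_destroy(ctx);
        return 0;
    }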
Using these facilities in ZMQ allows you to build very robust distributed applications/systems that know when various bits of themselves have died, come back to life, taken a network-out holiday, etc. For example, just knowing that a link is dead might mean that a node in your distributed app changes its behaviour somehow to adapt to that.

Related

How to properly manage sockets for simultaneous battles in a multiplayer strategy game

I'm building a strategy game where players can battle each other. For now I'm focusing on 1v1 PvP battles, but I also want to build an architecture that will allow further extension up to 3v3 battles.
The game I'm creating is based on a socket client/server architecture. Every player who enters the game and presses the "Find match" button will be placed in a separate battle against one of the other players.
However, I have so many questions about how to structure the sockets:
Do I need a separate socket ("room socket") for each simultaneous battle?
Who should create and bind the room socket? If it's the client, how can the server connect to this socket if the client's ports are closed? If it's the server, see point 3.
Is it possible to bind all of these sockets to one port? How can the client connect to "his" socket if the addresses and the ports are the same?
When and how should I open "room sockets" so that each client gets a corresponding endpoint? How do I write this on the server side?
How many sockets do I need for matchmaking queue ("welcome sockets")?
Should I use multithreaded programming, or is it possible to go without it?
I will be grateful for any help with this.
P.S. Since the language I'm writing my server in isn't very prevalent, I can't use any ready-made solutions.
From your question I suspect you could benefit from reviewing the Beej Guide to Network Programming.
Do I need a separate socket ("room socket") for each simultaneous battle?
I'm not sure what you mean by a "room" socket. If you mean that a different listening socket will be assigned per game, then that wouldn't be practical.
The normal way to go about it is to have the server listen on a single socket (address/port) and have each client connect to the server's socket.
This means that the server will have a socket per active client + a listening socket and each client implementation will have a single socket (the connecting socket).
For a 1:1 game, the sockets can be "matched" into a pair by the server, making that pair into a room.
For a 1:many game you might consider using a pub/sub pattern by implementing "channels" and "subscriptions"... However, since (I'm assuming) a player can only enter a single game at a time, you might consider making an array or linked list of players per game.
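To make that concrete, a "room" on the server can be nothing more than a little bookkeeping around the accepted client sockets; a sketch (all names are invented for illustration):

    /* Illustrative only: a "room" is just bookkeeping around two accepted sockets. */
    typedef struct {
        int fd;           /* socket returned by accept()         */
        int room_id;      /* -1 while waiting in the match queue */
    } player_t;

    typedef struct {
        player_t *players[2];   /* 1v1; grow the array for 3v3 later   */
        int       state;        /* e.g. waiting, in_progress, finished */
    } room_t;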
Is it possible to bind all of these sockets to one port? How can the client connect to "his" socket if the addresses and the ports are the same?
Yes, it's possible and this is how servers actually work.
A listening socket behaves slightly differently from a connection socket, in the sense that a listening socket can "accept" connections and create a new socket per connection.
When and how should I open "room sockets" so that each client gets a corresponding endpoint? How do I write this on the server side?
This is language dependent. Most languages have some kind of variation on the functions listed in the Beej Guide to Network Programming.
Generally a server will call listen and then create new client sockets using accept. A client will call connect and have a single socket.
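In C terms (the calls covered in the Beej guide), the server side boils down to something like this sketch; the port number and backlog are arbitrary:

    /* Sketch of the classic TCP server setup: one listening socket,
       one new socket per accepted client (POSIX sockets). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(4000);        /* the one "welcome" port */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 16);                      /* backlog of pending clients */

        for (;;) {
            /* Each accept() returns a brand-new socket for that client;
               the listener keeps listening on the same port. */
            int client = accept(listener, NULL, NULL);
            if (client < 0)
                continue;
            /* hand `client` to the matchmaking queue / a room here */
            close(client);                         /* placeholder */
        }
    }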
How many sockets do I need for matchmaking queue ("welcome sockets")?
For a 1:1 game you will need a single socket "queued" as it waits for the next available connection.
Of course, this might be more complex. If a client has a game requirement (e.g., only players level 10 and up), you might require an ordered list or another data store to manage the queue.
Should I use multithreaded programming, or is it possible to go without it?
You can probably run thousands of games on a single machine with a single thread if you use an evented (non-blocking) design.
This really depends on how much work is performed on the server vs. how much work is performed on the client's computer.
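As a sketch of what that evented design can look like (POSIX poll(), reusing the listening socket from the previous sketch; limits and buffer sizes are arbitrary):

    /* Sketch of a single-threaded, evented server loop using poll(). */
    #include <poll.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_CONNS 1024

    void event_loop(int listener)
    {
        struct pollfd fds[MAX_CONNS];
        int nfds = 1;
        fds[0].fd = listener;
        fds[0].events = POLLIN;

        for (;;) {
            poll(fds, nfds, -1);                  /* sleep until something happens */

            if ((fds[0].revents & POLLIN) && nfds < MAX_CONNS) {
                /* New player: accept and start watching their socket too. */
                int client = accept(listener, NULL, NULL);
                fds[nfds].fd = client;
                fds[nfds].events = POLLIN;
                fds[nfds].revents = 0;
                nfds++;
            }

            for (int i = 1; i < nfds; i++) {
                if (fds[i].revents & POLLIN) {
                    char buf[512];
                    ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
                    if (n <= 0) {
                        close(fds[i].fd);         /* player disconnected */
                        fds[i] = fds[--nfds];     /* compact the array   */
                        i--;
                    } else {
                        /* dispatch buf to this player's room / game logic */
                    }
                }
            }
        }
    }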

Pinging client with Websocket server

I have a WebSocket connection being served from http-kit (Clojure, and it works great). I send pings from the client to make sure we're still connected, and everything works fine there. My question is: do people bother pinging the client from the server in these cases?
I was trying to set something up to remove the channel from the server if I didn't get a response, but it's not very functional-friendly to set up timed processes and alter state to track the ping-pong cycle, so it was getting a little ugly. Then I thought, the server can handle hundreds of thousands of simultaneous connections, should I just not worry about a few broken threads? How do people typically handle (or not handle) this?
The WebSocket protocol itself has heartbeating to keep the connection alive. If you wanted an additional layer on top of that, you could use the STOMP protocol, which coordinates heartbeats between client and server.
The one STOMP implementation I know of for the JVM is Stampy. There's one for JS too, stompjs. Note: the heartbeat implementation differs between these libs; I believe the Stampy one is incorrect. You'd have to roll your own.
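If you do roll your own, the server-side bookkeeping usually amounts to remembering when each connection last answered and periodically sweeping out the stale ones; a rough sketch (the types, helper callbacks and timings are all invented for illustration):

    /* Illustrative sketch of roll-your-own heartbeat bookkeeping. */
    #include <time.h>

    #define PING_INTERVAL   30   /* seconds of silence before pinging     */
    #define PONG_DEADLINE   60   /* drop after this long with no response */

    typedef struct {
        int    fd;
        time_t last_pong;        /* updated whenever a pong arrives */
    } conn_t;

    /* Called from a timer or from each pass of the event loop. */
    void sweep(conn_t *conns, int *nconns, time_t now,
               void (*send_ping)(int fd), void (*drop)(int fd))
    {
        for (int i = 0; i < *nconns; i++) {
            if (now - conns[i].last_pong > PONG_DEADLINE) {
                drop(conns[i].fd);               /* remove the dead channel */
                conns[i] = conns[--*nconns];
                i--;
            } else if (now - conns[i].last_pong > PING_INTERVAL) {
                send_ping(conns[i].fd);          /* nudge a quiet client */
            }
        }
    }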

Design/Architecture: web-socket one connection vs multiple connections

When designing a client/server architecture, is there any advantage to multiplexing multiple WebSocket connections from the same process to the server (i.e. sharing one connection) vs. opening one WebSocket connection per thread/session in the client (as is typically done when connecting to memcached or database servers)?
I'm aware of the overhead associated with each connection (e.g. RAM ...), but I expect to have at most 1K-10K connections on each client side.
Specific use case:
Let's assume I have a remote server with multiple sessions on one side, and on the other side I have multiple clients; each client will connect to a different session through the WebSocket server.
On the remote server, there are two ways to implement it: (1) each session creates its own WebSocket connection, or (2) all sessions use the same WebSocket connection.
From a connection point of view, I like the sharing solution (one WebSocket connection for all sessions), because the WebSocket server is limited by the number of connections available (saving servers/scaling).
But from a traffic/data-speed/performance point of view, if the sessions send lots of small packages through the connection, then with one shared connection we may not be able to utilize the bandwidth well (collecting a few small packages into one, or splitting a big package into small ones), because we may have to send different packages to different clients from different sessions. In that case we cannot batch the small packages, since they have different destinations and different sources, unless we create a "virtual connection" that manages each session's data to maximize the speed, but that would add a lot of implementation complexity.
Any other opinions?
Thanks,
JB.
I think you should consider using a limited connection pool, as is done with database connection architectures.
Another solution I would consider is a Pub/Sub database middleman such as Redis. This allows you to use existing solutions as well as easier scalability.
To the best of my understanding, both having a single connection and using a multitude of connections have their issues.
For example, one connection can send only one message at a time. A big enough message could block the connection... are you moving big data?
Many connections can cause overhead that is very expensive, as well as introducing more chances for errors. Consider the following:
Creating new connections is very expensive: it uses bandwidth, suffers from longer network delays and requires local resources, which is exactly what WebSockets allow us to avoid.
You will run into scalability issues. For instance, Heroku limits WebSocket connections to 600 per server, or at least they did so a short while back (and I think that's reasonable)... How will you connect all the servers together to one data store?
Remember that every OS has an open-file limit and that WebSockets use the IO architecture (each one is an 'open file'), so WebSocket connections are a limited resource.
Regarding traffic/data speed/performance, it is a question of server architecture... but I believe you will actually see a slight speed increase by using one connection (or a small pool of connections). It's important to remember that there isn't any effective multi-tasking when you need to send TCP/IP packets.
Also, with a limited number of connections (even with one connection), you will be able to benefit from the OS's packet-joining feature, which allows a number of WebSocket frames to be sent in one TCP/IP packet (unless you constantly flush the TCP/IP socket). You will actually waste more bandwidth with more connections, even disregarding the bandwidth used to open each new connection.
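If you do go with a shared connection, the "virtual connection" idea from the question usually reduces to tagging every application frame with a session id so the other end can demultiplex; a rough sketch (the header layout is invented for illustration, with no endianness handling):

    /* Sketch of multiplexing many sessions over one shared connection. */
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint32_t session_id;   /* which logical session this payload belongs to */
        uint32_t length;       /* payload length in bytes                       */
    } frame_header_t;

    /* Serialize header + payload into `out`; returns total bytes written. */
    size_t pack_frame(uint8_t *out, uint32_t session_id,
                      const uint8_t *payload, uint32_t length)
    {
        frame_header_t h = { session_id, length };
        memcpy(out, &h, sizeof h);
        memcpy(out + sizeof h, payload, length);
        return sizeof h + length;
    }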
Just my 5 cents, we will all think differently, I'm sure.
Good Luck!

Will XMPP work in a NAT environment?

An XMPP server sends push notifications to a client behind a NAT using a public endpoint (IP + port) supplied by the NAT to the client. But how long is this endpoint assigned to this specific client by the NAT? What will happen if the NAT assigns the same endpoint to another client? How can this problem be solved?
XMPP uses a standard TCP connection. NATs will keep the association for as long as the connection is alive (unless they are horribly broken).
Update: The last part of my statement could have been expanded a bit. Horribly broken NAT implementations do exist. Generally these are a small percentage, but many (most?) popular XMPP clients do ensure they send some kind of keepalive over idle connections.
There are three kinds of keepalive you can use; I'll list them here in order of bandwidth/processing requirements:
TCP keepalives are a good lightweight option, especially as once they are enabled, they are automatically handled by the OS. How to enable them will depend on your language and framework, but at the lowest level, you need to enable the SO_KEEPALIVE option on the socket.
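For example (a POSIX-flavoured sketch; on Windows the option value is passed as a char pointer but the option name is the same):

    /* Sketch: enabling TCP keepalives on an existing connected socket. */
    #include <sys/socket.h>

    int enable_keepalive(int sock)
    {
        int on = 1;
        return setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
    }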
There are two problems with TCP keepalives. One is that you can't control them from your application (unless you write platform-specific code). The second problem is that some NAT implementations are so broken that they will ignore TCP keepalives too! But you're hopefully down to a very small percentage now.
So another option is whitespace keepalives. Since these involve data going across the stream, you should be safe from even the broken NATs that ignore keepalives.
Whitespace keepalives simply involve sending the space character (' ') across the XMPP stream at any time it is idle. XML and XMPP allow unlimited whitespace between elements, and it is simply ignored by the recipient.
Finally, you can use fully-fledged XMPP pings (XEP-0199). These involve sending an actual <iq/> 'get' stanza to the server, which then must reply. This ensures data flows in both directions, and should make even the most broken NAT implementations keep your connection alive.
Ok, I should mention that there is an even worse class of NAT. I have seen NATs that will simply 'forget' about your mapping for a range of reasons, including their mapping table being full, or just after a timer. There is nothing you can do to work around these, they don't work with any long-lived TCP connections. The best you could probably do at that point is use BOSH (essentially XMPP over HTTP).
Conclusion: If you are concerned that your application may run behind some of these devices, I suggest something like the following algorithm (exact times may be tweaked, but I recommend these as minimum values):
If you have not sent any data for 60s, send a single space character.
If you have not received any data for 120s, send an XMPP ping to your server.
If the server doesn't reply to the ping within a reasonable amount of time, reconnect.
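A sketch of how that schedule might sit in a client's main loop (the helper callbacks are placeholders for whatever your client already has, and the ping timeout is an arbitrary choice):

    /* Sketch of the suggested keepalive schedule. */
    #include <time.h>

    #define WHITESPACE_AFTER  60   /* seconds idle before sending ' '     */
    #define PING_AFTER       120   /* seconds silent before an XMPP ping  */
    #define PING_TIMEOUT      30   /* how long to wait for the ping reply */

    void keepalive_tick(time_t now, time_t last_sent, time_t last_received,
                        time_t ping_sent,   /* 0 if no ping is outstanding */
                        void (*send_raw)(const char *),
                        void (*send_xmpp_ping)(void),
                        void (*reconnect)(void))
    {
        if (ping_sent != 0 && now - ping_sent > PING_TIMEOUT)
            reconnect();                    /* server never answered the ping */
        else if (ping_sent == 0 && now - last_received > PING_AFTER)
            send_xmpp_ping();               /* XEP-0199 <iq type='get'> ping  */
        else if (now - last_sent > WHITESPACE_AFTER)
            send_raw(" ");                  /* whitespace keepalive           */
    }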
Because the behaviour of broken NAT devices is beyond any standard protocol specification, it is naturally impossible to devise a perfect solution that will work with all of them, all of the time. You just have to accept that these are a small minority, and none of this matters for working NAT devices (though there are other kinds of network breakages that may make regular keepalives/pings a good idea, depending on the needs of your application).
The solution is to send keep-alive messages to maintain the NAT entry. XMPP whitespace is typically used. Send it, e.g., every ten minutes to preserve the reachability of the NATed client.
You have to keep in mind that NAT is not a standardized technique, so there are different implementations. The RFCs provided in the comment above are from the BEHAVE working group.

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A > Accept the connection attempt right away but queue the client until its turn?
B > Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons that I'm aware of for both options. Hopefully this will help you decide.
A)
Upon a new client connection, it could send tons of data, making your receive buffer become full, which causes unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket.
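A WinSock-flavoured sketch of both of those points (the helper name is invented):

    /* Sketch: stop receiving from a client you only send to, and enable
       TCP keepalives so an idle connection is not silently dropped. */
    #include <winsock2.h>

    void configure_client_socket(SOCKET client)
    {
        /* Any data the client sends after this will reset the connection. */
        shutdown(client, SD_RECEIVE);

        BOOL on = TRUE;
        setsockopt(client, SOL_SOCKET, SO_KEEPALIVE,
                   (const char *)&on, sizeof on);
    }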
B)
If you don't accept the connection within a certain period (I guess the default is 60 seconds), it will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if the client is also designed by you, make the socket non-blocking, try to connect, and then manage the timeout as you wish.
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?
