ZeroMQ, async blocking sockets - zeromq

I'm building a distributed system and I would like asynchronous send and recv from both sides with blocking after high water mark.
PUSH/PULL sockets work great, but I wasn't able to bind a PUSH socket. That means I can't have a client-PUSH to server-PULL and a server-PUSH to client-PULL if the client is behind a firewall, since the server can't connect to the client.
In the book, the following is written, but I can't find an example of it.
"REQ to DEALER: you could in theory do this, but it would break if you added a second REQ because DEALER has no way of sending a reply to the original peer. Thus the REQ socket would get confused, and/or return messages meant for another client." http://zguide.zeromq.org/php:chapter3
I only need a one-to-one connection, so this would in theory work for me.
My question is, what is the best practice to obtain asynchronous send and recv with ZeroMQ without dropping packets?

Most ZeroMQ socket types can both bind (listen on a specific port, acting as a server) and connect (acting as a client). Which side binds is usually unrelated to the direction of data flow. See the guide for more info.
Try binding the PUSH socket on your server and connecting from your client's PULL socket.
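For example, here is a minimal pyzmq sketch of that arrangement (the address, port, and HWM values are just placeholders). The PUSH side blocks on send() once its high water mark is reached, which gives you the back-pressure you asked about:

    import zmq

    # --- server side: bind the PUSH socket ---
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.setsockopt(zmq.SNDHWM, 1000)   # block sends once 1000 messages are queued
    push.bind("tcp://*:5555")
    push.send(b"work item")             # blocks when the HWM is reached

    # --- client side (behind the firewall): connect the PULL socket ---
    # pull = ctx.socket(zmq.PULL)
    # pull.setsockopt(zmq.RCVHWM, 1000)
    # pull.connect("tcp://server.example.com:5555")
    # msg = pull.recv()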

Related

Using ZeroMQ to send replies to specific clients and queue if client disconnects

I'm new to ZeroMQ and trying to figure out a design issue. My scenario is that I have one or more clients sending requests to a single server. The server will process the requests, do some stuff, and send a reply to the client. There are two conditions:
The replies must go to the clients that sent the request.
If the client disconnects, the server should queue messages for a period of time so that if the client reconnects, it can receive the messages it missed.
I am having a difficult time figuring out the simplest way to implement this.
Things I've tried:
PUB/SUB - I could tag replies with topics to ensure only the subscribers that sent their request (with their topic as their identifier) would receive the correct reply. This takes care of the routing issue, but since the publisher is unaware of the subscribers, it knows nothing about clients that disconnect.
PUSH/PULL - Seems to be able to handle the message queuing issue, but looks like it won't support my plan of having messages sent to specific clients (based on their ID, for example).
ROUTER/DEALER - Design seemed like the solution to both, but all of the examples seem pretty complex.
My thinking right now is to continue with PUB/SUB and implement some sort of heartbeat on the client end (allowing the server to detect the client's presence); when a client no longer sends a heartbeat, the server would stop sending messages tagged with that client's topic. But that seems sub-optimal and would also involve another socket.
Are there any ideas or suggestions on any other ways I might go about implementing this? Any info would be greatly appreciated. I'm working in Python but any language is fine.
To propose the best solution I would need more data about your application's requirements, but based on some research into your conditions and my own experience with ZMQ, here are two possibilities:
1) PUSH/PULL pattern in two directions: a bigger impact on scalability, but messages from the server are queued for disconnected clients.
The server has one PULL socket to register each client and receive all messages from clients. Each message should carry a client ID so the server knows where to send the response.
For each client, the server creates a PUSH socket to send responses. The socket configuration is sent in the registration message. You can also use a REQ/REP pair to register clients (and assign a socket number).
Each client has its own PULL socket, whose configuration was sent to the server in the registration message.
This means that a server with three clients requires (example port numbers in brackets):
server: 1 x PULL[5555] socket, 3 x PUSH[5560,5561,5562] sockets (+ optionally 1 x REQ[5556] socket for registrations, but I think it depends on how you prepare the client identity)
client: 1 x PUSH[5555] socket, 1 x PULL[5560|5561|5562] socket (one per client) (+ optionally 1 x REP[5556])
You have to connect the server to multiple client sockets to send responses, but if a client disconnects, its messages are not lost: the client will get its own messages when it reconnects to its PULL socket. The disadvantage is the need to create one PUSH socket on the server side per client.
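A rough server-side sketch of option 1 in pyzmq might look like the following. The two-frame message format (client ID + payload) and the registration convention are assumptions of the example, not anything ZMQ imposes:

    import zmq

    ctx = zmq.Context()

    requests = ctx.socket(zmq.PULL)     # all clients PUSH into this socket
    requests.bind("tcp://*:5555")

    client_push = {}                    # client_id -> dedicated PUSH socket

    while True:
        client_id, payload = requests.recv_multipart()
        if client_id not in client_push:
            # Treat the first message from a client as its registration; the
            # payload carries the client's PULL endpoint, e.g. b"tcp://client:5560".
            push = ctx.socket(zmq.PUSH)
            push.connect(payload.decode())
            client_push[client_id] = push
        else:
            # Send the response on the PUSH socket dedicated to this client.
            # If the client is temporarily offline, ZeroMQ queues the message
            # until it reconnects (up to the high water mark).
            client_push[client_id].send(b"response to " + payload)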
2) PUB/SUB + PUSH/PULL or REQ/REP: a static socket configuration on the server side (only 2 sockets), but the server has to provide some mechanism to retransmit or cache messages.
The server creates a PUB socket and a PULL or REP socket. A client registers its identity via that PULL or REP socket, and the server publishes all messages for that client using the identity as the topic filter. The server uses the monitor() function on the PUB socket to count connected and disconnected clients (events: 'accept' and 'disconnect'). After a 'disconnect' event, the server publishes a message asking all clients to register again; for clients that do not re-register, the server stops publishing.
The client creates a SUB socket, plus a PUSH or REQ socket to register and send requests.
This solution may require some cache on the server side. The client could acknowledge each message after getting it from the SUB socket. How elaborate this gets depends on your requirements: if you just want to know that a client lost a message, the client could send the timestamp of the last message it received from the server when it registers; if you need a guarantee that clients get all messages, you need some cache implementation, perhaps another process that subscribes to all messages and deletes each one once the client confirms it.
In this solution a server with three clients requires (example port numbers in brackets):
server: 1 x PUB[5555] socket, 1 x REP or PULL[5560] socket, plus monitoring on the PUB socket
client: 1 x SUB[5555] socket with its own identity as the filter, 1 x REQ or PUSH[5560] socket
You can read about monitoring here: https://github.com/JustinTulloss/zeromq.node#monitoring (a Node.js implementation, but Python will be similar)
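A condensed pyzmq sketch of option 2, without the monitoring part (ports, identities, and frame layout are placeholders):

    import zmq

    ctx = zmq.Context()

    # --- server: only two static sockets ---
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5555")            # responses, filtered per client identity
    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://*:5560")           # registrations and requests

    # --- client (for reference) ---
    # identity = b"client-42"
    # sub = ctx.socket(zmq.SUB)
    # sub.connect("tcp://server:5555")
    # sub.setsockopt(zmq.SUBSCRIBE, identity)   # identity doubles as the filter
    # push = ctx.socket(zmq.PUSH)
    # push.connect("tcp://server:5560")
    # push.send_multipart([identity, b"my request"])

    # Server loop: publish each response prefixed with the client identity,
    # so only that client's SUB socket will accept it.
    while True:
        identity, request = pull.recv_multipart()
        pub.send_multipart([identity, b"response to " + request])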
I thought about other patterns, but I am not sure that ROUTER/DEALER or REQ/REP would cover your requirements. You should read more about the patterns, because each one is better suited to certain problems. Look here:
official ZMQ guide (a lot of examples and pictures)
easy ROUTER/DEALER example: http://blog.scottlogic.com/2015/03/20/ZeroMQ-Quick-Intro.html

How to drop inactive/disconnected peers in ZMQ

I have a client/server setup in which clients send a single request message to the server and gets a bunch of data messages back.
The server is implemented using a ROUTER socket and the clients using a DEALER. The communication is asynchronous.
The clients are typically iPads/iPhones and they connect over wifi so the connection is not 100% reliable.
The issue I'm concerned about is the case where the client connects to the server and sends a request for data, but the communication goes down (e.g. out of wifi coverage) before the response messages are delivered back.
In this case the messages will be queued up on the server side waiting for the client to reconnect. That is fine for a short time but eventually I would like to drop the messages and the connection to release resources.
By checking activity/timeouts, both the server and the client applications could detect that the connection is gone. The client can shut down its socket and free resources that way, but how can the same be done on the server?
Per the ZMQ FAQ:
How can I flush all messages that are in the ZeroMQ socket queue?
There is no explicit command for flushing a specific message or all messages from the message queue. You may set ZMQ_LINGER to 0 and close the socket to discard any unsent messages.
Per this mailing list discussion from 2013:
There is no option to drop old messages [from an outgoing message queue].
Your best bet is to implement heartbeating and, when one client stops responding without explicitly disconnecting, restart your ROUTER socket. Messy, I know; this is really something that should have a companion option to HWM. Pieter Hintjens (ZMQ's creator) was clearly on board, but that was back in 2011, so it looks like nothing ever came of it.
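In pyzmq terms, the "restart the ROUTER" workaround might look roughly like this (the endpoint is a placeholder, and you would call it once your heartbeating declares a client dead); note that recreating the socket drops queued messages for every peer, not just the dead one:

    import zmq

    def restart_router(ctx, router, endpoint="tcp://*:5570"):
        """Drop everything still queued by recreating the ROUTER socket."""
        router.setsockopt(zmq.LINGER, 0)   # discard unsent messages on close
        router.close()
        router = ctx.socket(zmq.ROUTER)
        router.bind(endpoint)
        return router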
This is a bit late, but setting TCP keepalive to a reasonable value will cause dead sockets to close after the timeouts have expired.
Heartbeating is necessary for either side to determine the other side is still responding.
The only thing I'm not sure about is how to go about heartbeating many thousands of clients without spending all available cpu just on dealing with the heartbeats.
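For reference, the keepalive options can be set per socket in pyzmq; the timing values below are just examples and should be tuned for your network:

    import zmq

    ctx = zmq.Context()
    router = ctx.socket(zmq.ROUTER)

    router.setsockopt(zmq.TCP_KEEPALIVE, 1)         # enable TCP keepalive
    router.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 60)   # idle seconds before first probe
    router.setsockopt(zmq.TCP_KEEPALIVE_INTVL, 10)  # seconds between probes
    router.setsockopt(zmq.TCP_KEEPALIVE_CNT, 3)     # failed probes before the peer is considered dead
    router.bind("tcp://*:5570")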

ZMQ REP, knowing who sent the request

I'm currently using ZMQ with Python. The server uses a REP socket.
Is there a way, when receiving a message, to know who sent it? If I receive 2 messages, I just need to know whether they come from the same user or not, so a UID, for example, would be enough.
It looks like you want to implement async request handling on the server side: you let the server accept requests, process them asynchronously, and send the responses back to clients whenever the response data is available for each request. Now of course: how would you know, after you're done processing a request, which client to send it back to?
With simple REP sockets, ZMQ makes sure you won't run into this kind of problem by enforcing a recv() -> send(), recv() -> send() sequentiality. In other words, after you do a recv() on a REP socket, you must do a send() before recv()ing from it again. The response will be sent back to the client you got the message from, and there's no doubt about client's address because it's only one client at a time.
But this doesn't really help when you want to parallelize the request handling, does it? There are many cases when the behavior of REP is too restrictive, and that's exactly what Multipart messages and ROUTER (or XREP) sockets are for. XREP breaks the recv() -> send() lockstep of REP, but that causes a problem as we saw earlier - how do you know which client to send the reply back to, if multiple clients are connected? In order to make this work, XREP in ZMQ adds a message part to the front of a message, like an envelope, that includes the identity of the connection that it recv()'d the request from.
There's a whole chapter in the ZMQ Guide about the advanced Request-Reply patterns. You can also find an example for handling async requests here and a good short explanation of the ZMQ connection handling here.
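As a minimal illustration (port and payloads are placeholders), a ROUTER-based server that replies to exactly the client that asked, assuming plain REQ clients, could look like this:

    import zmq

    ctx = zmq.Context()
    router = ctx.socket(zmq.ROUTER)     # instead of a plain REP socket
    router.bind("tcp://*:5555")

    while True:
        # For a REQ client, ROUTER delivers: [identity, empty delimiter, body].
        # Two requests from the same client arrive with the same identity frame.
        identity, empty, request = router.recv_multipart()
        # Reuse the identity frame to route the reply back to that client only.
        router.send_multipart([identity, empty, b"processed: " + request])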
Reading http://zguide.zeromq.org/page%3aall#Transient-vs-Durable-Sockets, you can only get the identity of the socket you're working with... not the socket of any peers you're connected to.
This being said, just build the sender information into the message. This should be trivial to do (either with a UUID or specific name per client).
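For example, a client could prefix every request with a UUID it generates once at startup (a sketch; the framing is up to you):

    import uuid
    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://localhost:5555")

    client_id = uuid.uuid4().bytes      # generated once per client
    # Every request carries the client's UUID as its first frame, so the REP
    # server can tell whether two requests came from the same user.
    req.send_multipart([client_id, b"hello"])
    reply = req.recv()

    # On the server (REP) side:
    # client_id, body = rep.recv_multipart()
    # rep.send(b"ok")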

Combining pub/sub with req/rep in zeromq

How can a client both subscribe and listen to replies with zeromq?
That is, on the client side I'd like to run a loop which only receives messages and selectively sends requests, and on the server side I'd like to publish most of the time, but to sometimes receive requests as well.
It looks like I'll have to have two different sockets - one for each mode of communication. Is it possible to avoid that and on the server side receive "request notifications" from the socket on a zeromq callback thread while pushing messages to the socket in my own thread?
I am awfully new to ZeroMQ, so I'm not sure if what you want is considered best-practice or not. However, a solution using multiple sockets is pretty simple using zmq_poll.
The basic idea would be to have both client and server:
open a socket for pub/sub
open a socket for req/rep
multiplex sends and receives between the two sockets in a loop using zmq_poll in an infinite loop
process req/rep and pub/sub events within the loop as they occur
Using zmq_poll in this manner with multiple sockets is nice because it avoids threads altogether. The 0MQ guide has a good example here. Note that in that example, they use a timeout of -1 in zmq_poll, which causes it to block until at least one event occurs on any of the multiplexed sockets, but it's pretty common to use a timeout of x milliseconds or something if your loop needs to do some other work as well.
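Here is a minimal pyzmq version of that loop (endpoints and the timeout are placeholders); zmq.Poller is the Python wrapper around zmq_poll:

    import zmq

    ctx = zmq.Context()

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything

    req = ctx.socket(zmq.REQ)
    req.connect("tcp://localhost:5555")

    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)
    poller.register(req, zmq.POLLIN)

    while True:
        # Wait up to 100 ms; pass -1 to block until any registered socket is ready.
        events = dict(poller.poll(100))
        if sub in events:
            print("published:", sub.recv())
        if req in events:
            # Only readable after a request has been sent on this REQ socket.
            print("reply:", req.recv())
        # ...decide here whether to send the next request with req.send(...)...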
You can use 2 threads to handle the different sockets. The challenge is that if you need to share data between threads, you need to synchronize it in a safe way.
The alternative is to use the ZeroMQ Poller to select the sockets that have new data on them. The process would then use a single loop in the way bjlaub explained.
This could be accomplished using a variation/subset of the Majordomo Protocol. Here's the idea:
Your server will use a ROUTER socket, and your clients will use DEALER sockets. Upon connecting to the server, the client needs to send some kind of subscription or "hello" message (of your design). The server receives that packet, but (being a ROUTER socket) also receives the ID of that client. When the server needs to send something to that client (per your design), it sends it to that ID. The client can send and receive at will, since it is a DEALER socket.
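A bare-bones sketch of that idea in pyzmq (the "HELLO" message, port, and payloads are placeholders of my own, not part of the Majordomo Protocol):

    import zmq

    ctx = zmq.Context()
    server = ctx.socket(zmq.ROUTER)
    server.bind("tcp://*:5555")

    subscribers = set()                  # identities of clients that said hello

    while True:
        identity, body = server.recv_multipart()
        if body == b"HELLO":             # the subscription message of your design
            subscribers.add(identity)
        else:
            # Reply directly to whichever client sent this request.
            server.send_multipart([identity, b"reply to " + body])
        # Whenever it wants, the server can also push to any client it knows about:
        # for client in subscribers:
        #     server.send_multipart([client, b"server-initiated update"])

    # Client side: a DEALER can send and receive at will.
    # dealer = ctx.socket(zmq.DEALER)
    # dealer.connect("tcp://server:5555")
    # dealer.send(b"HELLO")
    # print(dealer.recv())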

UDP Server to client communication - UDP being stateless, how to by-pass router?

In a recent series of questions I have asked a lot about UDP, boost::asio, and C++ in general.
My latest question, which doesn't seem to have an answer here on Stack Overflow, is this:
In a client/server application, it is quite okay to require that the server open a port in any firewall so that messages are allowed in. However, requiring the same of clients is definitely not a great user experience.
TCP-connections typically achieve this due to the fact that most routers support stateful packet inspection, allowing response packets through if the original request originated from the local host.
It is not quite clear to me how this would work with UDP, since UDP is stateless, and there is no such thing as "response packets" (to my knowledge). How should I account for this in my client application?
Thanks for any answers!
UDP itself is stateless, but the firewall typically is not. The convention on UDP is that if a request goes out from client:port_A to server:port_B, then the response will come back from server:port_B to client:port_A.
The firewall can take advantage of this. If it sees a UDP request go out from the client, it adds an entry to its state table that lets it recognise the response(s), to allow them in. Because UDP is stateless and has no indication of connection termination, the firewall will typically implement a timeout - if no traffic occurs between that UDP address pair for a certain amount of time, the association in the firewall's state table is removed.
So - to take advantage of this in your client application, simply ensure that your server sends responses back from the same port that it uses to receive the requests.
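A small plain-socket sketch of that (the port number is arbitrary); the key point is that the server replies from the very socket that received the request, so the source port matches what the client's firewall expects:

    import socket

    # Server: receive and reply on the same socket, hence the same port (9999).
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("0.0.0.0", 9999))
    while True:
        data, client_addr = server.recvfrom(4096)
        server.sendto(b"response: " + data, client_addr)

    # Client: the outgoing datagram creates the (client_port -> 9999) entry in
    # the firewall's state table, so the matching response is allowed back in.
    # client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # client.sendto(b"request", ("server.example.com", 9999))
    # reply, _ = client.recvfrom(4096)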

Resources