How do I do multiple publishers with a single endpoint in ZeroMQ? - zeromq

I'm attempting to do a pub/sub architecture where multiple publishers and multiple subscribers exist on the same bus. According to what I've read on the internet, only one socket should ever call bind(), and all others (whether pub or sub) should call connect().
The problem is, with this approach I'm finding that only the publisher that actually calls bind() on the socket ever publishes messages. All of my publishers that call connect() seem to fail silently and don't actually publish any messages to the bus. I've confirmed this isn't a subscription key (topic filter) issue, as I've written a simple "sniffer" app that subscribes to all messages on the bus, and it only shows messages from the publisher that called bind().
If I attempt multiple binds with the publisher, the "expected" zmq behavior of silently stealing the bus occurs with ipc, and a port in use error is thrown with tcp.
I've verified this behavior with ipc and tcp endpoints, but ultimately the full system will be using epgm. I assume (though of course may be wrong) that in this situation I wouldn't need a broker since there's no dynamic discovery occurring (endpoints are known, whether ipc, tcp, or epgm multicast).
Is there something I'm missing, perhaps a socket setting, that would be causing the connecting publishers to not actually send their data? According to the literature I've seen on the internet, I'm doing things the "correct" way but it still doesn't work.
For reference, my publisher class has the following methods for setting up the endpoint:
#include <string>
#include <zmq.hpp>   // cppzmq C++ bindings

// Assumed member declarations (not shown in my post):
//   zmq::context_t m_zmqContext;
//   zmq::socket_t  m_zmqSocket;

ZmqPublisher::ZmqPublisher()
    : m_zmqContext(1), m_zmqSocket(m_zmqContext, ZMQ_PUB)
{}

void ZmqPublisher::bindEndpoint(std::string ep)
{
    m_zmqSocket.bind(ep.c_str());
}

void ZmqPublisher::connect(std::string ep)
{
    m_zmqSocket.connect(ep.c_str());
}
So ultimately, my question is this: What is the proper way to handle multiple publishers on the same endpoint, and why am I not seeing messages from more than one publisher?

It may or may not be relevant, but the 0MQ Guide has the following slightly enigmatic remark:
In theory with ØMQ sockets, it does not matter which end connects and which end binds. However, in practice there are undocumented differences that I'll come to later. For now, bind the PUB and connect the SUB, unless your network design makes that impossible.
I've not yet discovered where the "come to later" actually happens, but I don't use pub/sub so much, and haven't read the "Advanced Pub-Sub Patterns" part of the guide in great detail.
However, the idea of multiple publishers on a single endpoint, to me, suggests the need for an XPUB/XSUB-style broker; it's not about dynamic discovery, it's about having a single point of contact and routing. Ultimately, I think a broker-based topology would simplify your application and make it easier to identify problems.
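For illustration, a minimal sketch of such a broker using the built-in zmq_proxy (the ports, and the use of the classic cppzmq bindings from your own code, are assumptions on my part); every publisher connects its PUB socket to the XSUB side and every subscriber connects its SUB socket to the XPUB side:

#include <zmq.hpp>

// Minimal XSUB/XPUB forwarder: the only process that binds.
// Publishers connect PUB sockets to port 5550; subscribers connect SUB sockets to port 5551.
int main()
{
    zmq::context_t ctx(1);

    zmq::socket_t frontend(ctx, ZMQ_XSUB);   // faces the publishers
    frontend.bind("tcp://*:5550");

    zmq::socket_t backend(ctx, ZMQ_XPUB);    // faces the subscribers
    backend.bind("tcp://*:5551");

    // Forward traffic both ways; subscription messages flow upstream automatically.
    zmq_proxy(static_cast<void*>(frontend), static_cast<void*>(backend), nullptr);
    return 0;
}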

Your mistake is that you bind a single publisher and connect the others. This is not supported by the plain PUB-SUB pattern.
Plain PUB-SUB in ZeroMQ supports only two scenarios:
a single publisher with multiple subscribers
a single subscriber with multiple publishers
In both cases, the party that is "single" must bind and the party that is "multiple" must connect. If you want many-to-many, you can use XPUB-XSUB or some other pattern with an intermediary.
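To make the many-to-many case concrete, here is a rough sketch of the connecting ends, assuming an XSUB/XPUB intermediary like the one sketched in the previous answer is bound on ports 5550/5551 (the endpoints, the sleep guard, and the classic cppzmq calls are illustrative assumptions):

#include <unistd.h>   // sleep()
#include <zmq.hpp>

int main()
{
    zmq::context_t ctx(1);

    // Publishers always connect (never bind) to the intermediary's XSUB side.
    zmq::socket_t pub(ctx, ZMQ_PUB);
    pub.connect("tcp://localhost:5550");

    // Subscribers connect to the intermediary's XPUB side.
    zmq::socket_t sub(ctx, ZMQ_SUB);
    sub.connect("tcp://localhost:5551");
    sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);    // no topic filter: receive everything

    sleep(1);                                // crude guard against the slow-joiner problem

    zmq::message_t out("hello", 5);
    pub.send(out);                           // travels via the intermediary

    zmq::message_t in;
    sub.recv(&in);                           // arrives from the intermediary
    return 0;
}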

Related

Send, Publish and Request/Response in MassTransit

Recently I have been trying to use MassTransit in our microservice ecosystem.
According to the MassTransit vocabulary and documentation, my understanding is:
Publish: Sends a message to one or many subscribers (pub/sub pattern) to propagate the message.
Send: Used to send messages in a fire-and-forget fashion like Publish, but to just one receiver. The main difference from Publish is that with Send, if your destination didn't receive the message, it would return an exception.
Request: Uses the request/reply pattern to send a message and get a response on a different channel, so the sender can get a response value back from the receiver.
Now, my question: according to the microservice concept, to follow event-driven design we use Publish to propagate messages (events) to the entire ecosystem. But what exactly is the use case of Send here? Just to get an exception if the receiver doesn't exist?
My next question: is it a good approach to use Publish, Send and Request in a microservices ecosystem at the same time? For example, Publish for propagating events, Send for commands (fire and forget), and Request for getting responses from the destination.
----- Update
I also found this, where Chris Patterson clears up a lot of things. It also helped me a lot.
Your question is not really specific to MassTransit. MassTransit implements well-known messaging patterns, thoughtfully described in popular resources such as Enterprise Integration Patterns.
As Eben wrote in his answer, the decision of what pattern to use is driven by intent. There are also technical differences in the message delivery mechanics for each pattern.
Send is for commands: you tell some other service to do something. You do not wait for a reply (fire and forget), although you might get a confirmation of the action's success or failure by other means (an event, for example).
It is an implementation of the point-to-point channel, where you can also implement competing consumers to scale the processing, but those will be instances of the same service.
With MassTransit using RabbitMQ it's done by publishing messages to the endpoint exchange rather than to the message type exchange, so no other endpoints will get the message even though they can consume it.
Publish is for events. It's a broadcast type of delivery or fan-out. You might be publishing events to which no one is listening, so you don't really know who will be consuming them. You also don't expect any response.
It is an implementation of the publish-subscribe channel.
MassTransit with RabbitMQ creates exchanges for each message type published and publishes messages to those exchanges. Consumers create bindings between their endpoint exchanges and message exchanges, so each consumer service (different apps) will get those in their independent queues.
Request-response can be used for both commands that need to be confirmed, or for queries.
It is an implementation of the request-reply message pattern.
MassTransit has nice diagrams in the docs explaining the mechanics for RabbitMQ.
Those messaging patterns are frequently used in a complex distributed system in different combinations and variations.
The difference between Send and Publish has to do with intent.
As you stated, Send is for commands and Publish is for events. I worked on a large enterprise system once running on webMethods as the integration engine/service bus and only events were used. I can tell you that it was less than ideal. If the distinction had been there between commands and events it would've made a lot more sense to more people. Anyway, technically one needs a message enqueued and on that level it doesn't matter, which is why a queueing mechanism typically would not care about such semantics.
To illustrate this with a silly example: Facebook places an event on my timeline saying that one of my friends is having a birthday on a particular day. I can respond directly (send a message) or I could publish a message on my timeline and hope my friend sees it. Another silly example: you send an e-mail to PersonA and CC 4 others asking "Please produce report ABC". PersonA would be expected to produce the report or arrange for it to be done. If that same e-mail went to all five people as recipients (no CC), then who gets to do it? I know, even for Publish one could have a 1-1 recipient/topic, but what if another endpoint subscribed? What would that mean?
So the sender is responsible for determining where to Send the message to (still configurable, as subscriptions are). For my own service bus I use an implementation of an IMessageRouteProvider interface.

A practical example from a system I once developed: e-mails received had to have their body converted to an image for a content store (IBM FileNet P8, if memory serves). For reasons I will not go into, the systems were stopped each night at 20h00 and restarted at 6h00 in the morning. This led to a backlog of usually around 8,000 e-mails that had to be converted. The conversion endpoint would process a conversion in about 2 seconds, but that still takes a while to work through. In the meantime the web front-end folks could request PDF files for conversion to paged TIFF files. These ended up at the end of the queue, and they would have to wait hours for the results to come back.

The solution was to implement another conversion endpoint, with its own queue, and have the web front-end configured to send the same message type, e.g. ConvertDocumentCommand, to that "priority" queue for processing. Pretty easy to do. Now, if that had been a publish, how would I do that split? The same event going to two different endpoints under different circumstances? Well, you could have another subscription store for your system, but now you'd need to maintain both. There could be another answer, such as coding this logic into the send bit, but that is a design choice and would require coding changes.
In my own Shuttle.Esb service bus I only have Send and Publish. For request/response both the sender and receiver have an inbox and a request would be sent (Send) to the receiver and it in turn could reply (also a Send but uses the sender's URI).

Detecting socket connection using ZeroMQ STREAM sockets

I am building a new application that receives data from a number of external devices and needs to make it available to a number of different components. ZeroMQ seems purpose-built for the "data bus" aspect of my architecture.
I recently became aware that zmq STREAM sockets can connect to native TCP sockets and send/received messages. Using zmq throughout has a lot of appeal, but I have one problem that I don't know how to get around.
One of my devices needs to be set up. That is, I connect a socket to it, send it some configuration information, then sit back and wait for it to send me data. The device also has a "reset" capability (useful in some contexts), that requires re-sending the configuration information. Doing this depends upon having visibility to the setup/tear-down stage of the socket interface. I need to know when a new connection is established, so I can send the necessary configuration messages.
It seems that zmq is purposely designed to shield me from that knowledge. Is there a way to do what I want? Or should I just use regular sockets for this interface?
Well, it turns out that reading (the right version of) the fine manual can be instructive.
When a connection is made, a zero-length message will be received by the application. Similarly, when the peer disconnects (or the connection is lost), a zero-length message will be received by the application.
I guess all that remains is to disambiguate between connect and disconnect. Still looking for advice from the community, if others have dealt with this situation before.
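Based on that manual excerpt, here is a minimal sketch (classic cppzmq API; the endpoint and the handling details are illustrative assumptions) of one way the two could be disambiguated: each notification arrives as an identity frame plus a zero-length frame, so tracking which identities are currently known separates connects from disconnects:

#include <set>
#include <string>
#include <zmq.hpp>

int main()
{
    zmq::context_t ctx(1);
    zmq::socket_t stream(ctx, ZMQ_STREAM);
    stream.bind("tcp://*:5555");             // hypothetical endpoint

    std::set<std::string> connected;         // identities currently connected

    while (true) {
        zmq::message_t identity;
        zmq::message_t body;
        stream.recv(&identity);              // frame 1: peer identity
        stream.recv(&body);                  // frame 2: payload (empty on connect/disconnect)

        std::string id(static_cast<char*>(identity.data()), identity.size());

        if (body.size() == 0) {
            if (connected.insert(id).second) {
                // unknown identity + empty frame => peer connected; send configuration here
            } else {
                // known identity + empty frame => peer disconnected
                connected.erase(id);
            }
        } else {
            // regular data from the peer
        }
    }
}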
Following up on your own answer, I would hesitate to rely on that zero length connect/disconnect message as your whole strategy - that seems needlessly fragile. It's not clear to me from your question which end is persistent and which end needs configuration information, but I expect that one end knows it's resetting and reconnecting, and that end needs configuration information from the peer, so it should ask for it with a message when it needs it, to which the peer responds with the requested information.
If the peer does not yet have the required configuration information before it receives some other message, it could either queue up that work or it could respond back with the need for the config, and then have the rest of the network handle that need appropriately.
You shouldn't need stream/tcp sockets to make that work, it should work with more standard ZMQ socket types, you just need to build the robustness into your application rather than trying to get it for free from TCP/socket actions.
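As a concrete illustration of the "ask for it when you need it" idea, a minimal sketch with plain REQ/REP (the endpoint, function name, and empty-request convention are assumptions of mine, not part of the original question):

#include <zmq.hpp>

// The end that resets/reconnects requests its configuration on startup.
void requestConfiguration(zmq::context_t& ctx)
{
    zmq::socket_t req(ctx, ZMQ_REQ);
    req.connect("tcp://config-host:6000");   // hypothetical configuration endpoint

    zmq::message_t ask(0);
    req.send(ask);                           // empty request: "send me my config"

    zmq::message_t config;
    req.recv(&config);                       // blocking wait for the configuration blob
    // apply the configuration here
}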
If I've missed your point, and what I'm suggesting won't work for some reason, you will have to give more specific information about your network topology for anyone else to understand what a suitable solution might be.

Dealer/Router for asynchronous client server?

Using ZeroMQ, I am building a client/server application that requires asynchronous messaging - at some point my server might send 2 messages in a row and then the client sends 10, or there is a continuous exchange of messages.
Does this qualify for a DEALER/ROUTER setup, or is this not something ZeroMQ is set up for?
Thanks.
ZeroMQ can, without question, provide the means for doing this,
yet this is not necessarily a case for the original ROUTER/DEALER smart Scalable Formal Communication Pattern archetype.
Given that no particular message ordering is required, it may be fine to use PAIR/PAIR, where either side may send any number of messages whenever it decides to (there is no formal ordering pre-wired).
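A minimal PAIR/PAIR sketch along those lines (the endpoint, the message contents, and the classic cppzmq calls are illustrative assumptions); either side can send several messages in a row and receive at any time:

#include <zmq.hpp>

// One side binds a PAIR socket; the peer connects a PAIR socket to the same endpoint.
int main()
{
    zmq::context_t ctx(1);
    zmq::socket_t pair(ctx, ZMQ_PAIR);
    pair.bind("tcp://*:5600");               // hypothetical endpoint

    // Send two messages in a row...
    zmq::message_t a("first", 5);
    zmq::message_t b("second", 6);
    pair.send(a);
    pair.send(b);

    // ...then receive however many the peer decides to send.
    while (true) {
        zmq::message_t incoming;
        pair.recv(&incoming);
        // handle the incoming message here
    }
}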
Hope this helps.

ZeroMQ - Multiple Publishers and Listener

I'm just starting to understand and try out ZeroMQ.
It's not clear to me how I could have two-way communication between more than two actors (publishers and subscribers) so that each component is able to both read and write on the MQ.
This would allow creating an event-driven architecture, because each component could listen for an event and reply with another event.
Is there a way to do this with ZeroMQ directly, or should I implement my own solution on top of it?
If you want simple two-way communication then you simply set up a publishing socket on each node, and let each connect to the other.
In a many-to-many setup this quickly becomes tricky to handle. Basically, it sounds like you want some kind of central node that all nodes can "connect" to, receive messages from and, if some conditions on the subscriber are met, send messages to.
Since ZeroMq is a simple "power socket", and not a message queue (hence its name, ZeroMQ - Zero Message Queue), this is not feasible out of the box.
A simple alternative could be to let each node set up a UDP broadcast socket (not using ZeroMq, just regular sockets). All nodes can listen in to whatever takes place and "publish" their own messages back on the socket, effectively sending them to any nodes listening. This setup works on a LAN and in a setting where it is OK for messages to get lost (like periodic state updates). If the messages need to be reliable (and possibly durable) you need a more advanced, full-blown message queue.
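A bare-bones sketch of that broadcast idea using plain POSIX sockets (the port and the function name are assumptions of mine); each node would pair a sender like this with a receiver bound to the same port:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Broadcast one state-update datagram on the LAN; delivery is best-effort.
void broadcastUpdate(const std::string& payload)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);                          // hypothetical port
    addr.sin_addr.s_addr = inet_addr("255.255.255.255");  // limited broadcast address

    sendto(fd, payload.data(), payload.size(), 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(fd);
}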
If you can do without durable message queues, you can create a solution based on a central node, a central message handler, to which all nodes can subscribe and send data. Basically, create a "server" with one REP (Reply) socket (for incoming data) and one PUB (Publisher) socket (for outgoing data). Each client then publishes data to the server's REP socket over a REQ (Request) socket and sets up a SUB (Subscriber) socket to the server's PUB socket.
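A rough sketch of that central node (the ports and the classic cppzmq calls are illustrative assumptions): it takes data in on the REP socket, fans it out on the PUB socket, and sends an empty acknowledgement so the REQ client can continue:

#include <zmq.hpp>

int main()
{
    zmq::context_t ctx(1);

    zmq::socket_t incoming(ctx, ZMQ_REP);    // clients send data here with REQ sockets
    incoming.bind("tcp://*:5560");

    zmq::socket_t outgoing(ctx, ZMQ_PUB);    // clients listen here with SUB sockets
    outgoing.bind("tcp://*:5561");

    while (true) {
        zmq::message_t data;
        incoming.recv(&data);                // request from whichever client sent data

        outgoing.send(data);                 // fan it out to all subscribers

        zmq::message_t ack(0);
        incoming.send(ack);                  // empty reply so the REQ client can continue
    }
}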
Check out the ZeroMq guide regarding the various message patterns available.
To spice it up a bit, you could add event "topics", including server-side filtering, by splitting the outgoing messages (on the server's PUB socket) into two message parts (see multi-part messages), where the first part specifies the "topic" and the second part contains the payload (e.g. temp|46.2, speed|134). This way, each client can register its interest in any topic (or all) and let the server filter out only matching messages. See this example for details.
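The topic variant might look roughly like this on the two ends (the topic name, endpoint, and function names are illustrative assumptions); the server sends the topic as its own frame, and each client filters with ZMQ_SUBSCRIBE:

#include <string>
#include <zmq.hpp>

// Server side: send the topic as its own frame, then the payload.
void publishTemperature(zmq::socket_t& outgoing, const std::string& value)
{
    zmq::message_t topic("temp", 4);
    zmq::message_t payload(value.data(), value.size());
    outgoing.send(topic, ZMQ_SNDMORE);       // frame 1: topic, more frames follow
    outgoing.send(payload);                  // frame 2: value
}

// Client side: only messages whose topic frame starts with "temp" arrive.
void listenForTemperature(zmq::context_t& ctx)
{
    zmq::socket_t sub(ctx, ZMQ_SUB);
    sub.connect("tcp://localhost:5561");     // the server's PUB socket from the sketch above
    sub.setsockopt(ZMQ_SUBSCRIBE, "temp", 4);

    zmq::message_t topic, payload;
    sub.recv(&topic);                        // topic frame
    sub.recv(&payload);                      // payload frame (e.g. "46.2")
}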
Basically, ZeroMq is "just" an abstraction over regular sockets, providing a couple of messaging patterns to build your solution on top of. However, it relieves you of a lot of tedious work and provides scalability and performance out of the ordinary. It takes some getting used to though. Check out the ZeroMq Guide for more details.

Messaging Middleware - how to avoid reentrance with wildcard subscription?

Messaging middleware solutions (JMS, Tibco, etc.) allow publish/subscribe with "topic" filtering, using wildcards to subscribe to all messages of a certain "topic"; e.g. SUBSCRIBE("ACCOUNT.*") allows you to subscribe to both the "ACCOUNT.WITHDRAW" message and the "ACCOUNT.CHECKBALANCE" message.
The problem is that such subscription also receives my own published messages.
I'm looking for a mechanism, similar to, say, UDP multicast loopback which can be turned ON or OFF by the transport layer without messing with the data being sent.
Is there a common, declarative (no custom code, configuration only) way to configure the middleware not to receive messages which that very same service instance has published? Ideally, this should also be able to filter out everything published by ALL servers (nodes) of the same "kind".
Thanks in advance.
The JMS API contains this option for TopicSubscribers; e.g. TIBCO EMS lets you create a consumer with the "noLocal" property. That means no messages published over the same connection get consumed by clients on the same connection.
e.g. take a look here how to create a topic subscriber with the "noLocal" option:
https://docs.tibco.com/pub/enterprise_message_service/7.0.1-march-2013/doc/html/tib_ems_api_reference/api/javadoc/javax/jms/TopicSession.html
No one is answering, so I'll chime in (in a hand-wavey way).
I believe there's nothing in the JMS spec around controlling whether you get your own sent messages back on a topic receiver. So any capability like this would be a non-portable vendor feature. Especially for your second requirement (based on "kind" of JMS client versus some control based on the same connection doing the sending/receiving).
If you've got no flexibility to modify code or message content (properties), I think you've got no portable solutions. And likely no solution at all for that second "kind" requirement.
If you want to investigate vendor-specific options, you'll need to tell us which vendor you're interested in. You may get nothing, but there's no way to know without asking.
