Subscribe to the same inbound more than once? - reactor-netty

In Reactor Netty, is it possible to subscribe to the same inbound more than once? I noticed that in the startReceiver method the variable once is set to 1, and no other code path ever sets it back to 0, so any new subscriber trying to subscribe to the inbound would never get the chance to call onSubscribe. Is it possible for different subscribers to subscribe to the same inbound? (I understand this may be a rookie or even an invalid question; if it's not worth asking, please let me know.)

It is not possible to subscribe to the incoming data more than once; Reactor Netty does not cache it.
You can use Reactor's cache operator, or some other caching mechanism, to retain the incoming data and then consume it as many times as needed.
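For illustration, a minimal sketch of the cache approach on a TCP server (the port and the handlers are illustrative): cache() turns the inbound into a replayable Flux, so a second subscriber receives the same data.

import reactor.core.publisher.Flux;
import reactor.netty.DisposableServer;
import reactor.netty.tcp.TcpServer;

public class CachedInboundServer {
    public static void main(String[] args) {
        DisposableServer server = TcpServer.create()
                .port(7000)
                .handle((inbound, outbound) -> {
                    // cache() replays the received data to every subscriber
                    Flux<String> cached = inbound.receive().asString().cache();
                    cached.subscribe(s -> System.out.println("subscriber 1: " + s));
                    cached.subscribe(s -> System.out.println("subscriber 2: " + s));
                    // keep the connection open; real code would write a response
                    return outbound.neverComplete();
                })
                .bindNow();
        server.onDispose().block();
    }
}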

Related

ZeroMQ PUSH / PULL - how to know which events are pending in SEND BUFFER queue?

We have a pair of services using the PUSH/PULL pattern of message communication. As mentioned in the docs, if the PULL service is down or not running, a sender will queue up to the high-water-mark number of events, and by default a .send() after that will block.
Now, while the app is in that blocking state, it could be killed or something else may happen, leading to losing the messages in the queue.
I understand PUSH/PULL is not the best method if we want that kind of reliability, and we should probably use one of the other methods listed at https://zguide.zeromq.org/docs/chapter4/, but is there a way, in the PUSH/PULL method, to get a callback for the events still in the queue on, say, app exit / periodic callbacks / signals?
I also understand that I could use NOBLOCK, ZMQ_IMMEDIATE or ZMQ_SNDTIMEO in such a situation, catch the error and use application-level recovery (similar to the DLQ pattern), but I was looking for things available in the ZeroMQ library itself.
Q : "... how to know which events are pending in SEND BUFFER queue ?"
A :Well,having used ZeroMQ since v2.1, v3.x, till v4.x in 2022-Q1, there has never been a way, how a user-level code may interact with ZeroMQ internal queues and/or state(s) as there was no such method in c-API to do so.
Q : "... is there a way in PUSH/PULL method to get event call back on the events still on queue on say app exit/periodic callbacks/signals?"
A :Well, let's solve this by using a concurrently operated signalling-socket, for receiving POSACK-messages from "live"-clients, i.e. those, that can and do receive messages - thus being able to back-throttle messages for those, that did not respond in reasonable TAT. Using a mix of several, properly selected Scalable Formal Communications Patterns archetypes to work in cooperation, helps solve this "soft"-signalling control. Without an ambition to solve all details, a set of one-PUB.bind() / many-SUB.connect()-sockets for selectively directed payload-transport with subscription-based controls and one-PULL.bind() / many-PUSH.connect()-s for "soft"-control signalling of still-alive-heartbeats, traffic back-throttling and similar services
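As a concrete illustration of the fallback the question itself mentions (ZMQ_SNDTIMEO plus application-level recovery), here is a minimal JeroMQ sketch; the endpoint, HWM and timeout values are illustrative. A send timeout makes .send() return false instead of blocking, so the sender can divert the event to its own recovery path.

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class PushWithTimeout {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket push = ctx.createSocket(SocketType.PUSH);
            push.setSendTimeOut(2000);   // ZMQ_SNDTIMEO: give up instead of blocking forever
            push.setHWM(1000);           // high-water mark for queued events
            push.connect("tcp://localhost:5557");
            byte[] event = "{\"id\": 42}".getBytes(ZMQ.CHARSET);
            if (!push.send(event, 0)) {
                // send timed out: the PULL peer is gone or the queue is full,
                // so divert the event to application-level recovery (DLQ-style)
                System.err.println("send failed, rerouting event to recovery");
            }
        }
    }
}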

bind destinations dynamically for producers and consumers (Spring)

I'm trying to send and receive messages to channels/topics whose destination names are held in a database, so they can be added/modified/deleted at runtime, but I'm surprised to have found little about this on the web. I'm using Spring Cloud Stream so I can change the underlying broker.
To send messages to dynamically bound destinations I'm going with BinderAwareChannelResolver.resolveDestination(target).send(message), but I haven't found anything that works like it for receiving messages.
My questions are:
1. Is there something similar?
2. How can the messages be processed periodically, as @StreamListener does?
3. And, less importantly: can a subscriber be created automatically in case there is none?
Thanks for any help!
This is a bit out of scope of the original design of the framework, but I would further question your architecture... If you truly desire to subscribe to an unlimited number of destinations, I wonder why? What is the underlying business requirement?
Keep in mind that even if we were to do it somehow, it would require creating a message listener container dynamically for each new destination, which would raise more questions, such as how long such a container would have to live, since eventually you would run out of resources.
If, however, you are simply asking about the possibility of mapping multiple destinations to a single channel, so that all messages go to the same message handler (e.g., a StreamListener), then you can simply use the input destination property and define multiple destinations delimited by commas.
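As a sketch of that last option, assuming the annotation-based programming model the question implies (the destination names are illustrative):

// application.properties:
//   spring.cloud.stream.bindings.input.destination=orders,payments,audit

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
public class MultiDestinationListener {
    // every destination listed above is mapped onto the single "input"
    // channel, so one handler sees all of their messages
    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        System.out.println("received: " + payload);
    }
}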

Getting a queue without providing all of its properties

I am trying to write a consumer for an existing queue.
RabbitMQ is running in a separate instance, and a queue named "org-queue" has already been created and bound to an exchange. org-queue is a durable queue and it has some additional properties as well.
Now I need to receive messages from this queue.
I have used the code below to get an instance of the queue:
require "bunny"

conn = Bunny.new
conn.start
ch = conn.create_channel
q = ch.queue("org-queue")   # raises a channel error: durable property mismatch
It throws an error about a mismatched durable property. It seems that by default Bunny uses durable = false, so I added durable: true as a parameter. Now it reports mismatches in the other parameters. Do I need to specify all the parameters to connect to the queue? As RabbitMQ is maintained by a different environment, it is hard for me to get all of its properties.
Is there a way to get the list of queues and listen to the required one from the client, instead of declaring the queue with all of its parameters?
Have you tried the passive: true parameter on queue(), i.e. ch.queue("org-queue", passive: true)? A real-world example is the rabbitmq plugin of Logstash. Passive means the client only checks that the queue exists rather than declaring it, so none of its properties need to match when consuming messages from it.
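Passive declare is an AMQP protocol feature rather than a Bunny-specific one, so the same idea is available in other clients too; for comparison, a minimal sketch with the RabbitMQ Java client, using default connection settings:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PassiveDeclare {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            // only checks that "org-queue" exists; no properties to match
            AMQP.Queue.DeclareOk ok = ch.queueDeclarePassive("org-queue");
            System.out.println("messages ready: " + ok.getMessageCount());
        }
    }
}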
Based on the documentation here http://reference.rubybunny.info/Bunny/Queue.html and
http://reference.rubybunny.info/Bunny/Channel.html
Using the ch.queues() method you could get a hash of all the queues on that channel. Then, once you find the instance of the queue you want to connect to, you could use the q.options() method to find out what options are set on that RabbitMQ queue.
Seems like a roundabout way to do it, but it might work. I haven't tested this, as I don't have a RabbitMQ server up at the moment.
Maybe there is a way to get the queue info with rabbitmqctl or the admin tool (I have forgotten its name). Even if so, I would not bother.
There are two possible solutions that come to my mind.
First solution:
In general, if you want to declare an already existing queue, it has to be with ALL the correct parameters. So what I do is have a helper function for declaring a specific queue (I'm using the C++ client, so the API may be different, but I'm sure the concept is the same). For example, if I have 10 subscribers that consume queue1, and each of them needs to declare the queue in the same way, I simply write a util that declares this queue, and that's that.
Before the second solution, a little aside: maybe this is a case of a misconception that comes up too often :)
You don't really need a specific queue to get the messages from that queue. What you need is a queue and the correct binding. When sending a message, you are not really sending to the queue but to the exchange, sometimes with a routing key, sometimes without one - let's say with. On the receiving end you need a queue to consume a message, so naturally you declare one and bind it to an exchange with a routing key. You don't even need to name the queue explicitly; the server will provide a generated name for you, which you can then use when binding.
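A minimal sketch of that idea with the RabbitMQ Java client (the exchange name and routing key are illustrative): the consumer declares its own server-named queue, so it only needs to know the exchange and binding key, not the original queue's properties.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BoundConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();
        // server-named, exclusive, auto-delete queue: no properties to guess
        String queueName = ch.queueDeclare().getQueue();
        ch.queueBind(queueName, "org-exchange", "org.routing.key");
        ch.basicConsume(queueName, true,
                (tag, delivery) -> System.out.println(new String(delivery.getBody())),
                tag -> { });
        // connection left open so the consumer keeps running
    }
}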
Second solution:
relies on the fact that "it is perfectly legal to bind multiple queues with the same binding key" (found here: https://www.rabbitmq.com/tutorials/tutorial-four-java.html).
So each of your subscribers can declare a queue in whatever way they want, as long as they do the binding correctly. Of course these would be different queues with different names.
I would not recommend this, though: it implies that every message goes to two queues, for example, while (I am assuming the use case here) a message most likely needs to be processed only once, by one subscriber.

Approach for taking action on reception of two different JMS messages

Say I have one JMS message FooCompleted
{"businessId": 1,"timestamp": "20140101 01:01:01.000"}
and another JMS message BazCompleted
{"businessId": 1,"timestamp": "20140101 01:02:02.000"}
The use case is that I want some action triggered when both messages have been received for the business id in question - essentially a join point over the reception of the two messages. The two messages are published on two different queues, and the order in which FooCompleted and BazCompleted are received may vary. In reality, I may need a join over the reception of several different messages for the businessId in question.
The naive approach is to store each message's reception in a DB, check whether its dependent join arm(s) have been received, and only then kick off the desired action. Given that the problem seems generic enough, we were wondering if there is a better way to solve it.
Another thought was to move messages from these two queues into a third queue on reception. The listener on this third queue would use a special avatar of DefaultMessageListenerContainer that overrides doReceiveAndExecute to call receiveMessage for all outstanding messages in the queue, adding back to the queue those messages whose dependent messages have not all arrived yet - the remaining ones would be acknowledged and hence removed. Given that the quantum of messages will be low, probing the queue and re-adding messages should not be a problem. The advantage would be avoiding the DB dependency and the associated scaffolding code. I wanted to see if there is something glaringly bad with this.
Gurus, please critique and point out better ways to achieve this.
Thanks in advance!
Spring Integration with a JMS message-driven adapter and an aggregator with custom correlation and release strategies, plus a persistent (JDBC) message store, will provide your first solution without writing much (or any) code.
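A minimal sketch of that suggestion using Spring Integration's Java DSL; the queue and channel names, the Map payload (i.e., assuming the JSON has already been converted), and the group size of two are illustrative, and a second JMS adapter for BazCompleted would feed the same joinChannel:

import java.util.Map;
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.jdbc.store.JdbcMessageStore;
import org.springframework.integration.jms.dsl.Jms;

@Configuration
public class JoinFlowConfig {

    @Bean
    public IntegrationFlow fooInbound(ConnectionFactory cf) {
        return IntegrationFlows
                .from(Jms.messageDrivenChannelAdapter(cf).destination("fooCompleted"))
                .channel("joinChannel")
                .get();
    }

    @Bean
    public IntegrationFlow join(JdbcMessageStore messageStore) { // assumes a JdbcMessageStore bean elsewhere
        return IntegrationFlows
                .from("joinChannel")
                .aggregate(a -> a
                        // custom correlation: group by the shared businessId
                        .correlationStrategy(m -> ((Map<?, ?>) m.getPayload()).get("businessId"))
                        // custom release: fire once both events have arrived
                        .releaseStrategy(g -> g.size() == 2)
                        // persistent store, so in-flight groups survive restarts
                        .messageStore(messageStore))
                .handle(m -> System.out.println("join complete: " + m.getPayload()))
                .get();
    }
}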

What is the right approach for an async work queue with results?

I have a REST server on heroku. It will have N-dynos for the REST service and N-dynos for workers.
Essentially, I have some long-running REST requests. When these come in, I want to delegate them to one of the workers and give the client a redirect URI to poll until the operation eventually returns its result.
I am going to use Jedis/Redis from RedisToGo for this. As far as I can tell, there are two ways I can do it:
1. Use the PUB/SUB functionality: have the publisher create unique identities for the work results and return these in a redirect URI to the REST client.
2. Essentially the same thing, but using RPUSH/BLPOP instead of PUB/SUB.
I'm not sure what the advantage is to #1. For example, if I have a task called LongMathOperation it seems like I can simply have a list for this. The list elements are JSON objects that have the math operation arguments as well as a UUID generated by the REST server for where the results should be placed. Then all the worker dynos will just have blocking BLPOP calls and the first one there will get the job, process it, and put the results in REDIS using the key of the UUID.
Make sense? So my question is "why would using PUB/SUB be better than this?" What does PUB/SUB bring to the table here that I am missing?
Thanks!
I would also use lists, because pub/sub messages are not persistent: if you have no subscribers, the messages are lost. In other words, if for whatever reason you do not have any workers listening, the client won't get served properly. Lists, on the other hand, are persistent. But pub/sub obviously does not take as much memory as lists, for the same reason: there is nothing to store.
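For illustration, a minimal sketch of the list-based worker loop described in the question, using Jedis; the key names and the job-handling helpers are placeholders:

import java.util.List;
import redis.clients.jedis.Jedis;

public class MathWorker {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            while (true) {
                // block until a job arrives: element 0 is the list name,
                // element 1 is the popped JSON payload
                List<String> job = jedis.blpop(0, "long-math-operations");
                String payload = job.get(1);
                // placeholders: real code would parse the JSON for the
                // arguments and the UUID chosen by the REST server
                String uuid = parseUuid(payload);
                String result = compute(payload);
                // store the result under the UUID the client polls for,
                // with a TTL so abandoned results eventually expire
                jedis.setex("result:" + uuid, 3600, result);
            }
        }
    }

    private static String parseUuid(String json) { return "..."; } // hypothetical helper
    private static String compute(String json) { return "..."; }   // hypothetical helper
}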
