Queue with search ability and receiving at specific time - events

I need to send some DTOs (events) to a queue. An event can be handled now or in the future (defined by the eventTime field in the DTO, which is a timestamp (LocalDateTime)).
I am looking for a queue with the ability to:
search events in the queue (e.g. get all events with DateTime > NOW() && eventType = 'ACTIVE') without taking the event off the queue.
manage requeueing of events or their receive time
In our company we use RabbitMQ only for events that have to execute now (no events in the future). I was reading about RabbitMQ and found that I can reject messages (the message is then requeued), but I found nothing about searching through a queue without taking messages from it. Is it possible in RabbitMQ to do that? Or what queue / tool should I use?

The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Is it possible in RabbitMQ to do that? Or what queue / tool should I use?
It is not possible. You would have to consume all messages to search for the one(s) you need, and then reject / requeue others.
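If it helps to see what that looks like in practice, here is a rough consume-and-requeue sketch with the Java client (the "events" queue and the "eventType" header are placeholders for this example, not something prescribed by RabbitMQ):

    import com.rabbitmq.client.*;

    import java.nio.charset.StandardCharsets;

    public class QueueScanner {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: local broker
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                DeliverCallback onDeliver = (consumerTag, delivery) -> {
                    Object type = delivery.getProperties().getHeaders() == null
                            ? null
                            : delivery.getProperties().getHeaders().get("eventType");
                    if ("ACTIVE".equals(String.valueOf(type))) {
                        // the message we were "searching" for: process it and ack it
                        System.out.println("Matched: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
                        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                    } else {
                        // everything else goes back to the queue (requeue = true)
                        channel.basicReject(delivery.getEnvelope().getDeliveryTag(), true);
                    }
                };
                channel.basicConsume("events", false, onDeliver, consumerTag -> { });
                Thread.sleep(5000); // let the consumer run briefly for the demo
            }
        }
    }

Be aware that a requeued message goes back onto the queue and may be redelivered to the same consumer immediately, so this kind of scanning can busy-loop on a queue full of non-matching messages.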

Related

ActiveMQ - Competing Consumers with Selector - messages starve in the queue

ActiveMQ 5.15.13
Context: I have a single queue with multiple consumers. I want to stop some consumers from processing certain messages. This has to be dynamic; I don't want to create separate queues for it. With selectors this works without any problems, e.g. Consumer1 ignores Stocks -> Consumer1 can process all Invoices and Consumer2 can process all Stocks.
But if there is a large number of messages already in the Queue (of one type, e.g. stocks) and I send a message of another type (e.g. invoices), Consumer1 won't process the message of type invoices. It will instead be idle until Consumer2 has processed all Stocks messages. It does not happen every time, but quite often.
Is there any option to change the order of the new messages coming into the queue, such that an idle consumer with matching selector picks up the new message?
Things I've already tried:
using a PendingMessageLimitStrategy -> it seems like it does not work for queues
increasing the maxPageSize and maxBrowsePageSize in the hope that once all Messages are in RAM, the Consumers will search for their messages.
Exclusive Consumers aren't an option since I want to be able to use more than one Consumer per message type.
I'm pretty sure there is some configuration which allows this type of usage. I'm aware that there are better solutions for this issue, but sadly I can't use them easily due to other constraints.
Thanks a lot in advance!
EDIT: I noticed that when I refresh the localhost queue browser, the stuck messages get processed immediately. It seems this action performs some sort of queue refresh where the messages are filtered by their selectors again. So I just need this action to happen whenever a new message enters the queue...
This is a 'window' problem where the next set of 'stocks' data needs to be processed before the 'invoicing' data can be processed.
The gotcha with window problems like this is that you need to account for the fact that some messages may never come through, or a consumer may never come back online either. Also, eventually you will be asked 'how many invoices or stocks are left to be processed'-- aka observability.
ActiveMQ has you covered-- check out wild-card destinations and consumers.
Produce 'stocks' to:
queue://data.stocks.input
Produce 'invoices' to:
queue://data.invoices.input
You then set up consumers to connect to:
queue://data.*.input
Note the wildcard '*'.
ActiveMQ will match queues based on the wildcard pattern, and then process data accordingly. As a bonus, you can still use a selector.
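A minimal consumer sketch against the wildcard destination (the broker URL and the "region" selector property are assumptions for illustration, using the 5.x javax.jms client):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class WildcardConsumer {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // one consumer attached to every queue matching data.*.input
            Destination wildcard = session.createQueue("data.*.input");
            // a selector can still be applied on top of the wildcard subscription
            MessageConsumer consumer = session.createConsumer(wildcard, "region = 'EU'");

            consumer.setMessageListener(message -> {
                try {
                    System.out.println("Received from " + message.getJMSDestination());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            // a real application would keep the JVM alive and close the connection on shutdown
        }
    }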

how to use same rabbitmq queue in different java microservice [duplicate]

I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have expanded it to have an application with a button to send a message.
Now I have started two consumers on two different computers.
When I send messages, the first message is sent to computer1, the second to computer2, the third to computer1, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design, making it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
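As a rough illustration of the fanout approach with the Java client, the producer side could look like this (the exchange name "orders.fanout" and the local broker are assumptions):

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.nio.charset.StandardCharsets;

    public class FanoutPublisher {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: local broker
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                channel.exchangeDeclare("orders.fanout", BuiltinExchangeType.FANOUT);
                // a fanout exchange ignores the routing key, so an empty string is fine
                channel.basicPublish("orders.fanout", "", null,
                        "hello every consumer".getBytes(StandardCharsets.UTF_8));
            }
        }
    }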
You can't; it's controlled by the server. Check the Round-robin dispatching section.
The server decides whose turn it is. I'm not sure whether there is a set of algorithms you can pick from, but in the end the server controls this (I think round robin is the default),
unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers create the exchanges and consumers create the queues; each consumer can create its own queue and hook it up to an exchange. This makes sure every consumer gets the message on its own private queue.
What you're doing is essentially the 'worker queues' model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, each message is sent to only one node. If you want to send a message to all the nodes, you need a different model called 'pub-sub', where each message is broadcast to all subscribers. The following link shows a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
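A minimal subscriber sketch along the lines of that tutorial (it uses the tutorial's "logs" fanout exchange; each running subscriber declares its own server-named queue and binds it to the exchange):

    import com.rabbitmq.client.*;

    import java.nio.charset.StandardCharsets;

    public class FanoutSubscriber {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: local broker
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            channel.exchangeDeclare("logs", BuiltinExchangeType.FANOUT);
            // each subscriber gets its own server-named, exclusive, auto-delete queue...
            String queueName = channel.queueDeclare().getQueue();
            // ...bound to the fanout exchange, so every published message reaches every subscriber
            channel.queueBind(queueName, "logs", "");

            DeliverCallback onDeliver = (tag, delivery) ->
                    System.out.println("Got: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicConsume(queueName, true, onDeliver, tag -> { });
        }
    }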

jms:message-driven-channel-adapter should not poll for messages older than 30 mins

I want to poll for messages in a queue which are not older than 30 mins.
How do I do that with jms:message-driven-channel-adapter ?
Please help.
Such functionality is not supported by the JMS specification.
On the producer side, you can set a time to live on the message which will cause the message to be removed if not consumed within that time.
You could use a selector to query messages based on a timestamp header, but I have to say that selectors usually don't have good performance.
A topic would be more appropriate for this kind of logic (messages that expire after a while), but I don't know whether it would suit your business logic, because a message on a topic is received by every subscribed consumer/listener.
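If you control the producer, the time-to-live approach could look roughly like this in plain JMS (the connection factory and destination are assumed to come from your existing configuration):

    import javax.jms.*;

    public class ExpiringProducer {

        public static void send(ConnectionFactory connectionFactory, Destination destination, String text)
                throws JMSException {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(destination);
                // the broker discards messages older than 30 minutes instead of delivering them
                producer.setTimeToLive(30 * 60 * 1000L);
                producer.send(session.createTextMessage(text));
            } finally {
                connection.close();
            }
        }
    }

On the consumer side a selector such as JMSTimestamp > <cutoff> is possible, but the selector string is fixed when the consumer is created, so it cannot express a rolling "last 30 minutes"; that is another reason producer-side TTL tends to be the cleaner option.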

Spring integration - Keep messages after delivery

1) I'm interested to learn if it is possible to keep the messages that were delivered using Spring Integration. I'm already using the mongo persistent storage (ConfigurableMongoDbMessageStore), but only failed messages remain in the collection. Ideally, I want all messages to remain with the functionality to list them and retry them.
I would use a field "status" or similar to identify queued, successful or failed messages. Not sure if this field exists already, but I'm guessing something similar must be in place.
2) Also, when a message fails and is persisted, there is a lot more data in the message. This data is serialised, so I'm curious how I can extract the original message and retry it.
3) The goal is to create an interface in the webapp where all queued messages can be seen, and retried. Not only failed messages, but also successful deliveries (useful for testing).
I looked everywhere for an answer to this, but could not find it.
Thanks
I'd say that isn't a good design for a queue component.
Right, it returns failed messages back to the queue for future redelivery, but good messages should be removed from the queue to avoid duplication on the next poll.
No, there is no "status" field on the message, because you are using the store as a queue.
BTW, Spring Integration provides a separate implementation for queue channels: MongoDbChannelMessageStore.
You can achieve it with a separate, parallel Mongo collection and store your message twice: once for the queue and once for future analysis. There you can introduce a "status" field and control it when the message succeeds or fails.
From there you can build your UI to manage that collection and provide actions like send and retry: remove the message from that collection and send it again to both collections.
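A very rough sketch of that parallel-collection idea (the MessageAudit document, the collection name and the status values are all assumptions for illustration, not an existing Spring Integration feature; recordQueued could be called from a wire tap and markStatus from your success/error flows):

    import org.springframework.data.mongodb.core.MongoTemplate;
    import org.springframework.data.mongodb.core.query.Criteria;
    import org.springframework.data.mongodb.core.query.Query;
    import org.springframework.data.mongodb.core.query.Update;
    import org.springframework.messaging.Message;

    public class MessageAuditService {

        private final MongoTemplate mongoTemplate;

        public MessageAuditService(MongoTemplate mongoTemplate) {
            this.mongoTemplate = mongoTemplate;
        }

        // store a copy of the message in the parallel collection with status QUEUED
        public void recordQueued(Message<?> message) {
            MessageAudit audit = new MessageAudit();
            audit.messageId = message.getHeaders().getId().toString();
            audit.payload = String.valueOf(message.getPayload());
            audit.status = "QUEUED";
            mongoTemplate.save(audit, "messageAudit");
        }

        // flip the status once the flow reports success or failure
        public void markStatus(String messageId, String status) {
            mongoTemplate.updateFirst(
                    Query.query(Criteria.where("messageId").is(messageId)),
                    Update.update("status", status),
                    "messageAudit");
        }

        // minimal document type for the sketch
        public static class MessageAudit {
            public String messageId;
            public String payload;
            public String status;
        }
    }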
HTH

Configure a JMS (ActiveMQ) queue so that it only contains the last message

We have a Quartz process that polls an ActiveMQ JMS queue.
We know that we could get several messages a minute and would like to respond only to the most current message, at a configured polling rate of a minute or more.
We don't need to process any of the previous messages.
Is there a way to configure the queue to get this behavior?
It seems like a topic has the ability to do this via the subscription recovery policy with a count of 1. We would like to do this using a queue to guarantee (more or less) single delivery of the message.
Or is there a conceptual flaw in our assumptions...
Thanks
In my opinion there is no standard operation for this, so you will have to write some code....
One possible solution would be to use a QueueBrowser together with a QueueReceiver:
Through the QueueBrowser you get an Enumeration of the messages in the queue. While hasMoreElements() returns true, you can perform a receive with a message selector on each message's JMSMessageID. The last message you receive will be the one you want to have.
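Sketched in plain JMS, assuming the connection factory and queue come from your existing setup:

    import javax.jms.*;
    import java.util.Enumeration;

    public class LastMessageReceiver {

        public static Message receiveLatest(ConnectionFactory connectionFactory, Queue queue) throws JMSException {
            Connection connection = connectionFactory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // browse the queue non-destructively and consume each browsed message by ID;
                // the one received last is the newest
                Message latest = null;
                QueueBrowser browser = session.createBrowser(queue);
                Enumeration<?> messages = browser.getEnumeration();
                while (messages.hasMoreElements()) {
                    String id = ((Message) messages.nextElement()).getJMSMessageID();
                    MessageConsumer consumer = session.createConsumer(queue, "JMSMessageID = '" + id + "'");
                    Message received = consumer.receive(1000); // removes that message from the queue
                    consumer.close();
                    if (received != null) {
                        latest = received;
                    }
                }
                browser.close();
                return latest; // null if the queue was empty
            } finally {
                connection.close();
            }
        }
    }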
When using ActiveMQ, you can use "image caching" on topics. One of the settings there is to always keep the last message sent.
Take a look at the Subscription recovery Policy settings:
http://activemq.apache.org/subscription-recovery-policy.html
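If the topic route is acceptable, a sketch of that policy on an embedded broker in Java could look like this (the class names come from the activemq-broker module; the equivalent <policyEntry> XML form is shown on the linked page):

    import java.util.Collections;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.LastImageSubscriptionRecoveryPolicy;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class LastImageBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();

            // keep only the last message for subscription recovery (e.g. retroactive consumers)
            PolicyEntry topicPolicy = new PolicyEntry();
            topicPolicy.setTopic(">"); // apply to all topics
            topicPolicy.setSubscriptionRecoveryPolicy(new LastImageSubscriptionRecoveryPolicy());

            PolicyMap policyMap = new PolicyMap();
            policyMap.setPolicyEntries(Collections.singletonList(topicPolicy));
            broker.setDestinationPolicy(policyMap);

            broker.addConnector("tcp://localhost:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }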
