I know that JMS messages are immutable, but I have a task to solve which requires rewriting a message in the queue by entity id. Maybe there is a problem with the system design; please help me.
App A sends a message (with entity id = 1) to JMS. App B checks for new messages every minute.
App A might send many messages with entity id = 1 in a minute, but App B should see just the last one.
Is it possible?
App A should work as fast as possible, so I don't like the idea of performing removeMatchingMessages(String selector) before each new message push.
IMO the approach is flawed.
Even if you did accept clearing the queue by using a message selector to remove all messages where entity id = 1 before writing the new message, timing becomes an issue: the process that removes the outdated messages would need to complete before the new message is written, which requires some level of synchronization.
The other solution I can think of is reading all messages before processing them. Every minute, the thread takes the messages and bucketizes them. An earlier entity id = 1 message would be replaced by a later one, so that at the end you have a unique set of messages to process. Then you process them. Of course now you might have too many messages in memory at once, and transactionality gets thrown out the window, but it might achieve what you want.
In this case you could actually be reading the messages as they come in and bucketizing them, and once a minute just run your processing logic. Make sure you synchronize your buckets so they aren't changed out from under you as new messages come in.
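Here's a minimal sketch of that idea in Java, assuming a plain JMS listener and an integer "entityId" message property (the names and the once-a-minute scheduler are illustrative, not a specific API):

    import java.util.HashMap;
    import java.util.Map;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Keep only the latest message per entity id; drain the buckets once a minute.
    public class Bucketizer implements MessageListener {
        private final Map<Integer, Message> buckets = new HashMap<>();

        @Override
        public synchronized void onMessage(Message msg) {
            try {
                // A later message for the same entity id overwrites the earlier one.
                buckets.put(msg.getIntProperty("entityId"), msg);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }

        // Invoked by your once-a-minute scheduler.
        public synchronized void processBuckets() {
            for (Message msg : buckets.values()) {
                process(msg); // your business logic
            }
            buckets.clear();
        }

        private void process(Message msg) { /* ... */ }
    }

The synchronized methods are what keep the buckets from being changed out from under you while a processing pass runs.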
But overall, I'm not sure it's going to work.
ActiveMQ 5.15.13
Context: I have a single queue with multiple consumers, and I want to stop some consumers from processing certain messages. This has to be dynamic; I don't want to create separate queues for this. Using message selectors, this works without any problems: e.g. Consumer1 ignores Stocks, so Consumer1 can process all Invoices and Consumer2 can process all Stocks.
But if there is a large number of messages already in the queue (of one type, e.g. Stocks) and I send a message of another type (e.g. Invoices), Consumer1 won't process the Invoices message; it will instead sit idle until Consumer2 has processed all the Stocks messages. This does not happen every time, but quite often.
Is there any option to change the order of the new messages coming into the queue, such that an idle consumer with matching selector picks up the new message?
Things I've already tried:
using a PendingMessageLimitStrategy -> it seems like it does not work for queues
increasing the maxPageSize and maxBrowsePageSize in the hope that once all Messages are in RAM, the Consumers will search for their messages.
Exclusive Consumers aren't an option since I want to be able to use more than one Consumer per message type.
I'm pretty sure there is some configuration which allows this type of usage. I'm aware that there are better solutions for this issue, but sadly I can't use them easily due to other constraints.
Thanks a lot in advance!
EDIT: I noticed that when I refresh the localhost queue browser, the stuck messages get processed immediately. It seems this action performs some sort of queue refresh where the messages get filtered based on their selectors again. So I just need this action to happen whenever a new message enters the queue...
This is a 'window' problem where the next set of 'stocks' data needs to be processed before the 'invoicing' data can be processed.
The gotcha with window problems like this is that you need to account for the fact that some messages may never come through, or a consumer may never come back online either. Also, eventually you will be asked 'how many invoices or stocks are left to be processed'-- aka observability.
ActiveMQ has you covered-- check out wild-card destinations and consumers.
Produce 'stocks' to:
queue://data.stocks.input
Produce 'invoices' to:
queue://data.invoices.input
You then set up consumers to connect to:
queue://data.*.input
Note the wildcard '*'.
ActiveMQ will match queues based on the wildcard pattern, and then process data accordingly. As a bonus, you can still use a selector.
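Here's a minimal sketch of a wildcard consumer, assuming a broker at tcp://localhost:61616 (the URL and the 'region' selector are illustrative):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class WildcardConsumer {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Matches data.stocks.input, data.invoices.input, etc.
            Destination wildcard = session.createQueue("data.*.input");
            // A selector can still be applied on top of the wildcard match.
            MessageConsumer consumer = session.createConsumer(wildcard, "region = 'EU'");
            consumer.setMessageListener(msg -> System.out.println("got: " + msg));
        }
    }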
My team is considering whether we can use MassTransit as a primary solution for sagas over RabbitMQ (vs. NServiceBus). I admit that our experience with solutions like MassTransit and NServiceBus is minimal, and we have only just started to introduce messaging into our system, so I'm sorry if my question is simple or even stupid.
However, when I reviewed the MassTransit documentation, I was not sure whether it would be possible to solve one of our cases.
The case looks like this:
One of our components will produce up to 100 messages which will be sent to a queue. These messages are the result of a single operation in the system. All of the messages will have the same correlation id and the same internal publication id.
1) Is it possible to define a single saga instance (by correlation id) which will wait until it receives all messages from the queue and then process them as a single batch?
2) Otherwise, is there any way to ensure that all of the sent messages were processed? I assume the correlation id will serve as the way to find the existing saga instance (singleton). Ideally, I would like to complete a saga instance when the system has processed every message that belongs to a single group (one publication).
I looked at CompositeEvent too, but I am not sure whether I could use it to ensure that every message was processed and then complete the saga for a specific correlation id.
Can you explain how this could be achieved, and what mechanism I should look into in order to correlate many messages with the same id to a single saga instance and then complete it once all of them have been consumed?
Thank you in advance for any response
What you describe is how correlation by id works. It is like that out of the box.
So, in short - when you configure correlation for your messages correctly, all messages with the same correlation id will be handled by the same saga instance.
Concerning the second question: unless you publish a separate event that informs the saga how many messages it should expect, how would it know that? You can definitely schedule a long timeout and assume that all the messages will be received by the saga within it, but that's not reliable.
Composite events won't help here, since they are for handling messages of different types as one once all of them have arrived; they don't count the number of messages of each type, they just wait for one message of each type.
The ability to receive a series of messages and then operate on them in a batch is a common case, so much so that there is a sample showing how to do just that:
Batch Sample
Each saga instance has a unique correlation identifier, and as long as those messages can be correlated to that single instance, MassTransit will manage the concurrency (either optimistic or pessimistic, and depending upon the saga storage engine).
I'd suggest reviewing the state machine in the sample, and seeing how that compares to your scenario.
Problem
When my web application updates an item in the database, it sends a message containing the item ID via Camel onto an ActiveMQ queue, the consumer of which will get an external service (Solr) updated. The external service reads from the database independently.
What I want is that if the web application sends another message with the same item ID while the old one is still on queue, that the new message be dropped to avoid running the Solr update twice.
After the update request has been processed and the message with that item ID is off the queue, new request with the same ID should again be accepted.
Is there a way to make this work out of the box? I'm really tempted to drop ActiveMQ and simply implement the update request queue as a database table with a unique constraint, ordered by timestamp or a running insert id.
What I tried so far
I've read this and this page on Stack Overflow. These are the solutions mentioned there:
Idempotent consumers in Camel: Here I can specify an expression that defines what constitutes a duplicate, but that would also prevent all future attempts to send the same message, i.e. update the same item. I only want new update requests to be dropped while an identical one is still on the queue.
"ActiveMQ already does duplicate checks, look at auditDepth!": Well, this looks like a good start and definitely closest to what I want, but this determines equality based on the Message ID which I cannot set. So either I find a way to make ActiveMQ generate the Message ID for this queue in a certain way or I find a way to make the audit stuff look at my item ID field instead of the Message ID. (One comment in my second link even suggests using "a well defined property you set on the header", but fails to explain how.)
Write a custom plugin that redirects incoming messages to the deadletter queue if they match one that's already on the queue. This seems to be the most complete solution offered so far, but it feels so overkill for what I perceive as a fairly mundane and every-day task.
PS: I found another SO page that asks the same thing without an answer.
What you want is not message broker functionality, repeat after me, "A message broker is not a database, A message broker is not a database", repeat as necessary.
The broker's job is to get messages reliably from point A to point B. The client offers some filtering capabilities via message selectors, but this is minimal and mainly useful for letting a single client receive only the specific messages it is interested in, leaving the rest for whichever other client is in charge of processing them.
Your use case calls for a more stateful, database-centric solution, as you've described. Creating a broker plugin to walk the queue and check for a message is reinventing the wheel, and prone to error if the queue depth is large, since ActiveMQ might not even page in all the messages for you, based on memory constraints.
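If you go the database route the question describes, a rough sketch could look like this (hypothetical schema, PostgreSQL-style ON CONFLICT; adapt to your database):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Assumed schema:
    //   CREATE TABLE update_request (item_id BIGINT PRIMARY KEY,
    //                                requested_at TIMESTAMP NOT NULL);
    // The unique constraint makes duplicate requests for a pending item no-ops.
    public class UpdateRequestQueue {

        public void enqueue(Connection db, long itemId) throws SQLException {
            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO update_request (item_id, requested_at) "
                    + "VALUES (?, now()) ON CONFLICT (item_id) DO NOTHING")) {
                ps.setLong(1, itemId);
                ps.executeUpdate(); // duplicates are silently dropped
            }
        }

        public void markDone(Connection db, long itemId) throws SQLException {
            try (PreparedStatement ps = db.prepareStatement(
                    "DELETE FROM update_request WHERE item_id = ?")) {
                ps.setLong(1, itemId);
                ps.executeUpdate(); // later requests for this item are accepted again
            }
        }
    }

Deleting the row once the Solr update finishes is what makes a new request with the same ID acceptable again.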
I am using Camel to integrate with ActiveMQ JMS, receiving prices for products on this queue. I am using JMSXGroupID on productId to ensure ordering across a productId. Now, if I fail to process a message, I move it to a dead letter queue. This could be because of a connection error on a dependent service, or because of an error with the message itself.
In the case of the former, I would have to manually remove it from the DLQ and put it back into the JMS queue.
Now the problem is that I don't know whether any other message in that group has been received and processed in the meantime, and hence unsidelining from the DLQ may disrupt the order. On the other hand, if I don't unsideline it and no other message has been received, the product id will not get the correct price.
One solution I have in mind is to use a fast key-value store (Redis) to store the last messageId or JMSTimestamp against each productId (message group), updated every time I dequeue a message. Any other solution for this?
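A rough sketch of what I have in mind, using the Jedis client (the key naming is illustrative):

    import redis.clients.jedis.Jedis;

    // Track the last processed JMSTimestamp per product id, so a message
    // unsidelined from the DLQ can be checked for staleness before reprocessing.
    public class GroupOrderTracker {
        private final Jedis redis = new Jedis("localhost", 6379);

        // Call after each successful dequeue for a product id.
        public void recordProcessed(String productId, long jmsTimestamp) {
            redis.set("lastProcessed:" + productId, Long.toString(jmsTimestamp));
        }

        // True if the DLQ message is older than the last processed one for this group.
        public boolean isStale(String productId, long jmsTimestamp) {
            String last = redis.get("lastProcessed:" + productId);
            return last != null && jmsTimestamp < Long.parseLong(last);
        }
    }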
Relying on message order in JMS is a risky business - at best.
The best thing to do is to make the receiver handle messages out of sequence as a special case (while still taking advantage of message order during normal operation).
You may also want to distinguish between two kinds of errors: poison messages and temporary connection problems, maybe even using two different error queues for them. In the case of a poison message (invalid payload etc.), there is nothing you can really do about it except start a bug investigation. In such cases, you can probably send along "something else", such as a dummy message, so as not to interfere with the order.
For issues with connection problems, you can have another strategy: ActiveMQ Redelivery Policies. If there is network trouble, it's usually no use trying to process the second message until the first has been handled. A redelivery policy ensures that (given that you have a single consumer). There is another question on SO where the poster actually has a solution to your problem and wants to avoid it. Read it. :)
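For reference, a redelivery policy can be configured on the connection factory; a minimal sketch, assuming a broker at tcp://localhost:61616 (the delays and limits are illustrative):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.RedeliveryPolicy;

    public class RedeliveryConfig {
        public static ActiveMQConnectionFactory createFactory() {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            RedeliveryPolicy policy = factory.getRedeliveryPolicy();
            policy.setInitialRedeliveryDelay(1000);  // 1s before the first retry
            policy.setUseExponentialBackOff(true);
            policy.setBackOffMultiplier(2.0);
            policy.setMaximumRedeliveries(6);        // then the message goes to the DLQ
            return factory;
        }
    }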
I already have a few ideas, but I'd like to hear some differing opinions and alternatives from everyone if possible.
I have a Windows console app that uses Exchange web services to connect to Exchange and download e-mail messages. The goal is to take each individual message object, extract metadata, parse attachments, etc. The app is checking the inbox every 60 seconds. I have no problems connecting to the inbox and getting the message objects. This is all good.
Here's where I am accepting input from you: When I get a message object, I immediately want to process the message and do all of the busy work explained above. I was considering a few different approaches to this:
Queuing the e-mail objects up in a table and processing them one-by-one.
Passing the e-mail object off to a local Windows service to do the busy work.
I don't think db queuing would be a good approach because, at times, multiple e-mail objects need to be processed. It's not fair if a low-priority e-mail with 30 attachments is processed before a high-priority e-mail with 5 attachments is processed. In other words, e-mails lower in the stack shouldn't need to wait in line to be processed. It's like waiting in line at the store with a single register for the bonehead in front of you to scan 100 items. It's just not fair. Same concept for my e-mail objects.
I'm somewhat unsure about the Windows service approach. However, I'm pretty confident that I could have an installed service listening, waiting on demand for an instruction to process a new e-mail. If I have 5 separate e-mail objects, can I make 5 separate calls to the Windows service and process without collisions?
I'm open to suggestions or alternative approaches. However, the solution must be presented using .NET technology stack.
One option is to do the processing in the console application. What you have looks like a standard producer-consumer problem with one producer (the thread that gets the emails) and multiple consumers. This is easily handled with BlockingCollection.
I'll assume that your message type (what you get from the mail server) is called MailMessage.
So you create a BlockingCollection<MailMessage> at class scope. I'll also assume that you have a timer that ticks every 60 seconds to gather messages and enqueue them:
private BlockingCollection<MailMessage> MailMessageQueue =
    new BlockingCollection<MailMessage>();

// Timer is created as a one-shot and re-initialized at each tick.
// This prevents the timer proc from being re-entered if it takes
// longer than 60 seconds to run.
private System.Threading.Timer ProducerTimer;

// In the constructor (or startup code), since a field initializer
// can't reference the instance method TimerProc:
ProducerTimer = new System.Threading.Timer(
    TimerProc, null, TimeSpan.FromSeconds(60), TimeSpan.FromMilliseconds(-1));

void TimerProc(object state)
{
    var newMessages = GetMessagesFromServer();
    foreach (var msg in newMessages)
    {
        MailMessageQueue.Add(msg);
    }
    // Re-arm the one-shot timer for the next minute.
    ProducerTimer.Change(TimeSpan.FromSeconds(60), TimeSpan.FromMilliseconds(-1));
}
Your consumer threads just read the queue:
void MessageProcessor()
{
    foreach (var msg in MailMessageQueue.GetConsumingEnumerable())
    {
        ProcessMessage(msg); // pass the dequeued message to your handler
    }
}
The timer will cause the producer to run once per minute. To start the consumers (say you want two of them):
var t1 = Task.Factory.StartNew(MessageProcessor, TaskCreationOptions.LongRunning);
var t2 = Task.Factory.StartNew(MessageProcessor, TaskCreationOptions.LongRunning);
So you'll have two threads processing messages.
It makes no sense to have more processing threads than you have available CPU cores. The producer thread presumably won't require a lot of CPU resources, so you don't have to dedicate a thread to it. It'll just slow down message processing briefly whenever it's doing its thing.
I've skipped over some detail in the description above, particularly cancellation of the threads. When you want to stop the program, but let the consumers finish processing messages, just kill the producer timer and set the queue as complete for adding:
MailMessageQueue.CompleteAdding();
The consumers will empty the queue and exit. You'll of course want to wait for the tasks to complete (see Task.Wait).
If you want the ability to kill the consumers without emptying the queue, you'll need to look into Cancellation.
The default backing store for BlockingCollection is a ConcurrentQueue, which is a strict FIFO. If you want to prioritize things, you'll need to come up with a concurrent priority queue that implements the IProducerConsumerCollection interface. .NET doesn't have such a thing (or even a priority queue class), but a simple binary heap that uses locks to prevent concurrent access would suffice in your situation; you're not talking about hitting this thing very hard.
Of course you'd need some way to prioritize the messages. Probably sort by number of attachments so that messages with no attachments are processed quicker. Another option would be to have two separate queues: one for messages with 0 or 1 attachments, and a separate queue for those with lots of attachments. You could have one of your consumers dedicated to the 0 or 1 queue so that easy messages always have a good chance of being processed first, and the other consumers take from the 0 or 1 queue unless it's empty, and then take from the other queue. It would make your consumers a little more complicated, but not hugely so.
If you choose to move the message processing to a separate program, you'll need some way to persist the data from the producer to the consumer. There are many possible ways to do that, but I just don't see the advantage of it.
I'm somewhat of a novice here, but it seems like an initial approach could be to have a separate high-priority queue. Every time a worker is available to obtain a new message, it could do something like:
If lowPriorityQueue.Count > 0 AndAlso
        DateTime.Now - lowPriorityQueue.Peek().AddedTime > maxWaitTime Then
    ' The oldest low-priority message has waited too long; don't starve it.
    ProcessMessage(lowPriorityQueue.Dequeue())
ElseIf highPriorityQueue.Count > 0 Then
    ProcessMessage(highPriorityQueue.Dequeue())
ElseIf lowPriorityQueue.Count > 0 Then
    ProcessMessage(lowPriorityQueue.Dequeue())
End If
In a single thread, while you can still have one message blocking the others, higher priority messages could be processed sooner.
Depending on how fast most messages get processed, the application could create a new worker on a new thread if the queues are getting too big or too old.
Please tell me if I'm completely off-base here though.