Get the most out of high performance MDB - performance

The application server creates a new transaction before calling the MDB's onMessage method, and I perform a database update inside onMessage. Transactions add overhead, and processing several messages in one transaction could improve performance.
Is it possible to make the application server use one transaction for several messages? Or are there other approaches to this problem?
And, by the way, I can't use multiple instances, because I need to preserve message order.

I guess you can store the messages in a list and, depending on how many messages you want to process in one transaction, check the size of the list and process them as a batch.
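A minimal sketch of that idea, assuming bean-managed transactions and an MDB pool size of 1 (which your ordering requirement already forces); BatchingMdb, BATCH_SIZE, and flushBatch are names I made up, not part of any spec:

```java
import java.util.ArrayList;
import java.util.List;
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.transaction.UserTransaction;

// Hypothetical sketch: buffer messages and write them to the database in
// one transaction per batch instead of one transaction per message.
@MessageDriven
@TransactionManagement(TransactionManagementType.BEAN)
public class BatchingMdb implements MessageListener {

    private static final int BATCH_SIZE = 50; // assumption: tune to your load

    @Resource
    private UserTransaction utx;

    private final List<Message> buffer = new ArrayList<>();

    @Override
    public void onMessage(Message message) {
        buffer.add(message);
        if (buffer.size() >= BATCH_SIZE) {
            flushBatch();
        }
    }

    private void flushBatch() {
        try {
            utx.begin();
            for (Message m : buffer) {
                // perform the database update for each message here
            }
            utx.commit();
            buffer.clear();
        } catch (Exception e) {
            try {
                utx.rollback();
            } catch (Exception ignored) {
            }
            // decide: retry the batch, dead-letter it, or propagate
        }
    }
}
```

Note the trade-off: depending on the acknowledgement mode, messages sitting in the buffer may already be acknowledged, so a crash before the flush can lose them.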

Related

Spring Boot @KafkaListener with blocking queue

I am new to Spring Boot's @KafkaListener. My application receives almost 200K messages per second on a topic. I want to separate the message listener from the processing of the messages.
How can I use java.util.concurrent.BlockingQueue with @KafkaListener? Can I do it using CompletableFuture?
Any sample code would help a lot.
I believe you want your consumer to implement pipelining. It's not uncommon to implement this in a scenario like yours. Why? The KafkaConsumer's weakness is that decompressing/deserializing can be time consuming, even before considering the time it takes to do the actual processing. Since these operations are stacked behind one thread, it is ideal to separate the polling from the processing, which is achieved through a couple of buffers.
One way to do this: your EventReceiver spins up a thread for the polling. That thread does the same thing you always do, but instead of firing off the listeners for each event, it passes each record to a receivedEvents buffer, which could be a BlockingQueue<ReceiveEvent>. So in the for loop, you pass each record to the blocking queue. Once the for loop is over, this thread drains another buffer, like Queue<Map<TopicPartition, OffsetAndMetadata>>, and commits the offsets that the processingThread has successfully processed.
Next, your EventReceiver spins up another thread, the processingThread. It handles pulling records from the buffer, firing the event to all the listeners for this receiver, and then updating the offset queue's state for the pollingThread to commit.
Why doesn't the processingThread just commit the offsets instead of passing them back to the pollingThread? Because KafkaConsumer requires that the same thread that calls .poll() is the one that calls consumer.commitAsync(...), or else you'll get a concurrency exception.
This approach doesn't work with auto commit enabled.
In terms of how to do this using Spring Kafka, I'm not completely sure. However, I do know Spring Kafka separates the EventReceiver from the EventListener (@KafkaListener), i.e. it separates the low-level Kafka work from the business logic. In theory you'd have to tune their implementation, but I think implementing this without the Spring Kafka library would be easier.
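For illustration, here is a minimal sketch of the two-thread handoff described above, written against the plain KafkaConsumer rather than Spring Kafka; PipelinedReceiver, handle(), the queue capacity, and the poll timeout are all placeholder choices:

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PipelinedReceiver {

    // Buffer between the polling thread and the processing thread.
    private final BlockingQueue<ConsumerRecord<String, String>> receivedEvents =
            new LinkedBlockingQueue<>(10_000);
    // Offsets finished by the processing thread, committed by the polling
    // thread (only the thread that calls poll() may touch the consumer).
    private final ConcurrentLinkedQueue<Map<TopicPartition, OffsetAndMetadata>> completed =
            new ConcurrentLinkedQueue<>();

    public void start(Properties props, String topic) {
        props.put("enable.auto.commit", "false"); // required for this handoff
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of(topic));

        Thread pollingThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
                    for (ConsumerRecord<String, String> record : records) {
                        receivedEvents.put(record); // back-pressure: blocks when full
                    }
                    // Commit whatever the processing thread has finished so far.
                    Map<TopicPartition, OffsetAndMetadata> offsets;
                    while ((offsets = completed.poll()) != null) {
                        consumer.commitAsync(offsets, null);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                consumer.close();
            }
        }, "polling-thread");

        Thread processingThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    ConsumerRecord<String, String> record = receivedEvents.take();
                    handle(record); // fire your listeners / business logic here
                    completed.add(Map.of(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "processing-thread");

        pollingThread.start();
        processingThread.start();
    }

    private void handle(ConsumerRecord<String, String> record) {
        // application-specific processing
    }
}
```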

ActiveMQ - Cost of creating temporary queues

I would like to use dynamically generated queues in ActiveMQ to serialize the handling of events generated by multiple sources.
I need this to be sure that updates on the same record are never in conflict.
The problem is that I need a different queue for each set of updates that relate to the same record.
There could be in theory millions of records and, of course, I do not want to create millions of queues.
Ideally, a queue should be created when necessary and destroyed when all the updates are completed.
The events that fire the updates are asynchronous but are still correlated. I know that when something happens, several events will be fired at the same time.
It is practically a small burst of asynchronous but correlated updates.
After some time, the queue generated could be deleted.
I understand that there is a cost in creating and deleting queues, but am I right in thinking that creating and deleting these queues at a rate that, even during a peak, won't exceed a few queues per second won't create performance issues?
There is a cost to temporary queues, but it is generally not that high unless there is high network latency between the app server and the broker, so you should be fine.
Temporary queues do have some limits, though: they are deleted once the connection that created them goes down. So if you want your job to resume after a system restart, don't depend on temp queues. I also advise against dynamically creating regular queues at a rate of several per second; the system is not designed for that.
Generally, what you want to do while processing a group of related messages is to use message groups. That way, you can use a single queue whose lifetime does not depend on the connection of the producer (the temp-queue creator).
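As a concrete sketch: with ActiveMQ, a message group is selected by setting the JMSXGroupID property, so all updates for one record go to the same consumer, in order, without creating a per-record queue. The broker URL, queue name, and group key below are placeholders:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class GroupedUpdatePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("record.updates");
            MessageProducer producer = session.createProducer(queue);

            // All updates for the same record share one JMSXGroupID, so the
            // broker delivers them to a single consumer, in order.
            TextMessage update = session.createTextMessage("update for record 42");
            update.setStringProperty("JMSXGroupID", "record-42");
            producer.send(update);
        } finally {
            connection.close();
        }
    }
}
```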

How to create unique messages in a RabbitMQ queue - spring-amqp

I am putting a message containing string data onto a RabbitMQ queue.
Message publishing is called as part of a service, and the service can be called with the same data (which goes to the queue) multiple times, so duplicated data in the queue is very likely.
We have issues with this, as the consumer code inserts this data into a table where it is the primary key. The consumer will be called from 4 different nodes simultaneously, so consumers may end up consuming the same data (from different messages).
I want to know if RabbitMQ publishing has any way to avoid message duplication.
I read that defining a property "x-unique-message-code" to compare messages is an easy and simple way, but I don't know how to do it.
I am using spring-amqp
Any help is highly appreciated.
Thank you
There is a good article from RabbitMQ about reliability: https://www.rabbitmq.com/reliability.html
There is a note like:
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
For this purpose, the message you produce can be supplied with a messageId property.
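For illustration, a minimal Spring AMQP sketch that stamps each outgoing message with a messageId derived from the business data; the exchange, routing key, and class name are placeholders:

```java
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class UniqueMessagePublisher {

    private final RabbitTemplate rabbitTemplate;

    public UniqueMessagePublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void publish(String businessKey, byte[] body) {
        MessageProperties props = new MessageProperties();
        // Derive the id from the business data (not a random UUID), so the
        // same data published twice carries the same messageId and the
        // consumers can recognize it as a duplicate.
        props.setMessageId(businessKey);
        Message message = MessageBuilder.withBody(body)
                .andProperties(props)
                .build();
        rabbitTemplate.send("my.exchange", "my.routing-key", message);
    }
}
```

On the consumer side, the primary-key constraint you already have can serve as the deduplication: catch the duplicate-key exception (or use an insert that ignores conflicts) and acknowledge the message anyway, which is exactly the idempotent handling the quoted note recommends.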

How to handle side effects based on multiple events in a message driven microservice system?

We are currently working in a message-driven microservice environment, and some of our messages/events are event-sourced (using Apache Kafka). Now we are struggling with implementing more complex business requirements, where we have to take multiple events into account to create new events and side effects.
In the current situation we are working with devices that can produce errors. We already process these and have a single topic which contains ERROR_OCCURRED and ERROR_RESOLVED events (so they are in order). We also make sure that all messages regarding a specific device always go to the same partition, and both messages share an ID that identifies the specific error incident. We already have a projection that consumes those events and provides an API for our customers, so that they can see all occurred errors and their current state.
Now we have to deal with the following requirement:
Reporting Errors
We need a push system that reports device errors to our external partners, but only after 15 minutes and only if they have not been resolved in that timeframe. Our first approach was to consume all ERROR_RESOLVED events, store their IDs, and have another consumer that handles the ERROR_OCCURRED events in a delayed fashion (e.g. by only consuming the next ERROR_OCCURRED event on the topic once its timestamp is at least 15 minutes old). We would then know whether that particular error has already been resolved and does not need to be reported (since it shares a common ID with the corresponding ERROR_RESOLVED event). Otherwise we send an HTTP request to our external partner and create an ERROR_REPORTED event on a new topic. Is there any better approach for delayed and conditional message processing?
We also have to take the following special use cases into account:
Service restarts: currently we are planning to keep the list of resolved errors in memory, so if a service restarts, that list has to be rebuilt from scratch. We could just replay the ERROR_RESOLVED messages, but that may take some time, and during that time no ERROR_OCCURRED events should be processed, because that may result in reporting errors that were resolved in less than 15 minutes without us being aware of it. Are there any good practices regarding replay vs. "normal" processing?
Scaling: we may increase or decrease the number of instances of our service at any time, so the partition assignment may change at runtime. That should not be a problem if we create a consumer group for each service instance when consuming the ERROR_RESOLVED events, so that every instance knows all resolved errors while still only handling the ERROR_OCCURRED events of its assigned partitions (in another consumer group which is shared by all instances). Is there a better approach for handling partition reassignment and internal state?
Thanks in advance!
For side effects, I would record all "side" actions in the event store. In your particular example, when it is time to send a notification, I would issue a SEND_NOTIFICATION command that emits a NOTIFICATION_SENT event. These events would be processed by some worker process that performs the actual HTTP request.
Actually, I would elaborate this even further: since notifications can fail, I would have, say, two events, NOTIFICATION_REQUIRED and NOTIFICATION_SENT, so we can retry failed notifications.
And finally your logic would be: "if the error was not resolved within 15 minutes and a notification was not sent, send a notification (or just discard it if it missed its timeframe)".
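A minimal sketch of such a worker, with every type a hypothetical placeholder: a NOTIFICATION_SENT event is appended only when the HTTP call succeeds, so anything still "required" can be retried later:

```java
public class NotificationWorker {

    // All types here are placeholders for your own domain types.
    record NotificationRequired(String errorId, String payload) {}
    record NotificationSent(String errorId) {}

    interface PartnerClient {   // wraps the HTTP call to the external partner
        void reportError(String errorId, String payload) throws Exception;
    }

    interface EventStore {      // append-only event log (e.g. a Kafka topic)
        void append(Object event);
    }

    private final PartnerClient partnerClient;
    private final EventStore eventStore;

    public NotificationWorker(PartnerClient partnerClient, EventStore eventStore) {
        this.partnerClient = partnerClient;
        this.eventStore = eventStore;
    }

    public void handle(NotificationRequired event) {
        try {
            partnerClient.reportError(event.errorId(), event.payload());
            // Only a successful HTTP call produces NOTIFICATION_SENT.
            eventStore.append(new NotificationSent(event.errorId()));
        } catch (Exception e) {
            // No NOTIFICATION_SENT is appended: the notification stays
            // "required", so a retry pass can pick it up again.
        }
    }
}
```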

MDB CLIENT_ACKNOWLEDGEMENT mode with max-messages-in-transaction >1

I have a need to group messages received from a system based on certain criteria. For performance reasons, I want to avoid persisting the individual messages before I can group them. I've seen that JMS implementations provide transaction batching over a set of messages, as described in
Document 1
Document 2
But I also want the acknowledgement of the batch to be controlled by my code, so that if there is some issue in grouping, I am able to roll back the batch I am reading and process the messages on the next try.
From the above links, since the transaction is managed by the container over a set of onMessage calls, I would not control the transaction commit and rollback.
Can someone let me know if I am misreading this, and what would be the way to achieve this?
