Redelivery of JMS messages in microservices

I want to understand how JMS message redelivery works in a microservices system.
For example, suppose I have two instances of a User service, and each instance has a listener on the same destination, so there are two listeners in total. The listener looks like this:
@JmsListener(destination = "order:new", containerFactory = "orderFactory")
@Transactional
public void create(OrderDTO orderDTO) {
    Order order = new Order(orderDTO);
    orderRepository.save(order);
    jmsTemplate.convertAndSend("order:need_to_pay", order);
}
So my questions are: how many times will a message be delivered? If this method throws an error, the message will be redelivered, but since I have two instances of the service, which instance will receive the redelivered message?

This is not part of the JMS spec; how many times the message will be delivered depends on the broker configuration. Many brokers can be configured to send the message to a dead-letter queue after some number of attempts.
There is no guarantee that a redelivery will go to the same instance.
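For example, with the classic ActiveMQ client the redelivery behaviour is configured on the connection factory. A minimal sketch (the broker URL, attempt limit and delay here are assumptions, not values from the question):

// Sketch, assuming the classic ActiveMQ client: redelivery is controlled by a
// RedeliveryPolicy on the connection factory; once the attempts are exhausted,
// the broker moves the message to the dead-letter queue (ActiveMQ.DLQ by default).
ActiveMQConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616"); // hypothetical URL
RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
redeliveryPolicy.setMaximumRedeliveries(5);        // 1 delivery + 5 redeliveries, then DLQ
redeliveryPolicy.setInitialRedeliveryDelay(1000L); // wait 1s before the first retry
connectionFactory.setRedeliveryPolicy(redeliveryPolicy);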

Spring Integration message queue

I have a JMS message endpoint like this:
@Bean
public JmsMessageDrivenEndpoint fsJmsMessageDrivenEndpoint(ConnectionFactory fsConnectionFactory,
        Destination fsInboundDestination,
        MessageConverter fsMessageConverter) {
    return Jms.messageDrivenChannelAdapter(fsConnectionFactory)
            .destination(fsInboundDestination)
            .jmsMessageConverter(fsMessageConverter)
            .outputChannel("fsChannelRouter.input")
            .errorChannel("fsErrorChannel.input")
            .get();
}
So my question is: will it fetch the next message before the current message has been processed? If so, will it keep fetching messages from the MQ queue until it fills up all the memory? How can I avoid that?
The JmsMessageDrivenEndpoint is based on the JmsMessageListenerContainer, its threading model, and the MessageListener callback for pulled messages. As long as your MessageListener blocks, the container does not pull the next message from the queue. When we build an integration flow starting with a JmsMessageDrivenEndpoint, the flow becomes that MessageListener callback. As long as we process the message downstream in the same thread (a DirectChannel by default between endpoints), we don't pull the next message from the JMS queue. If you place a QueueChannel or an ExecutorChannel in between, you shift processing to a different thread; the current (JMS listener) thread gets control back and is ready to pull the next message. In that case your concern about memory is correct. You can still use a QueueChannel with a limited size, or configure your ExecutorChannel with a limited thread pool, as the sketch below shows.
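For example, a bounded hand-off could look like this (a minimal sketch; the bean name and capacity are made up):

@Bean
public MessageChannel fsBufferedChannel() {
    // A bounded QueueChannel: at most 100 messages can sit in memory;
    // further sends block (or time out) instead of exhausting the heap.
    return new QueueChannel(100);
}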
In any case, my recommendation is not to do any thread shifting in the flow when you start from a JMS listener container. It is better to block for the next message and let the current transaction finish its job, so you won't lose a message when something crashes.

ActiveMQ messageId not working to stop duplication

I am using ActiveMQ for messaging, and there is a requirement that duplicate messages should be handled by ActiveMQ automatically.
For that I generate a unique message key and set it via a message post-processor.
The code is as follows:
jmsTemplate.convertAndSend(dataQueue, event, messagePostProcessor -> {
    LocalDateTime dt = LocalDateTime.now();
    long ms = dt.get(ChronoField.MILLI_OF_DAY) / 1000;
    String messageUniqueId = event.getResource() + event.getEntityId() + ms;
    System.out.println("messageUniqueId : " + messageUniqueId);
    messagePostProcessor.setJMSMessageID(messageUniqueId);
    messagePostProcessor.setJMSCorrelationID(messageUniqueId);
    return messagePostProcessor;
});
As can be seen, the code generates a unique id and then sets it on the message in the post-processor.
Can someone help me with this? Is there any other configuration that I need to do?
A consumer can receive duplicate messages for two main reasons: a producer sent the same message more than once, or a consumer received the same message more than once.
Apache ActiveMQ Artemis includes powerful automatic duplicate message detection, filtering out messages sent by a producer more than once.
To prevent a consumer from receiving the same message more than once, an idempotent consumer must be implemented; e.g. Apache Camel provides an Idempotent Consumer component that works with any JMS provider, see: http://camel.apache.org/idempotent-consumer.html
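Note that Artemis keys its duplicate detection off the "_AMQ_DUPL_ID" string property, not JMSMessageID (which the JMS provider overwrites on send, so setting it in a post-processor has no effect). A sketch, assuming Artemis is the broker:

jmsTemplate.convertAndSend(dataQueue, event, message -> {
    // Artemis duplicate detection: set the _AMQ_DUPL_ID property; the broker
    // silently drops any later message carrying the same value.
    String messageUniqueId = event.getResource() + event.getEntityId();
    message.setStringProperty("_AMQ_DUPL_ID", messageUniqueId);
    return message;
});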

How to consume multiple Kafka messages from the same topic with multiple acks?

I am trying to consume multiple messages from a topic with manual acknowledgment, but the ack only works for all messages at once, with a single ack.
@KafkaListener(
        id = "${kafka.buyers.product-sales-pricing.id}",
        topics = "${kafka.buyers.product-sales-pricing.topic}",
        groupId = "${kafka.buyers.group-id}",
        concurrency = "${kafka.buyers.concurrency}"
)
public void listen(List<String> message, Acknowledgment ack) {}
With the above code I get 5 messages per poll when I put the following configuration in the Spring Boot property file:
kafka:
  max-poll-records: 5 # Maximum number of records returned in a single call to poll()
but when I ack in that listener, it acks all 5 messages at the same time.
I actually want to ack each message separately (5 messages with 5 acks).
How can I do this in a Spring Boot project?
When using a batch listener, the entire batch is acked when Acknowledgment.acknowledge() is called.
I would recommend using a single record listener rather than a batch listener for this use case.
listen(String msg, Acknowledgment ack)
It's not clear why you would commit offsets for only part of the batch.
If you must use a batch listener, it can still be done, but it is rather more complicated: you would need to receive List<ConsumerRecord<?, ?>> to get the topic/partition/offset information, and also add Consumer<?, ?> consumer to the method parameters (and remove the Acknowledgment); you can then call commitSync() on the consumer with whatever offsets you want. But you MUST call it on the listener thread - the consumer is not thread-safe.
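A sketch of that batch variant (the topic, container factory name, and process() method are made up for illustration):

@KafkaListener(topics = "my-topic", containerFactory = "batchFactory")
public void listen(List<ConsumerRecord<String, String>> records, Consumer<?, ?> consumer) {
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical per-record business logic
        // Commit this record's offset only; offset + 1 is the next record to read.
        // commitSync must run on the listener thread - the consumer is not thread-safe.
        consumer.commitSync(Collections.singletonMap(
                new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1)));
    }
}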

Spring Integration: TaskExecutor and MaxConcurrentConsumers on AmqpInboundChannelAdapter

My Spring Integration app consumes messages from RabbitMQ, transforms them to SOAP messages and makes web service requests.
It is possible to get many (10-50) messages per second from the queue.
Also, after an application restart there could be many thousands of messages in the RabbitMQ queue.
What is the best way to process up to 10 messages in parallel threads? (Message ordering is nice to have but not a required feature; if the web service answers with a business failure, the failed message should be retried until it succeeds.)
The AMQP listener should not consume more messages from the queue than there are free threads available in the task executor.
I could define a task executor on a channel like this:
@Bean
public AmqpInboundChannelAdapterSMLCSpec amqpInboundChannelAdapter(ConnectionFactory connectionFactory, Queue queue) {
    return Amqp.inboundAdapter(connectionFactory, queue);
}

IntegrationFlow integrationFlow = IntegrationFlows
        .from(amqpInboundChannelAdapter)
        .channel(c -> c.executor(exportFlowsExecutor))
        .transform(businessObjectToSoapRequestTransformer)
        .handle(webServiceOutboundGatewayFactory.getObject())
        .get();
Or is it enough to define a task executor on the AmqpInboundChannelAdapter like this, without defining a channel task executor in the flow definition:
@Bean
public AmqpInboundChannelAdapterSMLCSpec amqpInboundChannelAdapter(ConnectionFactory connectionFactory, Queue queue) {
    return Amqp.inboundAdapter(connectionFactory, queue)
            .configureContainer(c -> c.taskExecutor(taskExecutor));
}
Or maybe define a task executor for a channel as in option 1, but additionally set maxConcurrentConsumers on the channel adapter like this:
@Bean
public AmqpInboundChannelAdapterSMLCSpec amqpInboundChannelAdapter(ConnectionFactory connectionFactory, Queue queue) {
    return Amqp.inboundAdapter(connectionFactory, queue)
            .configureContainer(c -> c.maxConcurrentConsumers(10));
}
The best practice is to configure concurrency on the listener container and let all downstream processing happen on those container threads. This way you get natural back-pressure: no more messages are polled from the queue while the threads are busy. It also avoids message loss: with an ExecutorChannel after the listener container you would free the polling thread, and the current message would be acked as consumed even though it may still fail downstream.
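A sketch of that recommended setup, assuming the Java DSL from the question (the concurrency value of 10 comes from the question's requirement):

@Bean
public AmqpInboundChannelAdapterSMLCSpec amqpInboundChannelAdapter(ConnectionFactory connectionFactory, Queue queue) {
    // Concurrency lives on the listener container; the flow stays on these
    // container threads (DirectChannels downstream), so busy threads naturally
    // stop further polling and no executor channel is needed in the flow.
    return Amqp.inboundAdapter(connectionFactory, queue)
            .configureContainer(c -> c.concurrentConsumers(10));
}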

Spring Kafka discard message by condition in listener

In my Spring Boot/Kafka project I have the following listener:
@KafkaListener(topics = "${kafka.topic.update}", containerFactory = "updateKafkaListenerContainerFactory")
public void onUpdateReceived(ConsumerRecord<String, Update> consumerRecord, Acknowledgment ack) {
    // do some logic
    ack.acknowledge();
}
Inside the listener I need to check some condition according to my business logic, and if it is not met, skip processing of this particular message and let Kafka know to redeliver it one more time.
The reason I need this: according to the business logic of my application, I must avoid sending more than one post per second to a particular Telegram chat. That is why I'd like to check the chatLastSent time in the Kafka listener and postpone sending the message if needed (via redelivery to this Kafka topic).
How do I do this properly? Is it enough to just not call ack.acknowledge() this time, or is there a more proper way to achieve it?
Use the SeekToCurrentErrorHandler.
When you throw an exception, the container invokes the error handler, which re-seeks the unprocessed messages so they will be fetched again on the next poll.
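A sketch of wiring it up, assuming spring-kafka 2.2 or later and the factory name from the question:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Update> updateKafkaListenerContainerFactory(
        ConsumerFactory<String, Update> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Update> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // A thrown exception in the listener now triggers a re-seek, so the
    // record is redelivered on the next poll instead of being skipped.
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}

In the listener itself you would then throw an exception (rather than merely skipping ack.acknowledge()) when chatLastSent is too recent.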
Alternatively, you can use a RecordFilterStrategy.
See the docs here: https://docs.spring.io/spring-kafka/docs/2.0.5.RELEASE/reference/html/_reference.html#_filtering_messages
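Note that a filter discards the record rather than redelivering it, so it fits the "skip" part but not the "retry later" part of the question. A sketch, assuming the same container factory as above (isTooSoon is a hypothetical predicate):

// Return true to discard the record before it ever reaches the listener.
factory.setRecordFilterStrategy(record -> isTooSoon(record.value()));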
