RabbitMQ infinite loop issue - spring-boot

I have an issue using RabbitMQ to send a message from service A to service B, which in turn sends a notification to service C. The problem is that I had to put the @RabbitListener and the RabbitTemplate in the same method, like so:
@Autowired private RabbitTemplate template;

@RabbitListener(queues = RabbitConfig.QUEUETD)
public ResponseEntity<String> AddSas_Campaign(SasCampaign sasCampaign) {
    if (/* ... campaign code does not exist ... */) {
        //...
        template.convertAndSend(RabbitConfig.EXCHANGE, RabbitConfig.ROUTING_KEY, sasCampaign);
        return new ResponseEntity<String>("New line inserted ", HttpStatus.OK);
    }
    else return new ResponseEntity<String>("Campaign Code exists", HttpStatus.OK);
}
and it is creating an infinite loop of messages (over 2,000/min) and exceptions non-stop.
2021-06-29 14:34:16,560 WARN [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1-1] org.springframework.amqp.rabbit.listener.ConditionalRejectingErrorHandler: Execution of Rabbit message listener failed.
org.springframework.amqp.rabbit.support.ListenerExecutionFailedException: Listener threw exception
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.wrapToListenerExecutionFailedExceptionIfNeeded(AbstractMessageListenerContainer.java:1746)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1636)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.actualInvokeListener(AbstractMessageListenerContainer.java:1551)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:1539)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:1530)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:1474)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:967)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:913)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1600(SimpleMessageListenerContainer.java:83)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1288)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1194)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.amqp.rabbit.listener.adapter.ReplyFailureException: Failed to send reply with payload 'InvocationResult [returnValue=<200 OK OK,Test SasCamapign inserted ,[]>, returnType=org.springframework.http.ResponseEntity<java.lang.String>, bean=tn.itserv.services.Sas_CampaignService@4232ecc, method=public org.springframework.http.ResponseEntity tn.itserv.services.Sas_CampaignService.AddSas_Campaign(tn.itserv.entities.SasCampaign)]'
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.doHandleResult(AbstractAdaptableMessageListener.java:476)
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.handleResult(AbstractAdaptableMessageListener.java:400)
at org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter.invokeHandlerAndProcessResult(MessagingMessageListenerAdapter.java:152)
at org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter.onMessage(MessagingMessageListenerAdapter.java:135)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1632)
... 10 common frames omitted
Caused by: org.springframework.amqp.AmqpException: Cannot determine ReplyTo message property value: Request message does not contain reply-to property, and no default response Exchange was set.
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.getReplyToAddress(AbstractAdaptableMessageListener.java:576)
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.doHandleResult(AbstractAdaptableMessageListener.java:472)
... 14 common frames omitted
To be honest, I haven't understood 100% how this works. Should I create a different queue for every exchange? It works partially (from service A to B, or from B to C), but when I use both it creates these exceptions.

I've experienced a similar problem in production that rears its head a couple of times a year. The concept of 'requeue on failure' seems like a bad idea, but it's what was implemented by the previous developer.
The cause is that the message is put back on the queue when the listener throws an exception. To fix it, do not let an exception escape your @RabbitListener. Try surrounding the body with a try/catch and returning a response entity with a server error:
@Autowired private RabbitTemplate template;

@RabbitListener(queues = RabbitConfig.QUEUETD)
public ResponseEntity<String> AddSas_Campaign(SasCampaign sasCampaign) {
    try {
        if (/* ... campaign code does not exist ... */) {
            //...
            template.convertAndSend(RabbitConfig.EXCHANGE, RabbitConfig.ROUTING_KEY, sasCampaign);
            return new ResponseEntity<String>("New line inserted ", HttpStatus.OK);
        }
        else return new ResponseEntity<String>("Campaign Code exists", HttpStatus.OK);
    } catch (Exception e) {
        return new ResponseEntity<String>("Error on Message Processing", HttpStatus.INTERNAL_SERVER_ERROR);
    }
}

So I realized that a @RabbitListener method should have no return value (void instead of my ResponseEntity), otherwise the container tries to send the return value as a reply; my idea was to create a new method with no return value that calls my original method:
@RabbitListener(queues = RabbitConfig.QUEUETD)
public void receiveFromService(SasCampaign sasCampaign) {
    AddSas_Campaign(sasCampaign);
}
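If requeue-on-exception loops remain a concern, the listener container factory can also be told not to requeue rejected messages; they are then discarded (or dead-lettered, if a DLX is configured) instead of being redelivered forever. A minimal sketch, assuming a standard Spring Boot setup (the factory bean name is illustrative and must be referenced from the @RabbitListener via containerFactory):
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitListenerConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory noRequeueFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Rejected messages are not put back on the queue, which prevents the infinite redelivery loop.
        factory.setDefaultRequeueRejected(false);
        return factory;
    }
}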

Related

How to know which exception was thrown from the error handler in a dead letter queue listener?

I have a quorum queue (myQueue) and its dead letter queue (myDLQueue). We have several exceptions which we have classified as either retryable or fatal. But sometimes the listener below makes an API call that throws a RateLimitException. In this case the application should increase both the retry count and the retry delay.
#RabbitListener(queues = "#{myQueue.getName()}", errorHandler = "myErrorHandler")
#SendTo("#{myStatusQueue.getName()}")
public Status process(#Payload MyMessage message, #Headers MessageHeaders headers) {
int retries = headerProcessor.getRetries(headers);
if (retries > properties.getMyQueueMaxRetries()) {
throw new RetriesExceededException(retries);
}
if (retries > 0) {
logger.info("Message {} has been retried {} times. Process it again anyway", kv("task_id", message.getTaskId()), retries);
}
// here we send a request to an api. but sometimes api returns rate limit error in case we send too many requests.
// In that case makeApiCall throws RateLimitException which extends RetryableException
makeApiCall() // --> it will throw RateLimitException
if(/* a condition that needs to retry sending the message*/) {
throw new RetryableException()
}
if(/* a condition that should not retry*/){
throw new FatalException()
}
return new Status("Step 1 Success!");
}
I also have an error handler (myErrorHandler) that catches the exceptions thrown from the listener above and manages the retry process according to the type of the exception.
public class MyErrorHandler implements RabbitListenerErrorHandler {

    @Override
    public Object handleError(Message amqpMessage,
                              org.springframework.messaging.Message<?> message,
                              ListenerExecutionFailedException exception) {
        // Check if error is fatal or retryable
        if (exception.getCause() /* ..is fatal? */) {
            return new Status("FAIL!");
        }
        // Retryable exception: rethrow it and let the message be NACKed and retried via the DLQ
        throw exception;
    }
}
The last part I have is a DLQ handler that listens for dead letter queue messages and re-sends them to the original queue (myQueue).
@Service
public class MyDLQueueHandler {

    private final MyAppProperties properties;
    private final MessageHeaderProcessor headerProcessor;
    private final RabbitProducerService rabbitProducerService;

    public MyDLQueueHandler(MyAppProperties properties, MessageHeaderProcessor headerProcessor, RabbitProducerService rabbitProducerService) {
        this.properties = properties;
        this.headerProcessor = headerProcessor;
        this.rabbitProducerService = rabbitProducerService;
    }

    /**
     * Since message TTL is not available with quorum queues, manually listen to the DL queue and
     * re-send the message with a delay. This allows messages to be processed again.
     */
    @RabbitListener(queues = {"#{myDLQueue.getName()}"})
    public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
        String routingKey = headerProcessor.getRoutingKey(headers);
        Map<String, Object> newHeaders = Map.of(
                MessageHeaderProcessor.DELAY, properties.getRetryDelay(), // I need to send an increased delay in case of RateLimitException.
                MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
        );
        rabbitProducerService.sendMessageDelayed(message, routingKey, newHeaders);
    }
}
In the handleError method above, the inputs do not carry any information about the exception instance thrown from MyErrorHandler or the myQueue listener. Currently I have to pass the retry delay by reading it from app.properties, but I need to increase this delay if a RateLimitException was thrown. So my question is: how do I know which error was thrown from MyErrorHandler while I am in MyDLQueueHandler?
When you use the normal dead letter mechanism in RabbitMQ, there is no exception information provided - the message is the original rejected message. However, Spring AMQP provides a RepublishMessageRecoverer which can be used in conjunction with a retry interceptor. In that case, exception information is published in headers.
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
The RepublishMessageRecoverer publishes the message with additional information in message headers, such as the exception message, stack trace, original exchange, and routing key. Additional headers can be added by creating a subclass and overriding additionalHeaders().
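For example, a minimal sketch of such a subclass (the class name and the extra header name are illustrative, not part of the framework):
import java.util.Map;
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.retry.RepublishMessageRecoverer;

public class ExceptionClassifyingRecoverer extends RepublishMessageRecoverer {

    public ExceptionClassifyingRecoverer(AmqpTemplate errorTemplate, String errorExchange, String errorRoutingKey) {
        super(errorTemplate, errorExchange, errorRoutingKey);
    }

    @Override
    protected Map<String, Object> additionalHeaders(Message message, Throwable cause) {
        // Expose the exception class name as its own header so the DLQ listener
        // does not have to parse the stack trace header.
        Throwable root = (cause.getCause() != null) ? cause.getCause() : cause;
        return Map.of("x-exception-class", root.getClass().getName());
    }
}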
@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(5)
            .recoverer(new RepublishMessageRecoverer(amqpTemplate(), "something", "somethingelse"))
            .build();
}
The interceptor is added to the container's advice chain.
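For illustration, a minimal sketch of wiring the interceptor into the container factory's advice chain (the factory bean itself is assumed here, it is not shown in the original answer):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        RetryOperationsInterceptor interceptor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // After maxAttempts is exhausted, the RepublishMessageRecoverer publishes the failed
    // message (with the x-exception-* headers) instead of requeueing it.
    factory.setAdviceChain(interceptor);
    return factory;
}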
https://github.com/spring-projects/spring-amqp/blob/57596c6a26be2697273cd97912049b92e81d3f1a/spring-rabbit/src/main/java/org/springframework/amqp/rabbit/retry/RepublishMessageRecoverer.java#L55-L61
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_ORIGINAL_EXCHANGE = "x-original-exchange";
public static final String X_ORIGINAL_ROUTING_KEY = "x-original-routingKey";
The exception type can be found in the stack trace header.
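So in the DLQ listener the exception type can be recovered from those headers. A minimal sketch, assuming the failed messages reach the DL queue via the RepublishMessageRecoverer (getRateLimitRetryDelay() is a hypothetical property, used only for illustration):
@RabbitListener(queues = "#{myDLQueue.getName()}")
public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
    // Header added by RepublishMessageRecoverer; contains the full exception stack trace.
    String stackTrace = (String) headers.get("x-exception-stacktrace");
    boolean rateLimited = stackTrace != null && stackTrace.contains("RateLimitException");
    long delay = rateLimited ? properties.getRateLimitRetryDelay() : properties.getRetryDelay();
    Map<String, Object> newHeaders = Map.of(
            MessageHeaderProcessor.DELAY, delay,
            MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
    );
    rabbitProducerService.sendMessageDelayed(message, headerProcessor.getRoutingKey(headers), newHeaders);
}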

spring-kafka consumer batch error handling with spring boot version 2.3.7

I am trying to implement Spring Kafka batch processing with error handling. First of all, I have a few questions.
What is the difference between listener and container error handlers, and which errors fall into each of these two categories?
Could you please provide some samples to help me understand this better?
Here is our design:
Poll at a certain interval
Consume messages in batch mode
Push to a local cache (application cache) based on key (to avoid duplicate events)
Push all values one by one to another topic once the batch processing is done
Clear the cache once operation 3 is done and acknowledge the offsets manually
Here is my plan to have error handling:
public ConcurrentKafkaListenerContainerFactory<String, String> myListenerPartitionContainerFactory(String groupId) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory(groupId));
    factory.setConcurrency(partionCount);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    factory.getContainerProperties().setIdleBetweenPolls(pollInterval);
    factory.setBatchListener(true);
    return factory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> myPartitionsListenerContainerFactory() {
    return myListenerPartitionContainerFactory(groupIdPO);
}

@Bean
public RecoveringBatchErrorHandler recoveringBatchErrorHandler(KafkaTemplate<String, String> errorKafkaTemplate) {
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(errorKafkaTemplate);
    return new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(2L, 5000)); // push error event to the error topic
}
#KafkaListener(id = "mylistener", topics = "someTopic", containerFactory = "myPartitionsListenerContainerFactory"))
public void listen(List<ConsumerRecord<String, String>> records, #Header(KafkaHeaders.MESSAGE_KEY) String key, Acknowledgement ack) {
Map hashmap = new Hashmap<>();
records.forEach(record -> {
try {
//key will be formed based on the input record - it will be id.
hashmap.put(key, record);
}
catch (Exception e) {
throw new BatchListenerFailedException("Failed to process", record);
}
});
// Once success each messages to another topic.
try {
hashmap.forEach( (key,value) -> { push to another topic })
hashmap.clear();
ack.acknowledge();
} catch(Exception ex) {
//handle producer exceptions
}
}
Is the direction good, or do any improvements need to be made? Also, what type of container and listener error handlers need to be implemented?
@Gary Russell, could you please help with this?
The listener error handler is intended for request/reply situations where the error handler can return a meaningful reply to the sender.
You need to throw an exception to trigger the container error handler, and you need to know the index in the original batch to tell it which record failed.
If you are using manual acks like that, you can use the nack() method to indicate which record failed (and don't throw an exception in that case).
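For illustration, a minimal sketch of the nack() approach inside a batch listener (assuming spring-kafka 2.3+, where Acknowledgment.nack(index, sleepMillis) is available for batch listeners; process() is a hypothetical per-record method):
@KafkaListener(id = "mylistener", topics = "someTopic", containerFactory = "myPartitionsListenerContainerFactory")
public void listen(List<ConsumerRecord<String, String>> records, Acknowledgment ack) {
    for (int i = 0; i < records.size(); i++) {
        try {
            process(records.get(i));
        }
        catch (Exception e) {
            // Commit offsets for the records before this one, then redeliver from the
            // failed record after a 5-second pause; do not throw in this case.
            ack.nack(i, 5000);
            return;
        }
    }
    ack.acknowledge(); // the whole batch succeeded
}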

Spring Integration - Scatter-Gather

I am using Spring Integration and the Scatter-Gather handler (https://docs.spring.io/spring-integration/docs/5.3.0.M1/reference/html/scatter-gather.html) to send 3 parallel requests (using ExecutorChannels) to external REST APIs and aggregate their responses into one single message.
Everything works fine until an exception is thrown within the Aggregator's aggregatePayloads method (AggregatingMessageHandler). In this scenario the error message is successfully delivered to the Messaging Gateway which initiated the flow (the caller). However, the ScatterGatherHandler thread remains hanging, waiting for the gatherer reply (I believe), which never arrives because of that exception; i.e., each sequential call leaves one additional thread in a "stuck" state and eventually the thread pool runs out of available worker threads.
My current Scatter Gather configuration:
@Bean
public MessageHandler distributor() {
    RecipientListRouter router = new RecipientListRouter();
    router.setChannels(Arrays.asList(Channel1(asyncExecutor()), Channel2(asyncExecutor()), Channel3(asyncExecutor())));
    return router;
}

@Bean
public MessageHandler gatherer() {
    AggregatingMessageHandler aggregatingMessageHandler = new AggregatingMessageHandler(
            new TransactionAggregator(),
            new SimpleMessageStore(),
            new HeaderAttributeCorrelationStrategy("correlationID"),
            new ExpressionEvaluatingReleaseStrategy("size() == 3"));
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
    return aggregatingMessageHandler;
}

@Bean
@ServiceActivator(inputChannel = "validationOutputChannel")
public MessageHandler scatterGatherDistribution() {
    ScatterGatherHandler handler = new ScatterGatherHandler(distributor(), gatherer());
    handler.setErrorChannelName("scatterGatherErrorChannel");
    return handler;
}

@Bean("taskExecutor")
@Primary
public TaskExecutor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(4);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(100);
    executor.setThreadNamePrefix("AsyncThread-");
    executor.initialize();
    return executor;
}
So far the only solution I have found is to set requiresReply and gatherTimeout values on the ScatterGatherHandler, like below:
handler.setGatherTimeout(120000L);
handler.setRequiresReply(true);
This produces an exception and releases the ScatterGatherHandler's thread back to the pool after the specified timeout, once the aggregator's exception has been delivered to the messaging gateway. I can see the following message in the log:
[AsyncThread-1] [WARN] [o.s.m.c.GenericMessagingTemplate$TemporaryReplyChannel:] [{}] - Reply message received but the receiving thread has already received a reply: ErrorMessage
Is there any other way to achieve this? My main goal is to make sure that I am not blocking any threads when an exception is thrown within the aggregator's aggregatePayloads method.
Thank you.
Technically, this is really expected behavior. See the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#scatter-gather-error-handling
In this case a reasonable, finite gatherTimeout must be configured for the ScatterGatherHandler; otherwise it is going to block waiting for a reply from the gatherer forever, by default.
There is really no other way to break out of the BlockingQueue.take() that the ScatterGatherHandler code is waiting on.

WebSphere MQ Messages Disappear From Queue

I figured I would toss a question on here in case anyone has ideas. My MQ admin created a new queue and an alias queue for me to write messages to. I have one application writing to the queue, and another application listening on the alias queue. I am using Spring's JmsTemplate to write to my queue. We are seeing a behavior where the message is written to the queue but then instantly discarded. We disabled gets, and to check whether an expiry parameter was being set somehow, I used the JmsTemplate to set the expiry (timeToLive) explicitly. I set the expiry to 10 minutes but my message still disappears instantly. A snippet of my code and settings is below.
public void publish(ModifyRequestType response) {
    jmsTemplate.setExplicitQosEnabled(true);
    jmsTemplate.setTimeToLive(600000);
    jmsTemplate.send(CM_QUEUE_NAME, new MessageCreator() {
        public Message createMessage(Session session) throws JMSException {
            String responseXML = null;
            try {
                responseXML = myJAXBContext.getInstance().toXML(response);
                log.info(responseXML);
                TextMessage message = session.createTextMessage(responseXML);
                return message;
            } catch (myException e) {
                e.printStackTrace();
                log.info(responseXML);
                return null;
            }
        }
    });
}
/////////////////My settings
QUEUE.PUB_SUB_DOMAIN=false
QUEUE.SUBSCRIPTION_DURABLE=false
QUEUE.CLONE_SUPPORT=0
QUEUE.SHARE_CONV_ALLOWED=1
QUEUE.MQ_PROVIDER_VERSION=6
I found my issue. I had a parent method with the @Transactional annotation. I do not want my new JMS message to be part of that transaction, so I am going to call jmsTemplate.setSessionTransacted(false) before performing jmsTemplate.send(). I have created a separate JmsTemplate for sending my new message instead of reusing the existing one, which needs to remain transaction-managed.
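For illustration, a minimal sketch of such a separate, non-transacted template (the bean name is illustrative, assuming a standard Spring JMS setup with a javax.jms.ConnectionFactory available):
@Bean
public JmsTemplate nonTransactedJmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    // Sessions created by this template do not participate in a surrounding @Transactional
    // boundary, so the message is sent immediately rather than deferred until commit.
    template.setSessionTransacted(false);
    return template;
}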

Spring RabbitListener stop listening to queue using annotation syntax

A colleague and I are working on an application using Spring which needs to get a message from a RabbitMQ queue. The idea is to do this using the (usually excellent) Spring annotation system to make the code easy to understand. We have the system working using the @RabbitListener annotation, but we want to get a message on demand. The @RabbitListener annotation does not do this; it just receives messages when they are available. The demand is determined by the "readiness" of the client, i.e. a client should "get" a message from the queue, stop listening, and process the message. Then it determines whether it is ready to receive a new one and reconnects to the queue.
We have been looking into doing this by hand, just using the spring-amqp/spring-rabbit modules, and while this is probably possible we would really like to do it using Spring. After many hours of searching and going through the documentation, we have not been able to find an answer.
Here is the receiving code we currently have:
#RabbitListener(queues = "jobRequests")
public class Receiver {
#Autowired
private JobProcessor jobProcessor;
#RabbitHandler
public void receive(Job job) throws InterruptedException, IOException {
System.out.println(" [x] Received '" + job + "'");
jobProcessor.processJob(job);
}
}
Job processor:
@Service
public class JobProcessor {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    public boolean processJob(Job job) throws InterruptedException, IOException {
        rabbitTemplate.convertAndSend("jobResponses", job);
        System.out.println(" [x] Processing job: " + job);
        rabbitTemplate.convertAndSend("processedJobs", job);
        return true;
    }
}
In other words, when a job is received by the Receiver, it should stop listening for new jobs, wait for the job processor to finish, and then start listening for new messages again.
We have re-created the null pointer exception; here is the code we use to send from the server side.
@Controller
public class MainController {

    @Autowired
    RabbitTemplate rabbitTemplate;

    @Autowired
    private Queue jobRequests;

    @RequestMapping("/do-job")
    public String doJob() {
        Job job = new Job(new Application(), "henk", 42);
        System.out.println(" [X] Job sent: " + job);
        rabbitTemplate.convertAndSend(jobRequests.getName(), job);
        return "index";
    }
}
And then the receiving code on the client side
@Component
public class Receiver {

    @Autowired
    private JobProcessor jobProcessor;

    @Autowired
    private RabbitListenerEndpointRegistry rabbitListenerEndpointRegistry;

    @RabbitListener(queues = "jobRequests")
    public void receive(Job job) throws InterruptedException, IOException, TimeoutException {
        Collection<MessageListenerContainer> messageListenerContainers = rabbitListenerEndpointRegistry.getListenerContainers();
        for (MessageListenerContainer listenerContainer : messageListenerContainers) {
            System.out.println(listenerContainer);
            listenerContainer.stop();
        }
        System.out.println(" [x] Received '" + job + "'");
        jobProcessor.processJob(job);
        for (MessageListenerContainer listenerContainer : messageListenerContainers) {
            listenerContainer.start();
        }
    }
}
And the updated job processor
@Service
public class JobProcessor {

    public boolean processJob(Job job) throws InterruptedException, IOException {
        System.out.println(" [x] Processing job: " + job);
        return true;
    }
}
And the stacktrace
[x] Received 'Job{application=com.olifarm.application.Application@aaa517, name='henk', id=42}'
[x] Processing job: Job{application=com.olifarm.application.Application@aaa517, name='henk', id=42}
Exception in thread "SimpleAsyncTaskExecutor-1" java.lang.NullPointerException
2015-12-18 11:17:44.494 at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.isActive(SimpleMessageListenerContainer.java:838)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:93)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1301)
at java.lang.Thread.run(Thread.java:745)
WARN 325899 --- [cTaskExecutor-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
java.lang.NullPointerException: null
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.isActive(SimpleMessageListenerContainer.java:838) ~[spring-rabbit-1.5.2.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:93) ~[spring-rabbit-1.5.2.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1195) ~[spring-rabbit-1.5.2.RELEASE.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
The stopping of the listener works and we do receive a new job, but when it tries to start again the NPE is thrown. We checked the RabbitMQ log and found that the connection is closed for about 2 seconds and then re-opened automatically, even if we put the thread to sleep in the job processor. This might be the source of the problem? The error doesn't break the program, however, and after it is thrown the receiver is still able to receive new jobs. Are we abusing the mechanism here, or is this valid code?
To get messages on-demand, it's generally better to use rabbitTemplate.receiveAndConvert() rather than a listener; that way you completely control when you receive messages.
Starting with version 1.5 you can configure the template to block for some period of time (or until a message arrives). Otherwise it immediately returns null if there's no message.
The listener is really designed for message-driven applications.
If you can block the thread in the listener until the job completes, no more messages will be delivered - by default the container has only one thread.
If you can't block the thread until the job completes, for some reason, you can stop()/start() the listener container by getting a reference to it from the Endpoint Registry.
It's generally better to stop the container on a separate thread.
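For illustration, a minimal sketch of the on-demand approach (assuming Spring AMQP 1.5+, where the template's receive timeout makes receiveAndConvert() block until a message arrives or the timeout expires):
@Autowired
private RabbitTemplate rabbitTemplate;

@Autowired
private JobProcessor jobProcessor;

public void pollForJob() {
    // Block for up to 5 seconds waiting for a message; returns null if none arrives in time.
    rabbitTemplate.setReceiveTimeout(5000);
    Job job = (Job) rabbitTemplate.receiveAndConvert("jobRequests");
    if (job != null) {
        try {
            // Process the job; call pollForJob() again only when ready for the next one.
            jobProcessor.processJob(job);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}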

Resources