I have a case where a Java NPE can be thrown inside the listener that accepts the queue payload. I get multiple delivery attempts and errors:
18:41:50.549 [processingeContainer-1] WARN o.s.a.r.l.ConditionalRejectingErrorHandler - Execution of Rabbit message listener failed.
2019-09-24 18:41:50,551 INFO [stdout] (processingContainer-1) org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Listener method 'transactionProcess' threw exception
Is there some way to limit the AMQP client attempts?
You should really fix the NPE but you can configure the listener container error handler.
The default ConditionalRejectingErrorHandler treats certain exceptions as fatal.
It uses a DefaultExceptionStrategy which has the following code:
private boolean isCauseFatal(Throwable cause) {
return cause instanceof MessageConversionException // NOSONAR boolean complexity
|| cause instanceof org.springframework.messaging.converter.MessageConversionException
|| cause instanceof MethodArgumentResolutionException
|| cause instanceof NoSuchMethodException
|| cause instanceof ClassCastException
|| isUserCauseFatal(cause);
}
/**
* Subclasses can override this to add custom exceptions.
* @param cause the cause
* @return true if the cause is fatal.
*/
protected boolean isUserCauseFatal(Throwable cause) {
return false;
}
So, configure your own ConditionalRejectingErrorHandler with a subclass of DefaultExceptionStrategy that overrides isUserCauseFatal() to return true for NullPointerException.
You would then inject your error handler into the listener container or listener container factory.
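For example, a minimal sketch (the class and bean names are my own, and the factory wiring is an assumption based on a typical Spring Boot setup):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.ConditionalRejectingErrorHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitErrorHandlingConfig {

    // Treat NullPointerException as fatal so the message is rejected instead of being requeued endlessly.
    static class NpeFatalExceptionStrategy
            extends ConditionalRejectingErrorHandler.DefaultExceptionStrategy {

        @Override
        protected boolean isUserCauseFatal(Throwable cause) {
            return cause instanceof NullPointerException;
        }
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Inject the custom error handler into the listener container factory
        factory.setErrorHandler(new ConditionalRejectingErrorHandler(new NpeFatalExceptionStrategy()));
        return factory;
    }
}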
Another technique would be to add a retry interceptor; by default, the error is just logged after the retries are exhausted. With Spring Boot, the default recoverer is a RejectAndDontRequeueRecoverer.
EDIT
I just tested it and it worked fine...
@SpringBootApplication
public class So58087354Application {

    public static void main(String[] args) {
        SpringApplication.run(So58087354Application.class, args);
    }

    @RabbitListener(queues = "foo")
    public void listen(String in) {
        System.out.println("here");
        throw new NullPointerException("Test");
    }

}
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.initial-interval=1000ms
spring.rabbitmq.listener.simple.retry.max-attempts=2
and the resulting output:
here
here
2019-10-01 09:07:11.936 WARN 75435 --- [ntContainer#0-1] o.s.a.r.r.RejectAndDontRequeueRecoverer : Retries exhausted for message (Body:'[B#6d890bbc(byte[3])' MessageProperties [headers={}, contentLength=0, receivedDeliveryMode=NON_PERSISTENT, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=1, consumerTag=amq.ctag-mwYtmPtBplrefsOa05hG0w, consumerQueue=foo])
...
2019-10-01 09:07:11.937 WARN 75435 --- [ntContainer#0-1] s.a.r.l.ConditionalRejectingErrorHandler : Execution of Rabbit message listener failed.
...
Caused by: org.springframework.amqp.AmqpRejectAndDontRequeueException: null
... 19 common frames omitted
EDIT2
To add a retry advice to the container factory manually...
@Component
class ContainerRetryConfigurer {

    ContainerRetryConfigurer(AbstractRabbitListenerContainerFactory<?> factory) {
        factory.setAdviceChain(RetryInterceptorBuilder.stateless()
                .maxAttempts(2)
                .backOffOptions(1000, 1.0, 1000)
                .build());
    }

}
I've implemented a KafkaListener using Spring Boot 2.7.6 and the Confluent platform, and now I need to implement an error handler for it.
The listener picks up a protobuf topic message and POSTs the payload to an HTTP endpoint properly. But when a java.net.ConnectException occurs, I need to send the same protobuf message to a DLT instead of retrying.
I implemented this using the following Listener:
@Component
class ConsumerListener(
    private val apiPathsConfig: ApiPathsConfig,
    private val myHttpClient: MyHttpClient,
    @Value("\${ingestion.config.httpClientTimeOutInSeconds}") private val httpRequestTimeout: Long
) {
    val log: Logger = LoggerFactory.getLogger(ConsumerListener::class.java)

    @RetryableTopic(
        attempts = "4",
        backoff = Backoff(delay = 5000, multiplier = 2.0), //TODO: env var?
        autoCreateTopics = "false",
        topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE,
        timeout = "3000", //TODO: env var?
        dltStrategy = DltStrategy.FAIL_ON_ERROR
    )
    @KafkaListener(
        id = "ingestionConsumerListener",
        topics = ["#{'\${ingestion.config.topic.name}'}"],
        groupId = "#{'\${ingestion.consumer.group.id}'}",
        concurrency = "#{'\${ingestion.config.consumer.concurrency}'}"
    )
    fun consume(ingestionHttpRequest: ConsumerRecord<String, HttpRequest.HttpRequest>) {
        ...
        try {
            val response: HttpResponse<Void> = myHttpClient.send(request, HttpResponse.BodyHandlers.discarding())
            if (response.statusCode() in 400..520) {
                val ingestionResponseError = "Ingestion response status code [${response.statusCode()}] - headers [${response.headers()}] - body [${response.body()}]"
                log.error(ingestionResponseError)
                throw RuntimeException(ingestionResponseError)
            }
        } catch (e: IOException) {
            log.error("IOException stackTrace : ${e.printStackTrace()}")
            throw RuntimeException(e.stackTrace.contentToString())
        } catch (e: InterruptedException) {
            log.error("InterruptedException stackTrace : ${e.printStackTrace()}")
            throw RuntimeException(e.stackTrace.contentToString())
        } catch (e: IllegalArgumentException) {
            log.error("IllegalArgumentException stackTrace : ${e.printStackTrace()}")
            throw RuntimeException(e.stackTrace.contentToString())
        }
    }
    ...
}
When the java.net.ConnectException happens, the DeadLetterPublishingRecovererFactory shows this:
15:19:44.546 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 3330155 with epoch 0
15:19:44.547 [ingestionConsumerListener-2-C-1] ERROR org.springframework.kafka.retrytopic.DeadLetterPublishingRecovererFactory$1 - Dead-letter publication to ingestion-topic-retry-0failed for: ingestion-topic-5#32
org.apache.kafka.common.errors.SerializationException: Can't convert value of class com.xxx.ingestion.IngestionHttpRequest$HttpRequest to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer
...
Caused by: java.lang.ClassCastException: class com.xxx.ingestion.IngestionHttpRequest$HttpRequest cannot be cast to class java.lang.String (com.xxx.ingestion.IngestionHttpRequest$HttpRequest is in unnamed module of loader 'app'; java.lang.String is in module java.base of loader 'bootstrap')
at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:29)
How can I send the protobuf message to a DLT instead of retrying in case of ConnectException, and keep the retry when the HTTP endpoint responds with a 4xx or 5xx code?
You either need a protobuf serializer to re-serialize the data, or don't deserialize the protobuf in a deserializer; use a ByteArrayDeserializer and convert it in your listener instead, then use a ByteArraySerializer.
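For the byte-array route, a minimal sketch of the Spring Boot configuration (these are the standard Spring Boot Kafka properties; the parseFrom call refers to your generated protobuf class and is an assumption):

spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.ByteArraySerializer

The listener then receives a ConsumerRecord<String, ByteArray> and calls something like HttpRequest.parseFrom(record.value()) itself, so the record forwarded to the retry and DLT topics is still the original bytes and serializes without the ClassCastException shown above.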
You can also configure certain exception types to go straight to the DLT.
EDIT
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#retry-topic-global-settings
@Configuration
public class MyRetryTopicConfiguration extends RetryTopicConfigurationSupport {

    @Override
    protected void manageNonBlockingFatalExceptions(List<Class<? extends Throwable>> nonBlockingFatalExceptions) {
        nonBlockingFatalExceptions.add(MyNonBlockingException.class);
    }

}
/**
* Override this method to manage non-blocking retries fatal exceptions.
* Records which processing throws an exception present in this list will be
* forwarded directly to the DLT, if one is configured, or stop being processed
* otherwise.
* @param nonBlockingRetriesExceptions a {@link List} of fatal exceptions
* containing the framework defaults.
*/
protected void manageNonBlockingFatalExceptions(List<Class<? extends Throwable>> nonBlockingRetriesExceptions) {
}
When trying to use Quarkus (version 2.9.2.Final) EventBus requestAndForget with a @ConsumeEvent method that returns void, the following exception occurs in the logs, even though the processing completes without any problem.
OK
2022-06-07 09:44:04,064 ERROR [io.qua.mut.run.MutinyInfrastructure] (vert.x-eventloop-thread-1) Mutiny had to drop the following exception: (TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address: __vertx.reply.3, repliedAddress: receivedSomeEvent
The consumer code:
@ApplicationScoped
public class ConsumerManiac {

    @ConsumeEvent(value = "receivedSomeEvent")
    public void consume(SomeEvent someEvent) {
        System.out.println("OK");
    }
}
The Producer code (a REST Endpoint):
public class SomeResource {

    private final EventBus eventBus;

    @Inject
    public SomeResource(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @POST
    public Response send(@Valid SomeEvent someEvent) {
        eventBus.requestAndForget("receivedSomeEvent", someEvent);
        return Response.accepted().build();
    }
}
If the consumer method is changed to return some value, then the exception in logs does not occur.
@ApplicationScoped
public class ConsumerManiac {

    @ConsumeEvent(value = "receivedSomeEvent")
    public String consume(SomeEvent someEvent) {
        System.out.println("OK");
        return "ok";
    }
}
Is there any piece of code missing so that the exception does not occur (even though processing concludes without any problem)?
Reference: https://quarkus.io/guides/reactive-event-bus#implementing-fire-and-forget-interactions
Full stacktrace:
2022-06-07 09:44:04,064 ERROR [io.qua.mut.run.MutinyInfrastructure] (vert.x-eventloop-thread-1) Mutiny had to drop the following exception: (TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address: __vertx.reply.3, repliedAddress: receivedSomeEvent
	at io.vertx.core.eventbus.impl.ReplyHandler.handle(ReplyHandler.java:76)
	at io.vertx.core.eventbus.impl.ReplyHandler.handle(ReplyHandler.java:24)
	at io.vertx.core.impl.VertxImpl$InternalTimerHandler.handle(VertxImpl.java:893)
	at io.vertx.core.impl.VertxImpl$InternalTimerHandler.handle(VertxImpl.java:860)
	at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:50)
	at io.vertx.core.impl.DuplicatedContext.emit(DuplicatedContext.java:168)
	at io.vertx.core.impl.AbstractContext.emit(AbstractContext.java:53)
	at io.vertx.core.impl.VertxImpl$InternalTimerHandler.run(VertxImpl.java:883)
	at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
I had to return an arbitrary value to avoid this exception.
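If the producer does not actually need a reply, another option (a sketch based on the fire-and-forget interactions described in the guide linked above) is to use the event bus send method instead of requestAndForget, so no reply handler, and therefore no reply timeout, is registered:

@POST
public Response send(@Valid SomeEvent someEvent) {
    // send() is point-to-point fire-and-forget: no reply is expected, so nothing can time out
    eventBus.send("receivedSomeEvent", someEvent);
    return Response.accepted().build();
}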
I've been trying to figure out how to route errors to my own error handler with the following, seemingly simple configuration, but Camel is swallowing the exception without routing it to any error handler I configure. I've run out of ideas. Any help would be much appreciated.
I've got a seda route that supports multiple consumers:
@Component
public class MessageGenerator {

    public static final String ERROR_GENERATOR_CHANNEL = "seda:my-error-generator?multipleConsumers=true&concurrentConsumers=3";

    private final FluentProducerTemplate producerTemplate;

    public MessageGenerator(FluentProducerTemplate producerTemplate) {
        this.producerTemplate = producerTemplate;
    }

    public void generateMessage() {
        producerTemplate
                .to(ERROR_GENERATOR_CHANNEL)
                .withBody("Hello World")
                .asyncSend();
    }
}
I've got two separate POJO consumers:
@Configuration
public class MessageConsumer1 {

    @Consume(ERROR_GENERATOR_CHANNEL)
    void receiveMessage(String message) {
        System.out.println("Received message 1: " + message);
        throw new NullPointerException("Error generated");
    }
}

@Configuration
public class MessageConsumer2 {

    @Consume(ERROR_GENERATOR_CHANNEL)
    void receiveMessage(String message) {
        System.out.println("Received message 2: " + message);
    }
}
When I run the following example, the NullPointerException gets swallowed by the underlying Camel MulticastProcessor as we can see in the logs:
Received message 2: Hello World
Received message 1: Hello World
2022-01-15 13:40:23.711 DEBUG 32945 --- [error-generator] o.a.camel.processor.MulticastProcessor : Message exchange has failed: Multicast processing failed for number 0 for exchange: Exchange[] Exception: java.lang.NullPointerException: Error generated
2022-01-15 13:40:23.711 DEBUG 32945 --- [error-generator] o.a.camel.processor.MulticastProcessor : Message exchange has failed: Multicast processing failed for number 0 for exchange: Exchange[] Exception: java.lang.NullPointerException: Error generated
The exception only gets logged as debug and never gets propagated to any error handler I set up.
Any thoughts on how I could receive the error in my own error handler rather than Camel swallowing the exception as a debug statement?
Note 1: I've attempted many variations on both default error handling and default dead letter handling to no avail. I could just be doing it wrong...
Note 2: I'm using Spring [Boot] here too, hence the @Configuration annotation.
I haven't used @Consume annotations, but generally if you want a Camel route not to handle any errors you can use .errorHandler(noErrorHandler()). This can be used to pass the error back to the parent route or all the way to the code calling ProducerTemplate.sendBody.
Example:
public class ExampleTest extends CamelTestSupport {

    @Test
    public void noErrorHandlerTest() {
        try {
            template.sendBody("direct:noErrorHandler", null);
            fail();
        } catch (Exception e) {
            System.out.println("Caught Exception: " + e.getMessage());
        }
    }

    @Override
    protected RoutesBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:noErrorHandler")
                    .errorHandler(noErrorHandler())
                    .log("Throwing error")
                    .throwException(Exception.class, "Test Exception");
            }
        };
    }
}
My service listens to a RabbitMQ queue. I configure the retry policy on the consumer side. When I throw an exception, all dead-letter messages are requeued. But depending on my business logic, after throwing StopRequeueException (every exception except SmsException) I want to stop retrying this message. But the message is still requeued.
Here is my configuration
spring:
  rabbitmq:
    listener:
      simple:
        retry:
          enabled: true
          initial-interval: 3s
          max-attempts: 10
          max-interval: 12s
          multiplier: 2
        missing-queues-fatal: false
if (!checkMobileService.isMobileNumberAdmitted(mobileNumber())) {
    throw new StopRequeueException("SMS_BIMTEK.MOBILE_NUMBER_IS_NOT_ADMITTED");
}
My error handler:
public class CustomErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable t) {
        if (!(t.getCause() instanceof SmsException)) {
            throw new AmqpRejectAndDontRequeueException("Error Handler converted exception to fatal", t);
        }
    }
}
Calling the error handler is outside the scope of retry; it is called after retries are exhausted.
You need to classify which exceptions are retryable at the retry level and do the conversion in the recoverer.
Here is an example:
@SpringBootApplication
public class So67406799Application {

    public static void main(String[] args) {
        SpringApplication.run(So67406799Application.class, args);
    }

    @Bean
    public RabbitRetryTemplateCustomizer customizer(
            @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int attempts) {

        return (target, template) -> template.setRetryPolicy(new SimpleRetryPolicy(attempts,
                Map.of(StopRequeueException.class, false), true, true));
    }

    @Bean
    MessageRecoverer recoverer() {
        return (msg, cause) -> {
            throw new AmqpRejectAndDontRequeueException("Stop requeue after " +
                    RetrySynchronizationManager.getContext().getRetryCount() + " attempts");
        };
    }

    @RabbitListener(queues = "so67406799")
    void listen(String in) {
        System.out.println(in);
        if (in.equals("dontRetry")) {
            throw new StopRequeueException("test");
        }
        throw new RuntimeException("test");
    }

    @Bean
    Queue queue() {
        return new Queue("so67406799");
    }

}

@SuppressWarnings("serial")
class StopRequeueException extends NestedRuntimeException {

    public StopRequeueException(String msg) {
        super(msg);
    }

}
EDIT
The customizer is called once by Spring Boot; it is called after the retry policy and back off policy have been set up. See RetryTemplateFactory.
In this case, the customizer replaces the retry policy with a new one with an exception classifier (that's why we need the max attempts injected here).
See the SimpleRetryPolicy constructor.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry attempts. If
 * traverseCauses is true, the exception causes will be traversed until a match is
 * found. The default value indicates whether to retry or not for exceptions (or super
 * classes) are not found in the map.
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses true to traverse the exception cause chain until a classified
 * exception is found or the root cause is reached.
 * @param defaultValue the default action.
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
boolean traverseCauses, boolean defaultValue) {
The last boolean in the config above (true) is the default behavior (retry exceptions that are not in the map); the third (true) tells the policy to follow the cause chain to look for the exception (like your getCause() in the error handler). The map entry (StopRequeueException -> false) says don't retry for that one.
You can also configure it the other way around (default false and true in the map values), explicitly stating which exceptions you want to retry and not retrying any others.
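A minimal sketch of that inverted configuration, mirroring the customizer from the example above (the bean name and the exception classes are just illustrative, taken from this question):

@Bean
public RabbitRetryTemplateCustomizer inverseCustomizer(
        @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int attempts) {
    // Only SmsException (and its subclasses) is retried; everything else, including
    // StopRequeueException, goes straight to the MessageRecoverer.
    return (target, template) -> template.setRetryPolicy(new SimpleRetryPolicy(attempts,
            Map.of(SmsException.class, true), true, false));
}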
The MessageRecoverer is called for all exceptions, either immediately for the classified exception or when retries are exhausted for the others.
Context:
I'm using spring-retry to retry restTemplate calls.
The restTemplate calls are made from a Kafka listener.
The Kafka listener is also configured to retry on error (if any exception is thrown during processing, not only from the restTemplate call).
Goal:
I'd like to prevent Kafka from retrying when the error comes from a retry template that has already exhausted its retries.
Actual behavior:
When the retryTemplate exhausts all retries, the original exception is thrown, which prevents me from identifying whether the error was already retried by the retryTemplate.
Desired behavior:
When the retryTemplate exhausts all retries, wrap the original exception in a RetryExhaustedException, which will allow me to blacklist it from Kafka retries.
Question:
How can I do something like this?
Thanks
Edit
RetryTemplate configuration:
RetryTemplate retryTemplate = new RetryTemplate();
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(FunctionalException.class, false);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryableExceptions, true, true);
retryTemplate.setRetryPolicy(retryPolicy);
retryTemplate.setThrowLastExceptionOnExhausted(false);
Kafka ErrorHandler
public class DefaultErrorHandler implements ErrorHandler {

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> data) {
        Throwable exception = Optional.ofNullable(thrownException.getCause()).orElse(thrownException);
        // TODO if the exception has been retried in a RetryTemplate, stop it to prevent rollback and send it to a DLQ
        // else rethrow the exception; it will be rolled back and handled by the AfterRollbackProcessor to be retried
        throw new KafkaException("Could not handle exception", thrownException);
    }
}
Kafka listener:
@KafkaListener
public void onMessage(ConsumerRecord<String, String> record) {
    retryTemplate.execute((args) -> {
        throw new RuntimeException("Should be catched by ErrorHandler to prevent rollback");
    });
    throw new RuntimeException("Should be retried by afterRollbackProcessor");
}
Simply configure the listener retry template with a SimpleRetryPolicy that is configured to classify RetryExhaustedException as not retryable.
Be sure to set the traverseCauses property to true since the container wraps all listener exceptions in ListenerExecutionFailedException.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry
 * attempts. If traverseCauses is true, the exception causes will be traversed until
 * a match is found. The default value indicates whether to retry or not for exceptions
 * (or super classes) are not found in the map.
 *
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses is this clause traversable
 * @param defaultValue the default action.
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
boolean traverseCauses, boolean defaultValue) {
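A minimal sketch of that classification (RetryExhaustedException is the wrapper exception you would define yourself; how the resulting template is plugged into the listener container depends on your container factory setup):

// Retry everything by default, but never retry once the inner RetryTemplate has
// already exhausted its attempts and wrapped the error in RetryExhaustedException.
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(RetryExhaustedException.class, false);

// traverseCauses = true lets the classifier see through ListenerExecutionFailedException
SimpleRetryPolicy listenerRetryPolicy = new SimpleRetryPolicy(3, retryableExceptions, true, true);

RetryTemplate listenerRetryTemplate = new RetryTemplate();
listenerRetryTemplate.setRetryPolicy(listenerRetryPolicy);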
EDIT
Use
template.execute((args) -> {...}, (context) -> { throw new Blah(context.getLastThrowable()); });
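Expanded into the listener from the question, that second (recovery) callback might look something like the following sketch; RetryExhaustedException is the custom exception you would create, and the restTemplate call is just a placeholder for the code being retried:

retryTemplate.execute(context -> {
    // the operation being retried, e.g. the restTemplate call
    return restTemplate.getForObject(someUrl, String.class);
}, context -> {
    // retries exhausted: wrap the last failure so the Kafka error handler can recognize it
    throw new RetryExhaustedException("Retries exhausted", context.getLastThrowable());
});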