Resilience4j circuit breaker used with reactive Flux never changes to OPEN on errors

I am evaluating resilience4j to include it in our reactive APIs; so far I am using mock Fluxes.
The service below always fails, as I want to test whether the circuit OPENs after multiple errors:
@Service
class GamesRepositoryImpl : GamesRepository {

    override fun findAll(): Flux<Game> {
        return if (Math.random() <= 1.0) {
            Flux.error(RuntimeException("fail"))
        } else {
            Flux.just(
                Game("The Secret of Monkey Island"),
                Game("Loom"),
                Game("Maniac Mansion"),
                Game("Day of the Tentacle")).log()
        }
    }
}
This is the handler that uses the repository, printing the state of the circuit:
@Component
class ApiHandlers(private val gamesRepository: GamesRepository) {

    var circuitBreaker: CircuitBreaker = CircuitBreaker.ofDefaults("gamesCircuitBreaker")

    fun getGames(serverRequest: ServerRequest): Mono<ServerResponse> {
        println("*********${circuitBreaker.state}")
        return ok().body(gamesRepository.findAll()
                .transform(CircuitBreakerOperator.of(circuitBreaker)), Game::class.java)
    }
}
I invoke the API endpoint many times, always getting this stack trace:
*********CLOSED
2018-03-14 12:02:28.153 ERROR 1658 --- [ctor-http-nio-3] .a.w.r.e.DefaultErrorWebExceptionHandler : Failed to handle request [GET http://localhost:8081/api/v1/games]
java.lang.RuntimeException: FAIL
at com.codependent.reactivegames.repository.GamesRepositoryImpl.findAll(GamesRepositoryImpl.kt:12) ~[classes/:na]
at com.codependent.reactivegames.web.handler.ApiHandlers.getGames(ApiHandlers.kt:20) ~[classes/:na]
...
2018-03-14 12:05:48.973 DEBUG 1671 --- [ctor-http-nio-2] i.g.r.c.i.CircuitBreakerStateMachine : No Consumers: Event ERROR not published
2018-03-14 12:05:48.975 ERROR 1671 --- [ctor-http-nio-2] .a.w.r.e.DefaultErrorWebExceptionHandler : Failed to handle request [GET http://localhost:8081/api/v1/games]
java.lang.RuntimeException: fail
at com.codependent.reactivegames.repository.GamesRepositoryImpl.findAll(GamesRepositoryImpl.kt:12) ~[classes/:na]
at com.codependent.reactivegames.web.handler.ApiHandlers.getGames(ApiHandlers.kt:20) ~[classes/:na]
at com.codependent.reactivegames.web.route.ApiRoutes$apiRouter$1$1$1.invoke(ApiRoutes.kt:14) ~[classes/:na]
As you can see, the circuit is always CLOSED. I don't know whether it's related, but notice this message: No Consumers: Event ERROR not published.
Why isn't this working?

The problem was the default ringBufferSizeInClosedState, which is 100 requests; I never made that many manual requests.
I set up my own CircuitBreakerConfig for my tests, and now the circuit opens right away:
val circuitBreakerConfig: CircuitBreakerConfig = CircuitBreakerConfig.custom()
        .failureRateThreshold(50f)
        .waitDurationInOpenState(Duration.ofMillis(10000))
        .ringBufferSizeInHalfOpenState(5)
        .ringBufferSizeInClosedState(5)
        .build()

var circuitBreaker: CircuitBreaker = CircuitBreaker.of("gamesCircuitBreaker", circuitBreakerConfig)
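To see the effect outside WebFlux, here is a minimal plain-Java sketch (class name and setup are illustrative, assuming resilience4j's synchronous decorateSupplier API) showing the breaker open once the 5-slot ring buffer fills with failures:

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import java.util.function.Supplier;

public class BreakerDemo {
    public static void main(String[] args) {
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50f)
                .ringBufferSizeInClosedState(5)
                .build();
        CircuitBreaker cb = CircuitBreaker.of("gamesCircuitBreaker", config);
        Supplier<String> decorated = CircuitBreaker.decorateSupplier(cb, () -> {
            throw new RuntimeException("fail");
        });
        for (int i = 0; i < 5; i++) {
            try {
                decorated.get();
            } catch (RuntimeException ignored) {
                // each failure is recorded in the 5-slot ring buffer
            }
        }
        System.out.println(cb.getState()); // OPEN: failure rate 100% >= 50% threshold
    }
}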

Related

No subscriptions have been created in Reactor Kafka and Spring Integration

I'm trying to create a simple flow with Spring Integration and Project Reactor, where I consume records with Reactor Kafka and pass them to a channel, from which a second flow produces messages into another topic with Reactor Kafka.
The consuming flow is:
@Service
public class ReactiveConsumerService {

    public ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate;

    @Qualifier("directChannel")
    @Autowired
    public MessageChannel directChannel;

    public ReactiveConsumerService(ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate) {
        this.reactiveKafkaConsumerTemplate = reactiveKafkaConsumerTemplate;
    }

    @Bean
    public IntegrationFlow readFromKafka() {
        return IntegrationFlows.from(reactiveKafkaConsumerTemplate.receiveAutoAck()
                        .map(GenericMessage::new))
                .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value)
                .<String, String>transform(String::toUpperCase)
                .channel(directChannel)
                .get();
    }
}
And the producing flow is:
@Service
public class ReactiveProducerService {

    private final ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate;

    @Qualifier("directChannel")
    @Autowired
    public MessageChannel directChannel;

    public ReactiveProducerService(ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate) {
        this.reactiveKafkaProducerTemplate = reactiveKafkaProducerTemplate;
    }

    @Bean
    public IntegrationFlow kafkaProducerFlow() {
        return IntegrationFlows.from(directChannel)
                .handle(s -> reactiveKafkaProducerTemplate.send("topic2", s.getPayload().toString()))
                .get();
    }
}
I'd like to know how and where exactly I should perform the subscription.
Edit:
I've added a .subscribe() and it still doesn't work:
2022-01-25 20:36:59.570 INFO 1804 --- [ration-sample-1] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-reactive-kafka-spring-integration-sample-1 unregistered
2022-01-25 20:36:59.573 ERROR 1804 --- [oundedElastic-1] reactor.core.publisher.Operators : Operator called default onErrorDropped
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.IllegalStateException: No subscriptions have been created
Caused by: java.lang.IllegalStateException: No subscriptions have been created
at reactor.kafka.receiver.ReceiverOptions.subscriber(ReceiverOptions.java:423) ~[reactor-kafka-1.3.9.jar:1.3.9]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.FluxPeekFuseable] :
reactor.core.publisher.Flux.doOnRequest
Caused by: java.lang.IllegalStateException: No subscriptions have been created
reactor.kafka.receiver.internals.ConsumerHandler.receive(ConsumerHandler.java:110)
Error has been observed at the following site(s):
*________Flux.doOnRequest ⇢ at reactor.kafka.receiver.internals.ConsumerHandler.receive(ConsumerHandler.java:110)
|_ Flux.filter ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$receiveAutoAck$6(DefaultKafkaReceiver.java:70)
|_ Flux.publishOn ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$receiveAutoAck$6(DefaultKafkaReceiver.java:71)
|_ Flux.map ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$receiveAutoAck$6(DefaultKafkaReceiver.java:72)
*______________Flux.using ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.lambda$withHandler$19(DefaultKafkaReceiver.java:137)
*__________Flux.usingWhen ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.withHandler(DefaultKafkaReceiver.java:129)
|_ ⇢ at reactor.kafka.receiver.internals.DefaultKafkaReceiver.receiveAutoAck(DefaultKafkaReceiver.java:68)
|_ ⇢ at reactor.kafka.receiver.KafkaReceiver.receiveAutoAck(KafkaReceiver.java:124)
|_ Flux.concatMap ⇢ at org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate.receiveAutoAck(ReactiveKafkaConsumerTemplate.java:69)
|_ Flux.map ⇢ at reactor.kafka.spring.integration.samples.service.ReactiveConsumerService.readFromKafka(ReactiveConsumerService.java:38)
|_ Flux.from ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:118)
|_ Flux.delaySubscription ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:119)
|_ Flux.publishOn ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:120)
|_ Flux.doOnNext ⇢ at org.springframework.integration.channel.FluxMessageChannel.subscribeTo(FluxMessageChannel.java:121)
Original Stack Trace:
at reactor.kafka.receiver.ReceiverOptions.subscriber(ReceiverOptions.java:423) ~[reactor-kafka-1.3.9.jar:1.3.9]
at reactor.kafka.receiver.internals.ConsumerEventLoop$SubscribeEvent.run(ConsumerEventLoop.java:207) ~[reactor-kafka-1.3.9.jar:1.3.9]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68) ~[reactor-core-3.4.14.jar:3.4.14]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28) ~[reactor-core-3.4.14.jar:3.4.14]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
2022-01-25 20:36:59.772 INFO 1804 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8090
2022-01-25 20:36:59.853 INFO 1804 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Routes startup summary (total:0 started:0)
2022-01-25 20:36:59.853 INFO 1804 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Apache Camel 3.12.0 (camel-1) started in 149ms (build:84ms init:59ms start:6ms)
2022-01-25 20:36:59.866 INFO 1804 --- [ main] ReactorKafkaSpringIntegrationApplication : Started ReactorKafkaSpringIntegrationApplication in 4.246 seconds (JVM running for 4.616)
The sample code:
@Service
public class ReactiveProducerService {

    private final ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate;

    @Qualifier("directChannel")
    @Autowired
    public MessageChannel directChannel;

    public ReactiveProducerService(ReactiveKafkaProducerTemplate<String, String> reactiveKafkaProducerTemplate) {
        this.reactiveKafkaProducerTemplate = reactiveKafkaProducerTemplate;
    }

    @Bean
    public IntegrationFlow kafkaProducerFlow() {
        return IntegrationFlows.from(directChannel)
                .handle(s -> reactiveKafkaProducerTemplate.send("topic2", s.getPayload().toString()).subscribe(System.out::println))
                .get();
    }
}
The subscription to the reactiveKafkaConsumerTemplate happens immediately when the endpoint for the .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value) is started automatically by the application context.
See this one as an alternative:
/**
 * Represent an Integration Flow as a Reactive Streams {@link Publisher} bean.
 * @param autoStartOnSubscribe start message production and consumption in the flow,
 * when a subscription to the publisher is initiated.
 * If this is set to true, the flow is marked to not start automatically by the application context.
 * @param <T> the expected {@code payload} type
 * @return the Reactive Streams {@link Publisher}
 * @since 5.5.6
 */
@SuppressWarnings(UNCHECKED)
protected <T> Publisher<Message<T>> toReactivePublisher(boolean autoStartOnSubscribe) {
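A hedged usage sketch (assuming Spring Integration 5.5.6+, mirroring the flow from the question) could look like:

@Bean
public Publisher<Message<String>> reactiveUpperCaseFlow(
        ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate) {
    return IntegrationFlows.from(reactiveKafkaConsumerTemplate.receiveAutoAck()
                    .map(GenericMessage::new))
            .<ConsumerRecord<String, String>, String>transform(ConsumerRecord::value)
            .<String, String>transform(String::toUpperCase)
            .toReactivePublisher(true); // the flow starts only when this Publisher is subscribed
}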
Although I think you mean the subscription on the outbound side. It is not clear from your question, but that reactiveKafkaProducerTemplate has a contract like:
public Mono<SenderResult<Void>> send(String topic, V value) {
So, you need to subscribe to that returned Mono to initiate a process.
NOTE: you have the arguments for that send() mixed up as well. Didn't you mean this instead: reactiveKafkaProducerTemplate.send("topic2", "test")?
To subscribe to that Mono, you just need to do it yourself in that handle():
.handle(s -> reactiveKafkaProducerTemplate.send("topic2", "test").subscribe())
UPDATE 2
An error like java.lang.IllegalStateException: No subscriptions have been created from reactor.kafka.receiver.ReceiverOptions.subscriber() means that you didn't assign any topics, patterns, or partitions to listen to.
See ReceiverOptions.subscription() or ReceiverOptions.assignment().
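For example, a minimal bean sketch (bootstrap servers, group id, and topic name are placeholders) that assigns a topic subscription before building the template:

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate;
import reactor.kafka.receiver.ReceiverOptions;

@Bean
public ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate() {
    Map<String, Object> props = Map.of(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
            ConsumerConfig.GROUP_ID_CONFIG, "my-group",
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    ReceiverOptions<String, String> receiverOptions = ReceiverOptions.<String, String>create(props)
            .subscription(Collections.singletonList("topic1")); // the topics to listen to
    return new ReactiveKafkaConsumerTemplate<>(receiverOptions);
}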

Spring Cloud Stream with RabbitMQ binder and Transactional consumer/producer with DB operations

I have a Spring Cloud Stream application that receives messages from RabbitMQ using the Rabbit binder, updates my database, and sends one or many messages. My application can be summarized as this demo app.
The problem is that @Transactional doesn't seem to work (or at least that's my impression): if there's an exception, the database is rolled back, but messages are still sent, even though the consumer/producer are configured as transacted by default.
What I want to achieve is: when an exception occurs, the consumed message goes to the DLQ after being retried, the database is rolled back, and messages are not sent.
How can I achieve this?
This is the output of the demo application when I send a message to the my-input exchange:
2021-01-19 14:31:20.804 ERROR 59593 --- [nput.my-group-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Exception thrown while invoking MyListener#process[1 args]; nested exception is java.lang.RuntimeException: MyError, failedMessage=GenericMessage [payload=byte[4], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=#, amqp_receivedExchange=my-input, amqp_deliveryTag=2, deliveryAttempt=3, amqp_consumerQueue=my-input.my-group, amqp_redelivered=false, id=006f733f-5eab-9119-347a-625570383c47, amqp_consumerTag=amq.ctag-CnT_p-IXTJqIBNNG4sGPoQ, sourceData=(Body:'[B#177259f3(byte[4])' MessageProperties [headers={}, contentLength=0, receivedDeliveryMode=NON_PERSISTENT, redelivered=false, receivedExchange=my-input, receivedRoutingKey=#, deliveryTag=2, consumerTag=amq.ctag-CnT_p-IXTJqIBNNG4sGPoQ, consumerQueue=my-input.my-group]), contentType=application/json, timestamp=1611063077789}]
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:64)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:134)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:208)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$1300(AmqpInboundChannelAdapter.java:66)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$Listener.lambda$onMessage$0(AmqpInboundChannelAdapter.java:308)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:225)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$Listener.onMessage(AmqpInboundChannelAdapter.java:304)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1632)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.actualInvokeListener(AbstractMessageListenerContainer.java:1551)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:1539)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:1530)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:1474)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:967)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:913)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1600(SimpleMessageListenerContainer.java:83)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1288)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1194)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: MyError
at com.example.demo.MyListener.process(DemoApplication.kt:46)
at com.example.demo.MyListener$$FastClassBySpringCGLIB$$4381219a.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:123)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:388)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:692)
at com.example.demo.MyListener$$EnhancerBySpringCGLIB$$f4ed3689.process(<generated>)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120)
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:55)
... 29 more
message should not be received here hello world
employee name still toto == toto
message should not be received here hello world
employee name still toto == toto
message should not be received here hello world
employee name still toto == toto
Since you are publishing the failed message to the DLQ, from a Rabbit perspective, the transaction was successful and the original message is acknowledged and removed from the queue, and the Rabbit transaction is committed.
You can't do what you want with republishToDlq.
It will work if you use the normal DLQ mechanism (republishToDlq=false, whereby the broker sends the original message to the DLQ) instead of republishing with the extra metadata.
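A hedged sketch of that configuration (binder property names from the Rabbit binder; destination/group as in the demo) could look like:

spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input:
            consumer:
              transacted: true
              auto-bind-dlq: true
              republish-to-dlq: false # the broker dead-letters the original message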
If you want to republish with metadata, you could manually publish to the DLQ with a non-transactional RabbitTemplate (so the DLQ publish doesn't get rolled back with the other publishes).
EDIT
Here is an example of how to do what you need.
A few things to note:
We have to add an error handler to rethrow the exception.
We have to move retries to the listener container instead of the binder; otherwise, the retries will occur within the transaction and if retries are successful, multiple messages would be deposited on the output queue.
For stateful retry to work, we must be able to uniquely identify each message; the simplest solution is to have the sender set a unique message_id property (e.g. a UUID).
@SpringBootApplication
@EnableBinding(Processor.class)
public class So65792643Application {

    public static void main(String[] args) {
        SpringApplication.run(So65792643Application.class, args);
    }

    @Autowired
    Processor processor;

    @StreamListener(Processor.INPUT)
    public void in(Message<String> in) {
        System.out.println(in.getPayload());
        processor.output().send(new GenericMessage<>(in.getPayload().toUpperCase()));
        int attempt = RetrySynchronizationManager.getContext().getRetryCount();
        if (in.getPayload().equals("okAfterRetry") && attempt == 1) {
            System.out.println("success");
        }
        else {
            throw new RuntimeException();
        }
    }

    @Bean
    RepublishMessageRecoverer repub(RabbitTemplate template) {
        RepublishMessageRecoverer repub =
                new RepublishMessageRecoverer(template, "DLX", "rk");
        return repub;
    }

    @Bean
    Queue dlq() {
        return new Queue("my-output.dlq");
    }

    @Bean
    DirectExchange dlx() {
        return new DirectExchange("DLX");
    }

    @Bean
    Binding dlqBinding() {
        return BindingBuilder.bind(dlq()).to(dlx()).with("rk");
    }

    @ServiceActivator(inputChannel = "my-input.group1.errors")
    void errorHandler(ErrorMessage message) {
        MessagingException mex = (MessagingException) message.getPayload();
        throw mex;
    }

    @RabbitListener(queues = "my-output.dlq")
    void dlqListen(Message<String> in) {
        System.out.println("DLQ:" + in);
    }

    @RabbitListener(queues = "my-output.group2")
    void outListen(String in) {
        if (in.equals("OKAFTERRETRY")) {
            System.out.println(in);
        }
        else {
            System.out.println("Should not see this:" + in);
        }
    }

    /*
     * We must move retries from the binder to stateful retries in the container so that
     * each retry is rolled back, to avoid multiple publishes to output.
     * See max-attempts: 1 in the yaml.
     * In order for stateful retry to work, inbound messages must have a unique message_id
     * property.
     */
    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer> customizer(RepublishMessageRecoverer repub) {
        return (container, destinationName, group) -> {
            if ("group1".equals(group)) {
                container.setAdviceChain(RetryInterceptorBuilder.stateful()
                        .backOffOptions(1000, 2.0, 10000)
                        .maxAttempts(2)
                        .recoverer(recoverer(repub))
                        .keyGenerator(args -> {
                            // or generate a unique key some other way
                            return ((org.springframework.amqp.core.Message) args[1]).getMessageProperties()
                                    .getMessageId();
                        })
                        .build());
            }
        };
    }

    private MethodInvocationRecoverer<?> recoverer(RepublishMessageRecoverer repub) {
        return (args, cause) -> {
            repub.recover(((ListenerExecutionFailedException) cause).getFailedMessage(), cause);
            throw new AmqpRejectAndDontRequeueException(cause);
        };
    }
}
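The corresponding application.yml; note max-attempts: 1 on the input binding, which moves retry out of the binder and into the container advice: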
spring:
  cloud:
    stream:
      rabbit:
        default:
          producer:
            transacted: true
          consumer:
            transacted: true
            requeue-rejected: true
      bindings:
        input:
          destination: my-input
          group: group1
          consumer:
            max-attempts: 1
        output:
          destination: my-output
          producer:
            required-groups: group2
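Sending okAfterRetry and then notOkAfterRetry produces: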
okAfterRetry
2021-01-20 12:45:24.385 WARN 77477 --- [-input.group1-1] s.a.r.l.ConditionalRejectingErrorHandler : Execution of Rabbit message listener failed.
...
okAfterRetry
success
OKAFTERRETRY
notOkAfterRetry
2021-01-20 12:45:39.336 WARN 77477 --- [-input.group1-1] s.a.r.l.ConditionalRejectingErrorHandler : Execution of Rabbit message listener failed.
...
notOkAfterRetry
2021-01-20 12:45:39.339 WARN 77477 --- [-input.group1-1] s.a.r.l.ConditionalRejectingErrorHandler : Execution of Rabbit message listener failed.
...
DLQ:GenericMessage [payload=notOkAfterRetry, ..., x-exception-message...

@Retryable method not working that also is @Scheduled and @EnableSchedulerLock

I want to create a cron job that is retryable, and only one instance should execute it when we deploy multiple instances of the application.
I have also referred to "@Recover annotated method is not discovered for a @Retryable method that also is @Scheduled", but I am unable to resolve the ArrayIndexOutOfBoundsException.
I am using Spring Boot 2.1.8.RELEASE.
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
@EnableRetry
public class MyScheduler {

    private static final Logger log = LoggerFactory.getLogger(MyScheduler.class);
    private int retry;

    @Scheduled(cron = "0 16 16 * * *")
    @SchedulerLock(name = "MyScheduler_lock", lockAtLeastForString = "PT5M", lockAtMostForString = "PT14M")
    @Retryable(value = Exception.class, maxAttempts = 2)
    public void retryAndRecover() throws Exception {
        retry++;
        log.info("Scheduling Service Failed " + retry);
        throw new Exception();
    }

    @Recover
    public void recover(Exception e, String str) {
        log.info("Service recovering");
    }
}
Detailed exception:
2019-12-07 19:42:00.109 INFO [my-service,false] 16767 --- [ scheduling-1] r.t.p.scheduler.MyScheduler : Scheduling Service Failed 1
2019-12-07 19:42:01.114 INFO [my-service,false] 16767 --- [ scheduling-1] r.t.p.scheduler.MyScheduler : Scheduling Service Failed 2
2019-12-07 19:42:01.123 ERROR [my-service,,,] 16767 --- [ scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
java.lang.ArrayIndexOutOfBoundsException: arraycopy: last source index 1 out of bounds for object array[0]
at java.base/java.lang.System.arraycopy(Native Method) ~[na:na]
at org.springframework.retry.annotation.RecoverAnnotationRecoveryHandler$SimpleMetadata.getArgs(RecoverAnnotationRecoveryHandler.java:166) ~[spring-retry-1.2.1.RELEASE.jar:na]
at org.springframework.retry.annotation.RecoverAnnotationRecoveryHandler.recover(RecoverAnnotationRecoveryHandler.java:62) ~[spring-retry-1.2.1.RELEASE.jar:na]
Your recover method can't have more parameters than the main method (aside from the exception); remove the String str parameter.
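A corrected version would look like this sketch (the recover method takes only the exception, matching the no-argument retryAndRecover()):

@Recover
public void recover(Exception e) {
    log.info("Service recovering");
}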

Mono gets OnCancel instead of OnSuccessOrError after successful request

I have a Spring Boot WebFlux app which implements a WebFilter to log success/error once a Publisher completes.
My WebFilter is similar to the following:
public class MyFilter implements WebFilter {

    private static final Logger LOG = LoggerFactory.getLogger(MyFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        return chain.filter(exchange)
                .doOnCancel(() -> {
                    LOG.info("Cancelled");
                })
                .doOnSuccess(aVoid -> {
                    LOG.info("Success");
                })
                .doOnSuccessOrError((aVoid, throwable) -> {
                    if (throwable == null) {
                        LOG.info("Completed successfully");
                    } else {
                        LOG.error("An error occurred", throwable);
                    }
                });
    }
}
Since Spring Boot 2.1.5, the WebFilter no longer works as expected. Instead of the doOnSuccessOrError callback I now get doOnCancel. This only happens when using Netty, since Reactor Netty 0.8.7; when using Undertow, it works as expected.
In all cases the web request completes successfully, returning the correct data to the client.
Please see DEBUG/TRACE logs below:
[reactor-http-nio-3] DEBUG o.s.h.codec.json.Jackson2JsonEncoder - [6bfed744] Encoding [{"errors":[]}]
[reactor-http-nio-3] DEBUG r.n.http.server.HttpServerOperations - [id: 0x6bfed744, L:/0:0:0:0:0:0:0:1:8071 - R:/0:0:0:0:0:0:0:1:55927] Decreasing pending responses, now 0
[reactor-http-nio-3] DEBUG r.n.http.server.HttpServerOperations - [id: 0x6bfed744, L:/0:0:0:0:0:0:0:1:8071 - R:/0:0:0:0:0:0:0:1:55927] Last HTTP packet was sent, terminating the channel
[reactor-http-nio-3] TRACE r.netty.channel.ChannelOperations - [id: 0x6bfed744, L:/0:0:0:0:0:0:0:1:8071 - R:/0:0:0:0:0:0:0:1:55927] Disposing ChannelOperation from a channel
java.lang.Exception: ChannelOperation terminal stack
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:391)
at reactor.netty.http.server.HttpServerOperations.cleanHandlerTerminate(HttpServerOperations.java:519)
at reactor.netty.http.server.HttpTrafficHandler.operationComplete(HttpTrafficHandler.java:314)
at reactor.netty.http.server.HttpTrafficHandler.operationComplete(HttpTrafficHandler.java:54)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:502)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:476)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:415)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:540)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:529)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:101)
at io.netty.util.internal.PromiseNotificationUtil.trySuccess(PromiseNotificationUtil.java:48)
at io.netty.channel.ChannelOutboundBuffer.safeSuccess(ChannelOutboundBuffer.java:715)
at io.netty.channel.ChannelOutboundBuffer.remove(ChannelOutboundBuffer.java:270)
at io.netty.channel.ChannelOutboundBuffer.removeBytes(ChannelOutboundBuffer.java:350)
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:411)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:939)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:360)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:533)
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:125)
at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:358)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:789)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:757)
at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:812)
at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1036)
at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:305)
...
[reactor-http-nio-3] INFO c.m.d.MyFilter - Cancelled
[reactor-http-nio-3] DEBUG r.n.channel.ChannelOperationsHandler - [id: 0x6bfed744, L:/0:0:0:0:0:0:0:1:8071 - R:/0:0:0:0:0:0:0:1:55927] No ChannelOperation attached. Dropping: EmptyLastHttpContent
This is a known issue in Spring Framework (see spring-framework#22952) and, more specifically, in Reactor Netty (reactor-netty#741). There is no known workaround for it right now.

AWS Lambda - Spring boot is not handling the request

I am trying to run a Spring Boot application as a serverless function on AWS Lambda, and I am getting the exception below when the Lambda function is invoked. The Spring Boot application starts successfully, but it then fails to map the request.
2018-09-25 06:11:50.717 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-09-25 06:11:50.823 INFO 1 --- [ main] my.service.Application : Started Application in 7.405 seconds (JVM running for 8.939)
START RequestId: decfc13c-c089-11e8-bacd-a37f1ba65629 Version: $LATEST
2018-09-25 06:11:50.994 ERROR 1 --- [ main] c.a.s.p.i.s.AwsProxyHttpServletRequest : Called set character encoding to UTF-8 on a request without a content type. Character encoding will not be set
2018-09-25 06:11:51.175 ERROR 1 --- [ main] o.s.boot.web.support.ErrorPageFilter : Forwarding to error page from request [/] due to exception [null]
java.lang.NullPointerException: null
at com.amazonaws.serverless.proxy.internal.servlet.AwsProxyHttpServletRequest.getRemoteAddr(AwsProxyHttpServletRequest.java:575) ~[task/:na]
at org.springframework.web.servlet.FrameworkServlet.publishRequestHandledEvent(FrameworkServlet.java:1075) ~[task/:na]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005) ~[task/:na]
.........
2018-09-25 06:11:51.535 ERROR 1 --- [ main] s.p.i.s.AwsLambdaServletContainerHandler : Could not forward request
This is my StreamLambdaHandler Java file:
public class StreamLambdaHandler implements RequestStreamHandler {

    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
        outputStream.close();
    }
}
Looks like you might be hitting https://github.com/awslabs/aws-serverless-java-container/issues/172. According to the ticket, the fix will be available as part of the upcoming 1.2 release.
