Exponential backoff for business exceptions when using reactive spring-amqp? - spring-boot

I'm using Spring AMQP 2.1.6 and Spring Boot 2.1.5 and I'm looking for the recommended way to configure spring-amqp to retry business exceptions for reactive components (Mono) with exponential backoff. For example:
@RabbitListener
public Mono<Void> myListener(MyMessage myMessage) {
    Mono<Void> mono = myService.doSomething(myMessage);
    return mono;
}
I'd like spring-amqp to retry automatically if doSomething returns an error. Usually one can configure this for blocking RabbitListeners when setting up the container factory:
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
...
factory.setAdviceChain(retryInterceptor(..));
Where retryInterceptor might be defined like this:
private static RetryOperationsInterceptor retryInterceptor(long backoffInitialInterval, double backoffMultiplier, long backoffMaxInterval, int maxAttempts) {
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(backoffInitialInterval);
    backOffPolicy.setMultiplier(backoffMultiplier);
    backOffPolicy.setMaxInterval(backoffMaxInterval);

    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(maxAttempts));
    retryTemplate.setBackOffPolicy(backOffPolicy);

    StatelessRetryOperationsInterceptorFactoryBean bean = new StatelessRetryOperationsInterceptorFactoryBean();
    bean.setRetryOperations(retryTemplate);
    return bean.getObject();
}
But the advice chain doesn't seem to be used for reactive RabbitListeners. This is probably because, if I understand it correctly, the RetryTemplate/ExponentialBackOffPolicy actually blocks the thread.
As a workaround I could of course do something like (switching to Kotlin because it's a bit easier):
@RabbitListener
fun myListener(myMessage: MyMessage): Mono<Void> {
    return myService.doSomething(myMessage)
        .retryExponentialBackoff(10, Duration.ofMillis(100), Duration.ofSeconds(5)) { ctx ->
            log.info("Caught exception ${ctx.exception()}")
        }
}
But I'd like this retry logic to be applied to all Mono instances returned from RabbitListeners. Is something like this possible, or should this be configured another way when using reactive sequences from Project Reactor with spring-amqp?

It is really better to apply retry logic within your reactive sequence, similar to what you do with retryExponentialBackoff(). Because the Reactive Streams execution doesn't happen on the listener thread, we can't apply that RetryTemplate to myListener().
Right now the logic internally is like this:
private static class MonoHandler {

    static boolean isMono(Object result) {
        return result instanceof Mono;
    }

    @SuppressWarnings("unchecked")
    static void subscribe(Object returnValue, Consumer<? super Object> success,
            Consumer<? super Throwable> failure) {
        ((Mono<? super Object>) returnValue).subscribe(success, failure);
    }
}
That Consumer<? super Throwable> failure does this:
private void asyncFailure(Message request, Channel channel, Throwable t) {
    this.logger.error("Future or Mono was completed with an exception for " + request, t);
    try {
        channel.basicNack(request.getMessageProperties().getDeliveryTag(), false, true);
    }
    catch (IOException e) {
        this.logger.error("Failed to nack message", e);
    }
}
So, we don't have any way to initiate that RetryTemplate at all, but at the same time the explicit basicNack() gives us a natural retry, since the same message is re-fetched from RabbitMQ.
We could probably apply a Reactor retry to that Mono internally, but it doesn't look like a RetryOperationsInterceptor can simply be converted to a Mono.retry().
So, in other words, the RetryOperationsInterceptor is the wrong way for reactive processing. Use Mono.retry() explicitly in your own code.
You may expose some common utility method and apply it via Mono.transform(Function<? super Mono<T>, ? extends Publisher<V>> transformer) whenever you have a reactive return type on a @RabbitListener method.
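For illustration, a minimal sketch of such a utility (assuming a Reactor version that provides reactor.util.retry.Retry, i.e. 3.3.4 or later; on older Reactor versions Mono.retryBackoff(...) is the equivalent; the RetrySupport name and the backoff values are hypothetical):

import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

public final class RetrySupport {

    private RetrySupport() {
    }

    // Reusable transformer: retry any Mono with exponential backoff,
    // up to 10 attempts, starting at 100 ms and capped at 5 s.
    public static <T> Mono<T> withExponentialBackoff(Mono<T> mono) {
        return mono.retryWhen(Retry.backoff(10, Duration.ofMillis(100))
                .maxBackoff(Duration.ofSeconds(5)));
    }
}

It could then be applied in every reactive listener, e.g. return myService.doSomething(myMessage).transform(RetrySupport::withExponentialBackoff);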

Related

Set permissions/authentication for spring-cloud-stream message consumer so it passes #PreAuthorize checks

I consume messages from spring-cloud-stream through a Consumer<MyMessage> implementation. As part of the message handling I need to access methods that are protected with @PreAuthorize security checks. By default the Consumer runs unauthenticated, so message handling fails.
Consumer:
@Bean
public Consumer<MyMessage> exampleMessageConsumer(MyMessageConsumer consumer) {
    return consumer::handleMessage;
}
Secured Method:
@PreAuthorize("hasAuthority('ROLE_ADMIN') or hasAuthority('ROLE_USER')")
public void doSomething() { ... }
I don't just want to bypass security, so what is the easiest way to authenticate my Consumer so it passes the check?
EDIT: we are using Google Pub/Sub as the binder.
For the Kafka binder:
Add an @EventListener to listen for ConsumerStartedEvents; you can then add the authentication to the security context via the SecurityContextHolder. This binds it to the thread, and the same thread is used to call the listener.
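A minimal sketch of that approach, assuming the Kafka binder and spring-kafka's ConsumerStartedEvent (the principal and authorities here are placeholders, not from the question):

import org.springframework.context.event.EventListener;
import org.springframework.kafka.event.ConsumerStartedEvent;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Component;

@Component
public class ConsumerAuthenticationListener {

    @EventListener
    public void onConsumerStarted(ConsumerStartedEvent event) {
        // The event is published on the consumer thread, so this authentication is
        // bound to the same thread that later invokes the message listener.
        SecurityContextHolder.getContext().setAuthentication(
                new UsernamePasswordAuthenticationToken(
                        "system", "N/A",
                        AuthorityUtils.createAuthorityList("ROLE_ADMIN", "ROLE_USER")));
    }
}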
I found two possible solutions to my problem:
1. Use Spring's RunAs support (Baeldung) to add permissions to a security context for a specific method. If I do this I need to add ROLE_RUN_AS_USER to my secured methods. At scale this would complicate the annotations a lot.
2. Manually change the security context before executing the handler method and return it to its original state afterwards.
I went with the second option. I would have liked a transparent solution but there does not appear to be one.
To make this work I created a class that wraps a functional interface with the context-switching code and returns it.
public class RunAs {

    @FunctionalInterface
    public interface RunAsMethod {
        void runWithException() throws Throwable;
    }

    public static <T> Consumer<T> createWriteConsumer(Consumer<T> originalConsumer) {
        return message -> runWithWritePermission(() -> originalConsumer.accept(message));
    }

    public static void runWithWritePermission(final RunAsMethod func) {
        final Authentication originalAuthentication = SecurityContextHolder.getContext().getAuthentication();
        final AnonymousAuthenticationToken token = new AnonymousAuthenticationToken(
                "system",
                originalAuthentication != null ? originalAuthentication.getPrincipal() : "system",
                AuthorityUtils.createAuthorityList("ROLE_ADMIN", "SCOPE_write")
        );
        SecurityContextHolder.getContext().setAuthentication(token);
        try {
            func.runWithException();
        } catch (Throwable e) {
            throw new RuntimeException("exception during method with altered permissions", e);
        } finally {
            SecurityContextHolder.getContext().setAuthentication(originalAuthentication);
        }
    }
}
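The wrapper can then be applied to the consumer bean from the question, for example (a sketch; the bean shape is copied from the snippets above):

@Bean
public Consumer<MyMessage> exampleMessageConsumer(MyMessageConsumer consumer) {
    return RunAs.createWriteConsumer(consumer::handleMessage);
}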

Difference between DirectChannel and FluxMessageChannel

I was reading about Spring Integration's FluxMessageChannel here and here, but I still don't understand exactly what the differences are between using a DirectChannel and a FluxMessageChannel with Project Reactor. Since the DirectChannel is stateless and controlled by its pollers, I'd expect the FluxMessageChannel not to be needed. I'm trying to understand when exactly I should use each and why, for Reactive Streams applications implemented with Spring Integration.
I currently have a reactive project that uses DirectChannel, and it seems to work fine; the documentation even says:
the flow behavior is changed from an imperative push model to a reactive pull model
I'd like to understand when to use each of the channels and what is the exact difference when working with Reactive Streams.
The DirectChannel does not have any poller and its implementation is very simple: as soon as a message is sent to it, the handler is called, in the caller's thread:
public class DirectChannel extends AbstractSubscribableChannel {

    private final UnicastingDispatcher dispatcher = new UnicastingDispatcher();

    private volatile Integer maxSubscribers;

    /**
     * Create a channel with default {@link RoundRobinLoadBalancingStrategy}.
     */
    public DirectChannel() {
        this(new RoundRobinLoadBalancingStrategy());
    }
Where that UnicastingDispatcher is:
public final boolean dispatch(final Message<?> message) {
    if (this.executor != null) {
        Runnable task = createMessageHandlingTask(message);
        this.executor.execute(task);
        return true;
    }
    return this.doDispatch(message);
}
(There is no executor option for the DirectChannel)
private boolean doDispatch(Message<?> message) {
    if (tryOptimizedDispatch(message)) {
        return true;
    }
    ...

protected boolean tryOptimizedDispatch(Message<?> message) {
    MessageHandler handler = this.theOneHandler;
    if (handler != null) {
        try {
            handler.handleMessage(message);
            return true;
        }
        catch (Exception e) {
            throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                    () -> "Dispatcher failed to deliver Message", e);
        }
    }
    return false;
}
That's why I call it an "imperative push model". The caller in this case is going to wait until the handler finishes its job. And if you have a big flow, everything is going to be stopped in the sender thread until a sent message has reached the end of the flow of direct channels. In two simple words: the publisher is in charge of the whole execution and is blocked in this case. You haven't faced any problems with your DirectChannel-based solution only because you haven't used non-blocking, reactive components yet, such as Netty in WebFlux or the reactive MongoDB driver.
The FluxMessageChannel was really designed for Reactive Streams purposes, where the subscriber is in charge of handling a message which it pulls from the Flux on demand. This way, just after sending, the publisher is free to do anything else, since handling the message is already the subscriber's responsibility.
I would say it is definitely OK to use DirectChannel as long as your handlers are not blocking. If they are blocking, you should go with FluxMessageChannel. Also don't forget that there are other channel types for different tasks: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations
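For illustration, a minimal sketch of declaring a FluxMessageChannel in a flow (bean and flow names are illustrative, not from the question); handlers subscribed to it are then driven by the reactive pull model:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.FluxMessageChannel;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageChannel;

@Configuration
public class ChannelConfig {

    @Bean
    public MessageChannel reactiveChannel() {
        return new FluxMessageChannel();
    }

    @Bean
    public IntegrationFlow reactiveFlow() {
        return IntegrationFlows.from("reactiveChannel")
                // terminal handler; invoked as messages are pulled from the channel on demand
                .handle(message -> System.out.println(message))
                .get();
    }
}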

Spring cloud function Function interface return success/failure handling

I currently have a Spring Cloud Stream application with a listener function that mainly listens to a certain topic and executes the following steps in sequence:
1. Consume messages from a topic
2. Store the consumed message in the DB
3. Call an external service for some information
4. Process the data
5. Record the results in the DB
6. Send the message to another topic
7. Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledge the message). We only acknowledge the message if we are sure that it was sent successfully to the sink topic; otherwise we do not acknowledge it.
My question is: how can such a thing be implemented with Spring Cloud Function, especially since the send method is fully dependent on the Spring Framework (as the result of evaluating the Function interface implementation)?
Earlier, we could do this with a try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outbound = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outbound);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function interface has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior in your StreamListener code, here is something you can try. Instead of using a function, you can switch to a consumer and then use KafkaTemplate to send on the outbound as you had previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outbound = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outbound); // here, make sure the data was sent successfully by using some callback
            // only ack if the data was sent successfully
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        }
        catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
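For the callback itself, a minimal sketch (assuming spring-kafka 2.x, where KafkaTemplate.send() returns a ListenableFuture<SendResult>; the sendAndAck helper is hypothetical and the variable and service names are taken from the snippets above):

private void sendAndAck(Message<ValidatedEvent> outbound,
        NotificationMessage notificationMessage,
        Acknowledgment acknowledgment) {
    kafkaTemplate.send(outbound).addCallback(
            sendResult -> {
                // record success and ack only after the broker has confirmed the send
                notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
                Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
            },
            throwable -> notificationMessageService.saveOrUpdate(notificationMessage, 1, false));
}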
Another thing that is worth looking into is using Kafka transactions, in which case, if the processing doesn't succeed end-to-end, no acknowledgment will happen. The Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
Spring Cloud Stream has no knowledge of functions. It is just the same message handler as it was before, so the same approach with a callback as you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependent on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While it is a feasible solution, it would mean that my Function would be split into a Consumer and some sort of improvised supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I have isolated the database-update behaviour in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {

    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {

            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));

        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);

        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.

Handling spring reactor exceptions in imperative spring application

I'm using WebFlux's WebClient in an imperative Spring Boot application. In this app I need to make REST calls to various backends using WebClient and wait for all the responses before proceeding to the next step.
ClassA
public class ClassA {

    public Mono<String> restCall1() {
        return webclient....exchange()...
            .retryWhen(Retry.backoff(maxAttempts, Duration.ofSeconds(minBackOff))
                .filter(this::isTransient)
                .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> {
                    return new MyCustomException();
                }));
    }
}
ClassB
public class ClassB {

    public Mono<String> restCall2() {
        return webclient....exchange()...
            .retryWhen(Retry.backoff(maxAttempts, Duration.ofSeconds(minBackOff))
                .filter(this::isTransient)
                .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> {
                    return new MyCustomException();
                }));
    }
}
Mono<String> a = classAObj.restCall1();
Mono<String> b = classBObj.restCall2();

ArrayList<Mono<String>> myMonos = new ArrayList<>();
myMonos.add(a);
myMonos.add(b);

try {
    List<String> results = Flux.mergeSequential(myMonos).collectList().block();
} catch (WebClientResponseException e) {
    ....
}
The above code is working as expected. The WebClient is configured to throw an error on 5xx and 4xx, which I'm able to catch via WebClientResponseException.
The problem is that I'm unable to catch any exceptions from the Reactor framework. For example, my web clients are configured to retry with exponential backoff and throw an exception on exhaustion, and I have no way to catch it in my try/catch block above. I explored the option of handling that exception in the WebClient stream using onErrorReturn, but it does not propagate the error back to my subscriber.
I also cannot add the exception to the catch block, as it's never thrown by any part of the code.
Can anyone advise on the best way to handle these types of error scenarios? I'm new to WebFlux and reactive programming.

Spring Integration: Manual channel handling

What I want: build a configurable library that
uses another library that has internal routing and a subscribe method like clientInstance.subscribe(endpoint, (endpoint, message) -> <handler>), e.g. the Paho MQTT library;
later in my code I want to access the messages in a Flux.
My idea:
create MessageChannels like so:
integrationFlowContext
    .registration(IntegrationFlows.from("message-channel:" + endpoint).bridge().get())
    .register();
forward to reactive publishers:
applicationContext.registerBean(
    "publisher:" + endpoint,
    Publisher.class,
    () -> IntegrationFlows.from("message-channel:" + endpoint).toReactivePublisher()
);
keep the message channels in a set or similar and implement the above handler: (endpoint, message) -> messageChannels.get(endpoint).send( <converter>(message))
later use (in a @PostConstruct method):
Flux
    .from((Publisher<Message<?>>) applicationContext.getBean("publisher:" + endpoint))
    .map(...)
    .subscribe()
I doubt this is the best way to do what I want. It feels like abusing Spring Integration. Any suggestions are welcome at this point.
In general, however (at least in my tests), this seemed to be working. But when I run my application, I get errors like: "Caused by: org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available".
This is especially bad, since after this exception the publishers claim to not have a subscriber anymore. Thus, in a real application no messages are processed anymore.
I am not sure what this message means, but I can kind of reproduce it (but don't understand why):
@Test
public void channelTest() {
    integrationFlowContext
        .registration(
            IntegrationFlows.from("any-channel").bridge().get()
        )
        .register();
    registryUtil.registerBean(
        "any-publisher",
        Publisher.class,
        () -> IntegrationFlows.from("any-channel").toReactivePublisher()
    );
    Flux
        .from((Publisher<Message<?>>) applicationContext.getBean("any-publisher"))
        .subscribe(System.out::println);
    MessageChannel messageChannel = applicationContext.getBean("any-channel", MessageChannel.class);
    try {
        messageChannel.send(MessageBuilder.withPayload("test").build());
    } catch (Throwable t) {
        log.error("Error: ", t);
    }
}
I have of course read parts of the Spring Integration documentation, but don't quite get what happens behind the scenes. Thus, I feel like I'm guessing at possible error causes.
EDIT:
This, however, works:
@TestConfiguration
static class Config {

    GenericApplicationContext applicationContext;

    Config(
            GenericApplicationContext applicationContext,
            IntegrationFlowContext integrationFlowContext
    ) {
        this.applicationContext = applicationContext;
        // optional here, but needed for some reason in my library,
        // since I can't find the channel beans like I will do here,
        // if I didn't register them like so:
        //integrationFlowContext
        //        .registration(
        //                IntegrationFlows.from("any-channel").bridge().get())
        //        .register();
        applicationContext.registerBean(
            "any-publisher",
            Publisher.class,
            () -> IntegrationFlows.from("any-channel").toReactivePublisher()
        );
    }

    @PostConstruct
    void connect() {
        Flux
            .from((Publisher<Message<?>>) applicationContext.getBean("any-publisher"))
            .subscribe(System.out::println);
    }
}

@Autowired
ApplicationContext applicationContext;

@Autowired
IntegrationFlowContext integrationFlowContext;

@Test
@SneakyThrows
public void channel2Test() {
    MessageChannel messageChannel = applicationContext.getBean("any-channel", MessageChannel.class);
    try {
        messageChannel.send(MessageBuilder.withPayload("test").build());
    } catch (Throwable t) {
        log.error("Error: ", t);
    }
}
Thus, apparently, my issue above is related to messages arriving "too early"... I guess?!
No, your issue is related to the round-robin dispatching on the DirectChannel for the any-channel bean name.
You define two IntegrationFlow instances starting with that channel, each with its own subscriber, but at runtime both of them are subscribed to the same any-channel instance. And that one comes with the round-robin balancer by default. So, one message goes to your Flux.from() subscriber, but another goes to that bridge(), which doesn't know what to do with your message, so it tries to resolve a replyChannel header.
Therefore your solution with just a single IntegrationFlows.from("any-channel").toReactivePublisher() is correct. Alternatively, you could simply register a FluxMessageChannel and use it on one side for regular message sending and on the other side as a reactive source for Flux.from().
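A minimal sketch of that FluxMessageChannel alternative (class and bean names are illustrative, not from the question):

import javax.annotation.PostConstruct;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.FluxMessageChannel;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Flux;

@Configuration
public class AnyChannelConfig {

    @Bean
    public FluxMessageChannel anyChannel() {
        return new FluxMessageChannel();
    }
}

@Component
class AnyChannelBridge {

    private final FluxMessageChannel anyChannel;

    AnyChannelBridge(FluxMessageChannel anyChannel) {
        this.anyChannel = anyChannel;
    }

    @PostConstruct
    void connect() {
        // consumer side: FluxMessageChannel implements Publisher<Message<?>>
        Flux.from(anyChannel).subscribe(System.out::println);
    }

    void onExternalMessage(String payload) {
        // producer side, e.g. invoked from the other library's callback
        anyChannel.send(MessageBuilder.withPayload(payload).build());
    }
}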
