How can I add retry and a custom recovery method in a circuit breaker - functional java - circuit-breaker

I'm trying to add a Resilience4j circuit breaker to my project. I have a custom recovery mechanism for when the call fails, and a retry. How can I change the execution sequence of these two? Is there a way to execute my custom mechanism first and, if that also fails, then do the retry?

If I understood you correctly, you have two different calls. You expect the first call to fail sometimes, but instead of retrying it you want to fall back to a second call. Then, if this second one fails, you would like to retry it using the circuit breaker.
CircuitBreakerConfig config = CircuitBreakerConfig
        .custom()
        .slidingWindowType(CircuitBreakerConfig.SlidingWindowType.COUNT_BASED)
        .slidingWindowSize(10)
        .failureRateThreshold(25.0f)
        .waitDurationInOpenState(Duration.ofSeconds(10))
        .permittedNumberOfCallsInHalfOpenState(4)
        .build();
CircuitBreakerRegistry registry = CircuitBreakerRegistry.of(config);
CircuitBreaker circuitBreaker = registry.circuitBreaker("searchService");

try {
    // First call, invoked directly and expected to fail sometimes
    service.search(request);
} catch (SearchException e) {
    // Second call, decorated with the circuit breaker
    Supplier<List<SearchPojo>> searchSupplier = circuitBreaker
            .decorateSupplier(() -> service.search(request, a_new_parameter));
    List<SearchPojo> results = searchSupplier.get(); // runs the call and records its outcome
}

Related

Project reactor - react to timeout happened downstream

Project Reactor has a variety of timeout() operators.
The very basic implementation raises a TimeoutException if no item arrives within the given Duration. The exception is propagated downstream, and a cancel signal is sent upstream.
Basically my question is: is it possible to somehow react (and do something) specifically to a timeout that happened downstream, not just to the cancellation that is sent after the timeout happened?
My question is based on the requirements of my real business case, and I'm also wondering whether there is a straightforward solution.
I'll simplify my code to better show what I want to achieve.
Let's say I have the following reactive pipeline:
Flux.fromIterable(List.of(firstClient, secondClient))
    .concatMap(Client::callApi) // making API calls sequentially
    .collectList() // collecting results of API calls for further processing
    .timeout(Duration.ofMillis(3000)) // the entire process should not take more than duration specified
    .subscribe();
I have multiple clients for making API calls. The business requirement is to call them sequentially, so I call them with concatMap(). Then I should collect all the results, and the entire process should not take more than some Duration.
The Client interface:
interface Client {
    Mono<Result> callApi();
}
And the implementations:
Client firstClient = () ->
    Mono.delay(Duration.ofMillis(2000L)) // simulating delay of first api call
        .map(__ -> new Result())
        // !!! Pseudo-operator just to demonstrate what I want to achieve
        .doOnTimeoutDownstream(() ->
            log.info("First API call canceled due to downstream timeout!")
        );

Client secondClient = () ->
    Mono.delay(Duration.ofMillis(1500L)) // simulating delay of second api call
        .map(__ -> new Result())
        // !!! Pseudo-operator just to demonstrate what I want to achieve
        .doOnTimeoutDownstream(() ->
            log.info("Second API call canceled due to downstream timeout!")
        );
So, if I have not received and collected all the results during the amount of time specified, I need to know which API call was actually canceled due to downstream timeout and have some callback for this "event".
I know I could put a doOnCancel() callback on every client call (instead of the pseudo-operator I demonstrated) and it would work, but that callback reacts to cancellation, which may happen due to any error.
Of course, with proper exception handling (onErrorResume(), for example) it would work as I expect; however, I'm interested in whether there is some straightforward way to react specifically to the timeout in this case.
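For completeness, one way to approximate the pseudo-operator is to remember which client is currently in flight and react to the TimeoutException at the end of the pipeline. A rough sketch reusing the names from the question (the inFlight reference is a helper introduced here):

AtomicReference<Client> inFlight = new AtomicReference<>();

Flux.fromIterable(List.of(firstClient, secondClient))
    .concatMap(client -> client.callApi()
        .doOnSubscribe(s -> inFlight.set(client))) // remember the active client
    .collectList()
    .timeout(Duration.ofMillis(3000))
    .doOnError(TimeoutException.class, e ->
        log.info("Call canceled due to downstream timeout: {}", inFlight.get()))
    .subscribe();

This still relies on the timeout surfacing as an error rather than on a dedicated operator, but it narrows the callback to the timeout case only.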

Reactor and Spring state machine execute Mono then Flux sequentially

I have the following method. It has no return value, but it gets an object from another service (metadataService) that returns a Mono; some processing is done with the object returned by the Mono, and once this is done I need to send a signal to the StateMachine so that the next step can be triggered.
public void safeExecute(
        StateContext<StateMachineStates, StateMachineEvents> context) {
    metadataService.getMetadata(context.getId())
        .doOnSuccess(metadata -> {
            // perform some operation here
            context.getStateMachine()
                // returns a Flux<StateMachineEventResult<S, E>>
                .sendEvent(Mono.just(
                    MessageBuilder.withPayload(Events.E_GOTO_NEXT_STATE).build()
                ))
                .subscribe();
        })
        .subscribe(); // triggers the outer chain
}
However, I get the warning:
Calling 'subscribe' in non-blocking context is not recommended
which I can apparently resolve by calling publishOn(Schedulers.boundedElastic()); however, the warning is still there.
My question is: how can you send the event to the StateMachine only after the code block within doOnSuccess is done? I tried using concatWith or doFinally, but I do not have a good enough understanding of reactive programming.
My current stack :
Spring Boot 3.0.1
Spring State Machine 3.0.1
Spring 6
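One way to avoid the nested subscribe(), sketched under the assumption that the side effect can stay in doOnSuccess and the event send can be chained afterwards:

public void safeExecute(
        StateContext<StateMachineStates, StateMachineEvents> context) {
    metadataService.getMetadata(context.getId())
        .doOnSuccess(metadata -> {
            // perform some operation here
        })
        // chain the event send instead of subscribing inside the callback;
        // flatMapMany runs only after getMetadata() completes successfully
        .flatMapMany(metadata -> context.getStateMachine()
            .sendEvent(Mono.just(
                MessageBuilder.withPayload(Events.E_GOTO_NEXT_STATE).build())))
        .subscribe(); // single terminal subscription
}

This keeps one terminal subscription for the whole chain instead of one nested inside another.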

Scatter Gather with parallel flow (Timeout in aggregator)

I've been trying to add a timeout in the gatherer so it doesn't wait for every flow to finish,
but when I add the timeout it doesn't work, because the aggregator still waits for each flow to finish.
@Bean
public IntegrationFlow queueFlow(LogicService service) {
    return f -> f.scatterGather(scatterer -> scatterer
            .applySequence(true)
            .recipientFlow(aFlow(service))
            .recipientFlow(bFlow(service)),
        aggregatorSpec -> aggregatorSpec.groupTimeout(2000L));
}
For example, of my flows, one has a 2-second delay and the other a 4-second delay:
public IntegrationFlow bFlow(LogicService service) {
    return IntegrationFlows.from(MessageChannels.executor(Executors.newCachedThreadPool()))
        .handle(service::callFakeServiceTimeout2)
        .transform(MessageDomain.class, message -> {
            message.setMessage(message.getMessage().toUpperCase());
            return message;
        })
        .get();
}
I use Executors.newCachedThreadPool() to run the flows in parallel.
I'd like to release whichever messages have arrived by the time the timeout is reached.
Another approach I've been testing was to use a default gatherer and set gatherTimeout on scatterGather, but I don't know if I'm missing something.
UPDATE
All the approaches given in the comments were tested and work normally. The only problem is that each action is evaluated against the message group, and the message group is created only once the first message arrives. The ideal approach would be an option that takes effect at the moment the scatterer distributes the request message.
My temporary solution was an ad hoc release strategy applying a GroupConditionProvider, which reads a custom header that I create when I send the message through the gateway. The only concern with this is that the release strategy will only be executed when a new message arrives or when I set a group timeout.
The groupTimeout on the aggregator is not enough to release the group. If you don't get the whole group on that timeout, then it is going to be discarded. See sendPartialResultOnExpiry option: https://docs.spring.io/spring-integration/reference/html/message-routing.html#agg-and-group-to
If send-partial-result-on-expiry is true, existing messages in the (partial) MessageGroup are released as a normal aggregator reply message to the output-channel. Otherwise, it is discarded.
The gatherTimeout is good to have if you expect no replies from the gatherer at all. So, this way you won't block the scatter-gather thread forever: https://docs.spring.io/spring-integration/reference/html/message-routing.html#scatter-gather-error-handling
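Applied to the queueFlow bean from the question, enabling partial release could look like this (a sketch; only the aggregator spec changes):

@Bean
public IntegrationFlow queueFlow(LogicService service) {
    return f -> f.scatterGather(scatterer -> scatterer
            .applySequence(true)
            .recipientFlow(aFlow(service))
            .recipientFlow(bFlow(service)),
        aggregatorSpec -> aggregatorSpec
            .groupTimeout(2000L)
            .sendPartialResultOnExpiry(true)); // release whatever has arrived instead of discarding
}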

Spring webflux how to return 200 response to client before processing large file

I am working on a Spring WebFlux project.
When a client makes an API call, I want to send a success message to the client and perform the large file operation in the background,
so the client does not have to wait until my entire file is processed.
To try this out I made the sample code below.
REST controller
@GetMapping(value = "/{jobId}/process")
@ApiOperation("Start import job")
public Mono<Integer> process(@PathVariable("jobId") long jobId) {
    return service.process(jobId);
}
File processing Service
public Mono<Integer> process(long jobId) {
    return repository
        .findById(jobId)
        .map(job -> {
            File file = new File("read.csv");
            return processFile(file);
        });
}
Following is my stack:
Spring WebFlux 2.2.2.RELEASE
I tried making this call using WebClient, but I do not get a response until the entire file is processed.
As one of the options, you can run the processing in a different thread.
For example:
Create an event listener
Enable @Async and @EnableAsync
Or use different types of Executors from the Java concurrency package
Or manually run the thread
Also, for Kotlin you can use coroutines
You can use the subscribe method and start a job with its own scope in the background.
Mono.delay(Duration.ofSeconds(10)).subscribeOn(Schedulers.newElastic("myBackgroundTask")).subscribe(System.out::println);
As long as you do not tie this to your response publisher using one of the zip/merge or similar operators, your job will run in the background on its own scheduler pool.
The subscribe() method returns a Disposable instance, which can later be used to cancel the background job by calling its dispose() method.
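A sketch of that idea applied to the service from the question (the returned 0 is just a placeholder acknowledgement, and Schedulers.boundedElastic() assumes a Reactor version that provides it):

public Mono<Integer> process(long jobId) {
    repository
        .findById(jobId)
        .map(job -> processFile(new File("read.csv")))
        .subscribeOn(Schedulers.boundedElastic()) // run on a background pool
        .subscribe();                             // fire and forget
    return Mono.just(0); // respond to the client immediately
}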

How to dynamically change the poller cron for InboundChannelAdapter in spring integration

I've looked around a lot; this is my configuration. How can I change the poller cron dynamically? That is, when the application is running and I change the poller cron in the DB, it should be picked up by the Poller in the InboundChannelAdapter.
Note: I don't use Spring Cloud Config, so @RefreshScope is not really an option.
@Bean
@InboundChannelAdapter(channel = "sftpStreamChannel",
        poller = @Poller(cron = "${pollerCron}", maxMessagesPerPoll = "-1"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource source = new SftpStreamingMessageSource(template());
    source.setRemoteDirectory(sftpRemoteDirectory);
    source.setFilter(abSftpFileFilter());
    return source;
}
You cannot change the cron expression dynamically; the framework does provide a DynamicPeriodicTrigger which can be used to change the fixed-delay or fixed-rate at runtime (but the change doesn't take effect until the next poll).
You might also find that a smart poller might be suitable for your use case - see "Smart" Polling, where the poller can make decisions about whether or not to proceed with a poll.
You could also create your own Trigger that wraps a CronTrigger and delegates to it; that would allow you to change it at runtime. But, again, changes won't take effect until the next poll.
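A rough sketch of that delegating-trigger idea, assuming Spring Framework 6's Trigger contract (older versions implement Date nextExecutionTime(...) instead); MutableCronTrigger is a name introduced here:

public class MutableCronTrigger implements Trigger {

    private volatile CronTrigger delegate;

    public MutableCronTrigger(String expression) {
        this.delegate = new CronTrigger(expression);
    }

    // call this when the cron expression stored in the DB changes;
    // the new schedule takes effect after the next poll fires
    public void setExpression(String expression) {
        this.delegate = new CronTrigger(expression);
    }

    @Override
    public Instant nextExecution(TriggerContext triggerContext) {
        return delegate.nextExecution(triggerContext);
    }
}

You would then reference this trigger from the poller (for example via the trigger attribute of @Poller, which takes a Trigger bean name) instead of the cron attribute.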
