Spring Integration Flow: circuit breaker for each endpoint or at flow level

I have successfully implemented some Spring Integration flows.
I am looking to add a circuit breaker, either the same one for each endpoint or one at the flow level.
I have already read this documentation https://docs.spring.io/spring-integration/reference/html/handler-advice.html, but I haven't found my answer.
Should I use some AOP?
Thanks,
G.

I'm not sure what you have missed in the mentioned docs, but RequestHandlerCircuitBreakerAdvice is indeed over there: https://docs.spring.io/spring-integration/reference/html/handler-advice.html#circuit-breaker-advice
Advice like this is applied in the Java DSL with this configuration option:
.transform(..., c -> c.advice(expressionAdvice()))
Pay attention to the advice(expressionAdvice()) call: expressionAdvice() is a bean method. You can do something similar with a RequestHandlerCircuitBreakerAdvice for any of the endpoints in your flow that need to be guarded by the circuit.
And yes, you can use a single bean for the RequestHandlerCircuitBreakerAdvice: it keeps separate state for each endpoint it is called against:
protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) {
    AdvisedMetadata metadata = this.metadataMap.get(target);
    if (metadata == null) {
        this.metadataMap.putIfAbsent(target, new AdvisedMetadata());
        metadata = this.metadataMap.get(target);
    }
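For example, a minimal sketch (the bean, service, and method names here are assumptions, not from the question) of one advice bean guarding two endpoints in a flow:

@Bean
public RequestHandlerCircuitBreakerAdvice circuitBreakerAdvice() {
    RequestHandlerCircuitBreakerAdvice advice = new RequestHandlerCircuitBreakerAdvice();
    advice.setThreshold(5);           // open the circuit after 5 consecutive failures
    advice.setHalfOpenAfter(15_000);  // allow another attempt after 15 seconds
    return advice;
}

@Bean
public IntegrationFlow guardedFlow() {
    return f -> f
            // the same advice bean guards both endpoints; state is tracked per endpoint
            .handle("someService", "process", e -> e.advice(circuitBreakerAdvice()))
            .handle("anotherService", "send", e -> e.advice(circuitBreakerAdvice()));
}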

Thanks for your answer, @artem-bilan.
I really appreciate that a Spring Integration team member answered this.
After more thought, I have reformulated my problem.
Given an IntegrationFlow with a specific error channel, if there are more than a given number of errors in a given time span (more than 10 errors in 10 s), I want to stop polling the input channel.
So I redirect all the errors for this flow to the flow's specific error channel.
An error counter is incremented, and if the threshold is reached within the time span, I stop the poller.
A second flow monitors the "stopped" pollers and restarts them after some time, as in the sketch below.
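A minimal sketch of that idea (the adapter bean name, window, threshold, and back-off are placeholders, not my actual code; assumes @EnableScheduling for the restart task):

@Autowired
private SourcePollingChannelAdapter filesInboundAdapter; // hypothetical poller bean for the input channel

private final Deque<Long> errorTimes = new ConcurrentLinkedDeque<>();

@Bean
public IntegrationFlow flowErrorHandling() {
    // errors from the guarded flow are redirected to "flowErrorChannel"
    return IntegrationFlows.from("flowErrorChannel")
            .handle(message -> {
                long now = System.currentTimeMillis();
                errorTimes.addLast(now);
                // keep only the errors that occurred within the last 10 seconds
                while (!errorTimes.isEmpty() && now - errorTimes.peekFirst() > 10_000) {
                    errorTimes.pollFirst();
                }
                if (errorTimes.size() > 10 && filesInboundAdapter.isRunning()) {
                    filesInboundAdapter.stop(); // "open the circuit": stop polling the input channel
                }
            })
            .get();
}

// second flow/task: restart "stopped" pollers after a back-off period
@Scheduled(fixedDelay = 30_000)
public void restartStoppedPollers() {
    if (!filesInboundAdapter.isRunning()) {
        filesInboundAdapter.start();
    }
}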
[UPDATE]
I did use your recommendations.
Mainly because if the framework doesn't solve your problem, you are probably wrong.
And I was wrong.
Thanks!

Related

Logging MicroProfile fault tolerance events

I am working on a Quarkus app that uses the SmallRye MicroProfile Fault Tolerance implementation.
We have configured fault tolerance on the client definitions via the annotations API (@Retry, @Bulkhead, etc.), and it seems to work, but we don't get any feedback about what is happening. Ideally we would like some sort of callback, but even just having logs would help as a first step.
The REST clients look something like this:
@RegisterRestClient(configKey = "foo-backend")
@Path("/backend")
interface FooClient {

    @POST
    @Retry(maxRetries = 4, delay = 900)
    @ExponentialBackoff
    @Timeout(value = 3000)
    fun getUser(payload: GetFooUserRequest): GetFooUserResponse
}
Looking at the logs, even though we trace all communication, I cannot see any event, even if I manually stop foo-backend and start it again before the retries run out.
Our logging config looks like this right now, but still nothing:
quarkus.rest-client.logging.scope=request-response
quarkus.rest-client.logging.body-limit=2048
quarkus.log.category."org.jboss.resteasy.reactive.client.logging".level=DEBUG
Is there a way to get callbacks when a fault tolerance event happens? Or a setting which logs them? I would also be interested in knowing when our circuit breakers are triggered or when a bulkhead fills up. Logging them would be good enough for now, but ideally I would like to somehow listen for them.
You can enable DEBUG logging for the io.smallrye.faulttolerance category, and you should get all the information you need.
Specifically for circuit breakers, you can register state change listeners for circuit breakers that have been given a name using @CircuitBreakerName -- just inject CircuitBreakerMaintenance and use onStateChange. See https://smallrye.io/docs/smallrye-fault-tolerance/5.6.0/usage/extra.html#_circuit_breaker_maintenance
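For example, a rough Java sketch (assuming SmallRye Fault Tolerance 5.x package names on Quarkus 2.x, and a hypothetical breaker named "foo-backend-cb" via @CircuitBreakerName on the client method):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import io.quarkus.runtime.StartupEvent;
import io.smallrye.faulttolerance.api.CircuitBreakerMaintenance;
import org.jboss.logging.Logger;

@ApplicationScoped
public class CircuitBreakerStateLogger {

    private static final Logger LOG = Logger.getLogger(CircuitBreakerStateLogger.class);

    // register the listener once at startup
    void onStart(@Observes StartupEvent event, CircuitBreakerMaintenance maintenance) {
        maintenance.onStateChange("foo-backend-cb",
                newState -> LOG.infof("Circuit breaker 'foo-backend-cb' is now %s", newState));
    }
}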
There's unfortunately nothing similar for bulkheads yet.

Spring Integration can’t use multiple Outbound Channel Adapters

I want to write to a channel adapter only if the previous channel adapter write succeeded. I'm trying to do this with:
@Bean
public IntegrationFlow buildFlow() {
    return IntegrationFlows.from(someChannelAdapter)
            .handle(outboundChannelAdapter1)
            .handle(outboundChannelAdapter2)
            .get();
}
But I’m getting the following exception: The ‘currentComponent’ (…ReactiveMessageHandlerAdapter) is a one-way 'MessageHandler’ and it isn’t appropriate to configure ‘outputChannel’. This is the end of the integration flow.
How can I achieve this?
If your handler implementation is one-way, fire-and-forget, then indeed there is no justification to continue the flow. The flow can only go on if the current handler produces a reply, so there is something from which to build a message for the next channel.
In your case .handle(outboundChannelAdapter1) is just void, so the next .handle(outboundChannelAdapter2) has nothing to continue the flow with. So the framework gives you a hint that such a configuration is wrong. It is called a flow for a reason: the result of the current endpoint is the input for the next one. No result, no continuation. How else could it work, in your opinion?
The point is that there needs to be something to write to your second channel adapter. One solution is a PublishSubscribeChannel, which distributes the same input message to all its subscribers. If that fits your expectations, take a look at its support in the Java DSL (a sketch follows at the end of this answer): https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-subflows.
Another way is the RecipientListRouter pattern: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router-implementations-recipientlistrouter.
You may achieve the same with a WireTap as well, but it depends on the business logic of your solution: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-wiretap.
But anyway: you need to understand that the second handler can be called only if there is an input message on its channel. In all the cases shown above, it is exactly the same message you send to the first handler. If your expectations are different, please elaborate on what kind of message you'd like the second handler to receive if the first one does not return anything.
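For the PublishSubscribeChannel option, a minimal sketch reusing the beans from your snippet (with the default, executor-less channel, subscribers are called sequentially on the calling thread, so an exception from the first adapter prevents the second from being called, which is close to your "only if the previous write succeeded" requirement):

@Bean
public IntegrationFlow buildFlow() {
    return IntegrationFlows.from(someChannelAdapter)
            .publishSubscribeChannel(pubsub -> pubsub
                    // both subscribers receive the same input message
                    .subscribe(sub -> sub.handle(outboundChannelAdapter1))
                    .subscribe(sub -> sub.handle(outboundChannelAdapter2)))
            .get();
}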

Spring Integration Usage and Approach Validation

I am testing out Spring Integration to tie together disparate modules within the same Spring Boot application (for now) and services into a unified flow starting from a single entry point.
I am looking for the following clarifications with Spring Integration if possible:
Is the below code the right way to structure flows using the DSL?
In "C" below, can i bubble up the result to the "B" flow?
Is using the DSL vs. the XML the better approach?
How do I correctly "terminate" a flow?
Flow Overview
In the code below, I am just publishing a page to a destination. The overall flow goes like this:
Publisher flow listens for the payload and splits it into parts.
Content flow filters out pages and splits them into parts.
AWS flow subscribes and handles the part.
File flow subscribes and handles the part.
Eventually, there may be additional and very different types of consumers of the Publisher flow which are not content-related, which is why I split the publisher from the content.
A) Publish Flow (publisher.jar):
This is my "main" flow initiated through a gateway. The intent, is that this serves as the entry point to begin trigger all publishing flows.
Receive the message
Preprocess the message and save it.
Split the payload into individual entries contained in it.
Enrich each of the entries with the rest of the data
Put each entry on the output channel.
Below is the code:
@Bean
IntegrationFlow flowPublish()
{
    return f -> f
            .channel(this.publishingInputChannel())
            //Prepare the payload
            .<Package>handle((p, h) -> this.save(p))
            //Split the artifact resolved items
            .split(Package.class, Package::getItems)
            //Find the artifact associated to each item (if available)
            .enrich(
                e -> e.<PackageEntry>requestPayload(
                    m ->
                    {
                        final PackageEntry item = m.getPayload();
                        final Publishable publishable = this.findPublishable(item);
                        item.setPublishable(publishable);
                        return item;
                    }))
            //Send the results to the output channel
            .channel(this.publishingOutputChannel());
}
B) Content Flow (content.jar)
This module's responsibility is to handle incoming "content" payloads (i.e. Page in this case) and split/route them to the appropriate subscriber(s).
Listen on the publisher output channel
Filter the entries by Page type only
Add the original payload to the header for later
Transform the payload into the actual type
Split the page into its individual elements (blocks)
Route each element to the appropriate PubSub channel.
At least for now, the subscribed flows do not return any response; they should just fire and forget, but I would like to know how to bubble up the result when using the pub-sub channel.
Below is the code:
@Bean
@ContentChannel("asset")
MessageChannel contentAssetChannel()
{
    return MessageChannels.publishSubscribe("assetPublisherChannel").get();
    //return MessageChannels.queue(10).get();
}

@Bean
@ContentChannel("page")
MessageChannel contentPageChannel()
{
    return MessageChannels.publishSubscribe("pagePublisherChannel").get();
    //return MessageChannels.queue(10).get();
}

@Bean
IntegrationFlow flowPublishContent()
{
    return flow -> flow
            .channel(this.publishingChannel)
            //Filter for root pages (which contain elements)
            .filter(PackageEntry.class, p -> p.getPublishable() instanceof Page)
            //Put the publishable details in the header
            .enrichHeaders(e -> e.headerFunction("item", Message::getPayload))
            //Transform the item to a Page
            .transform(PackageEntry.class, PackageEntry::getPublishable)
            //Split page into components and put the type in the header
            .split(Page.class, this::splitPageElements)
            //Route content based on type to the subscriber
            .<PageContent, String>route(PageContent::getType, mapping -> mapping
                    .resolutionRequired(false)
                    .subFlowMapping("page", sf -> sf.channel(this.contentPageChannel()))
                    .subFlowMapping("image", sf -> sf.channel(this.contentAssetChannel()))
                    .defaultOutputToParentFlow())
            .channel(IntegrationContextUtils.NULL_CHANNEL_BEAN_NAME);
}
C) AWS Content (aws-content.jar)
This module is one of many potential subscribers to the content-specific flows. It handles each element individually, based on the routed channel published to above.
Subscribe to the appropriate channel.
Handle the action appropriately.
There can be multiple modules with flows that subscribe to the above routed output channels; this is just one of them.
As an example, the "contentPageChannel" could invoke the flowPageToS3 below (in the AWS module) and also a flowPageToFile (in another module).
Below is the code:
@Bean
IntegrationFlow flowAssetToS3()
{
    return flow -> flow
            .channel(this.assetChannel)
            .publishSubscribeChannel(c -> c
                    .subscribe(s -> s
                            .<PageContent>handle((p, h) ->
                            {
                                return this.publishS3Asset(p);
                            })));
}

@Bean
IntegrationFlow flowPageToS3()
{
    return flow -> flow
            .channel(this.pageChannel)
            .publishSubscribeChannel(c -> c
                    .subscribe(s -> s
                            .<Page>handle((p, h) -> this.publishS3Page(p))
                            .enrichHeaders(e -> e.header("s3Command", Command.UPLOAD.name()))
                            .handle(this.s3MessageHandler())));
}
First of all, there is a lot of content in your question: it's too hard to keep all the info in mind while reading. It is your project, so you are confident in the subject, but for us it is something new, and one might give up even reading it, let alone attempting an answer.
Anyway, I'll try to answer your questions from the beginning, although I feel like you're going to start a long discussion of "what?, how?, why?"...
Is the below code the right way to structure flows using the DSL?
It really depends on your logic. It is a good idea to separate it into logical components, but a separate jar for each might be overhead. Looking at your code, it seems to me that you still collect everything into a single Spring Boot application and just @Autowired the appropriate channels into the @Configuration. So, yes, a separate @Configuration is a good idea, but a separate jar is overhead, IMHO.
In "C" below, can i bubble up the result to the "B" flow?
Well, since the story is about publish-subscribe, it is really unusual to wait for a reply. How many replies are you going to get from those subscribers? Right, that is the problem: we can send to many subscribers, but we can't fold replies from all of them into a single return. Coming back to Java code: a method can have several arguments, but only one return value. The same applies here in messaging. Anyway, you may take a look at the Scatter-Gather pattern implementation, sketched below.
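If you really do need replies from several subscribers folded into one result, a rough Scatter-Gather sketch in the Java DSL could look like this (the sub-flows and payload type are made up for illustration; imports for org.springframework.messaging.Message and java.util.stream.Collectors assumed):

@Bean
public IntegrationFlow scatterGatherFlow() {
    return f -> f
            .scatterGather(
                    scatterer -> scatterer
                            .applySequence(true)
                            // two illustrative recipients; in your case these could be the AWS and file sub-flows
                            .recipientFlow(sub -> sub.<String>handle((p, h) -> p + " handled by AWS"))
                            .recipientFlow(sub -> sub.<String>handle((p, h) -> p + " handled by file")),
                    gatherer -> gatherer.outputProcessor(group ->
                            // gather the individual replies into a single list payload
                            group.getMessages()
                                    .stream()
                                    .map(Message::getPayload)
                                    .collect(Collectors.toList())));
}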
Is using the DSL vs. the XML the better approach?
Both are just high-level APIs. Underneath there are the same integration components. With the XML configuration you would arrive at the same distributed solution for your app. I don't see a reason to step back from the Java DSL; at the very least it is less verbose, for you.
I am confused as to how to correctly "terminate" a flow?
That's unclear from such a big description. If you send to S3 or to a file, that is a termination: there is no reply from those components, so there is nowhere to go and nothing more to do. The flow just stops, the same way a Java method with a void return does. If you worry about your entry point gateway, just make it void and don't wait for any replies (see the sketch below). See Messaging Gateway for more info.
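A minimal sketch of such a fire-and-forget entry point (names are assumptions):

@MessagingGateway
public interface PublishingGateway {

    // void return: send the message into the flow and don't wait for a reply
    @Gateway(requestChannel = "publishingInputChannel")
    void publish(Package thePackage);
}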

Project Reactor processors v3.x

We are trying to migrate from 2.x to 3.x.
https://github.com/reactor/reactor-core/issues/375
We have used the EventBus as the event manager in our application (a low-latency FX system) and it works very well for us.
After the change, we decided to give every module its own processor to handle events.
1. Does this usage seem correct from your point of view? Because of the lack of documentation at the current stage, and after reviewing everything we could, we don't really know what to do here.
2. We have tried to use Flux in order to perform an action every X interval.
For example: market data arrives 1000 times in 1 second, but we want to process an update only 4 times a second. After upgrading we are using:
A processor with a buffer, sending to another method.
In this method we have a Flux that gets a list and tries to work in parallel in order to complete its task.
We had 2 major problems:
1. Sometimes we receive a null event, and we cannot find what in our system is sending it; I suppose maybe we are misusing the processor.
//Definition of processor
ReplayProcessor<Event> classAEventProcessor = ReplayProcessor.create();

//Event handler subscribing
public void onMyEventX(Consumer<Event> consumer) {
    Flux<Event> handler = classAEventProcessor.filter(event -> event.getType().equals(EVENT_X));
    handler.subscribe(consumer);
}
In the example above, the event in the handler sometimes is null. Once it is, the stream stops working until we restart the server (because only on restart do we re-create the processor).
2. We have tried to use parallel, but sometimes some of the messages disappeared, so maybe we are misusing the framework.
//On constructor
tickProcessor.buffer(1024, Duration.of(250, ChronoUnit.MILLIS))
        .subscribe(markets -> handleMarkets(markets));

//Handler
Flux.fromIterable(getListToProcess())
        .parallel()
        .runOn(Schedulers.parallel())
        .doOnNext(entryMap -> {
            DoBlockingWork(entryMap);
        })
        .sequential()
        .subscribe();
The intention is that the processor will wake up every 250 ms and invoke the handler. The handler then works with a parallel Flux in order to process better and faster.
*In case DoBlockingWork takes more than 250 ms, I couldn't understand what the behavior would be.
UPDATE:
The EventBus was wrapped by us, and every event was subscribed through the wrapped event manager.
Now we have tried to create an event processor for every module, but it works very slowly. We have used a TopicProcessor with a ThreadExecutor and it is still very slow; the EventBus did the same work at high speed.
Does anyone have any idea? BTW, when I tried to use a DirectProcessor, it seemed to work much better than the TopicProcessor.
Reactor 3 is built around the concept that you should avoid blocking as much as you can, so in your second snippet DoBlockingWork doesn't look good.
How are the events generated? Do you maybe have a listener-based asynchronous API to get them? If so, you could try using Flux.create.
For your use case of "we have 1000 events in 1 second, but only want to process 4", I'd chain a sample operator. For instance, sample(Duration.ofMillis(250)) will divide each second into 4 windows, from which it will only emit the last element.
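Putting the two suggestions together, a rough sketch (marketDataFeed, MarketEvent, and process() are hypothetical stand-ins for your own listener API and handler):

// bridge a listener-based source into a Flux ...
Flux<MarketEvent> markets = Flux.create(sink ->
        marketDataFeed.register(event -> sink.next(event)));

// ... and keep only the latest event of every 250 ms window: at most 4 updates per second
markets.sample(Duration.ofMillis(250))
       .subscribe(event -> process(event));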
The reference guide is being written, as well as a page where you can find links to external articles and learning material. There's a preview of the WIP reference guide here and the learning resources page here.

Spring Integration message released twice from aggregator

I have a Spring Integration flow that starts with a channel inbound adapter and picks up files and passes them through the system as messages.
After a few components, the messages are aggregated at an "Aggregator", from where they are released based on release strategies or by a group timeout of 30 sec.
The downstream processing has a further set of components up to the final one.
The problem I am facing is this:
When I send 33 files, which create 33 "groups/buckets" based on correlation IDs aggregated at the "Aggregator", some of the files or messages seem to be "released" twice. I conclude that because I have a channel interceptor which shows a few messages passing through the "released" channel (right after the aggregator) a second time, after completing the downstream processing successfully the first time. Additionally, this behavior causes my application to not find a file and throw an exception, which I see. This leads me to conclude that the message bucket/group/corrID is somehow being "released" twice.
I have tried to debug this in many ways, but essentially I want to know how a corrID/bucket, after being released and having successfully gone through all downstream components in a single thread, can be "released" again.
My question is, how can I debug this? I want to know what is making this message/bucket re-appear in the aggregator.
My aggregator is as follows,
<int:aggregator id="bufferedFiles" input-channel="inQueueForStage"
                output-channel="released" expire-groups-upon-completion="true"
                send-partial-result-on-expiry="true" release-strategy="releaseHandler"
                release-strategy-method="canRelease"
                group-timeout-expression="size() > 0 ? T(com.att.datalake.ifr.loader.utils.MessageUtils).getAggregatorTimeout(one, #sourceSnapshot) : -1">
    <int:poller fixed-delay="${files.pickup.delay:3000}"
                max-messages-per-poll="${num.files.pickup.per.poll:10}"
                task-executor="executor" />
</int:aggregator>
Explanation of the aggregator: the size() > 0 applies to EACH correlation bucket. Each of the 33 files I am sending will spawn/create a new bucket because of the file name, so the aggregator will have 33 buckets/groups/corrIds, each containing only one file.
So the aggregator SpEL expression simply says: if no release strategy fires, release the bucket/group after 30 secs, provided the group has at least some files.
My Channel inbound adapter is as follows:
<int-file:inbound-channel-adapter id="files"
                                  channel="dispatchFiles" directory="${source.dir}" scanner="directoryScanner">
    <int:poller fixed-delay="${files.pickup.delay:3000}"
                max-messages-per-poll="${num.files.pickup.per.poll:10}" />
</int-file:inbound-channel-adapter>
Logs
Here is the log of the message completing the flow the first time. The completion time suggests it reached the last component, a "completionHandler" SA.
Explanation of the log: "cor" is the bucket/corrId that is being released twice. The reason I get the final exception is that during the first pass the file is removed from its original location and processed, so the second time around, when this erroneous release happens, there is nothing left to process.
From the pictures it can be seen that the first batch/corrId/bucket is processed and finished around 11:09, and the second one is started around 11:10.
An important point I noticed: this behavior only happens when I have a global channel interceptor in which I am doing somewhat long processing. When this interceptor is commented out, the errors go away.
Question:
Is it possible for the aggregator to double-release a batch/corrId under any circumstance? How can I make the aggregator emit any logs?
Thanks
Edit 10:15pm
The channel following the aggregator has an interceptor as follows:
public Message<?> preSend(Message<?> message, MessageChannel channel) {
    LOGGER.info("******** Releasing from aggregator(interceptor) , corrID:{} at time:{} ********",
            MessageUtils.getCorrelationId(message), new Date());
    finalReporter.callback(channel.toString(), message);
    return message;
}
From the aggregator down to the final completionHandler SA, I have single-threaded processing:
Aggregator -> releasedChannel -> some SA1 -> some channel -> ..... -> completionChannel->completeSA
When I run with 33 partitions, let's follow corrId = "alh". The first time it is released, it looks like the following:
What it shows is that thread-5 released it and should process all the downstream components. But it leaves off mid-way, starts doing other things, and the message is picked up again by a different thread a little later, as follows:
That seems/seemed to be the problem.
Solution update:
I did the following 3 things to work around it, for the moment:
For some reason, my interceptors were doing return super.preSend(message, channel) instead of simply return message. I changed it to the latter.
I had global channel interceptors; I removed the global ones and kept individual ones.
If the channel interceptors had any issues before returning, would that cause a new release?
Although I still see the scenario depicted in the pictures above, I am not getting double processing attempts, and as such it avoids the errors. I am still trying to make sense of this.
I understand it's too specific and difficult to explain; still, thanks for the time and comments.
However, yes, I think @GaryRussell is right: since you use expire-groups-upon-completion="true", some partial groups may be released by the group-timeout-expression, and new messages with the same correlationId then form a new group, which is released by the next group timeout. Your size() > 0 isn't good either: it means a partial group is going to be released after that group timeout. Maybe size() > 1? A group can't be size() == 0 anyway, because it is created on the first message, so if a group exists it contains at least one message. Yes, a group can be empty, but in that case the aggregator should be marked with expire-groups-upon-completion="false"; the group is then marked as completed and doesn't allow new messages.
After struggling with debugging and various blind scenarios, I believe I have at least a workaround and a possible root cause. I will try to outline all the things that I modified.
Root Cause:
My interceptors were calling a common class with a common callback method. This method, based on the name of the channel the request was coming from, would decide the appropriate action to take. The actions were essentially collecting data, incrementing counters and persisting some information to the database.
It seems that some of them were hitting errors and, consequently, the thread was dying and the message was re-released. I am not entirely sure about that, so please correct me if it's not the case.
But after I fixed those errors, the re-release issue seems to have subsided or vanished altogether.
The reason it was hard to diagnose was that I could not see the errors thrown during the callback method invocations; maybe I was catching them, or maybe they were lost.
I also found that the issue appeared only with channel interceptors AFTER the aggregator. Interceptors before the aggregator did not present any issues, maybe because they were simpler.
To debug,
I removed the interceptors and made the callback directly from various components (SAs), removed the global interceptors, and tried to add individual interceptors for specific channels.
Thanks for all the help.
