Spring Integration splitter and aggregator configuration

I have a scenario where I need to invoke system A's and system B's REST calls in parallel, aggregate the responses, and transform them into a single FinalResponse.
To achieve this I am using a Spring Integration splitter and aggregator, with the configuration given below.
I have exposed a REST endpoint; when a request (which has a correlationId in the header) comes to the controller, we invoke the gateway and the splitter sends requests to channels A and B.
Service activator A listens to channel A and invokes system A's REST call; service activator B listens to channel B and invokes system B's REST call.
Then I need to aggregate the responses from systems A and B and transform them into the FinalResponse. Currently the aggregation and transformation are working fine.
Sometimes, when multiple requests come to the controller, the FinalResponse takes more time compared to a single request. All the responses to the requests arrive at almost the same time, and I am not sure why (even though the last request to the controller was sent 6-7 seconds after the first). Is there something wrong in my configuration related to threads? I am not sure why it takes more time to respond when multiple requests come to the controller.
Also, I am not using any CorrelationStrategy; do we need to use one? Will I face any issues with the configuration below in a multi-threaded environment? Any feedback on the configuration would be helpful.
// Controller
{
FinalResponse aggregatedResponse = gateway.collateServiceInformation(inputData);
}
//Configuration
@Autowired
Transformer transformer;
//Gateway
@Bean
public IntegrationFlow gatewayServiceFlow() {
return IntegrationFlows.from("input_channel")
.channel("split_channel").get();
}
//splitter
@Bean
public IntegrationFlow splitAggregatorFlow() {
return IntegrationFlows.from("split_channel").
.split(SomeClass.class, SomeClass::getResources)
.channel(c -> c.executor(Executors.newCachedThreadPool()))
.<Resource, String>route(Resource::getName,
mapping -> mapping.channelMapping("A", "A")
.channelMapping("B", "B"))
.get();
}
//aggregator
@Bean
public IntegrationFlow aggregateFlow() {
return IntegrationFlows.from("aggregate_channel").aggregate()
.channel("transform_channel").transform(transformer).get();
}
.
.
.
//Transformer
@Component
@Scope("prototype")
public class Transformer {
@Transformer
public FinalResponse transform(final List<Result> responsesFromAAndB) {
//transformation logic and then return final response
}
}
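For context, the service-activator flows for channels A and B (described in the question but not shown in the config) might look roughly like the sketch below; the systemAService/systemBService bean names and their invoke methods are assumptions, not beans from the post:
@Bean
public IntegrationFlow systemAFlow() {
    return IntegrationFlows.from("A")
            // hypothetical handler bean that performs system A's REST call
            .handle("systemAService", "invoke")
            .channel("aggregate_channel")
            .get();
}

@Bean
public IntegrationFlow systemBFlow() {
    return IntegrationFlows.from("B")
            // hypothetical handler bean that performs system B's REST call
            .handle("systemBService", "invoke")
            .channel("aggregate_channel")
            .get();
}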

The splitter provides a default correlation strategy: it populates the correlation details in the message headers, and the aggregator uses them afterwards, so you don't need your own CorrelationStrategy. What you are talking about is called scatter-gather: https://docs.spring.io/spring-integration/docs/5.0.8.RELEASE/reference/html/messaging-routing-chapter.html#scatter-gather. There is a Java DSL equivalent.
I think your problem is that some request in the split set fails, so the aggregator can't finish a group for that request. Nothing obvious so far in your config...
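For reference, a minimal scatter-gather sketch in the Java DSL could look like the following; the serviceA/serviceB handler beans and the FinalResponse constructor are assumptions, not beans from the question:
@Bean
public IntegrationFlow scatterGatherFlow() {
    return IntegrationFlows.from("input_channel")
            .scatterGather(
                    // scatter: send a copy of the request to both subflows in parallel
                    scatterer -> scatterer
                            .applySequence(true)
                            .recipientFlow(f -> f
                                    .channel(c -> c.executor(Executors.newCachedThreadPool()))
                                    .handle("serviceA", "call"))   // hypothetical bean/method
                            .recipientFlow(f -> f
                                    .channel(c -> c.executor(Executors.newCachedThreadPool()))
                                    .handle("serviceB", "call")),  // hypothetical bean/method
                    // gather: the default correlation collects both replies into one group
                    gatherer -> gatherer.outputProcessor(group ->
                            new FinalResponse(group.getMessages())))  // hypothetical constructor
            .get();
}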

Related

Spring Reactor and consuming websocket messages

I'm creating a Spring Reactor application to consume messages from a WebSocket server, transform them, and later save them to Redis and some SQL database; saving to Redis and the SQL database is also reactive. Also, before being written to Redis and the SQL database, the messages will be windowed (with different timespans) and aggregated.
I'm not sure whether the way I've accomplished what I want to achieve is properly reactive, i.e. whether I am losing any of the reactive benefits (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {
public ConnectableFlux<String> webSocketFlux() {
return Flux.<String>create(emitter -> {
createWebSocketClient()
.execute(URI.create("wss://some-url-goes-here.com"), session -> {
WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");
Flux<String> flux = session.send(Mono.just(initialMessage))
.thenMany(session.receive())
.map(WebSocketMessage::getPayloadAsText)
.doOnNext(emitter::next);
Flux<String> sessionStatus = session.closeStatus()
.switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
.map(CloseStatus::toString)
.doOnNext(emitter::next)
.flatMapMany(Flux::just);
return flux
.mergeWith(sessionStatus)
.then();
})
.subscribe(); //1: highlighted by IntelliJ IDEA: `Calling 'subscribe' in non-blocking context`
})
.publish();
}
private ReactorNettyWebSocketClient createWebSocketClient() {
return new ReactorNettyWebSocketClient(
HttpClient.create(),
() -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
);
}
}
And
@Service
class WebSocketMessageDispatcher {
private final WebSocketsConsumer webSocketsConsumer;
private final Consumer<String> reactiveRedisConsumer;
private final Consumer<String> reactiveJdbcConsumer;
private Disposable webSocketsDisposable;
WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
this.webSocketsConsumer = webSocketsConsumer;
this.reactiveRedisConsumer = redisConsumer;
this.reactiveJdbcConsumer = dbConsumer;
}
@EventListener(ApplicationReadyEvent.class)
public void onReady() {
ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
messages.subscribe(reactiveRedisConsumer);
messages.subscribe(reactiveJdbcConsumer);
webSocketsDisposable = messages.connect();
}
@PreDestroy
public void onDestroy() {
if (webSocketsDisposable != null) webSocketsDisposable.dispose();
}
}
Questions:
Is this a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't be, as I want them to happen in the background, and they will also aggregate messages over different time windows. Also note comment 1 in the code above, where IDEA lints my code; the code works, but I wonder what this lint may result in. Maybe I should use doOnNext not to call emitter::next but to invoke some message dispatcher there, with a function like doOnNext(dispatcher::dispatchMessage)?
I want the WebSocket client to start immediately after the application is ready and to stop consuming messages when the application shuts down. Are @EventListener(ApplicationReadyEvent.class) and @PreDestroy, used as in the code above, the proper way to handle this scenario in the reactive world?
As I said, saving to Redis and the SQL database is also reactive, i.e. those saves also produce Mono<T>. Is subscribing to those Monos inside the subscribe of the WebSocket flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
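On the windowing point, a minimal sketch of keeping the writes inside the pipeline (rather than in separate subscribe() calls) might look like this; the ReactiveStringRedisTemplate, the key name, and the 10-second window are assumptions:
import java.time.Duration;
import org.springframework.data.redis.core.ReactiveStringRedisTemplate;
import reactor.core.publisher.Flux;

public Flux<Long> windowedRedisWrites(Flux<String> messages,
                                      ReactiveStringRedisTemplate redis) {
    return messages
            .window(Duration.ofSeconds(10))               // cut the stream into 10s windows
            .flatMap(Flux::collectList)                   // aggregate each window into a batch
            .filter(batch -> !batch.isEmpty())
            .flatMap(batch -> redis.opsForList()
                    .rightPushAll("ws:messages", batch)); // reactive, non-blocking save
}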

Spring Integration Flow with @RestController timing issue

A simple @RestController is connected via a @MessagingGateway to an IntegrationFlow.
After a load test we saw in the tracing that we lose "a lot of time" before the processing within the flow even starts:
Tracing result
In this example we can see that over 90 ms are spent before the message is sent to the flow.
Does anyone have an idea what leads to this behavior?
As far as I understood the documentation, everything is handled in the sender thread, and therefore no special worker threads are created.
We use the RestController since we need to create the documentation with springdoc-openapi-ui.
Example code:
RestController
@RestController
public class DescriptionEndpoint {
HttpMessageGateway httpMessageGateway;
public Result findData(@Valid dataRequest dataRequest) {
final Map<String, Object> headerParams = new HashMap<>();
return httpMessageGateway.basicDataDescriptionFlow(dataRequest, headerParams);
}
}
Gateway
@MessagingGateway
public interface HttpMessageGateway {
@Gateway(requestChannel = "startDataFlow.input")
Result basicDataDescriptionFlow(@Payload dataRequest prDataRequest, @Headers Map<String, Object> map);
}
IntegrationFlow
public class ExampleFlow {
@Bean
public IntegrationFlow startDataFlow() {
return new FlowExtension()
.handle(someHandler1)
.handle(someHandler2)
.handle(someHandler3)
.get();
}
}
After adding some more traces I realized that this timing issue was caused by my Spring Security configuration.
Unfortunately, I thought the span only represented the time after the start of findData(..). But it seems the tracing already starts in the proxy methods and the security chain.
After improving some of the implementation of our JWT token filter, the times spent on these endpoints are OK.

Spring Integration Framework - Slowness in registering flows dynamically

We are developing a Spring Boot (2.4.0) application that uses the Spring Integration framework (5.4.1) to build SOAP integration flows and register them dynamically. The time taken to register an IntegrationFlow with the FlowContext increases exponentially as the number of flows being registered grows.
Following is a quick snapshot of time taken to register flows:
5 flows – 500 ms
100 flows – 80 sec
300 flows – 300 sec
We see that the first few flows take about 100 ms each to register, and as we reach 300 flows it takes up to 7 seconds to register each flow. These flows are identical in nature (they simply log an info message and return).
Any help to resolve this issue would be highly appreciated.
SoapFlowsAutoConfiguration.java (auto-configuration class that registers flows dynamically (manually))
@Bean
public UriEndpointMapping uriEndpointMapping(
ServerProperties serverProps,
WebServicesProperties webServiceProps,
IntegrationFlowContext flowContext,
FlowMetadataProvider flowMetadataProvider,
@ErrorChannel(Usage.SOAP) Optional<MessageChannel> errorChannel,
BeanFactory beanFactory) {
UriEndpointMapping uriEndpointMapping = new UriEndpointMapping();
uriEndpointMapping.setUsePath(true);
Map<String, Object> endpointMap = new HashMap<>();
flowMetadataProvider
.flowMetadatas()
.forEach(
metadata -> {
String contextPath = serverProps.getServlet().getContextPath();
String soapPath = webServiceProps.getPath();
String serviceId = metadata.id();
String serviceVersion = metadata.version();
String basePath = contextPath + soapPath;
String endpointPath = String.join("/", basePath, serviceId, serviceVersion);
SimpleWebServiceInboundGateway inboundGateway = new SimpleWebServiceInboundGateway();
errorChannel.ifPresent(inboundGateway::setErrorChannel);
endpointMap.put(endpointPath, inboundGateway);
IntegrationFlowFactory flowFactory = beanFactory.getBean(metadata.flowFactoryClass());
IntegrationFlow integrationFlow =
IntegrationFlows.from(inboundGateway).gateway(flowFactory.createFlow()).get();
flowContext.registration(integrationFlow).register();
});
uriEndpointMapping.setEndpointMap(endpointMap);
return uriEndpointMapping;
}
SoapFlow.java (Integration Flow)
@Autowired private SoapFlowResolver soapFlowResolver;
@Autowired private CoreFlow delegate;
@Override
public IntegrationFlow createFlow() {
IntegrationFlow a =
flow -> flow.gateway(soapFlowResolver.resolveSoapFlow(delegate.createFlow()));
return a;
}
SoapFlowResolver.java (common class used by all integration flows to delegate the request to a CoreFlow that implements the business logic)
public IntegrationFlow resolveSoapFlow(
IntegrationFlow coreFlow) {
return flow -> {
flow.gateway(coreFlow);
};
}
CoreFlow.java (Class that handles the business logic)
@Override
public IntegrationFlow createFlow() {
return flow -> flow.logAndReply("Reached CoreFlow");
}
You are creating too many beans, where each of them checks the rest to see whether it wasn't created before. That's how you get an increase in start time as you add more and more flows dynamically.
What I see is an abuse of the dynamic-flows purpose. Each time we decide to go this way we need to think twice about whether we definitely need the whole flow as a fresh instance. Again: a flow is not a volatile object; it registers a bunch of beans in the application context which are going to stay there until you remove them. And they are singletons, so they can be reused anywhere else in your application.
Another concern is that you are not taking advantage of the best feature of Spring Integration: its MessageChannel pattern implementation. You can definitely have some common flows in advance and connect your dynamic flows to them through a channel between them. You probably just need to create a SimpleWebServiceInboundGateway dynamically and wire it to the channel for your target logic, which is the same for all the flows, and so on.
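A rough sketch of that suggestion, reusing the classes from the question; the commonLogicChannel name is an assumption. One static flow carries the shared logic, and each dynamic registration only adds the inbound gateway:
// Static, registered once: the shared business logic.
@Bean
public IntegrationFlow sharedCoreFlow() {
    return IntegrationFlows.from("commonLogicChannel")
            .logAndReply("Reached CoreFlow");
}

// Dynamic, per endpoint: only the inbound gateway is registered,
// pointed at the shared channel.
SimpleWebServiceInboundGateway inboundGateway = new SimpleWebServiceInboundGateway();
IntegrationFlow dynamicFlow = IntegrationFlows
        .from(inboundGateway)
        .channel("commonLogicChannel")
        .get();
flowContext.registration(dynamicFlow).register();
This keeps the number of beans created per dynamic registration to a minimum, which is what drives the registration time down.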

Spring Integration with DSL: Can File Outbound Channel Adapter create a file after, say, a 10-minute interval

I have a requirement where my application should read messages from MQ and write them using a file outbound channel adapter. I want each of my output files to contain the messages from one 10-minute interval. Does any default implementation exist, or are there any pointers on how to do this?
public @Bean IntegrationFlow defaultJmsFlow()
{
return IntegrationFlows.from(
//read JMS topic
Jms.messageDrivenChannelAdapter(this.connectionFactory).destination(this.config.getInputQueueName()).errorChannel(errorChannel()).configureListenerContainer(c ->
{
final DefaultMessageListenerContainer container = c.get();
container.setSessionTransacted(true);
container.setMaxMessagesPerTask(-1);
}).get())
.channel(messageProcessingChannel()).get();
}
public @Bean MessageChannel messageProcessingChannel()
{
return MessageChannels.queue().get();
}
public @Bean IntegrationFlow messageProcessingFlow() {
return IntegrationFlows.from(messageProcessingChannel())
.handle(Files.outboundAdapter(new File(config.getWorkingDir()))
.fileNameGenerator(fileNameGenerator())
.fileExistsMode(FileExistsMode.APPEND).appendNewLine(true))
.get();
}
First of all, you could use something like a QueueChannel with a poller on the endpoint for the FileWritingMessageHandler, with a fixedDelay of those 10 minutes. However, you should keep in mind that the messages are going to be stored in memory until the poller does its work; if your application crashes, those messages are lost.
On the other hand, you can use a JmsDestinationPollingSource with a similar poller configuration. This way, however, you need to configure it with maxMessagesPerPoll(-1) to let it pull as many messages from the MQ as possible during a single polling task - once every 10 minutes.
Another variant is possible with an aggregator and its groupTimeout option. This way you won't get an output message from the aggregator until the 10-minute interval passes. However, again: the store is in memory by default. I wouldn't introduce one more persistent store just to satisfy a periodic requirement when we already have an MQ and we really can poll exactly that. Therefore I would go with the JmsDestinationPollingSource variant.
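For completeness, a sketch of the aggregator variant might look like this; the single-group correlation strategy and the downstream channel name are assumptions:
@Bean
public IntegrationFlow batchingFlow() {
    return IntegrationFlows.from(messageProcessingChannel())
            .aggregate(a -> a
                    .correlationStrategy(m -> "batch")     // collect everything into one group
                    .releaseStrategy(g -> false)           // release only via the timeout
                    .groupTimeout(600_000)                 // 10 minutes
                    .sendPartialResultOnExpiry(true)
                    .expireGroupsUponCompletion(true))
            // the aggregated payload is a List of the batched payloads
            .<List<String>, String>transform(batch -> String.join("\n", batch))
            .channel("fileWritingChannel")                 // hypothetical downstream channel
            .get();
}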
UPDATE
Can you help me with how to set a fixed delay in the file outbound adapter?
Since you deal with a QueueChannel, you need to configure a PollingConsumer endpoint for the "fixed delay". This really belongs to the subscriber of that channel - indeed, it is the .handle(Files.outboundAdapter) part. The only thing you are missing is that the Poller is an option of the endpoint, not of the MessageHandler. Consider using an overloaded handle() variant:
.handle(Files.outboundAdapter(new File(config.getWorkingDir()))
.fileNameGenerator(fileNameGenerator())
.fileExistsMode(FileExistsMode.APPEND).appendNewLine(true),
e -> e.poller(p -> p.fixedDelay(10000)))
Or a sample for the JmsDestinationPollingSource variant:
@Bean
public IntegrationFlow jmsInboundFlow() {
return IntegrationFlows
.from(Jms.inboundAdapter(cachingConnectionFactory())
.destination("jmsInbound"),
e -> e.poller(p -> p.fixedDelay(10000)))
.<String, String>transform(String::toUpperCase)
.channel(jmsOutboundInboundReplyChannel())
.get();
}

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea about my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler(){
@Override
public void handle(String message){
//... business logic for the received message
}
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" a piece of processing functionality to an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided MessageHandler (from my API) to the incoming messages for the specified queue name.
In the configuration of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, nor to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am going an unusual way: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, with which I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I made an example that iterates over the queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
return AmazonSQSAsyncClientBuilder
.standard()
.withRegion(Regions.EU_WEST_1.getName())
.build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;
@Scheduled(fixedDelay = 10000)
public void poll() {
properties.getSqsQueues()
.forEach(queue -> {
val receiveMessageRequest = new ReceiveMessageRequest(queue)
.withWaitTimeSeconds(10)
.withMaxNumberOfMessages(10);
// reading the messages
val result = awsSqsClient.receiveMessage(receiveMessageRequest);
val sqsMessages = result.getMessages();
log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
// deleting the messages
sqsMessages.forEach(message -> {
val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
awsSqsClient.deleteMessage(deleteMessageRequest);
});
});
}
Just to clarify: in my case I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case you could get the queue names from another source, maybe a ThreadLocal holding the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
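A rough sketch of that JMS route, using the Amazon SQS Java Messaging Library (amazon-sqs-java-messaging-lib); the queue name and the wiring to the question's MessageHandler interface are assumptions:
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public void register(String queueName, MessageHandler handler) throws JMSException {
    SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
            new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
    Connection connection = connectionFactory.createConnection();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
    // delegate each incoming message to the user-registered handler
    consumer.setMessageListener(message -> {
        try {
            handler.handle(((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    });
    connection.start();
}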
When we do Spring and SQS we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {
@SqsListener(value = "myqueue")
public void listen(MyMessageType message) {
//process the message
}
}
