Spring Integration with DSL: Can the File Outbound Channel Adapter create a file after, say, a 10-minute interval?

I have a requirement where my application should read messages from MQ and write them using a file outbound channel adapter. I want each output file to contain the messages received during a 10-minute interval. Does any default implementation exist, or are there any pointers on how to do this?
@Bean
public IntegrationFlow defaultJmsFlow() {
    return IntegrationFlows.from(
            // read the JMS topic
            Jms.messageDrivenChannelAdapter(this.connectionFactory)
                    .destination(this.config.getInputQueueName())
                    .errorChannel(errorChannel())
                    .configureListenerContainer(c -> {
                        final DefaultMessageListenerContainer container = c.get();
                        container.setSessionTransacted(true);
                        container.setMaxMessagesPerTask(-1);
                    })
                    .get())
            .channel(messageProcessingChannel())
            .get();
}
@Bean
public MessageChannel messageProcessingChannel() {
    return MessageChannels.queue().get();
}
@Bean
public IntegrationFlow messageProcessingFlow() {
    return IntegrationFlows.from(messageProcessingChannel())
            .handle(Files.outboundAdapter(new File(config.getWorkingDir()))
                    .fileNameGenerator(fileNameGenerator())
                    .fileExistsMode(FileExistsMode.APPEND)
                    .appendNewLine(true))
            .get();
}

First of all, you could use something like a QueueChannel with a poller on the endpoint for the FileWritingMessageHandler, using a fixedDelay of those 10 minutes. However, keep in mind that the messages are stored in memory until the poller does its work, so if your application crashes, those messages are lost.
On the other hand, you can use a JmsDestinationPollingSource with a similar poller configuration. This way, however, you need to configure it with maxMessagesPerPoll(-1) to let it pull as many messages from the MQ as possible during a single polling task, once every 10 minutes.
Another variant is possible with an aggregator and its groupTimeout option; this way you won't have an output message from the aggregator until the 10-minute interval passes. However, again, the store is in memory by default. I wouldn't introduce one more persistence store just to satisfy a periodic requirement when we already have an MQ and can really poll exactly that. Therefore I would go with the JmsDestinationPollingSource variant.
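For completeness, a minimal sketch of that aggregator variant (the correlation and release settings here are assumptions for a single rolling 10-minute group, and it assumes String payloads; the poller is needed because messageProcessingChannel() above is a queue channel):
@Bean
public IntegrationFlow batchingFlow() {
    return IntegrationFlows.from(messageProcessingChannel())
            .aggregate(a -> a
                    .poller(p -> p.fixedDelay(1000))   // the input is a queue channel
                    .correlationStrategy(m -> "batch") // hypothetical: everything in one rolling group
                    .releaseStrategy(g -> false)       // never release by size
                    .groupTimeout(600_000)             // flush the group after 10 minutes
                    .sendPartialResultOnExpiry(true)   // emit whatever has accumulated
                    .expireGroupsUponCompletion(true)) // start a fresh group afterwards
            .<List<String>, String>transform(list -> String.join("\n", list)) // assumes String payloads
            .handle(Files.outboundAdapter(new File(config.getWorkingDir()))
                    .fileNameGenerator(fileNameGenerator())
                    .fileExistsMode(FileExistsMode.APPEND)
                    .appendNewLine(true))
            .get();
}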
UPDATE
Can you help me with how to set the fixed delay on the file outbound adapter?
Since you deal with a QueueChannel, the "fixed delay" has to be configured on a PollingConsumer endpoint. That endpoint really belongs to the subscriber of the channel; in this case, that is the .handle(Files.outboundAdapter(...)) part. What you are missing is that the poller is an option of the endpoint, not of the MessageHandler. Consider using an overloaded handle() variant:
.handle(Files.outboundAdapter(new File(config.getWorkingDir()))
                .fileNameGenerator(fileNameGenerator())
                .fileExistsMode(FileExistsMode.APPEND)
                .appendNewLine(true),
        e -> e.poller(p -> p.fixedDelay(10000)))
Or, a sample for the JmsDestinationPollingSource variant:
@Bean
public IntegrationFlow jmsInboundFlow() {
    return IntegrationFlows
            .from(Jms.inboundAdapter(cachingConnectionFactory())
                            .destination("jmsInbound"),
                    e -> e.poller(p -> p.fixedDelay(10000)))
            .<String, String>transform(String::toUpperCase)
            .channel(jmsOutboundInboundReplyChannel())
            .get();
}
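For the 10-minute batching from the question, that poller line would become something like this sketch (600000 ms for the interval, -1 to drain the queue on each poll):
e -> e.poller(p -> p.fixedDelay(600_000).maxMessagesPerPoll(-1))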

Related

Spring Integration + MQTT parallelization

I wrote a Spring application using Integration flows that reads some MQTT messages and puts them in incomingMqttMessageChannel:
@Bean
public IntegrationFlow incomingMqttMessageFlow() {
    return IntegrationFlows.from(mqttPahoMessageDrivenChannelAdapter())
            .channel("incomingMqttMessageChannel")
            .get();
}

public MqttPahoMessageDrivenChannelAdapter mqttPahoMessageDrivenChannelAdapter() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(
            mqttBroker, UUID.randomUUID().toString(), incomingMqttTopic);
    // ...
}
// ...
And then I use some Spring Integration annotations to process the messages in incomingMqttMessageChannel, e.g.:
@Transformer(inputChannel = "incomingMqttMessageChannel", outputChannel = "entityChannel")
public Entity transform(byte[] mqttMessage) {
    // transform the MQTT message to another Entity
}
I performed some tests and I realized that with this code messages were processed one by one.
I want to process the MQTT messages I receive in parallel using a thread pool, not running several Spring applications.
According to this the MqttPahoMessageDrivenChannelAdapter is single-threaded.
Is there any way to parallelize message processing in this case? Which are the options I have?
Thanks in advance.
Make your incomingMqttMessageChannel an executor channel:
.channel(c -> c.executor("incomingMqttMessageChannel", threadPoolTaskExecutor))
This way your MQTT messages are going to be consumed from that channel on threads of that executor.
See more info in docs: https://docs.spring.io/spring-integration/reference/html/core.html#executor-channel
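For example, a minimal sketch of the wiring (the threadPoolTaskExecutor bean and its pool sizes are assumptions; tune for your load):
@Bean
public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(4);  // assumed pool sizing
    executor.setMaxPoolSize(10);
    return executor;
}

@Bean
public IntegrationFlow incomingMqttMessageFlow(ThreadPoolTaskExecutor threadPoolTaskExecutor) {
    return IntegrationFlows.from(mqttPahoMessageDrivenChannelAdapter())
            // hand each message off to the executor instead of a direct channel
            .channel(c -> c.executor("incomingMqttMessageChannel", threadPoolTaskExecutor))
            .get();
}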

Spring Integration - Inbound channel Adapter to execute down stream channel parallel processing

I am trying to configure the below workflow, all in annotations:
An inbound channel adapter with a poller (cron trigger) scheduled to run every 30 minutes
Poll the files from a directory (say 10 files) and move them to a staging directory
For each file, invoke a batch job, i.e. 10 jobs should run in parallel for the different files polled
I am able to achieve everything but am unable to configure the downstream executor channel to run the jobs in parallel.
Below is the reference implementation. Everything is working, i.e. a job is launched file after file, but it needs to launch the jobs for the different files in parallel.
I'd appreciate any help on this.
@InboundChannelAdapter(channel = "incomingChannel", poller = @Poller("customPoller"))
public MessageSource<File> pollFile() {
    // directory-scanner-based MessageSource
}

@Bean
public PollerMetadata customPoller(MessagePublishingErrorHandler errorHandler) {
    // poller with a cron trigger for every 10 minutes
}

@ServiceActivator(inputChannel = "incomingChannel")
public MessageHandler fileWriterToTempDirectory() {
    // writes the file to the staging directory; output channel: tempDirChannel
}

@ServiceActivator(inputChannel = "tempDirChannel")
public MessageHandler tempDirFileHandler() {
    MethodInvokingMessageHandler messageHandler =
            new MethodInvokingMessageHandler(launcher, "methodName");
    return messageHandler;
}
Poller metadata: I read in some other SO answer that we should not put a task executor on the poller when it is set up with a cron trigger; is that true?
Also, how can I make the messages polled (say 10 messages per poll) execute in parallel, i.e. add a task executor to the poller metadata?
@Bean
public PollerMetadata preProcessPoller(MessagePublishingErrorHandler errorHandler) {
    PollerMetadata poller = new PollerMetadata();
    poller.setTrigger(new CronTrigger("0/15 * * * * ?"));
    poller.setMaxMessagesPerPoll(Long.valueOf(maxMessagesPerPoll));
    errorHandler.setDefaultErrorChannel(errorChannel());
    poller.setErrorHandler(errorHandler);
    return poller;
}
You need to show your complete PollerMetadata configuration.
Best guess is you haven't set maxMessagesPerPoll.
By default, for an inbound channel adapter, maxMessagesPerPoll is 1.
You can add a TaskExecutor to the poller metadata to run the messages in parallel, or make the incoming channel an ExecutorChannel.
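A sketch of the first option against the poller bean shown above (the executor choice and thread name prefix are assumptions):
@Bean
public PollerMetadata preProcessPoller(MessagePublishingErrorHandler errorHandler) {
    PollerMetadata poller = new PollerMetadata();
    poller.setTrigger(new CronTrigger("0/15 * * * * ?"));
    // hand each polled message to a separate thread so the 10 files run in parallel
    poller.setTaskExecutor(new SimpleAsyncTaskExecutor("file-poll-"));
    poller.setMaxMessagesPerPoll(10);
    errorHandler.setDefaultErrorChannel(errorChannel());
    poller.setErrorHandler(errorHandler);
    return poller;
}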

Spring Batch Parallel processing with JMS

I implemented a Spring Batch project that reads from a WebLogic JMS queue (a custom item reader, not message-driven), then passes the JMS message data to an item writer (chunk = 1) where I call some APIs and write to the database.
However, I am trying to implement parallel JMS processing: reading JMS messages in parallel and passing them to the writer without waiting for the previous processing to complete.
I've used a DefaultMessageListenerContainer in a previous project and it offers parallel consumption of JMS messages, but in this project I have to use the Spring Batch framework.
I tried the easiest solution (a multi-threaded step), but it didn't work: JmsException: "invalid blocking receive when another receive is in progress", which probably means my reader is stateful.
I thought about using remote partitioning, but then I'd have to read all the messages and put the data into step execution contexts before calling the slave steps, which isn't really efficient when dealing with a large number of messages.
I looked a little bit into remote chunking; I understand that it passes data via queue channels, but I can't see the utility in reading from JMS and putting the messages in a local queue for the slave workers.
How can I approach this?
My code:
@Bean
Step step1() {
    return steps.get("step1").<Message, DetectionIncoherenceLiqJmsOut>chunk(1)
            .reader(reader()).processor(processor()).writer(writer())
            .listener(stepListener()).build();
}

@Bean
Job job(@Qualifier("step1") Step step1) {
    return jobs.get("job").start(step1).build();
}
JMS code:
@Override
public void initQueueConnection() throws NamingException, JMSException {
    Hashtable<String, String> properties = new Hashtable<String, String>();
    properties.put(Context.INITIAL_CONTEXT_FACTORY, env.getProperty(WebLogicConstant.JNDI_FACTORY));
    properties.put(Context.PROVIDER_URL, env.getProperty(WebLogicConstant.JMS_WEBLOGIC_URL_RECEIVE));
    InitialContext vInitialContext = new InitialContext(properties);
    QueueConnectionFactory vQueueConnectionFactory = (QueueConnectionFactory) vInitialContext
            .lookup(env.getProperty(WebLogicConstant.JMS_FACTORY_RECEIVE));
    vQueueConnection = vQueueConnectionFactory.createQueueConnection();
    vQueueConnection.start();
    vQueueSession = vQueueConnection.createQueueSession(false, 0);
    Queue vQueue = (Queue) vInitialContext.lookup(env.getProperty(WebLogicConstant.JMS_QUEUE_RECEIVE));
    consumer = vQueueSession.createConsumer(vQueue, "JMSCorrelationID IS NOT NULL");
}
@Override
public Message receiveMessages() throws NamingException, JMSException {
    return consumer.receive(20000);
}
Item reader:
@Override
public Message read() throws Exception {
    return jmsServiceReceiver.receiveMessages();
}
Thanks! I'll appreciate the help :)
There's a BatchMessageListenerContainer in the spring-batch-infrastructure-tests sub project.
https://github.com/spring-projects/spring-batch/blob/d8fc58338d3b059b67b5f777adc132d2564d7402/spring-batch-infrastructure-tests/src/main/java/org/springframework/batch/container/jms/BatchMessageListenerContainer.java
Message listener container adapted for intercepting the message reception with advice provided through configuration.
To enable batching of messages in a single transaction, use the TransactionInterceptor and the RepeatOperationsInterceptor in the advice chain (with or without a transaction manager set in the base class). Instead of receiving a single message and processing it, the container will then use a RepeatOperations to receive multiple messages in the same thread. Use with a RepeatOperations and a transaction interceptor. If the transaction interceptor uses XA, then use an XA connection factory, or else the TransactionAwareConnectionFactoryProxy to synchronize the JMS session with the ongoing transaction (opening up the possibility of duplicate messages after a failure). In the latter case you will not need to provide a transaction manager in the base class; it only gets in the way and prevents the JMS session from synchronizing with the database transaction.
Perhaps you could adapt it for your use case.
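For example, a rough sketch of such an adaptation, using the setters visible in the linked class (the bean wiring, queue name, and batch size are assumptions, not a verified configuration):
@Bean
public BatchMessageListenerContainer batchListenerContainer(ConnectionFactory connectionFactory,
        PlatformTransactionManager transactionManager) {
    BatchMessageListenerContainer container = new BatchMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("yourQueue"); // hypothetical queue name
    container.setSessionTransacted(true);

    // receive up to 10 messages per transaction, on the same thread
    RepeatTemplate repeatTemplate = new RepeatTemplate();
    repeatTemplate.setCompletionPolicy(new SimpleCompletionPolicy(10));
    RepeatOperationsInterceptor batching = new RepeatOperationsInterceptor();
    batching.setRepeatOperations(repeatTemplate);

    // wrap the whole batch in one transaction
    TransactionInterceptor transactional = new TransactionInterceptor(transactionManager,
            new MatchAlwaysTransactionAttributeSource());

    container.setAdviceChain(new Advice[] { transactional, batching });
    container.setMessageListener((MessageListener) message -> {
        // hand the message to your processing logic
    });
    return container;
}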
I was able to do so with a multi-threaded step:
// Jobs and steps
@Bean
Step stepDetectionIncoherencesLiq(@Autowired StepBuilderFactory steps) {
    int threadSize = Integer.parseInt(env.getProperty(PropertyConstant.THREAD_POOL_SIZE));
    return steps.get("stepDetectionIncoherencesLiq").<Message, DetectionIncoherenceLiqJmsOut>chunk(1)
            .reader(reader()).processor(processor()).writer(writer())
            .readerIsTransactionalQueue()
            .faultTolerant()
            .taskExecutor(taskExecutor())
            .throttleLimit(threadSize)
            .listener(stepListener())
            .build();
}
And a JmsItemReader with a JmsTemplate, instead of creating sessions and connections explicitly; it manages the connections, so I don't get the JMS exception anymore (JmsException: "invalid blocking receive when another receive is in progress"):
@Bean
public JmsItemReader<Message> reader() {
    JmsItemReader<Message> itemReader = new JmsItemReader<>();
    itemReader.setItemType(Message.class);
    itemReader.setJmsTemplate(jmsTemplate());
    return itemReader;
}
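The jmsTemplate() bean isn't shown above; a minimal sketch of what it might look like (the connection factory and destination name are assumptions; JmsItemReader requires a default destination and a finite receive timeout):
@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate(connectionFactory()); // hypothetical WebLogic-backed factory
    template.setDefaultDestinationName("yourQueue");             // hypothetical; the queue read earlier
    template.setReceiveTimeout(20000);                           // matches the 20-second receive() used before
    return template;
}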

Spring Integration splitter and aggregator configuration

I have a scenario where I need to invoke system A's and system B's REST calls in parallel, aggregate the responses, and transform them into a single FinalResponse.
To achieve this, I am using a Spring Integration splitter and aggregator, and the configuration is given below.
I have exposed a REST endpoint; when a request (which has a correlation id in the header) comes to the controller, we invoke the gateway and the splitter sends requests to channels A and B.
Service activator A listens to channel A and invokes system A's REST call, and service activator B listens to channel B and invokes system B's REST call.
Then I need to aggregate the responses from systems A and B and transform them into the FinalResponse. Currently the aggregation and transformation are working fine.
Sometimes, when multiple requests come to the controller, the FinalResponse takes more time compared to a single request. All the responses come back almost at the same time, and I am not sure why (even though the last request to the controller was sent 6-7 seconds after the first one). Is there something wrong in my configuration related to threads? Not sure why it takes more time to respond when multiple requests come to the controller.
Also, I am not using any CorrelationStrategy; do we need to use one? Will I face any issues in a multi-threaded environment with the below configuration? Any feedback on the configuration would be helpful.
// Controller
FinalResponse aggregatedResponse = gateway.collateServiceInformation(inputData);
// Configuration
@Autowired
Transformer transformer;

// Gateway
@Bean
public IntegrationFlow gatewayServiceFlow() {
    return IntegrationFlows.from("input_channel")
            .channel("split_channel")
            .get();
}
// Splitter
@Bean
public IntegrationFlow splitAggregatorFlow() {
    return IntegrationFlows.from("split_channel")
            .split(SomeClass.class, SomeClass::getResources)
            .channel(c -> c.executor(Executors.newCachedThreadPool()))
            .<Resource, String>route(Resource::getName,
                    mapping -> mapping.channelMapping("A", "A")
                            .channelMapping("B", "B"))
            .get();
}
// Aggregator
@Bean
public IntegrationFlow aggregateFlow() {
    return IntegrationFlows.from("aggregate_channel")
            .aggregate()
            .channel("transform_channel")
            .transform(transformer)
            .get();
}
...
// Transformer
@Component
@Scope("prototype")
public class Transformer {

    @Transformer
    public FinalResponse transform(final List<Result> responsesFromAAndB) {
        // transformation logic and then return the final response
    }
}
The splitter provides a default strategy for the correlation details in the headers; the aggregator will use them afterwards. What you are describing is called scatter-gather: https://docs.spring.io/spring-integration/docs/5.0.8.RELEASE/reference/html/messaging-routing-chapter.html#scatter-gather. There is a Java DSL equivalent.
I think your problem is that some request in the split set fails, so the aggregator can't finish the group for that request. Nothing obvious so far in your config...
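For reference, a sketch of the scatter-gather equivalent in the Java DSL (the callSystemA/callSystemB handlers are hypothetical stand-ins for the A and B REST calls):
@Bean
public IntegrationFlow scatterGatherFlow() {
    return IntegrationFlows.from("input_channel")
            .scatterGather(
                    scatterer -> scatterer
                            .applySequence(true)
                            // hypothetical sub-flows for the two REST calls
                            .recipientFlow(m -> true, sf -> sf.handle((p, h) -> callSystemA(p)))
                            .recipientFlow(m -> true, sf -> sf.handle((p, h) -> callSystemB(p))),
                    gatherer -> gatherer
                            .releaseStrategy(group -> group.size() == 2), // both replies arrived
                    sg -> sg.gatherTimeout(10_000))                       // don't wait forever
            .transform(transformer)
            .get();
}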

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue, which is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided "MessageHandler" (from my API) to the incoming messages for the specified queue name.
In the configuration of that example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am trying to go an unusual way: I set up an AWS client bean at build time and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, with which I can programmatically poll (every 10 secs in my case) whichever queue I want to consume from.
I made an example that iterates over the queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void pool() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);

                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());

                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which holds the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
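A rough sketch of that JMS approach with the amazon-sqs-java-messaging-lib, tied to the register(...) API from the question (the TextMessage assumption and the acknowledge mode are mine):
void register(String queueName, MessageHandler handler) throws JMSException {
    // JMS over SQS via the amazon-sqs-java-messaging-lib
    SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
            new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
    SQSConnection connection = connectionFactory.createConnection();
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

    // one consumer per registered queue, delegating to the user's handler
    MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
    consumer.setMessageListener(message -> {
        try {
            handler.handle(((TextMessage) message).getText()); // assumes text messages
        }
        catch (JMSException e) {
            throw new RuntimeException(e);
        }
    });

    connection.start();
}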
When we do Spring and SQS we use the spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}
