Java Outbound Adapter processing - Spring

I have created an application that adds FTP inbound adapters at runtime and registers them so they can be removed later if needed. The app pulls a CSV file from each FTP server and places it locally in a folder named after that server, so every server I add gets its own local folder with the CSV file saved inside. This part works smoothly. The second part is that I want to change the format of that file and then send it back to the respective server, so I need an outbound adapter. In this case the outbound adapter must be created at runtime, at the same time the inbound adapter is created when a server is added, and it should be driven through the controller just like the inbound one. I searched for possible solutions and tried the one below, but it did not work and no files were sent to the destination. How can I accomplish this?
In the configuration class I added the below:
public IntegrationFlow ftpOutboundFlow(Branch myBranch) {
    return IntegrationFlows.from(OUTBOUND_CHANNEL)
            .handle(Ftp.outboundAdapter(createNewFtpSessionFactory(myBranch), FileExistsMode.FAIL)
                    .useTemporaryFileName(true)
                    .remoteFileSeparator("/")
                    //.fileNameExpression("BEY/FEFOexportBEY.csv")
                    .remoteDirectory(myBranch.getFolderPath()))
            .get();
}
@MessagingGateway
public interface MyGateway {

    @Gateway(requestChannel = OUTBOUND_CHANNEL)
    void sendToFtp(File file);
}
@Bean
public IntegrationFlow ftpOutboundChannel() {
    return IntegrationFlows.from(OUTBOUND_CHANNEL)
            .publishSubscribeChannel(p -> p
                    .subscribe(t -> t.handle(System.out::println)))
            /*.transform(p -> {
                LOG.info("Outbound intermediate Channel, message=rename file:" + p);
                return p;
            })*/
            .channel(new NullChannel())
            .get();
}
And in the Controller class
@RequestMapping("/branch/showbranch/{id}")
public String getBranch(@PathVariable String id, Model model) {
    model.addAttribute("branch", branchService.getById(Long.valueOf(id)));
    addFlowFtp(id);
    addFlowFtpOutbound(id);
    return "/branch/showbranch";
}
private void addFlowFtp(String name) {
    branch = branchService.getById(Long.valueOf(name));
    System.out.println(branch.getBranchCode());
    IntegrationFlow flow = ftIntegration.fileInboundFlowFromFTPServer(branch);
    this.flowContext.registration(flow).id(name).register();
}
private void addFlowFtpOutbound(String name) {
    branch = branchService.getById(Long.valueOf(name));
    System.out.println(branch.getBranchCode());
    IntegrationFlow flow = ftIntegration.ftpOutboundFlow(branch);
    // this.flowContext.registration(flow).id(name).register();
    myGateway.sendToFtp(new File("BEY/FEFOexportBEY.csv"));
}
Here is the error I get in the console when I enable the registration before sending the file:
java.lang.IllegalArgumentException: An IntegrationFlow 'IntegrationFlowRegistration{integrationFlow=StandardIntegrationFlow{integrationComponents={org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer#aadab28=98.org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer#0, org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean#aa290b3=stockInboundPoller, org.springframework.integration.transformer.MethodInvokingTransformer#e5f85cd=98.org.springframework.integration.transformer.MethodInvokingTransformer#0, 98.channel#0=98.channel#0, org.springframework.integration.config.ConsumerEndpointFactoryBean#319dff2=98.org.springframework.integration.config.ConsumerEndpointFactoryBean#0}}, id='98', inputChannel=null}' with flowId '98' is already registered.
An existing IntegrationFlowRegistration must be destroyed before overriding.
On my second attempt I removed the registration from the first method and only tried the second method, but nothing was sent to the FTP server:
GenericMessage [payload=BEY\FEFOexportBEY.csv, headers={id=43cfc2db-41e9-0866-8e4c-8e95968189ff, timestamp=1542702869007}]
2018-11-20 10:34:29.011 INFO 13716 --- [nio-8081-exec-6] f.s.s.configuration.FTIntegration : Outbound intermediate Channel, message=rename file:BEY\FEFOexportBEY.csv

You need to perform this.flowContext.registration(flow).id(name).register(); before sending the file via myGateway.sendToFtp(new File("BEY/FEFOexportBEY.csv"));.
That's first.
The other concern I see in your code is that you have a ftpOutboundChannel bean for an IntegrationFlow which is subscribed to the same OUTBOUND_CHANNEL. If that channel is not declared as a PublishSubscribeChannel, you'll end up with round-robin distribution, and I believe you would like to have the file both sent to the FTP server and logged. So you do indeed need to declare that channel as a PublishSubscribeChannel.
You don't see any error because OUTBOUND_CHANNEL has your ftpOutboundChannel IntegrationFlow as a subscriber.
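A minimal sketch of both fixes, assuming OUTBOUND_CHANNEL is the channel-name constant used above and flowContext is your IntegrationFlowContext (the "-out" id suffix is just an example to avoid clashing with the inbound flow's registration id, which caused the "already registered" error above):

@Bean(name = OUTBOUND_CHANNEL)
public MessageChannel outboundChannel() {
    // PublishSubscribeChannel so both the FTP adapter flow and the logging
    // flow receive every message instead of competing round-robin
    return new PublishSubscribeChannel();
}

And on the controller side, register the dynamic flow before sending:

private void addFlowFtpOutbound(String name) {
    Branch branch = branchService.getById(Long.valueOf(name));
    IntegrationFlow flow = ftIntegration.ftpOutboundFlow(branch);
    // register the dynamic outbound flow first, under its own id...
    this.flowContext.registration(flow).id(name + "-out").register();
    // ...and only then send the file through the gateway
    myGateway.sendToFtp(new File("BEY/FEFOexportBEY.csv"));
}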

Related

Spring SFTP Outbound Adapter - determining when files have been sent

I have a Spring SFTP output adapter that I start via "adapter.start()" in my main program. Once started, the adapter transfers and uploads all the files in the specified directory as expected. But I want to stop the adapter after all the files have been transferred. How do I detect if all the files have been transferred so I can issue an adapter.stop()?
@Bean
public IntegrationFlow sftpOutboundFlow() {
    return IntegrationFlows.from(Files.inboundAdapter(new File(sftpOutboundDirectory))
                    .filterExpression("name.endsWith('.pdf') OR name.endsWith('.PDF')")
                    .preventDuplicates(true),
            e -> e.id("sftpOutboundAdapter")
                    .autoStartup(false)
                    .poller(Pollers.trigger(new FireOnceTrigger())
                            .maxMessagesPerPoll(-1)))
            .log(LoggingHandler.Level.INFO, "sftp.outbound", m -> m.getPayload())
            .log(LoggingHandler.Level.INFO, "sftp.outbound", m -> m.getHeaders())
            .handle(Sftp.outboundAdapter(outboundSftpSessionFactory())
                    .useTemporaryFileName(false)
                    .remoteDirectory(sftpRemoteDirectory))
            .get();
}
@Artem Bilan has already given the answer. But here's kind of a concrete implementation of what he said - for those who are Spring Integration noobs like me:
Define a service to get the PDF files on demand:
@Service
public class MyFileService {

    public List<File> getPdfFiles(final String srcDir) {
        File[] files = new File(srcDir).listFiles((dir, name) -> name.toLowerCase().endsWith(".pdf"));
        return Arrays.asList(files == null ? new File[]{} : files);
    }
}
Define a Gateway to start the SFTP upload flow on demand:
@MessagingGateway
public interface SFtpOutboundGateway {

    @Gateway(requestChannel = "sftpOutboundFlow.input")
    void uploadFiles(List<File> files);
}
Define the Integration Flow to upload the files to the SFTP server via Sftp.outboundGateway:
@Configuration
@EnableIntegration
public class FtpFlowIntegrationConfig {

    // could be also bound via @Value
    private String sftpRemoteDirectory = "/path/to/remote/dir";

    @Bean
    public SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory() {
        DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
        factory.setHost("localhost");
        factory.setPort(22222);
        factory.setUser("client1");
        factory.setPassword("password123");
        factory.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(factory);
    }

    @Bean
    public IntegrationFlow sftpOutboundFlow(RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate) {
        return e -> e
                .log(LoggingHandler.Level.INFO, "sftp.outbound", Message::getPayload)
                .log(LoggingHandler.Level.INFO, "sftp.outbound", Message::getHeaders)
                .handle(
                        Sftp.outboundGateway(remoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.MPUT, "payload")
                );
    }

    @Bean
    public RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate(SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory) {
        RemoteFileTemplate<ChannelSftp.LsEntry> template = new SftpRemoteFileTemplate(outboundSftpSessionFactory);
        template.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
        template.setAutoCreateDirectory(true);
        template.afterPropertiesSet();
        template.setUseTemporaryFileName(false);
        return template;
    }
}
Wiring up:
@SpringBootApplication
public class SpringApp {

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(SpringApp.class, args);
        final MyFileService fileService = ctx.getBean(MyFileService.class);
        final SFtpOutboundGateway sFtpOutboundGateway = ctx.getBean(SFtpOutboundGateway.class);

        // trigger the sftp upload flow manually - only once
        // (the source directory is whatever local dir holds your PDFs)
        sFtpOutboundGateway.uploadFiles(fileService.getPdfFiles("/path/to/local/dir"));
    }
}
Important notes:
1.
@Gateway(requestChannel = "sftpOutboundFlow.input")
void uploadFiles(List<File> files);
Here the DirectChannel sftpOutboundFlow.input will be used to pass the message with the payload (= List<File> files) to the receiver. If this channel does not exist yet, the Gateway creates it implicitly.
2.
@Bean
public IntegrationFlow sftpOutboundFlow(RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate) { ... }
Since IntegrationFlow is a Consumer functional interface, we can simplify the flow a little using the IntegrationFlowDefinition. During the bean registration phase, the IntegrationFlowBeanPostProcessor converts this inline (lambda) IntegrationFlow to a StandardIntegrationFlow and processes its components. An IntegrationFlow definition using a lambda populates a DirectChannel as the inputChannel of the flow, and that channel is registered in the application context as a bean named sftpOutboundFlow.input in the sample above (flow bean name + ".input"). That's why we use that name for the SFtpOutboundGateway gateway.
Ref: https://spring.io/blog/2014/11/25/spring-integration-java-dsl-line-by-line-tutorial
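As a small illustration of that equivalence (the bean names lambdaFlow/explicitFlow are hypothetical):

// Lambda style: the framework registers a DirectChannel named "lambdaFlow.input"
@Bean
public IntegrationFlow lambdaFlow() {
    return f -> f.handle(m -> System.out.println(m.getPayload()));
}

// Roughly equivalent explicit style, naming the same input channel
@Bean
public IntegrationFlow explicitFlow() {
    return IntegrationFlows.from("explicitFlow.input")
            .handle(m -> System.out.println(m.getPayload()))
            .get();
}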
3.
@Bean
public RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate(SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory) {}
see: Remote directory for sftp outbound gateway with DSL
Flowchart: (diagram omitted)
But I want to stop the adapter after all the files have been transferred.
Logically, this is not what this kind of component has been designed for. Since you are not going to have a constantly changing local directory, it is probably better to think about an event-driven solution that lists the files in the directory via some action. Yes, it can be a call from the main, but only once for all the content of the dir, and that's all.
And for this reason the Sftp.outboundGateway() with a Command.MPUT is there for you:
https://docs.spring.io/spring-integration/reference/html/sftp.html#using-the-mput-command.
You still can trigger an IntegrationFlow, but it could start from a @MessagingGateway interface to be called from a main with a local directory to list files for uploading:
https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-gateway

Download eligible files from an SFTP server using Spring Integration in a Spring Boot project

My current project is based on Spring Integration, and I am developing it using Spring Boot.
My goal is to use Spring Integration to complete the tasks below:
Connect to SFTP.
Check whether the directory has been created locally at a specific folder.
Check for the eligible file extensions (CSV & XLSX only).
Download all the content from the SFTP remote directory to the local directory, and track the file transfer start time and file transfer end time.
Read the file from the local directory line by line and extract certain column info.
Can you give me some suggestions?
And how can I get the transfer start time?
Note: I have to develop this requirement as a REST API. Please provide some guidance on how I can achieve this using Spring Integration.
Thanks. :)
public class SftpConfig {

    @Value("${sftp.host}")
    private String sftpHost;

    @Value("${sftp.port:22}")
    private int sftpPort;

    @Value("${sftp.user}")
    private String sftpUser;

    @Value("${sftp.password:#{null}}")
    private String sftpPassword;

    @Value("${sftp.remote.directory:/}")
    private String sftpRemoteDirectory;

    @Value("${sftp.privateKey:#{null}}")
    private Resource sftpPrivateKey;

    @Value("${sftp.privateKeyPassPhrase:}")
    private String privateKeyPassPhrase;

    @Value("${sftp.remote.directory.download.filter:*.*}")
    private String sftpRemoteDirectoryDownloadFilter;

    @Value("${sftp.remote.directory.download:/}")
    private String sftpRemoteDirectoryDownload;

    @Value("${sftp.local.directory.download:${java.io.tmpdir}/localDownload}")
    private String sftpLocalDirectoryDownload;
    /*
     * The SftpSessionFactory creates the sftp sessions. This is where you define
     * the host, user and key information for your sftp server.
     */
    // Creating session for Remote Destination SFTP server Folder
    @Bean
    public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
        DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
        factory.setHost(sftpHost);
        factory.setPort(sftpPort);
        factory.setUser(sftpUser);
        if (sftpPrivateKey != null) {
            factory.setPrivateKey(sftpPrivateKey);
            factory.setPrivateKeyPassphrase(privateKeyPassPhrase);
        } else {
            factory.setPassword(sftpPassword); // was the literal "sftpPassword" - a bug
        }
        factory.setAllowUnknownKeys(true);
        return new CachingSessionFactory<ChannelSftp.LsEntry>(factory);
    }
    /*
     * The SftpInboundFileSynchronizer uses the session factory that we defined above.
     * Here we set information about the remote directory to fetch files from.
     * We could also set filters here to control which files get downloaded.
     */
    @Bean
    public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
        SftpInboundFileSynchronizer synchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
        synchronizer.setDeleteRemoteFiles(false);
        synchronizer.setRemoteDirectory(sftpRemoteDirectoryDownload);
        synchronizer.setFilter(new SftpSimplePatternFileListFilter(sftpRemoteDirectoryDownloadFilter));
        return synchronizer; // was "return null;" - the bean was left unfinished
    }
If you take a look at the SftpStreamingMessageSource instead (https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-streaming), plus use a FileSplitter to read the file line by line (it supports a "first line as header" mode, too), you won't need to worry about transferring the file to a local dir. You will do everything with the remote content on demand.
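A rough sketch of that streaming approach, reusing the sftpSessionFactory() bean from the config above (the remote directory, poller interval, and CSV-only filter are assumptions; XLSX is binary, so line-by-line splitting only really applies to the CSV files):

@Bean
public SftpRemoteFileTemplate sftpStreamingTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
public IntegrationFlow sftpStreamingFlow() {
    return IntegrationFlows
            .from(Sftp.inboundStreamingAdapter(sftpStreamingTemplate())
                            .remoteDirectory("/remote/dir")          // assumption
                            .regexFilter(".*\\.csv$"),               // CSV only
                    e -> e.poller(Pollers.fixedDelay(5000)))
            .split(Files.splitter())                                 // FileSplitter: one message per line
            .handle(m -> System.out.println(m.getPayload()))         // extract your columns here
            .get();
}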
On the other hand, since you talk about a REST API, you are probably going to have some @RestController or Spring Integration HTTP Inbound Gateway (https://docs.spring.io/spring-integration/docs/current/reference/html/http.html#http-inbound); then you need to think about using an SftpOutboundGateway with an MGET command: https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-outbound-gateway.
If you still need to track the download time for every single file, consider using that gateway twice: first with a LIST command and NAME_ONLY, and the second time with a GET command. At that point you can add a ChannelInterceptor to the input and output channels of the second gateway, so you will have the file name to correlate on and can capture the start and stop times before and after this gateway.
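For the timing part, a minimal sketch of such an interceptor (the transferStart header name is made up for this example; on a DirectChannel the downstream GET has already completed by the time afterSendCompletion runs, so the elapsed time covers the transfer):

public class TransferTimingInterceptor implements ChannelInterceptor {

    private static final Logger log = LoggerFactory.getLogger(TransferTimingInterceptor.class);

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // payload is assumed to be the remote file name coming from the LIST/NAME_ONLY step
        return MessageBuilder.fromMessage(message)
                .setHeader("transferStart", System.currentTimeMillis())
                .build();
    }

    @Override
    public void afterSendCompletion(Message<?> message, MessageChannel channel, boolean sent, Exception ex) {
        Long start = message.getHeaders().get("transferStart", Long.class);
        if (start != null) {
            log.info("Transfer of {} took {} ms", message.getPayload(), System.currentTimeMillis() - start);
        }
    }
}

Register it via addInterceptor(...) on the DirectChannel that feeds the GET gateway.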

Spring Boot Cloud Stream with Kafka

I'm trying to set up a project with Spring Cloud Stream and Kafka. I managed to build a simple example, where a listener gets messages from a topic and, after processing them, sends the output to another topic.
My listener and channels are configured like this:
@Component
public class FileEventListener {

    private FileEventProcessorService fileEventProcessorService;

    @Autowired
    public FileEventListener(FileEventProcessorService fileEventProcessorService) {
        this.fileEventProcessorService = fileEventProcessorService;
    }

    @StreamListener(target = FileEventStreams.INPUT)
    public void handleLine(@Payload(required = false) String jsonData) {
        this.fileEventProcessorService.process(jsonData);
    }
}
public interface FileEventStreams {

    String INPUT = "file_events";
    String OUTPUT = "raw_lines";

    @Input(INPUT)
    SubscribableChannel inboundFileEventChannel();

    @Output(OUTPUT)
    MessageChannel outboundRawLinesChannel();
}
The problem with this example is that when the service starts, it doesn't check for messages that already exist in the topic; it only processes messages sent after it started. I'm very new to Spring Cloud Stream and Kafka, but from what I've read, this behavior may correspond to the fact that I'm using a SubscribableChannel. I tried to use a QueueChannel, for example, to see how it works, but I got the following exception:
Error creating bean with name ... nested exception is java.lang.IllegalStateException: No factory found for binding target type: org.springframework.integration.channel.QueueChannel among registered factories: channelFactory,messageSourceFactory
So, my questions are:
If I want to process all messages that exist in the topic once the application starts (with each message processed by only one consumer), am I on the right path?
Even if QueueChannel is not the right choice for achieving the behavior explained in 1., what do I have to add to my project to be able to use this type of channel?
Thanks!
Add spring.cloud.stream.bindings.file_events.group=foo
Anonymous groups consume from the end of the topic only; bindings with a group consume from the beginning, by default.
You cannot use a PollableChannel for a binding, it must be a SubscribableChannel.
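Putting that together, something like the following in application.properties should be enough (the group name foo is just an example; startOffset is the Kafka-binder property if you ever want to be explicit about where a new group starts):

spring.cloud.stream.bindings.file_events.group=foo
# optional - this is already the default for a new group on the Kafka binder
spring.cloud.stream.kafka.bindings.file_events.consumer.startOffset=earliest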

Spring Cloud Stream RabbitMQ

I am trying to understand why I would want to use Spring Cloud Stream with RabbitMQ. I've had a look at the RabbitMQ Spring tutorial 4 (https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html), which is basically what I want to do. It creates a direct exchange with 2 queues attached and, depending on the routing key, a message is routed either to Q1 or to Q2.
The whole process is pretty straightforward if you look at the tutorial: you create all the parts, bind them together, and you're ready to go.
I was wondering what benefit I would gain from using Spring Cloud Stream and whether this is even the use case for it. It was easy to create a simple exchange, and even defining destination and group was straightforward with Stream. So I thought, why not go further and try to handle the tutorial case with Stream?
I have seen that Stream has a BinderAwareChannelResolver which seems to do the same thing. But I am struggling to put it all together to achieve the same as in the RabbitMQ Spring tutorial. I am not sure if it is a dependency issue, but I seem to misunderstand something fundamental here; I thought something like:
spring.cloud.stream.bindings.output.destination=myDestination
spring.cloud.stream.bindings.output.group=consumerGroup
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'
should do the trick.
Is there anyone with a minimal example for a source and sink which basically creates a direct exchange, binds 2 queues to it and depending on routing key routes to either one of those 2 queues like in https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html?
EDIT:
Below is a minimal set of code which demonstrates how to do what I asked. I did not attach the build.gradle as it is straightforward (but if anyone is interested, let me know).
application.properties: set up the producer
spring.cloud.stream.bindings.output.destination=tut.direct
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type
Sources.class: set up the producer's channel
public interface Sources {

    String OUTPUT = "output";

    @Output(Sources.OUTPUT)
    MessageChannel output();
}
StatusController.class: respond to REST calls and send messages with specific routing keys
/**
 * Status endpoint for the health-check service.
 */
@RestController
@EnableBinding(Sources.class)
public class StatusController {

    private static final Logger log = LoggerFactory.getLogger(StatusController.class);

    private int index;
    private int count;
    private final String[] keys = {"orange", "black", "green"};
    private Sources sources;
    private StatusService status;

    @Autowired
    public StatusController(Sources sources, StatusService status) {
        this.sources = sources;
        this.status = status;
    }

    /**
     * Service available, service returns "OK".
     * @return The Status of the service.
     */
    @RequestMapping("/status")
    public String status() {
        String status = this.status.getStatus();
        StringBuilder builder = new StringBuilder("Hello to ");
        if (++this.index == 3) {
            this.index = 0;
        }
        String key = keys[this.index];
        builder.append(key).append(' ');
        builder.append(Integer.toString(++this.count));
        String payload = builder.toString();
        log.info(payload);
        // add kv pair - routingkeyexpression (which matches 'type') will then evaluate
        // and add the value as routing key
        Message<String> msg = new GenericMessage<>(payload, Collections.singletonMap("type", key));
        sources.output().send(msg);
        // return rest call
        return status;
    }
}
consumer side of things, properties:
spring.cloud.stream.bindings.input.destination=tut.direct
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=orange
spring.cloud.stream.bindings.inputer.destination=tut.direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.bindingRoutingKey=black
Sinks.class:
public interface Sinks {

    String INPUT = "input";

    @Input(Sinks.INPUT)
    SubscribableChannel input();

    String INPUTER = "inputer";

    @Input(Sinks.INPUTER)
    SubscribableChannel inputer();
}
ReceiveStatus.class: Receive the status:
@EnableBinding(Sinks.class)
public class ReceiveStatus {

    private static final Logger log = LoggerFactory.getLogger(ReceiveStatus.class);

    @StreamListener(Sinks.INPUT)
    public void receiveStatusOrange(String msg) {
        log.info("I received a message. It was orange number: {}", msg);
    }

    @StreamListener(Sinks.INPUTER)
    public void receiveStatusBlack(String msg) {
        log.info("I received a message. It was black number: {}", msg);
    }
}
Spring Cloud Stream lets you develop event-driven microservice applications by enabling the applications to connect (via @EnableBinding) to the external messaging systems using the Spring Cloud Stream Binder implementations (Kafka, RabbitMQ, JMS binders, etc.). Apparently, Spring Cloud Stream uses Spring AMQP for the RabbitMQ binder implementation.
The BinderAwareChannelResolver provides dynamic destination binding support for producers, and I think in your case it is about configuring the exchange and binding the consumers to that exchange.
For instance, you need to have 2 consumers with the appropriate bindingRoutingKey set based on your criteria, and a single producer with the properties (routing-key-expression, destination) you mentioned above, except the group. I noticed that you have configured a group for the outbound channel; the group property is applicable only to consumers (hence inbound).
You might also want to check this one: https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/57 as I see some discussion around using routing-key-expression. Specifically, check this one on using the expression value.
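Concretely, using the property names from the question, the producer side would drop the group and keep just:

spring.cloud.stream.bindings.output.destination=tut.direct
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type

while each consumer binding declares its own group (the name orangeGroup here is an example) and bindingRoutingKey:

spring.cloud.stream.bindings.input.destination=tut.direct
spring.cloud.stream.bindings.input.group=orangeGroup
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=orange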

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface that will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        //... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided MessageHandler (from my API) to the incoming message for the specified queue name.
In the example config there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I have already done something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am trying to go an unusual way: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, from which I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I made an example that iterates over queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor; 'properties' (holding the queue URLs) is likewise injected
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                // 'val' and 'log' come from Lombok
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify, in my case, I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which has the queues you have created in runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
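A rough sketch of that JMS route, assuming the com.amazonaws:amazon-sqs-java-messaging-lib dependency and that handler is the user-supplied MessageHandler from the question:

// build a JMS connection factory backed by SQS
SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
        new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
SQSConnection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

// attach the user-provided handler to any queue name chosen at runtime
MessageConsumer consumer = session.createConsumer(session.createQueue("a-queue-name"));
consumer.setMessageListener(message -> {
    try {
        handler.handle(((TextMessage) message).getText()); // the hypothetical MessageHandler callback
    } catch (JMSException e) {
        // log and decide whether to retry or drop
    }
});
connection.start();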
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        //process the message
    }
}
