I am working on a use case using an FTP endpoint in Mule.
I expect the FTP inbound endpoint to poll the oldest files first; currently it is polling files in random order.
Is there a way to configure FTP to poll the oldest file (the file that was placed first in the FTP shared folder) first?
Can someone please suggest a solution for this use case.
Thank you
I believe you would have to extend the default Mule FtpConnectionFactory class and set your new class as the connectionFactoryClass attribute on the FTP connector. For the actual ordering you would need to extend the Apache FTPClient class that the factory uses, with a method that orders the file listings by timestamp. Overriding the FTPClient.initiateListParsing() method would probably be enough.
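For illustration only, here is a rough sketch of the kind of FTPClient subclass that approach implies. It overrides listFiles() rather than initiateListParsing() for brevity; the class name is made up, and you would still need your custom FtpConnectionFactory to hand out instances of it instead of the stock client:

import java.io.IOException;
import java.util.Arrays;
import java.util.Comparator;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

// Hypothetical FTPClient subclass that returns directory listings ordered oldest-first.
public class OldestFirstFtpClient extends FTPClient {

    @Override
    public FTPFile[] listFiles(String pathname) throws IOException {
        FTPFile[] files = super.listFiles(pathname);
        // oldest modification time first; ordering depends on the timestamps the server reports
        Arrays.sort(files, Comparator.comparing(FTPFile::getTimestamp));
        return files;
    }
}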
I have done a similar thing with the SFTP connector, where the file names contain a timestamp.
We can try this approach whenever there is a timestamp, or some other basis on which the files can be sorted.
It polls the directory and brings back all the file names; I configured a service override on the connector which sorts the file names by timestamp and then routes them. I'm pasting sample code for reference; we can try something similar with FTP as well.
Override poll() in your service override class:
public void poll() {
    // list everything currently available in the polled directory
    String[] files = sftpRRUtil.getAvailableFiles(false);
    // sort oldest-first using the timestamp embedded in the file names
    List<String> orderedFileList = sortAllFileNamesByTime(files);
    Map<String, String> eventStateMap = new HashMap<String, String>();
    for (String file : orderedFileList) {
        // stop routing if the connector is shutting down
        if (getLifecycleState().isStopping()) {
            break;
        }
        routeFile(file);
    }
    ...
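The sortAllFileNamesByTime helper is not shown above. Purely for illustration, here is a minimal sketch of what it could look like, assuming the file names carry a 14-digit yyyyMMddHHmmss timestamp; that naming pattern and the extractTimestamp helper are assumptions, not part of the original code:

// imports assumed: java.util.*, java.util.regex.Matcher, java.util.regex.Pattern

// Sorts the file names oldest-first by the timestamp embedded in each name.
private List<String> sortAllFileNamesByTime(String[] files) {
    List<String> ordered = new ArrayList<String>(Arrays.asList(files));
    Collections.sort(ordered, new Comparator<String>() {
        @Override
        public int compare(String a, String b) {
            return extractTimestamp(a).compareTo(extractTimestamp(b));
        }
    });
    return ordered;
}

// Pulls the first 14-digit run (yyyyMMddHHmmss) out of the file name;
// falls back to the plain name if no timestamp is found.
private String extractTimestamp(String fileName) {
    Matcher m = Pattern.compile("\\d{14}").matcher(fileName);
    return m.find() ? m.group() : fileName;
}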
How to configure the dynamic file name in Spring Integration flow?
Scenario: on the FTP server, a file is generated every day for the current date. Through SFTP, I need to read the current date's file.
Example: *_20220525_DATA.TXT, *_20220526_DATA.TXT
Why doesn't just patternFilter("*_DATA.TXT") work for you?
The SftpSimplePatternFileListFilter cannot have its pattern modified after creation, but you can implement a custom FileListFilter whose pattern is re-evaluated every day.
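For example, a sketch of such a filter, assuming Spring Integration 5.x where SFTP entries are ChannelSftp.LsEntry; the class name and the date format are assumptions:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

import com.jcraft.jsch.ChannelSftp;
import org.springframework.integration.file.filters.FileListFilter;

// Re-evaluates the expected file name on every poll, so "today's" file is matched each day.
public class CurrentDateSftpFileListFilter implements FileListFilter<ChannelSftp.LsEntry> {

    private static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("yyyyMMdd");

    @Override
    public List<ChannelSftp.LsEntry> filterFiles(ChannelSftp.LsEntry[] files) {
        String suffix = "_" + LocalDate.now().format(FORMAT) + "_DATA.TXT";
        List<ChannelSftp.LsEntry> accepted = new ArrayList<>();
        for (ChannelSftp.LsEntry file : files) {
            if (file.getFilename().endsWith(suffix)) {
                accepted.add(file);
            }
        }
        return accepted;
    }
}

You would then set this filter on the inbound synchronizer/adapter in place of the SftpSimplePatternFileListFilter.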
I have a pre-existing Spring Boot application, like this one, which uses Spring Batch to read a huge CSV file (size = 1 GB) from a local location and insert the content, after some changes, into a database. Now the file will be coming from a bucket in Google Cloud Platform (GCP). I am planning to use something like the inbound streaming channel adapter mentioned here. Is this feasible? If yes, what changes will be needed in the ItemReader of the batch job? Or is there any other option I should consider? Any suggestion is welcome.
The FlatFileItemReader works with any Resource implementation, so you could use a UrlResource and point it at your file on GCP without having to download it.
Here’s a specific example for a CSV file using the URLResource:
URLResource ur = new URLResource("http://www.dukelearntoprogram.com/course2/java/food.csv");
for (CSVRecord record : ur.getCSVParser()) {
    // print or process fields in the record
    String name = record.get("Name");
    // other processing
}
This was extracted from non-official documentation.
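If the goal is specifically the Spring Batch reader, here is a minimal sketch of the same idea with Spring's own UrlResource; the bucket URL, the column names and the Person type are placeholders, and the object has to be readable over plain HTTPS for this to work:

import java.net.MalformedURLException;

import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.UrlResource;

@Bean
public FlatFileItemReader<Person> reader() throws MalformedURLException {
    BeanWrapperFieldSetMapper<Person> mapper = new BeanWrapperFieldSetMapper<>();
    mapper.setTargetType(Person.class);
    return new FlatFileItemReaderBuilder<Person>()
            .name("personItemReader")
            // read straight from the bucket instead of a local file
            .resource(new UrlResource("https://storage.googleapis.com/my-bucket/mycsv.csv"))
            .delimited()
            .names("firstName", "lastName")
            .fieldSetMapper(mapper)
            .build();
}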
By adding the relevant dependencies and changing the local file path from
flatFileItemReader.setResource(new FileSystemResource("src/main/resources/message.csv"));
to this
flatFileItemReader.setResource(new UrlResource("storage.googleapis.com/test-mybucket/mycsv.csv"));
will it work?
I've been working with Apache Camel for a while, doing some basic stuff, but now I'm trying to create a route in which I can have multiple "consumers" for the same route, or add a consumer to the route and then process the message.
My idea is to have an Event Driven Consumer which is triggered by an event and then reads a file from an FTP server, for example. I was planning to do something like this:
from("direct:processFile")
.from("ftp://localhost:21/folder?fileName=${body.fileName}") // etc.
.log("Start downloading file ${file:name}.")
.unmarshal().bindy(BindyType.Csv, MyFile.class)
.to("bean:fileProcessor")
.log("Downloaded file ${file:name} complete.");
So the idea is that I have an event (for example from a direct endpoint or a message queue) that carries a "fileName" property, and I then use that property to download/consume the file with that name from an FTP server.
I believe the problem is having from().from() in the same route, but if I put the FTP component inside a to() instead, my queue event gets written to a file on the FTP server, which is the opposite of what I want; it behaves as a producer instead of a consumer.
Is there any possible way to achieve what I'm trying to do, or does it conflict with what Camel is for?
Thanks to the comment from Claus Ibsen I found what I was looking for; the component I needed, and which made it work, was the Content Enricher.
Here is the route that worked for me:
from("direct:processFile")
.pollEnrich().simple("ftp://localhost:21/folder?fileName=${body.fileName}")
.log("Start downloading file ${file:name}.")
.unmarshal().bindy(BindyType.Csv, MyFile.class)
.to("bean:fileProcessor")
.log("Downloaded file ${file:name} complete.");
How about something like this?
from("direct:processFile")
    .transform(simple("${body.fileName}"))
    .from("ftp://localhost:21/folder?fileName=${body.fileName}") // etc.
    .log("Start downloading file ${file:name}.")
    .unmarshal().bindy(BindyType.Csv, MyFile.class)
    .to("bean:fileProcessor")
    .log("Downloaded file ${file:name} complete.");
I have a service of class EnsLib.HL7.Service.FTPService that picks up files from multiple subfolders and sends them to an EnsLib.HL7.MsgRouter.RoutingEngine. What I want to do is somehow capture the subfolder as a variable for use in the routing rules. Is this possible?
Let's say I have the following files and directory structure on my FTP Server
/incoming/green/apple.dat
/incoming/yellow/banana.dat
I want the Routing Rule to be able to send anything that came from the /green/ folder to one operation and from /yellow/ to another.
With the Message Viewer you can trace any message and see all of its properties; one of them is Source. The text in this property looks like:
Source apple.dat via FTP localhost:21 path '/incoming/green/'
So with this data you can create a rule on this property in the Rule Editor.
I am trying to upload an excel file to the OpenShift server data directory (OPENSHIFT_DATA_DIR) and store the data in the MySQL database provided by OpenShift. I am following the steps below:
Upload the excel file to the OPENSHIFT_DATA_DIR.
Read the excel file from OPENSHIFT_DATA_DIR and parse it.
Update/insert the parsed data into the database.
I am able to upload the excel file into the OPENSHIFT_DATA_DIR with the code below; I use Spring's multipart resolver:
private void uploadExcelFile(MultipartFile file) throws IOException {
    String fileNameWithPath = System.getenv("OPENSHIFT_DATA_DIR") + "Test.xls";
    file.transferTo(new File(fileNameWithPath));
    // Start reading the excel file from OPENSHIFT_DATA_DIR
    .......
    .......
}
I will be able to read the excel file from OPENSHIFT_DATA_DIR, but my concern is that I am not sure whether the file has been completely uploaded at the time I read it.
Once the transferTo() method has executed, can I be sure that the file is completely uploaded?
Or how should I handle it if the file is not completely uploaded?
After the transferTo() method call, you can do anything you want with the transferred file.
If you read the documentation of the transferTo() method, you will see that it is not asynchronous.
Documentation says:
void transferTo(File dest)
throws IOException,
IllegalStateException
Transfer the received file to the given destination file.
This may either move the file in the filesystem, copy the file in the filesystem, or save memory-held contents to the destination file. If the destination file already exists, it will be deleted first.
If the file has been moved in the filesystem, this operation cannot be invoked again. Therefore, call this method just once to be able to work with any storage mechanism.
Note: when using Servlet 3.0 multipart support you need to configure the location relative to which files will be copied as explained in Part.write(java.lang.String).
Parameters:
dest - the destination file
Throws:
IOException - in case of reading or writing errors
IllegalStateException - if the file has already been moved in the filesystem and is not available anymore for another transfer
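In other words, once transferTo() returns the file is complete on disk and can be parsed right away. A small sketch of that, assuming Apache POI 3.10+ for the parsing (the POI usage and the uploadAndParse name are illustrations, not part of the original code):

import java.io.File;

import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.springframework.web.multipart.MultipartFile;

// throws Exception only for brevity in this sketch
private void uploadAndParse(MultipartFile file) throws Exception {
    File target = new File(System.getenv("OPENSHIFT_DATA_DIR"), "Test.xls");
    // transferTo() is synchronous: when it returns, the file is fully written
    file.transferTo(target);

    // so it is safe to open it immediately for parsing
    try (Workbook workbook = WorkbookFactory.create(target)) {
        Sheet sheet = workbook.getSheetAt(0);
        // iterate the rows here and update/insert into MySQL
    }
}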