I have a problem with Spring Integration.
I want to make a request to an FTP server to retrieve the name of a file (the command-line equivalent: ls "filename").
But I cannot resolve the file name dynamically.
I understood there is something to do with the payload or a header, but I cannot make it work.
This is what I have. In my controller I use this:
private FtpConfig.MyGateway gateway;
...
gateway.fichierExist(filename);
In my FTP configuration file:
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("");
sf.setPort(21);
sf.setUsername("");
sf.setPassword("");
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
@ServiceActivator(inputChannel = "ftpChannelExist")
public MessageHandler handler2() {
FtpOutboundGateway ftpOutboundGateway =
new FtpOutboundGateway(ftpSessionFactory(), "ls");
ftpOutboundGateway.setOptions("-a -1");
FtpSimplePatternFileListFilter filter = new FtpSimplePatternFileListFilter("filename"); // filter on the name
return ftpOutboundGateway;
}
@MessagingGateway
public interface MyGateway {
@Gateway(requestChannel = "ftpChannelExist")
ArrayList<String> fichierExist(String filename);
}
I tried with a header too, but I cannot make anything work...
Thanks.
(Sorry for my English, I'm French.)
See the LS command description in the Reference Manual:
In addition, filename filtering is provided, in the same manner as the inbound-channel-adapter.
The message payload resulting from an ls operation is a list of file names, or a list of FileInfo objects. These objects provide information such as modified time, permissions etc.
The remote directory that the ls command acted on is provided in the file_remoteDirectory header.
What is missing in your configuration is the remote directory to fetch files from. Typically we suggest keeping that directory in the payload, as you do with your fichierExist(String filename), and configuring the third constructor argument of the FtpOutboundGateway:
FtpOutboundGateway ftpOutboundGateway =
new FtpOutboundGateway(ftpSessionFactory(), "ls", "payload");
According to the logic in the FtpOutboundGateway, that expression serves as the source of the remote directory for the LS command. In your case it is going to be the argument of your fichierExist(String filename) gateway.
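For example, with the "payload" expression in place, whatever you pass into the gateway becomes the remote directory to list (the path here is hypothetical):
ArrayList<String> names = gateway.fichierExist("/remote/dir");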
You can indeed use an FtpSimplePatternFileListFilter there, but be sure to specify a proper pattern to filter the remote files.
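For example, a minimal sketch of handler2() with the filter actually wired in (the setFilter() call is what is missing in your configuration above; the "*.txt" pattern is only an illustration):
@Bean
@ServiceActivator(inputChannel = "ftpChannelExist")
public MessageHandler handler2() {
    FtpOutboundGateway ftpOutboundGateway =
            new FtpOutboundGateway(ftpSessionFactory(), "ls", "payload");
    ftpOutboundGateway.setOptions("-a -1");
    // Without setFilter() the filter instance is created but never consulted
    ftpOutboundGateway.setFilter(new FtpSimplePatternFileListFilter("*.txt"));
    return ftpOutboundGateway;
}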
In the end, the names of the remote files in the requested directory, after filtering, are returned in the ArrayList<String> of your gateway. That part is correct.
Otherwise your question isn't clear.
Thanks for your reply.
I have changed my FtpOutboundGateway to add "payload", but I can't use the payload for my FtpSimplePatternFileListFilter.
I've tried:
FtpSimplePatternFileListFilter filter = new FtpSimplePatternFileListFilter("filename");
FtpSimplePatternFileListFilter filter = new FtpSimplePatternFileListFilter("payload");
FtpSimplePatternFileListFilter filter = new FtpSimplePatternFileListFilter("payload.filename");
FtpSimplePatternFileListFilter filter = new FtpSimplePatternFileListFilter("payload['filename']");
I've written an application that downloads files from a Sftp server. What I want to achieve is to download only ZIP files, but to download them only when they've been modified.
I've written a SftpInboundFileSynchronizer and several InboundChannelAdapters. What is weird is that the same file gets downloaded again and again. I know the key is choosing the right filters, but I don't know how to accomplish that.
public static final String SYNCHRONIZER_BEAN_NAME = "synchronizer-bean-name";
@Bean(SYNCHRONIZER_BEAN_NAME)
public SftpInboundFileSynchronizer synchronizer(
SessionFactory<SftpClient.DirEntry> sf,
PropertiesPersistingMetadataStore ms,
AppProps cfg) {
SftpInboundFileSynchronizer sync = new SftpInboundFileSynchronizer(sf);
sync.setDeleteRemoteFiles(false);
sync.setRemoteDirectory(cfg.getFtpRemoteDirectory());
sync.setPreserveTimestamp(true);
// sync.setFilter(); ????
return sync;
}
public static final String GIPUZKOANA_OUT_CHANNEL_NAME = "GIPUZKOANA_OUT_CHANNEL";
public static final String GIPUZKOANA_SYNCHRONIZER_BEAN_NAME = "GIPUZKOANA_FILE_SYNCHRONIZER_BEAN";
@Bean(GIPUZKOANA_SYNCHRONIZER_BEAN_NAME)
@InboundChannelAdapter(channel = GIPUZKOANA_OUT_CHANNEL_NAME)
public MessageSource<File> gipuzkoanaMessageSource(
@Qualifier(SYNCHRONIZER_BEAN_NAME) SftpInboundFileSynchronizer sync,
AppProps cfg) {
SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(sync);
source.setLocalDirectory(cfg.getGtfsLocalDirSyncGtfs());
source.setAutoCreateLocalDirectory(true);
source.setMaxFetchSize(1);
source.setLoggingEnabled(true);
source.setLocalFilter(files -> Lists.newArrayList(files)
.stream()
.filter(f -> f.getName().equalsIgnoreCase(cfg.getGtfsGipuzkoana()))
.collect(Collectors.toList()));
return source;
}
// ...
So far I've tried new SftpPersistentAcceptOnceFileListFilter(ms, "gtfs_"), new SftpSimplePatternFileListFilter("*.zip")... but with no luck.
How can I achieve what I want?
Thanks!
Try to use something like this:
ChainFileListFilter<SftpClient.DirEntry> chainFileListFilter = new ChainFileListFilter<>();
chainFileListFilter.addFilters(new SftpSimplePatternFileListFilter("*.zip"),
        new SftpPersistentAcceptOnceFileListFilter(ms, "gtfs_"));
sync.setFilter(chainFileListFilter);
This way it checks the file extension first and only then checks the file's previous processing state.
Not sure about your "only when they've been modified" requirement, since these filters cannot know about such a state. You can try an adaptation of the LastModifiedFileListFilter for SFTP to be sure the file is old enough to be pulled.
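A minimal sketch of such an adaptation, assuming the MINA SSHD SftpClient attribute accessors (getAttributes().getModifyTime()); it accepts an entry only once it has been unchanged for the given age, and you could add it to the ChainFileListFilter above:
import java.time.Duration;
import java.time.Instant;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.sshd.sftp.client.SftpClient;
import org.springframework.integration.file.filters.FileListFilter;

public class SftpLastModifiedFileListFilter implements FileListFilter<SftpClient.DirEntry> {

    private final Duration age;

    public SftpLastModifiedFileListFilter(Duration age) {
        this.age = age;
    }

    @Override
    public List<SftpClient.DirEntry> filterFiles(SftpClient.DirEntry[] files) {
        Instant cutoff = Instant.now().minus(this.age);
        return Arrays.stream(files)
                // keep only entries that have not been modified for at least `age`
                .filter(f -> f.getAttributes().getModifyTime().toInstant().isBefore(cutoff))
                .collect(Collectors.toList());
    }
}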
My Spring Integration application reads files from a file share and does some processing, including API calls etc. In case something goes wrong in between, I would like to use an afterRollbackExpression to write the file to a failed directory on the file share/FTP/SFTP etc.
I found an example doing the same to a local file directory, as follows:
@Bean
TransactionSynchronizationFactory transactionSynchronizationFactory() {
ExpressionParser parser = new SpelExpressionParser();
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
//afterCommit expression moves the file to a processed directory
syncProcessor.setAfterCommitExpression(parser.parseExpression("payload.renameTo(new java.io.File(#inboundProcessedDirectory.path "
+ " + T(java.io.File).separator + payload.name))"));
//afterRollback expression moves the file to a failed directory
syncProcessor.setAfterRollbackExpression(parser.parseExpression("payload.renameTo(new java.io.File(#inboundFailedDirectory.path "
+ " + T(java.io.File).separator + payload.name))"));
return new DefaultTransactionSynchronizationFactory(syncProcessor);
}
I would like to do the same thing, but write the file to the file share/FTP/SFTP during a rollback scenario, not to a local directory.
I have a MessageHandler which I invoke to write files to the SMB file share using the integration flow. I don't know how to invoke the following MessageHandler as an afterRollback action:
@Bean(name = SMB_FILE_ERROR_MESSAGE_HANDLER)
public MessageHandler smbMessageHandler() {
FileTransferringMessageHandler<SmbFile> handler =
new FileTransferringMessageHandler<>(smbSessionFactory);
handler.setRemoteDirectoryExpression(
new LiteralExpression("/INPUT/ERROR"));
handler.setFileNameGenerator(m ->
m.getHeaders().get(FileHeaders.FILENAME, String.class) + "." + DateTimeFormatter.ofPattern(dateFormat).format(LocalDateTime.now()));
handler.setAutoCreateDirectory(true);
return handler;
}
Just mark that smbMessageHandler bean with @ServiceActivator(inputChannel = "smbStoreChannel"), and this ExpressionEvaluatingTransactionSynchronizationProcessor can have a setAfterRollbackChannel(smbStoreChannel). So, when a rollback happens, the failed message is sent to that channel, where your FileTransferringMessageHandler consumes it and sends it to SMB. Consider also using a TransactionSynchronizationFactoryBean for convenience.
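A minimal sketch of that wiring, assuming a smbStoreChannel bean exists and reusing the handler from the question (verify setAfterRollbackChannel against your Spring Integration version):
@Bean(name = SMB_FILE_ERROR_MESSAGE_HANDLER)
@ServiceActivator(inputChannel = "smbStoreChannel")
public MessageHandler smbMessageHandler() {
    // Same handler as above; the annotation subscribes it to smbStoreChannel
    FileTransferringMessageHandler<SmbFile> handler =
            new FileTransferringMessageHandler<>(smbSessionFactory);
    handler.setRemoteDirectoryExpression(new LiteralExpression("/INPUT/ERROR"));
    handler.setAutoCreateDirectory(true);
    return handler;
}

@Bean
TransactionSynchronizationFactory transactionSynchronizationFactory(MessageChannel smbStoreChannel) {
    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    // On rollback, route the failed message to the SMB handler's channel
    // instead of evaluating a local-file expression
    syncProcessor.setAfterRollbackChannel(smbStoreChannel);
    return new DefaultTransactionSynchronizationFactory(syncProcessor);
}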
I have the following controller method:
@PostMapping(consumes = MediaType.MULTIPART_FORM_DATA_VALUE, path = "/upload")
public Mono<SomeResponse> saveEnhanced(@RequestPart("file") Mono<FilePart> file) {
return documentService.save(file);
}
which calls a service method where I try to use a WebClient to put some data in another application:
public Mono<SomeResponse> save(Mono<FilePart> file) {
MultipartBodyBuilder bodyBuilder = new MultipartBodyBuilder();
bodyBuilder.asyncPart("file", file, FilePart.class);
bodyBuilder.part("identifiers", "some static content");
return WebClient.create("some-url").put()
.uri("/remote-path")
.syncBody(bodyBuilder.build())
.retrieve()
.bodyToMono(SomeResponse.class);
}
but I get the error:
org.springframework.core.codec.CodecException: No suitable writer found for part: file
I tried all variants of the MultipartBodyBuilder (part, asyncPart, with and without headers) and I cannot get it to work.
Am I using it wrong, what am I missing?
Regards,
Alex
I found the solution after getting a reply from one of the contributors in the Spring Framework GitHub issues section.
For this to work:
The asyncPart method is expecting actual content, i.e. file.content(). I'll update it to unwrap the part content automatically.
bodyBuilder.asyncPart("file", file.content(), DataBuffer.class)
.headers(h -> {
h.setContentDispositionFormData("file", file.name());
h.setContentType(file.headers().getContentType());
});
If those two headers are not set, the request will fail on the remote side, saying it cannot find the form part.
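Putting it together, a sketch of the corrected service method (the flatMap unwraps the Mono<FilePart> so that content() and the part headers are available; filename() is used here for the original file name; syncBody matches the WebClient API used above):
public Mono<SomeResponse> save(Mono<FilePart> file) {
    return file.flatMap(fp -> {
        MultipartBodyBuilder bodyBuilder = new MultipartBodyBuilder();
        // asyncPart expects the actual content stream, not the FilePart wrapper
        bodyBuilder.asyncPart("file", fp.content(), DataBuffer.class)
                .headers(h -> {
                    h.setContentDispositionFormData("file", fp.filename());
                    h.setContentType(fp.headers().getContentType());
                });
        bodyBuilder.part("identifiers", "some static content");
        return WebClient.create("some-url").put()
                .uri("/remote-path")
                .syncBody(bodyBuilder.build())
                .retrieve()
                .bodyToMono(SomeResponse.class);
    });
}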
Good luck to anyone needing this!
I'm doing a simple batch job with Spring Batch and Spring Boot.
I need to read a flat file, separate the header data (first line) from the body data (the rest of the lines) for individual business-logic processing, and then write everything into a single file.
As you can see, the header has 5 fields that have to be mapped to one class, and the body has 12 which have to be mapped to a different one.
I first thought of using FlatFileItemReader and skipping the header, then using the skippedLinesCallback to handle that line, but I couldn't figure out how to do it.
I'm new to Spring Batch and Java config. If someone can help me write a solution for my problem, I would really appreciate it!
I leave here the input file:
01.01.2017|SUBDCOBR|12:21:23|01/12/2016|31/12/2016
01.01.2017|12345678231234|0002342434|BORGIA RUBEN|27-32548987-9|FA|A|2062-00010443/444/445|142,12|30/08/2017|142,01
01.01.2017|12345673201234|2342434|ALVAREZ ESTHER|27-32533987-9|FA|A|2062-00010443/444/445|142,12|30/08/2017|142,02
01.01.2017|12345673201234|0002342434|LOPEZ LUCRECIA|27-32553387-9|FA|A|2062-00010443/444/445|142,12|30/08/2017|142,12
01.01.2017|12345672301234|0002342434|SILVA JESUS|27-32558657-9|NC|A|2062-00010443|142,12|30/08/2017|142,12
Cheers!
EDIT 1:
This would be my first attempt. My "body" POJO is called DetalleFacturacion and my "header" POJO is CabeceraFacturacion. I thought of building the reader around the DetalleFacturacion POJO, so I can skip the header and treat it later... however, I'm not sure how to assign the header's data to CabeceraFacturacion.
public FlatFileItemReader<DetalleFacturacion> readerDetalleFacturacion(){
FlatFileItemReader<DetalleFacturacion> reader = new FlatFileItemReader<>();
reader.setLinesToSkip(1);
reader.setResource(new ClassPathResource("/inputFiles/GLEO-MN170100-PROCESO01-SUBDFACT-000001.txt"));
DefaultLineMapper<DetalleFacturacion> detalleLineMapper = new DefaultLineMapper<>();
DelimitedLineTokenizer tokenizerDet = new DelimitedLineTokenizer("|");
tokenizerDet.setNames(new String[] {"fechaEmision", "tipoDocumento", "letra", "nroComprobante",
"nroCliente", "razonSocial", "cuit", "montoNetoGP", "montoNetoG3",
"montoExento", "impuestos", "montoTotal"});
LineCallbackHandler skippedLineCallback = new LineCallbackHandler() {
@Override
public void handleLine(String line) {
String[] headerSeparado = line.split("\\|"); // "|" must be escaped: split() takes a regex
String printDate = headerSeparado[0];
String reportIdentifier = headerSeparado[1];
String tituloReporte = headerSeparado[2];
String fechaDesde = headerSeparado[3];
String fechaHasta = headerSeparado[4];
CabeceraFacturacion cabeceraFacturacion = new CabeceraFacturacion();
cabeceraFacturacion.setPrintDate(printDate);
cabeceraFacturacion.setReportIdentifier(reportIdentifier);
cabeceraFacturacion.setTituloReporte(tituloReporte);
cabeceraFacturacion.setFechaDesde(fechaDesde);
cabeceraFacturacion.setFechaHasta(fechaHasta);
}
};
reader.setSkippedLinesCallback(skippedLineCallback);
detalleLineMapper.setLineTokenizer(tokenizerDet);
detalleLineMapper.setFieldSetMapper(new DetalleFieldSetMapper());
detalleLineMapper.afterPropertiesSet();
reader.setLineMapper(detalleLineMapper);
// Test to check if the data is saved correctly in CabeceraFacturacion
CabeceraFacturacion cabeceraFacturacion = new CabeceraFacturacion();
System.out.println("Print Date: " + cabeceraFacturacion.getPrintDate());
System.out.println("Report Identifier: " + cabeceraFacturacion.getReportIdentifier());
return reader;
}
You are correct. You need to use the skippedLinesCallback to handle the skipped lines.
You need to implement the LineCallbackHandler interface and add your processing in the handleLine method.
LineCallbackHandler receives the raw line content of the lines to be skipped. If linesToSkip is set to 2, it is called twice.
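For instance, a minimal sketch of such a callback as a lambda (CabeceraFacturacion is your header POJO; where you stash the result, e.g. the step ExecutionContext, is up to you):
LineCallbackHandler skippedLinesCallback = line -> {
    String[] header = line.split("\\|"); // escape "|": split() takes a regex
    CabeceraFacturacion cabecera = new CabeceraFacturacion();
    cabecera.setPrintDate(header[0]);
    cabecera.setReportIdentifier(header[1]);
    cabecera.setTituloReporte(header[2]);
    cabecera.setFechaDesde(header[3]);
    cabecera.setFechaHasta(header[4]);
    // expose cabecera here (e.g. via the ExecutionContext) for later processing
};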
This is how you can define the reader:
Java Config - Spring Batch 4
@Bean
public FlatFileItemReader<POJO> myReader() {
    return new FlatFileItemReaderBuilder<POJO>()
            .name("myReader")
            .resource(new FileSystemResource("resources/players.csv"))
            .linesToSkip(1) // the callback below only fires for skipped lines
            .skippedLinesCallback(skippedLinesCallback)
            .delimited()
            .delimiter(",")
            .names("pro1", "pro2", "pro3")
            .targetType(POJO.class)
            .build();
}
I added a SpringContextProcessor to my NiFi flow; it executes as expected and updates the FlowFile content and attributes. But in the data provenance section of NiFi, instead of seeing SEND/RECEIVE, I am seeing:
03/27/2017 11:47:57.164 MDT RECEIVE 42fa1c3f-edde-4cb7-8e73-ce752f7e3d66
03/27/2017 11:47:57.163 MDT DROP 667094a7-8eef-4657-981a-dc9fdc6c4056
03/27/2017 11:47:57.163 MDT SEND 667094a7-8eef-4657-981a-dc9fdc6c4056
It looks like the original message is being dropped and replaced by a new one. I haven't seen this behavior in other components, i.e. they all seem to preserve the original FlowFile UUID. A simplified version of the Spring processor code:
@ServiceActivator(inputChannel = "fromNiFi", outputChannel = "toNiFi")
public Message<byte[]> process1(Message<byte[]> inMessage) {
String inMessagePayload = new String(inMessage.getPayload());
String userId = getUserIdFromDb(inMessagePayload);
String outMessagePayload = inMessagePayload + userId;
return MessageBuilder.withPayload(outMessagePayload.getBytes())
.copyHeaders(inMessage.getHeaders())
.setHeader("userId", userId)
.build();
}
Is there a way to preserve the original Flow File UUID in the outgoing message?
This is probably an oversight on our end, so yes, please do raise a JIRA.
However, as a workaround you can try to extract the FlowFile attributes from the incoming message headers and then propagate them back onto the outgoing message.
public Message<byte[]> process1(Message<byte[]> inMessage) {
    String myHeader = inMessage.getHeaders().get("someHeader", String.class); // Message exposes getHeaders(), not getHeader()
    ...
return MessageBuilder.withPayload(outMessagePayload.getBytes())
.copyHeaders(inMessage.getHeaders())
.setHeader("userId", userId)
.setHeader("someHeader", myHeader)
.build();
}