FileSystemPersistentAcceptOnceFileListFilter not working - spring

I am using FileSystemPersistentAcceptOnceFileListFilter to achieve parallelism when picking up files from a common shared directory. I want two instances of the service to pick up different files at the same time. However, files are only ever picked up by one instance, never by the second.
Using Spring Boot 2.
#Bean("filterFiles")
public FileListFilter<File> filterFiles() {
/**
* with accept once file list filter if the file is processed
* it will not be processed again . This will require service restart
* this done for concurrency since we dont want same file to processed again by different threads
*
*/
ChainFileListFilter<File> filters = new ChainFileListFilter<File>();
filters.addFilter(new FileSystemPersistentAcceptOnceFileListFilter(redisMetaDataStore(), "rci-files"));
filters.addFilter(new SimplePatternFileListFilter("*.xml"));
}
@Bean
ConcurrentMetadataStore redisMetaDataStore() {
    return new RedisMetadataStore(dmsRedisConnectionFactory(), "rci");
}

@Bean
public JedisClientConfigurationBuilder jedisClientConfigurationBuilder() {
    JedisPoolConfig jpc = new JedisPoolConfig();
    jpc.setMaxIdle(redismaxIdle);
    jpc.setMaxTotal(redisMaxTotal);
    JedisClientConfigurationBuilder jccb = JedisClientConfiguration.builder();
    jccb.connectTimeout(Duration.ofSeconds(redisTimeout));
    jccb.readTimeout(Duration.ofSeconds(redisTimeout));
    jccb.usePooling().poolConfig(jpc);
    jccb.useSsl();
    return jccb;
}

@Bean
public RedisConnectionFactory dmsRedisConnectionFactory() {
    RedisStandaloneConfiguration standaloneConfig = new RedisStandaloneConfiguration(dmsRedisHost, dmsRedisPort);
    standaloneConfig.setPassword(RedisPassword.of(dmsRedisPassword));
    JedisConnectionFactory factory = new JedisConnectionFactory(standaloneConfig, jedisClientConfigurationBuilder().build());
    return factory;
}
and the XML:
<int-file:inbound-channel-adapter directory="${rnr.file.directory}" auto-startup="true"
filter="filterFiles" channel="filesIn">
<integration:poller
cron="*/10 * 0-17,18-23 * * ?"
task-executor="largeFileTaskExecutor"
max-messages-per-poll="${max-messages}"/>
</int-file:inbound-channel-adapter>
<integration:service-activator input-channel="filesIn" output-channel="toArchive"
ref="processSingleLargeFile" method="process"></integration:service-activator>

Related

Spring Integration | TCP Connections dropping after idle wait of 350 seconds

We have a Java Spring Integration application running on AWS (multiple pods within a Kubernetes cluster). We use TCP outbound gateways to communicate with third-party systems and cache these connections using a CachingClientConnectionFactory. On the factory we have set soKeepAlive to true, however we still see that the connection is dropped after 350 seconds of idle time. Do we need anything else in the configuration to keep pinging the server a little before 350 seconds of idle waiting time? AWS talks about the 350-second restriction here -
https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-troubleshooting.html#nat-gateway-troubleshooting-timeout
Configuration of our connection factory and gateway is as follows
@Bean
public AbstractClientConnectionFactory primeClientConnectionFactory() {
    TcpNetClientConnectionFactory tcpNetClientConnectionFactory = new TcpNetClientConnectionFactory(host, port);
    tcpNetClientConnectionFactory.setDeserializer(new PrimeCustomStxHeaderLengthSerializer());
    tcpNetClientConnectionFactory.setSerializer(new PrimeCustomStxHeaderLengthSerializer());
    tcpNetClientConnectionFactory.setSingleUse(false);
    tcpNetClientConnectionFactory.setSoKeepAlive(true);
    return tcpNetClientConnectionFactory;
}

@Bean
public AbstractClientConnectionFactory primeTcpCachedClientConnectionFactory() {
    CachingClientConnectionFactory cachingConnFactory = new CachingClientConnectionFactory(primeClientConnectionFactory(), connectionPoolSize);
    //cachingConnFactory.setSingleUse(false);
    cachingConnFactory.setLeaveOpen(true);
    cachingConnFactory.setSoKeepAlive(true);
    return cachingConnFactory;
}

@Bean
public MessageChannel primeOutboundChannel() {
    return new DirectChannel();
}
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice retryAdvice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    FixedBackOffPolicy fixedBackOffPolicy = new FixedBackOffPolicy();
    fixedBackOffPolicy.setBackOffPeriod(500);
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(3);
    retryTemplate.setBackOffPolicy(fixedBackOffPolicy);
    retryTemplate.setRetryPolicy(retryPolicy);
    retryAdvice.setRetryTemplate(retryTemplate);
    return retryAdvice;
}

@Bean
@ServiceActivator(inputChannel = "primeOutboundChannel")
public MessageHandler primeOutbound(AbstractClientConnectionFactory primeTcpCachedClientConnectionFactory) {
    TcpOutboundGateway tcpOutboundGateway = new TcpOutboundGateway();
    List<Advice> list = new ArrayList<>();
    list.add(retryAdvice());
    tcpOutboundGateway.setAdviceChain(list);
    tcpOutboundGateway.setRemoteTimeout(timeOut);
    tcpOutboundGateway.setRequestTimeout(timeOut);
    tcpOutboundGateway.setSendTimeout(timeOut);
    tcpOutboundGateway.setConnectionFactory(primeTcpCachedClientConnectionFactory);
    return tcpOutboundGateway;
}
See this SO thread for more about Keep Alive: Does a TCP socket connection have a "keep alive"?.
According to the current Java Net API, we have this class:
/**
 * Defines extended socket options, beyond those defined in
 * {@link java.net.StandardSocketOptions}. These options may be platform
 * specific.
 *
 * @since 1.8
 */
public final class ExtendedSocketOptions {
Which provides this constant:
/**
 * Keep-Alive idle time.
 *
 * <p>
 * The value of this socket option is an {@code Integer} that is the number
 * of seconds of idle time before keep-alive initiates a probe. The socket
 * option is specific to stream-oriented sockets using the TCP/IP protocol.
 * The exact semantics of this socket option are system dependent.
 *
 * <p>
 * When the {@link java.net.StandardSocketOptions#SO_KEEPALIVE
 * SO_KEEPALIVE} option is enabled, TCP probes a connection that has been
 * idle for some amount of time. The default value for this idle period is
 * system dependent, but is typically 2 hours. The {@code TCP_KEEPIDLE}
 * option can be used to affect this value for a given socket.
 *
 * @since 11
 */
public static final SocketOption<Integer> TCP_KEEPIDLE
        = new ExtSocketOption<Integer>("TCP_KEEPIDLE", Integer.class);
So, what we need on the TcpNetClientConnectionFactory is this:
public void setTcpSocketSupport(TcpSocketSupport tcpSocketSupport)
Implement its void postProcessSocket(Socket socket) method to be able to do this:
try {
    socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 349);
}
catch (IOException ex) {
    throw new UncheckedIOException(ex);
}
According to that AWS doc you have shared with us.
See also some info in Spring Integration docs: https://docs.spring.io/spring-integration/docs/current/reference/html/ip.html#the-tcpsocketsupport-strategy-interface
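Putting it together, a minimal sketch (assuming Java 11+ and a platform where jdk.net.ExtendedSocketOptions.TCP_KEEPIDLE is supported; the bean shown is the primeClientConnectionFactory from the question, trimmed to the keep-alive parts):
@Bean
public AbstractClientConnectionFactory primeClientConnectionFactory() {
    TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory(host, port);
    factory.setSoKeepAlive(true);
    factory.setTcpSocketSupport(new DefaultTcpSocketSupport() {

        @Override
        public void postProcessSocket(Socket socket) {
            try {
                // start keep-alive probes before the 350-second NAT gateway idle timeout
                socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 349);
            }
            catch (IOException ex) {
                throw new UncheckedIOException(ex);
            }
        }

    });
    return factory;
}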

Accessing file from SFTP without downloading it to local using Spring Integration

I currently have the following configuration:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    if (null != sftpPrivateKey) {
        factory.setPrivateKey(sftpPrivateKey);
        factory.setPrivateKeyPassphrase(sftpPrivateKeyPassphrase);
    } else {
        factory.setPassword(sftpPassword);
    }
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    // fileSynchronizer.setDeleteRemoteFiles(true);
    fileSynchronizer.setRemoteDirectory(sftpRemoteDirectory);
    fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter(sftpRemoteDirectoryFilter));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "fromSftpChannel", poller = @Poller(cron = "0/5 * * * * *"))
public MessageSource<File> sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(
            sftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(sftpLocalDirectory));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<>());
    return source;
}

@Bean
@ServiceActivator(inputChannel = "fromSftpChannel")
public MessageHandler resultFileHandler() {
    return message -> System.err.println(message.getPayload());
}
This downloads everything from the remote directory to a local directory. But I have a REST controller and I would like to stream back a byte array of the file from the SFTP server instead of downloading it to the local machine. Is this possible in Spring Integration/Boot? Do you have some code examples, please?
Since you say that you have a REST controller to make requests for SFTP files, I would recommend looking into an SftpOutboundGateway, which is indeed designed for requests and replies. See its Command.GET and Option.STREAM:
/**
 * (-stream) Streaming 'get' (returns InputStream); user must call {@link Session#close()}.
 */
STREAM("-stream"),
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#using-the-get-command
Not sure what led you to SftpInboundFileSynchronizingMessageSource for your request-reply task...
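For illustration, a minimal sketch of such a streaming gateway, reusing the sftpSessionFactory() bean from your question; the channel names ("sftpGetChannel", "sftpStreamChannel") are made up:
@Bean
@ServiceActivator(inputChannel = "sftpGetChannel")
public MessageHandler sftpGetGateway() {
    // "payload" is a SpEL expression: the request message payload carries the remote file path
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpSessionFactory(), "get", "payload");
    gateway.setOption(AbstractRemoteFileOutboundGateway.Option.STREAM);
    gateway.setOutputChannelName("sftpStreamChannel");
    return gateway;
}
The reply payload is an InputStream you can copy into the REST response; the underlying Session is carried in the closeable-resource message header and must be closed when you are done with the stream.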
You can use the FtpClient and call retrieveFileStream to read a file from the remote sftp server. See https://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/ftp/FTPClient.html#retrieveFileStream-java.lang.String-
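A short sketch of that Commons Net call, for reference; note that FTPClient speaks FTP/FTPS rather than SFTP, so this only applies if the server also exposes plain FTP. Host, port, credentials and path below are placeholders:
FTPClient ftp = new FTPClient();
ftp.connect("ftp.example.com", 21);
ftp.login("user", "password");
ftp.enterLocalPassiveMode();
byte[] content;
try (InputStream in = ftp.retrieveFileStream("/remote/dir/file.xml")) {
    content = in.readAllBytes();   // bytes to stream back from the REST controller
}
ftp.completePendingCommand();      // must be called after the stream has been consumed
ftp.logout();
ftp.disconnect();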
I think you can achieve this by using RemoteFileTemplate
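A minimal sketch of the RemoteFileTemplate idea, again reusing the sftpSessionFactory() bean from the question; the remote path is a placeholder:
SftpRemoteFileTemplate template = new SftpRemoteFileTemplate(sftpSessionFactory());
byte[] content = template.execute(session -> {
    // copy the remote file straight into memory, no local file involved
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    session.read("/remote/dir/file.xml", out);
    return out.toByteArray();
});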

Create multiple beans of SftpInboundFileSynchronizingMessageSource dynamically with InboundChannelAdapter

I am using the Spring inbound channel adapter to poll files from an SFTP server. The application needs to poll multiple directories on a single SFTP server. Since the inbound channel adapter does not allow polling multiple directories, I tried creating multiple beans of the same type with different values. Since the number of directories can increase in the future, I want to control it from application properties and register the beans dynamically.
My code:
@Override
public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
    beanFactory.registerSingleton("sftpSessionFactory", sftpSessionFactory(host, port, user, password));
    beanFactory.registerSingleton("sftpInboundFileSynchronizer",
            sftpInboundFileSynchronizer((SessionFactory) beanFactory.getBean("sftpSessionFactory")));
}

public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory(String host, String port, String user, String password) {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(host);
    factory.setPort(Integer.parseInt(port));
    factory.setUser(user);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
private SftpInboundFileSynchronizer sftpInboundFileSynchronizer(SessionFactory sessionFactory) {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sessionFactory);
    fileSynchronizer.setDeleteRemoteFiles(true);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/mydir/subdir");
    fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter("*.pdf"));
    return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "2000"))
public MessageSource<File> sftpMessageSource(String s) {
    SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(
            (AbstractInboundFileSynchronizer<ChannelSftp.LsEntry>) applicationContext.getBean("sftpInboundFileSynchronizer"));
    source.setLocalDirectory(new File("/dir/subdir"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<>());
    source.setMaxFetchSize(Integer.parseInt(maxFetchSize));
    source.setAutoCreateLocalDirectory(true);
    return source;
}

@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler() {
    return message -> {
        LOGGER.info("Payload - {}", message.getPayload());
    };
}
This code works fine. But if I create sftpMessageSource dynamically, the @InboundChannelAdapter annotation won't work. Please suggest a way to dynamically create the sftpMessageSource and handler beans and add the respective annotations.
Update:
The following code worked:
@PostConstruct
void init() {
    int index = 0;
    for (String directory : directories) {
        index++;
        int finalI = index;
        IntegrationFlow flow = IntegrationFlows
                .from(Sftp.inboundAdapter(sftpSessionFactory())
                                .preserveTimestamp(true)
                                .remoteDirectory(directory)
                                .autoCreateLocalDirectory(true)
                                .localDirectory(new File("/" + directory))
                                .localFilter(new AcceptOnceFileListFilter<>())
                                .maxFetchSize(10)
                                .filter(new SftpSimplePatternFileListFilter("*.pdf"))
                                .deleteRemoteFiles(true),
                        e -> e.id("sftpInboundAdapter" + finalI)
                                .autoStartup(true)
                                .poller(Pollers.fixedDelay(2000)))
                .handle(handler())
                .get();
        this.flowContext.registration(flow).register();
    }
}
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(host);
    factory.setPort(Integer.parseInt(port));
    factory.setUser(user);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
Annotations in Java are static. You can't add them at runtime to already created objects. Plus, the framework reads those annotations on application context startup. So what you are looking for is simply not possible with Java as a language.
You need to consider switching to the Java DSL in Spring Integration to be able to use its "dynamic flows": https://docs.spring.io/spring-integration/docs/5.3.1.RELEASE/reference/html/dsl.html#java-dsl-runtime-flows.
But, please, first of all study more about what Java can and cannot do.

Spring Integration - Dynamic MailReceiver configuration

I'm pretty new to Spring Integration; anyway, I'm using it to receive emails and process them.
I used this Spring configuration class:
@Configuration
@EnableIntegration
@PropertySource(value = { "classpath:configuration.properties" }, encoding = "UTF-8", ignoreResourceNotFound = false)
public class MailReceiverConfiguration {

    private static final Log logger = LogFactory.getLog(MailReceiverConfiguration.class);

    @Autowired
    private EmailTransformerService emailTransformerService;

    // AE configuration
    @Bean
    public MessageChannel inboundChannelAE() {
        return new DirectChannel();
    }

    @Bean(name = {"aeProps"})
    public Properties aeProps() {
        Properties javaMailPropertiesAE = new Properties();
        javaMailPropertiesAE.put("mail.store.protocol", "imap");
        javaMailPropertiesAE.put("mail.debug", Boolean.TRUE);
        javaMailPropertiesAE.put("mail.auth.debug", Boolean.TRUE);
        javaMailPropertiesAE.put("mail.smtp.socketFactory.fallback", "false");
        javaMailPropertiesAE.put("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        return javaMailPropertiesAE;
    }

    @Bean(name = "mailReceiverAE")
    public MailReceiver mailReceiverAE(@Autowired MailConfigurationBean mcb, @Autowired @Qualifier("aeProps") Properties javaMailPropertiesAE) throws Exception {
        return ConfigurationUtil.getMailReceiver("imap://USERNAME:PASSWORD@MAILSERVER:PORT/INBOX", new BigDecimal(2), javaMailPropertiesAE);
    }
    @Bean
    @InboundChannelAdapter(autoStartup = "true",
            channel = "inboundChannelAE",
            poller = { @Poller(fixedRate = "${fixed.rate.ae}",
                    maxMessagesPerPoll = "${max.messages.per.poll.ae}") })
    public MailReceivingMessageSource pollForEmailAE(@Autowired MailReceiver mailReceiverAE) {
        MailReceivingMessageSource mrms = new MailReceivingMessageSource(mailReceiverAE);
        return mrms;
    }

    @Transformer(inputChannel = "inboundChannelAE", outputChannel = "transformerChannelAE")
    public MessageBean transformitAE(MimeMessage mailMessage) throws Exception {
        // email inbox administrator
        MessageBean messageBean = emailTransformerService.transformit(mailMessage);
        return messageBean;
    }

    @Splitter(inputChannel = "transformerChannelAE", outputChannel = "nullChannel")
    public List<Message<?>> splitIntoMessagesAE(final MessageBean mb) {
        final List<Message<?>> messages = new ArrayList<Message<?>>();
        for (EmailFragment emailFragment : mb.getEmailFragments()) {
            Message<?> message = MessageBuilder.withPayload(emailFragment.getData())
                    .setHeader(FileHeaders.FILENAME, emailFragment.getFilename())
                    .setHeader("directory", emailFragment.getDirectory()).build();
            messages.add(message);
        }
        return messages;
    }
}
So far so good: I start my microservice, the component listens on the specified mail server, and mails are downloaded.
Now I have this requirement: the mail server configuration (I mean the string "imap://USERNAME:PASSWORD@MAILSERVER:PORT/INBOX") must be taken from a database and must be configurable. At any time a system administrator can change it, and the mail receiver must use the new configuration.
As far as I understood, I should create a new instance of MailReceiver when a new configuration is present and use it in the InboundChannelAdapter.
Is there any best practice for doing this? I found this solution: ImapMailReceiver NO STORE attempt on READ-ONLY folder (Failure) [THROTTLED].
In that solution I can inject the ThreadPoolTaskScheduler if I define it in my configuration class; I can also inject the DirectChannel, but every time I would have to create a new MailReceiver and a new ImapIdleChannelAdapter, not to mention this WARN message I get when the ImapIdleChannelAdapter starts:
java.lang.RuntimeException: No beanfactory
    at org.springframework.integration.expression.ExpressionUtils.createStandardEvaluationContext(ExpressionUtils.java:79)
    at org.springframework.integration.mail.AbstractMailReceiver.onInit(AbstractMailReceiver.java:403)
Is there a better way to satisfy my scenario?
Thank you
Angelo
The best way to do this is to use the Java DSL and dynamic flow registration.
Documentation here.
That way, you can unregister the old flow and register a new one, each time the configuration changes.
It will automatically handle injecting dependencies such as the bean factory.
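As a rough sketch of that idea (not the poster's actual code): rebuild the flow from the value loaded from the database and re-register it through IntegrationFlowContext. The flow id, poller interval and method name below are illustrative; the flow hands off to the existing "inboundChannelAE" channel:
@Autowired
private IntegrationFlowContext flowContext;

public void refreshMailFlow(String imapUrl) {
    if (this.flowContext.getRegistrationById("mailFlowAE") != null) {
        this.flowContext.remove("mailFlowAE");             // drop the old flow, if any
    }
    IntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(imapUrl)
                            .javaMailProperties(p -> p.put("mail.store.protocol", "imap")),
                    e -> e.poller(Pollers.fixedRate(5000)))
            .channel("inboundChannelAE")                    // feed the existing transformer chain
            .get();
    this.flowContext.registration(flow).id("mailFlowAE").register();
}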

Spring JMS - activemq - individualDLQ not used

I'm trying to set up Spring JMS for ActiveMQ, and I'd like individual DLQs for easier monitoring rather than everything being lumped onto one DLQ.
However, my bean for this doesn't seem to be picked up. Could anyone point out what I'm doing wrong, as the documentation is pretty vague on how to do this programmatically?
My Queue config:
@Bean
public MessageConverter jacksonJmsMessageConverter() {
    MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    converter.setTargetType(MessageType.TEXT);
    converter.setTypeIdPropertyName("_type");
    return converter;
}

@Bean
public DeadLetterStrategy deadLetterStrategy() {
    IndividualDeadLetterStrategy deadLetterStrategy = new IndividualDeadLetterStrategy();
    deadLetterStrategy.setQueueSuffix(".dlq");
    return deadLetterStrategy;
}

@Bean
public RedeliveryPolicy redeliveryPolicy() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setInitialRedeliveryDelay(5000);
    redeliveryPolicy.setBackOffMultiplier(2);
    redeliveryPolicy.setUseExponentialBackOff(true);
    redeliveryPolicy.setMaximumRedeliveries(5);
    return redeliveryPolicy;
}

@Bean
public Queue myQueue() {
    ActiveMQQueue queue = new ActiveMQQueue("myQueue");
    return queue;
}
You can apply the IndividualDeadLetterStrategy using configuration something like this:
@Bean
DeadLetterStrategy deadLetterStrategy() {
    // messages of each queue go to their own dead letter queue: if the original queue is 'x', its DLQ is 'x' + suffix
    IndividualDeadLetterStrategy dlq = new IndividualDeadLetterStrategy();
    dlq.setQueueSuffix(".dlq");
    dlq.setUseQueueForQueueMessages(true);
    return dlq;
}

@Bean
public BrokerService brokerService(@Autowired DeadLetterStrategy strategy) throws Exception {
    BrokerService broker = new BrokerService();
    TransportConnector connector = new TransportConnector();
    connector.setUri(new URI("your broker url")); // default/embedded broker url: vm://localhost?broker.persistent=true
    broker.addConnector(connector);
    PolicyEntry entry = new PolicyEntry();
    entry.setDestination(new ActiveMQQueue("*")); // the DeadLetterStrategy is applied to all queues; ',' can also be used
    entry.setDeadLetterStrategy(strategy);
    PolicyMap map = new PolicyMap();
    map.setPolicyEntries(Arrays.asList(entry));
    broker.setDestinationPolicy(map);
    return broker;
}
And finally your DLQ listener should look like this:
@JmsListener(destination = "main_queue_name" + ".dlq")
protected void processFailedItem(YourCustomPojo data) {
    // do whatever you want
}
