Spring Integration: get poll expression from database (FTP)

I have an FTP message source and I want to enable the user to configure the poll frequency through an application.
This is the current configuration of the inbound channel adapter:
@Bean
@InboundChannelAdapter(channel = "fromSmartpath", poller = @Poller(cron = "0 15 8 ? * MON,TUE,WED,THU,FRI,SAT"))
public MessageSource<File> sftpMessageSource() throws SftpException {
    SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(
            sftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(Constants.LOCAL_REPOSITORY_PATH));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new FileSystemPersistentAcceptOnceFileListFilter(metaDataStore(), "metastore"));
    return source;
}
My goal is to retrieve the cron expression from the database. Is there a way to achieve this?
Thank you

The cron expression ends up in a CronTrigger. You can develop a service which SELECTs the expression from the DB in its afterPropertiesSet() and returns it via a getter.
Then you declare a @Bean for the CronTrigger and call that getter on the service during the bean definition.
The @Poller on the @InboundChannelAdapter has a trigger option to refer to that existing bean.
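A minimal sketch of that idea, assuming a SCHEDULE table with JOB_NAME/CRON_EXPRESSION columns and JdbcTemplate wiring (all of these names are illustrative, not part of any API):
@Service
public class CronService implements InitializingBean {

    private final JdbcTemplate jdbcTemplate;

    private String cronExpression;

    public CronService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void afterPropertiesSet() {
        // Hypothetical table/column names; adjust to your schema
        this.cronExpression = this.jdbcTemplate.queryForObject(
                "SELECT CRON_EXPRESSION FROM SCHEDULE WHERE JOB_NAME = 'sftpPoll'", String.class);
    }

    public String getCronExpression() {
        return this.cronExpression;
    }
}

@Bean
public CronTrigger sftpPollTrigger(CronService cronService) {
    return new CronTrigger(cronService.getCronExpression());
}
The adapter then becomes @InboundChannelAdapter(channel = "fromSmartpath", poller = @Poller(trigger = "sftpPollTrigger")), with no hard-coded cron attribute.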

Related

Spring Integration with AWS S3 (Retry Strategy)

I am creating a simple integration service with AWS S3. I am facing some difficulties when an exception occurs.
My requirement is to poll an S3 bucket periodically and to apply some transformation whenever a file is newly placed into the bucket. The below code snippet works fine, but when an exception occurs it continues to retry again and again. I do not want that to happen. Can someone help me here?
The IntegrationFlow is defined as below:
@Configuration
public class S3Routes {

    @Bean
    public IntegrationFlow downloadFlow(MessageSource<InputStream> s3InboundStreamingMessageSource) {
        return IntegrationFlows.from(s3InboundStreamingMessageSource)
                .channel("s3Channel")
                .handle("QueryServiceImpl", "processFile")
                .get();
    }
}
The configuration class is as below:
@Service
public class S3AppConfiguration {

    @Bean
    @InboundChannelAdapter(value = "s3Channel")
    public MessageSource<InputStream> s3InboundStreamingMessageSource(S3RemoteFileTemplate template) {
        S3StreamingMessageSource messageSource = new S3StreamingMessageSource(template);
        messageSource.setRemoteDirectory("my-bucket-name");
        messageSource.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
                "streaming"));
        return messageSource;
    }

    @Bean
    public PollableChannel s3Channel() {
        return new QueueChannel();
    }

    @Bean
    public S3RemoteFileTemplate template(AmazonS3 amazonS3) {
        return new S3RemoteFileTemplate(new S3SessionFactory(amazonS3));
    }

    @Bean(name = "amazonS3")
    public AmazonS3 nonProdAmazonS3(BasicAWSCredentials basicAWSCredentials) {
        ClientConfiguration config = new ClientConfiguration();
        config.setProxyHost("localhost");
        config.setProxyPort(3128);
        return AmazonS3ClientBuilder.standard().withRegion(Regions.fromName("ap-southeast-1"))
                .withCredentials(new AWSStaticCredentialsProvider(basicAWSCredentials))
                .withClientConfiguration(config)
                .build();
    }

    @Bean
    public BasicAWSCredentials basicAWSCredentials() {
        return new BasicAWSCredentials("access_key", "secret_key");
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata nonProdPoller() {
        return Pollers.cron("* */2 * * * *")
                .get();
    }
}
The AcceptOnceFileListFilter that I have used here helps me to prevent handling the same file on continuous retries. But I do not want to use the AcceptOnceFileListFilter, because when a file is not processed on the 1st attempt, I wish to retry on the next poll (usually it happens every 1 hour in the Prod region). I tried to use the filter.remove() method whenever the processing fails (in case of any exception), but it again results in continuous retries.
I am not sure how to disable the continuous retries on failure. Where should I configure it?
I took a look at Spring Integration (Retry Strategy). Same scenario, but a different integration. I am not sure how to set this up for my IntegrationFlow. Can someone help here? Thanks in advance.
That story is different: it talks about a listener container for AMQP. You use a source polling channel adapter, so the approach might be different.
You create two source polling channel adapters: one via that @InboundChannelAdapter, another via IntegrationFlows.from(s3InboundStreamingMessageSource). Both of them produce data to the same channel. Not sure if that is really intentional.
It is not clear what that retry is in your case unless you really do that manual filter.remove() call. In this case it is indeed going to retry, but it is a single, uncontrolled retry. It is going to retry again only if you call that filter.remove() again. So, if you do everything yourself, why the question?
Consider using a RequestHandlerRetryAdvice configured for that handle() instead: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#message-handler-advice-chain. This way you are really going to pull the remote file only once, and the retry is going to be managed by the Spring Retry API.
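A hedged sketch of wiring that advice into the flow above; the retry settings (three attempts, default back-off, no recoverer) are illustrative assumptions:
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    // Retry the handler at most 3 times, then let the exception propagate
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

@Bean
public IntegrationFlow downloadFlow(MessageSource<InputStream> s3InboundStreamingMessageSource) {
    return IntegrationFlows.from(s3InboundStreamingMessageSource)
            .channel("s3Channel")
            // The advice applies only to this handler, not to the whole poll cycle
            .handle("QueryServiceImpl", "processFile",
                    e -> e.advice(retryAdvice()))
            .get();
}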
UPDATE
So, after some cron expression study I realized that yours is wrong:
* */2 * * * * - means every second of every even minute
It must be like this:
0 */2 * * * * - at the beginning of every even minute
Perhaps something similar is going on with your hourly cron expression in prod...
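A quick way to verify both expressions (a sketch assuming Spring Framework 5.3+, where org.springframework.scheduling.support.CronExpression is available):
LocalDateTime now = LocalDateTime.now();

// Fires every second within even minutes
LocalDateTime next = CronExpression.parse("* */2 * * * *").next(now);

// Fires only at second 0 of even minutes
LocalDateTime nextTop = CronExpression.parse("0 */2 * * * *").next(now);

System.out.println(next);
System.out.println(nextTop);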

Kafka stream does not retry on deserialisation error

Spring cloud Kafka stream does not retry upon deserialization error even after specific configuration. The expectation is that it should retry based on the configured retry policy and, at the end, push the failed message to the DLQ.
Configuration is as below:
spring.cloud.stream.bindings.input_topic.consumer.maxAttempts=7
spring.cloud.stream.bindings.input_topic.consumer.backOffInitialInterval=500
spring.cloud.stream.bindings.input_topic.consumer.backOffMultiplier=10.0
spring.cloud.stream.bindings.input_topic.consumer.backOffMaxInterval=100000
spring.cloud.stream.bindings.input_topic.consumer.defaultRetryable=true
public interface MyStreams {

    String INPUT_TOPIC = "input_topic";
    String INPUT_TOPIC2 = "input_topic2";
    String ERROR = "apperror";
    String OUTPUT = "output";

    @Input(INPUT_TOPIC)
    KStream<String, InObject> inboundTopic();

    @Input(INPUT_TOPIC2)
    KStream<Object, InObject> inboundTOPIC2();

    @Output(OUTPUT)
    KStream<Object, outObject> outbound();

    @Output(ERROR)
    MessageChannel outboundError();
}

@StreamListener(MyStreams.INPUT_TOPIC)
@SendTo(MyStreams.OUTPUT)
public KStream<Key, outObject> processSwft(KStream<Key, InObject> myStream) {
    return myStream.mapValues(this::transform);
}
The metadataRetryOperations in KafkaTopicProvisioner.java is always null and hence it creates a new RetryTemplate in the afterPropertiesSet().
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties, KafkaProperties kafkaProperties) {
    Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
    this.adminClientProperties = kafkaProperties.buildAdminProperties();
    this.configurationProperties = kafkaBinderConfigurationProperties;
    this.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
}

public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
    this.metadataRetryOperations = metadataRetryOperations;
}

public void afterPropertiesSet() throws Exception {
    if (this.metadataRetryOperations == null) {
        RetryTemplate retryTemplate = new RetryTemplate();

        SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
        simpleRetryPolicy.setMaxAttempts(10);
        retryTemplate.setRetryPolicy(simpleRetryPolicy);

        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(100L);
        backOffPolicy.setMultiplier(2.0D);
        backOffPolicy.setMaxInterval(1000L);
        retryTemplate.setBackOffPolicy(backOffPolicy);
        this.metadataRetryOperations = retryTemplate;
    }
}
The retry configuration only works with MessageChannel-based binders. With the KStream binder, Spring just helps with building the topology in a prescribed way; it is not involved with the message flow once the topology is built.
The next version of spring-kafka (used by the binder) has added the RecoveringDeserializationExceptionHandler (commit here); while it can't help with retry, it can be used with a DeadLetterPublishingRecoverer to send the record to a dead-letter topic.
You can use a RetryTemplate within your processors/transformers to retry specific operations.
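For example, a sketch of retrying just the transformation step with a RetryTemplate (the three-attempt policy is an illustrative assumption; note that deserialization errors happen before the record reaches this code, so they are not covered):
@StreamListener(MyStreams.INPUT_TOPIC)
@SendTo(MyStreams.OUTPUT)
public KStream<Key, outObject> processSwft(KStream<Key, InObject> myStream) {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
    // Each transform call is retried up to 3 times before the exception propagates
    return myStream.mapValues(value -> retryTemplate.execute(context -> transform(value)));
}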
Spring cloud Kafka stream does not retry upon deserialization error even after specific configuration.
The behavior you are seeing matches the default settings of Kafka Streams when it encounters a deserialization error.
From https://docs.confluent.io/current/streams/faq.html#handling-corrupted-records-and-deserialization-errors-poison-pill-records:
LogAndFailExceptionHandler implements DeserializationExceptionHandler and is the default setting in Kafka Streams. It handles any encountered deserialization exceptions by logging the error and throwing a fatal error to stop your Streams application. If your application is configured to use LogAndFailExceptionHandler, then an instance of your application will fail-fast when it encounters a corrupted record by terminating itself.
I am not familiar with Spring's facade for Kafka Streams, but you probably need to configure the desired org.apache.kafka.streams.errors.DeserializationExceptionHandler, instead of configuring retries (they are meant for a different purpose). Or, you may want to implement your own, custom handler (see link above for more information), and then configure Spring/KStreams to use it.
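For instance, a sketch of switching to the log-and-continue handler via plain Kafka Streams properties (how these properties are supplied through Spring Cloud Stream depends on your binder version, so treat this as the underlying idea rather than the exact wiring):
Properties props = new Properties();
// LogAndFailExceptionHandler is the default; this skips corrupted records instead of failing fast
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndContinueExceptionHandler.class);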

Spring Jdbc inbound channel adapter

I'm trying to write a Spring program which polls the DB and selects records to read. I see examples for XML, but I would like to know how to do it in Java config. Can someone show me an example?
You need a JdbcPollingChannelAdapter @Bean definition, marked with @InboundChannelAdapter:
@Bean
@InboundChannelAdapter(value = "fooChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<?> storedProc(DataSource dataSource) {
    return new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM foo where status = 0");
}
http://docs.spring.io/spring-integration/docs/4.3.11.RELEASE/reference/html/overview.html#programming-tips
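If rows should not be re-selected on the next poll, the adapter also supports an update statement that runs against the rows just read. A sketch, under the assumption that foo has id and status columns:
@Bean
@InboundChannelAdapter(value = "fooChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<?> jdbcSource(DataSource dataSource) {
    JdbcPollingChannelAdapter adapter =
            new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM foo WHERE status = 0");
    // Mark the selected rows as processed; :id binds to the id column of the result set
    adapter.setUpdateSql("UPDATE foo SET status = 1 WHERE id IN (:id)");
    return adapter;
}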

Send and receive files from FTP in Spring Boot

I'm new to the Spring Framework and, indeed, I'm learning and using Spring Boot. Recently, in the app I'm developing, I made the Quartz Scheduler work, and now I want to make Spring Integration work there: an FTP connection to a server to write and read files from.
What I want is really simple (I've been able to do it in a previous Java application). I've got two Quartz jobs scheduled to fire at different times daily: one of them reads a file from an FTP server and the other writes a file to an FTP server.
I'll detail what I've developed so far.
@SpringBootApplication
@ImportResource("classpath:ws-config.xml")
@EnableIntegration
@EnableScheduling
public class MyApp extends SpringBootServletInitializer {

    @Autowired
    private Configuration configuration;

    //...

    @Bean
    public DefaultFtpsSessionFactory myFtpsSessionFactory() {
        DefaultFtpsSessionFactory sess = new DefaultFtpsSessionFactory();
        Ftp ftp = configuration.getFtp();
        sess.setHost(ftp.getServer());
        sess.setPort(ftp.getPort());
        sess.setUsername(ftp.getUsername());
        sess.setPassword(ftp.getPassword());
        return sess;
    }
}
I've named the following class FtpGateway:
@Component
public class FtpGateway {

    @Autowired
    private DefaultFtpsSessionFactory sess;

    public void sendFile() {
        // todo
    }

    public void readFile() {
        // todo
    }
}
I'm reading this documentation to learn to do so. Spring Integration's FTP support seems to be event driven, so I don't know how I can execute either sendFile() or readFile() from my jobs when the trigger is fired at an exact time.
The documentation tells me something about using an Inbound Channel Adapter (to read files from an FTP?), an Outbound Channel Adapter (to write files to an FTP?) and an Outbound Gateway (to do what?):
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.
So, it isn't clear to me how to proceed.
Please, could anybody give me a hint?
Thank you!
EDIT:
Thank you @M. Deinum. First, I'll try a simple task: read a file from the FTP; the poller will run every 5 seconds. This is what I've added:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
    FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(myFtpsSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/Entrada");
    fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.csv"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(configuracion.getDirFicherosDescargados()));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            Object payload = message.getPayload();
            if (payload instanceof File) {
                File f = (File) payload;
                System.out.println(f.getName());
            }
            else {
                System.out.println(message.getPayload());
            }
        }
    };
}
Then, when the app is running, I put a new CSV file into the "Entrada" remote folder, but the handler() method isn't run after 5 seconds... Am I doing something wrong?
Please add @Scheduled(fixedDelay = 5000) over your poller method.
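For the sending half of the original question, a hedged sketch of an FTP outbound channel adapter; the toFtpChannel channel name and the /Salida directory are assumptions:
@Bean
@ServiceActivator(inputChannel = "toFtpChannel")
public MessageHandler ftpOutboundHandler(DefaultFtpsSessionFactory myFtpsSessionFactory) {
    // Writes the message payload (a File, byte[] or String) to the remote directory
    FtpMessageHandler handler = new FtpMessageHandler(myFtpsSessionFactory);
    handler.setRemoteDirectoryExpression(new LiteralExpression("/Salida"));
    return handler;
}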
You should use Spring Batch with a tasklet. It is far easier to configure beans, cron times and input sources with the existing interfaces provided by Spring.
https://www.baeldung.com/introduction-to-spring-batch
The example above covers both annotation- and XML-based configuration; you can use either.
Another benefit: you can make use of listeners and parallel steps. This framework can be used in a Reader - Processor - Writer manner as well.
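A minimal tasklet sketch of that suggestion (the step name and the ftpGateway.readFile() call are illustrative; assumes the Spring Batch 4 StepBuilderFactory):
@Bean
public Step ftpReadStep(StepBuilderFactory steps, FtpGateway ftpGateway) {
    return steps.get("ftpReadStep")
            // A tasklet runs once per step execution, which suits a scheduled FTP read
            .tasklet((contribution, chunkContext) -> {
                ftpGateway.readFile();
                return RepeatStatus.FINISHED;
            })
            .build();
}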

Spring Cloud Stream + Quartz

I am planning to use Spring Cloud Stream for my project. I see that there's a built-in Trigger source application starter. What I want to do is use the Quartz job scheduler as the source app, to allow dynamic job schedules from the application. Is there a good sample to achieve this?
I found this: spring integration + cron + quartz in cluster?. That solution talks about getting a reference to the inbound channel adapter. I am using annotations to define the inbound channel adapter. How do I get a reference to this object so that I can do the start/stop mentioned in the solution?
This is how I define the inbound channel adapter:
@Bean
@InboundChannelAdapter(autoStartup = "false", value = SourceChannel.CHANNEL_NAME, poller = @Poller(trigger = "fireOnceTrigger"))
public MessageSource<String> timerMessageSource() {
    return new MessageSource<String>() {

        public Message<String> receive() {
            System.out.println("******************");
            System.out.println("At the Source");
            System.out.println("******************");
            String value = "{\"value\":\"hi\"}";
            System.out.println("Sending value: " + value);
            return MessageBuilder.withPayload(value).setHeader(MessageHeaders.CONTENT_TYPE, "application/json").build();
        }
    };
}
The related issue on GitHub: https://github.com/spring-projects/spring-integration-java-dsl/issues/138
The algorithm to build a bean name for the automatically created endpoints is:
The bean names are generated with this algorithm:
- The MessageHandler (MessageSource) @Bean gets its own standard name from the method name or the name attribute on the @Bean. This works as if there were no messaging annotation on the @Bean method.
- The AbstractEndpoint bean name is generated with the pattern: [configurationComponentName].[methodName].[decapitalizedAnnotationClassShortName]. For example, the endpoint (SourcePollingChannelAdapter) for the consoleSource() definition above gets a bean name like: myFlowConfiguration.consoleSource.inboundChannelAdapter.
See Reference Manual for more information.
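So, with the timerMessageSource() shown above defined in, say, a QuartzSourceConfiguration class (the class name here is an assumption), the generated adapter bean can be injected and controlled from the Quartz job; a sketch:
@Autowired
@Qualifier("quartzSourceConfiguration.timerMessageSource.inboundChannelAdapter")
private SourcePollingChannelAdapter timerAdapter;

// Called from the Quartz job: start the adapter so the fireOnceTrigger can run
public void startFromQuartz() {
    this.timerAdapter.start();
}

public void stopAdapter() {
    this.timerAdapter.stop();
}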
