Why does my FlywayMigrationStrategy call afterMigrate.sql twice?

I work on a Spring Boot project and use Flyway for database migration.
While working in the dev profile I want to fill the database with dummy data.
In order to have the exact same initial data values, I overrode the FlywayMigrationStrategy bean so that it performs a flyway.clean() before the migration starts:
@Bean
@Profile("dev")
public FlywayMigrationStrategy cleanMigrateStrategy() {
    FlywayMigrationStrategy strategy = new FlywayMigrationStrategy() {
        @Override
        public void migrate(Flyway flyway) {
            flyway.clean();
            flyway.migrate();
        }
    };
    return strategy;
}
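For reference, FlywayMigrationStrategy has a single migrate method, so the same bean can also be written as a lambda; this sketch is equivalent to the anonymous class above:
@Bean
@Profile("dev")
public FlywayMigrationStrategy cleanMigrateStrategy() {
    // clean first so every dev start begins from an empty schema, then migrate
    return flyway -> {
        flyway.clean();
        flyway.migrate();
    };
}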
My migration folder contains several versioned migration scripts and ONE afterMigrate callback script which adds data to the created tables.
The problem is that the afterMigrate.sql script gets called twice, as you can see from the following log:
2017-07-03 13:12:42.332 INFO 23222 --- [ main] o.f.core.internal.command.DbClean : Successfully cleaned schema "PUBLIC" (execution time 00:00.031s)
2017-07-03 13:12:42.397 INFO 23222 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 4 migrations (execution time 00:00.044s)
2017-07-03 13:12:42.413 INFO 23222 --- [ main] o.f.c.i.metadatatable.MetaDataTableImpl : Creating Metadata table: "PUBLIC"."schema_version"
2017-07-03 13:12:42.428 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "PUBLIC": << Empty Schema >>
2017-07-03 13:12:42.430 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1 - create users
2017-07-03 13:12:42.449 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 2 - create address
2017-07-03 13:12:42.464 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 3 - create patient case
2017-07-03 13:12:42.475 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 4 - state machine
2017-07-03 13:12:42.498 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Successfully applied 4 migrations to schema "PUBLIC" (execution time 00:00.086s).
2017-07-03 13:12:42.499 INFO 23222 --- [ main] o.f.c.i.c.SqlScriptFlywayCallback : Executing SQL callback: afterMigrate
2017-07-03 13:12:42.502 INFO 23222 --- [ main] o.f.c.i.c.SqlScriptFlywayCallback : Executing SQL callback: afterMigrate
2017-07-03 13:12:42.917 INFO 23222 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
If I remove the flyway.clean() call, the script is executed only once.
Can somebody tell me why it's called twice when I call flyway.clean() and flyway.migrate(), and how to prevent the second call?

It's a known issue which will be fixed in Flyway 5.0. See https://github.com/flyway/flyway/issues/1653

Related

Quickfix/j doesn't attempt to connect to the specified socket

I am using QuickFix/J 2.3.1 (same results with 2.3.0). I have a rather straightforward Spring Boot application where a FIX service is one of the beans. It creates an initiator. Until recently everything worked fine. Suddenly I stumbled into the following issue: quickfix doesn't seem to even attempt to open a connection to the specified host:port. I suspect this might have something to do with my code, but so far I don't have a clue how to figure out what is going on.
Here is the initialisation code (Kotlin):
@PostConstruct
override fun start() {
    logger.info("Using config file {}", config.tradingServiceConfig.quickFixConfigFile)
    val sessionSettings = SessionSettings(config.tradingServiceConfig.quickFixConfigFile)
    val messageStoreFactory = FileStoreFactory(sessionSettings)
    val messageFactory = DefaultMessageFactory()
    initiator = SocketInitiator(
        this,
        messageStoreFactory,
        sessionSettings,
        SLF4JLogFactory(sessionSettings),
        messageFactory
    )
    logger.info("Calling initiator start")
    initiator?.start()
    logger.info("Initiator startup finished")
}
Here is the corresponding piece of log:
2021-12-12 22:20:48.962 INFO 94182 --- [ restartedMain] i.s.trading.gateway.service.FixService : Calling initiator start
2021-12-12 22:20:49.157 INFO 94182 --- [ restartedMain] quickfix.DefaultSessionSchedule : [FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT] daily, 08:00:00-UTC - 08:45:00-UTC
2021-12-12 22:20:49.180 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Session FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT schedule is daily, 08:00:00-UTC - 08:45:00-UTC
2021-12-12 22:20:49.181 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Session state is not current; resetting FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT
2021-12-12 22:20:49.185 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Created session: FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT
2021-12-12 22:20:49.186 INFO 94182 --- [ restartedMain] i.s.t.gateway.service.FixServiceBase : New session started: FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT}
2021-12-12 22:20:49.193 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketTcpNoDelay=true
2021-12-12 22:20:49.194 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketSynchronousWrites=false
2021-12-12 22:20:49.194 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketSynchronousWriteTimeout=30000
2021-12-12 22:20:49.276 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Configured socket addresses for session: [localhost/127.0.0.1:10669]
2021-12-12 22:20:49.277 INFO 94182 --- [ restartedMain] quickfix.SocketInitiator : SessionTimer started
2021-12-12 22:20:49.280 INFO 94182 --- [ restartedMain] i.s.trading.gateway.service.FixService : Initiator startup finished
2021-12-12 22:20:49.280 INFO 94182 --- [ssage Processor] quickfix.SocketInitiator : Started QFJ Message Processor
No other FIX-related messages, including from quickfix, appear in the log. And I can see via netstat that not even an attempt is made to connect to the specified socket. I tried stopping the process in a debugger to see what was going on, but couldn't see anything obvious.
As I said before, this used to work just fine a week or so ago when I last tried it; that's why I'm so puzzled.
Any thoughts on how to debug the issue?
You seem to have configured the initiator to connect to the acceptor on a daily basis, between 08:00:00-UTC and 08:45:00-UTC.
Try widening that time window (e.g. 08:00:00 to 18:00:00) and see if you get connected.
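For reference, a sketch of what the widened schedule could look like in the QuickFIX/J session settings file (the comp IDs, host and port are taken from your log; the remaining values are illustrative assumptions):
[SESSION]
BeginString=FIX.4.2
SenderCompID=XXX_STAGE_UAT
TargetCompID=YYY_XXX_STAGE_UAT
ConnectionType=initiator
SocketConnectHost=localhost
SocketConnectPort=10669
TimeZone=UTC
StartTime=08:00:00
EndTime=18:00:00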
PS: If you're using QuickFIX/J and Spring, have a look at the QuickFixJ Spring Boot starter on GitHub: https://github.com/esanchezros/quickfixj-spring-boot-starter

Azure Spring Cloud AppConfiguration refresh is not working

I'm having problems refreshing anything, be it @ConfigurationProperties or @Value, using the
implementation("com.azure.spring:azure-spring-cloud-appconfiguration-config:2.1.1")
library. From what I could find and debug, the inner AppConfigurationRefresh class is called and the RefreshEvent is created, reacting correctly to changes made in the Azure App Configuration store. The problem is that when the context is updated, the ContextRefresher should also recognize the new values, which is not the case for me.
Spring Cloud ContextRefresher
public synchronized Set<String> refreshEnvironment() {
    Map<String, Object> before = extract(this.context.getEnvironment().getPropertySources());
    updateEnvironment();
    Set<String> keys = changes(before, extract(this.context.getEnvironment().getPropertySources())).keySet();
    this.context.publishEvent(new EnvironmentChangeEvent(this.context, keys));
    return keys;
}
The result of that refresh method is always empty, which means no changes were found.
Logs generated by the refresh event:
2021-11-29 19:53:03.543 INFO [] 34820 --- [ task-2] c.a.s.c.config.AppConfigurationRefresh : Configuration Refresh Event triggered by /myprefix/my.config.value
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.719 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : The following profiles are active: messaging,db,dev
2021-11-29 19:53:53.736 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : Started application in 3.347 seconds (JVM running for 158.594)
2021-11-29 19:54:01.265 INFO [] 34820 --- [ task-2] o.s.c.e.event.RefreshEventListener : Refresh keys changed: []
2021-11-29 19:54:03.553 INFO [] 34820 --- [ scheduling-1] d.l.d.a.s.c.AppConfigurationUpdater : All configurations were refreshed.
bootstrap.yml
spring:
  cloud:
    azure:
      appconfiguration:
        stores:
          - connection-string: ${connection-string}
            selects:
              - key-filter: '/myprefix/'
                label-filter: dev
            monitoring:
              enabled: true
              refresh-interval: 1s
              triggers:
                - label: dev
                  key: /myprefix/my.config.value
I only noticed one thing that could be relevant to this, comparing the log from the start of the application (where everything is loaded properly) with the log at the point of refresh:
2021-11-29 19:51:31.578 INFO [] 34820 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/myprefix/https://my-config-store-stage.azconfig.io/dev'}, BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
It seems that when refreshing, Spring is not able to locate all BootstrapPropertySources, and maybe that's why no changes are found. Am I missing some configuration somewhere to specify these, or does anyone know what the problem is? Thanks
The Problem
The changed values in the Azure App Configuration store are triggering the refresh event (either automatically using the "web" version of the library or through a manual call to AppConfigurationRefresh.refreshConfigurations), and you can see it in the logs like this:
2021-11-29 19:53:03.543 INFO [] 34820 --- [ task-2] c.a.s.c.config.AppConfigurationRefresh : Configuration Refresh Event triggered by /myprefix/my.config.value
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.719 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : The following profiles are active: messaging,db,dev
2021-11-29 19:53:53.736 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : Started application in 3.347 seconds (JVM running for 158.594)
2021-11-29 19:54:01.265 INFO [] 34820 --- [ task-2] o.s.c.e.event.RefreshEventListener : Refresh keys changed: []
However, Spring Boot is unable to locate any changes in the PropertySources, as is evident from:
2021-11-29 19:54:01.265 INFO [] 34820 --- [ task-2] o.s.c.e.event.RefreshEventListener : Refresh keys changed: []
The Research
The deciding factor in finding the issue was indeed the difference between the BootstrapPropertySources located at the start of the application and at the refresh.
2021-11-29 19:51:31.578 INFO [] 34820 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/myprefix/https://my-config-store-stage.azconfig.io/dev'}, BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
The culprit for the undetected changes is indeed the missing BootstrapPropertySource during the update. From my testing it's evident that all configuration properties depend on the name of the PropertySource they came from, and if that source is missing they retain their original old value.
The problem is in the way the appconfiguration library locates/creates the BootstrapPropertySources: it does not differentiate between startup and update.
The following code is from the appconfiguration library; I only took the part that is causing the bug.
public final class AppConfigurationPropertySourceLocator implements PropertySourceLocator {
    ...
    @Override
    public PropertySource<?> locate(Environment environment) {
        ...
        String applicationName = this.properties.getName();
        if (!StringUtils.hasText(applicationName)) {
            applicationName = env.getProperty(SPRING_APP_NAME_PROP);
        }
        ...
    }
    ...
}
The problem here is that env.getProperty(SPRING_APP_NAME_PROP) resolves spring.application.name during startup, because Spring loads all .yml files at once, but the property is not available during the update. Also, the AppConfigurationProperties name (properties.name) is never mentioned in any documentation from Azure, but it is crucial to overcoming this problem.
The Solution
If you are using a custom spring.application.name, also include a name in bootstrap.yml like this:
spring:
  cloud:
    azure:
      appconfiguration:
        name: your-name # any value will work
        stores:
          - connection-string: ${connection-string}
            selects:
              - key-filter: '/myprefix/'
                label-filter: dev
            monitoring:
              enabled: true
              refresh-interval: 1s
              triggers:
                - label: dev
                  key: /myprefix/my.config.value
This makes the library use the configured name at all times and avoids relying on the problematic spring.application.name value.
You should be using "com.azure.spring:azure-spring-cloud-appconfiguration-config-web:2.1.1" if you want to enable auto refresh. Otherwise you have to manually trigger refresh using AzureCloudConfigRefresh's refreshConfiguration. See: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/appconfiguration/azure-spring-cloud-starter-appconfiguration-config#configuration-refresh
As for the ContextRefresher, is it inside of the refresh scope? If not then the values are unable to be changed.
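If the values are consumed through @Value, a minimal sketch of placing the consuming bean in the refresh scope (standard Spring Cloud behavior; the class name is illustrative and the property key is assumed to match your trigger with the key-filter prefix stripped):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@RefreshScope
@Component
public class MyConfigConsumer {

    // rebound after an EnvironmentChangeEvent because the bean lives in the refresh scope;
    // @ConfigurationProperties beans are rebound without needing @RefreshScope
    @Value("${my.config.value}")
    private String configValue;

    public String getConfigValue() {
        return configValue;
    }
}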
Though looking at your logs you don't seem to be picking up the changes. Can you take a look at this demo?
https://github.com/Azure-Samples/azure-spring-boot-samples/tree/main/appconfiguration/azure-appconfiguration-refresh-sample
The instructions to run it are here https://github.com/Azure-Samples/azure-spring-boot-samples/pull/106/files

How do I start bootRun before starting the Cypress tests in Gradle?

I want to start a Spring Boot application using the Gradle bootRun task. Once Spring Boot is up and running, Gradle should run the Cypress API tests.
I have the following build.gradle
plugins { id 'base' }

apply plugin: 'groovy'
apply plugin: 'java-gradle-plugin'
apply from: "$rootDir/gradle/integration-test.gradle"
apply from: "$rootDir/gradle/functional-test.gradle"
apply from: "$rootDir/buildSrc/build.gradle"

repositories {
    jcenter()
}

dependencies {
    localGroovy()
    testCompile('org.codehaus.groovy:groovy-all:2.5.7')
    testCompile('org.spockframework:spock-core:1.3-groovy-2.5')
    testImplementation gradleTestKit()
}

allprojects {
    task printInfo {
        doLast {
            println "This is ${project.name}"
        }
    }
}
task systemtestDevEnv(type: Exec) {
    workingDir 'frontend'
    commandLine 'npm test'
    commandLine 'npm start'
    workingDir 'functionalsystemtest'
    commandLine 'npm run cypress:run'
}
systemtestDevEnv.dependsOn 'backend:runWebServer'

task functionalapitest(type: Exec) {
    workingDir 'funcionalapitest'
    commandLine 'npm run cypress:run'
}
functionalapitest.dependsOn 'backend:runWebServer'
The project directory structure is:
JavaProject
-- fuctionalsystemtest
-- functionalapitest
-- backend
-- frontend
-- buildSrc
When I execute gradle functionalapitest, bootRun is executed, but execution never proceeds to the next steps:
workingDir 'funcionalapitest'
commandLine 'npm run cypress:run'
How should I specify the task functionalapitest so that, once the Spring Boot process is up and running, the Cypress tests are executed?
Task :backend:bootRun
[Spring Boot ASCII-art banner]
:: Spring Boot :: (v2.1.6.RELEASE)
2019-07-25 13:48:20.457 INFO 59589 --- [ main] c.s.r.R.RestfulWebServiceApplication : Starting RestfulWebServiceApplication on Steins-MacBook-Air.local with PID 59589 (/Users/steinkorsveien/Development/TestWorkSpace/JavaProject/backend/build/classes/java/main started by steinkorsveien in /Users/steinkorsveien/Development/TestWorkSpace/JavaProject/backend)
2019-07-25 13:48:20.469 INFO 59589 --- [ main] c.s.r.R.RestfulWebServiceApplication : No active profile set, falling back to default profiles: default
2019-07-25 13:48:23.388 INFO 59589 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-07-25 13:48:23.466 INFO 59589 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-07-25 13:48:23.467 INFO 59589 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.21]
2019-07-25 13:48:23.798 INFO 59589 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-07-25 13:48:23.799 INFO 59589 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3188 ms
2019-07-25 13:48:24.356 INFO 59589 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-07-25 13:48:24.977 INFO 59589 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-07-25 13:48:24.989 INFO 59589 --- [ main] c.s.r.R.RestfulWebServiceApplication : Started RestfulWebServiceApplication in 6.301 seconds (JVM running for 7.844)
<=======------> 60% EXECUTING [20s]

Retrieve all files that match a filter once

I'm trying to get the count of files matched by my filter from my streaming inbound FTP adapter, so that after I process all the files I can launch a remote shell. Or is there any other way to know that the adapter has finished sending messages?
I already tried a CompositeFileListFilter, overriding the public List<F> filterFiles(F[] files) method, but it never gets called.
For now I'm using a fixed file count, but it should be dynamic.
I overrode this method on the CompositeFileListFilter:
@Override
public List<F> filterFiles(F[] files) {
    log.info("received {} files", files.length);
    return super.filterFiles(files);
}
I have the following integration flow, using an atomic counter that counts up to 3 (the expected number of files):
AtomicInteger messageCounter = new AtomicInteger(0);

return IntegrationFlows.from(Ftp.inboundStreamingAdapter(goldv5template())
            .remoteDirectory("/inputFolder")
            .filter(new CompositeFileListFilterWithCount<>() {{
                addFilter(new FtpSimplePatternFileListFilter("pattern1.*"));
                addFilter(new FtpSimplePatternFileListFilter("pattern2.*"));
                addFilter(new FtpSimplePatternFileListFilter("pattern3.*"));
            }})
        , pollerConfiguration)
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE))
        .handle(message -> {
            int numericValue = messageCounter.incrementAndGet();
            log.info("numeric value: {}", numericValue);
            if (numericValue == 3) {
                messageCounter.set(0);
                log.info("launch remote shell here now");
            }
        }, e -> e.advice(after()))
        .get();
If I don't use the counter, I get a remote shell call for every file, and I only need it to be called once, when the flow has finished. It is scheduled with a cron job, so I want to call it only once at the end.
I'm using a 1 s fixed delay for testing, but in reality it will only run three times a day, and I have to fetch all the files at each scheduled run.
This is my pollerConfiguration for testing:
sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerFactory -> pollerFactory.fixedRate(1000L))
UPDATE
I tried what Artem suggested, but I'm seeing weird behavior. I'm trying to fetch all the files in a certain FTP folder in one poll, so, reading the docs:
if the max-messages-per-poll is set to 1 (the default), it processes only one file at a time with intervals as defined by your trigger, essentially working as “one-poll === one-file”.
For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files you can for each poll and only then wait for the next poll. If that is the case, set max-messages-per-poll to -1. Then, on each poll, the adapter tries to generate as many messages as it possibly can...
So I have set max-messages-per-poll to -1, so that every poll gives me every file.
I added a filter to take only .xml files and, to prevent duplicates, an AcceptOnceFileListFilter, but the FTP streaming adapter keeps giving me the same files over and over, which doesn't make sense. For this test I used a fixed delay of 10 s.
2019-07-23 10:32:04.308 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process2 file sample1.xml
2019-07-23 10:32:04.312 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.315 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.326 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.330 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.333 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
2019-07-23 10:32:04.337 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample4.xml
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.341 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample1.xml
2019-07-23 10:32:04.345 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.347 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.353 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.357 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.358 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
...............................
return IntegrationFlows
        .from(Ftp.inboundStreamingAdapter(testFlowTemplate())
                    .remoteDirectory("/inputTestFlow")
                    .filter(new CompositeFileListFilter<>() {{
                        addFilter(new AcceptOnceFileListFilter<>());
                        addFilter(new FtpSimplePatternFileListFilter("*.xml"));
                    }})
            , sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerConfiguration.maxMessagesPerPoll(-1)))
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> {
            execution.setStartDate(new Date());
            return "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE);
        })
        .handle(Ftp.outboundAdapter(FTPServers.PC_LOCAL.getFactory(), FileExistsMode.REPLACE)
                    .useTemporaryFileName(false)
                    .fileNameExpression("headers['" + FileHeaders.REMOTE_FILE + "']")
                    .remoteDirectory("/output/")
            , e -> e.advice(testFlowAfter())
        )
        .get();
Update 2
I achieved what I needed by creating this custom filter:
.filter(new FileListFilter<>() {
    private final Set<String> seenSet = new HashSet<>();
    private Date lastExecution;

    @Override
    public List<FTPFile> filterFiles(FTPFile[] files) {
        return Arrays.stream(files).filter(ftpFile -> {
            if (lastExecution != null && TimeUnit.MILLISECONDS.toSeconds(new Date().getTime() - lastExecution.getTime()) >= 10L) {
                this.seenSet.clear();
            }
            lastExecution = new Date();
            if (ftpFile.getName().endsWith(".xml")) {
                return this.seenSet.add(ftpFile.getRawListing());
            }
            return false;
        }).collect(Collectors.toList());
    }
})
But I used a hand-made 10-second interval, which is okay for my needs. Is there any smarter way to make this code better, depending on the trigger?
I think a cron trigger is not the right solution here, since you really would like to have a single process for all the fetched files.
I think your logic in filterFiles() is wrong. You really want to set the counter to the number of files it is going to process, not the original amount:
@Override
public List<F> filterFiles(F[] files) {
    List<F> filteredFiles = super.filterFiles(files);
    log.info("received {} files", filteredFiles.size());
    return filteredFiles;
}
And here you can indeed set a value into that messageCounter.
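For illustration, a hedged sketch of that change (assuming the messageCounter field from the question; the handler would then count down per message instead of up):
@Override
public List<F> filterFiles(F[] files) {
    List<F> filteredFiles = super.filterFiles(files);
    log.info("received {} files", filteredFiles.size());
    // remember how many messages this poll will emit; the handler can decrement
    // per message and launch the remote shell when the counter reaches zero
    messageCounter.set(filteredFiles.size());
    return filteredFiles;
}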
UPDATE
There is this functionality on filter:
/**
 * Indicates that this filter supports filtering a single file.
 * Filters that return true <b>must</b> override {@link #accept(Object)}.
 * Default false.
 * @return true to allow external calls to {@link #accept(Object)}.
 * @since 5.2
 * @see #accept(Object)
 */
default boolean supportsSingleFileFiltering() {
    return false;
}
I think when you override it to return an explicit false in your CompositeFileListFilterWithCount, you should be good. Otherwise you are indeed right: only a plain accept() is called for each file by default, because all of your FtpSimplePatternFileListFilter instances return true by default and together they make the composite report true as well.
Nevertheless, all of that tells us that you are already using Spring Integration 5.2 :-)...
UPDATE 2
Try a ChainFileListFilter instead. Place an AcceptOnceFileListFilter at the end of the chain. Although it might be better to use an FtpPersistentAcceptOnceFileListFilter instead: it takes the file's last-modified timestamp into account. Also consider including in the chain some LastModifiedFileListFilter variant for the FTPFile; you have something similar in your custom filter, but it should be a separate filter.
Not sure, though, what you mean about making it based on the trigger. There is simply no relationship between the filter and the trigger. You may, of course, have some common interval property and feed it into the last-modified filter value.
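A minimal sketch of such a chain, using Spring Integration's ChainFileListFilter, FtpSimplePatternFileListFilter and FtpPersistentAcceptOnceFileListFilter (the in-memory metadata store and the "ftp-" key prefix are illustrative choices):
import org.apache.commons.net.ftp.FTPFile;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

// pattern filter first, then a persistent accept-once filter so files already seen
// (tracked by name and last-modified in the metadata store) are skipped on later polls
ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
chain.addFilter(new FtpSimplePatternFileListFilter("*.xml"));
chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftp-"));
// then pass it to the adapter with .filter(chain)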
By the way: this story has drifted far from the original all-at-once request. An inbound channel adapter is really about one file per message, so you definitely can't have a list of files in one message, as is possible with the FtpOutboundGateway and its LS or MGET commands, as I mentioned in the comments below.
Regarding "how can I achieve either all files in one message or all the messages together?", you can try the "max-messages-per-poll" property. It means:
"The maximum number of messages that will be produced for each poll. Defaults to
infinity (indicated by -1) for polling consumers, and 1 for polled inbound channel adapters."

Spring @Scheduled fixedDelay is not working as expected

I have 2 jobs running asynchronously: one triggers every minute and another runs with a fixed delay.
@Scheduled(fixedDelay = 30000)
public void runJob() {
    try {
        JobParameters jobParameters = new JobParametersBuilder().addLong("time", System.currentTimeMillis()).toJobParameters();
        JobExecution execution = jobLauncher.run(job, jobParameters);
        LOGGER.info(execution.getExitStatus());
    } catch (Exception e) {
        try {
            throw new SystemException("Scheduler ERROR :: Error occurred during Job run " + e);
        } catch (SystemException e1) {
            LOGGER.error("Scheduler ERROR :: Error occurred during Job run " + e);
        }
    }
}

@Scheduled(cron = "0 0/1 * * * ?")
public void runJob2() {
    try {
        JobParameters jobParameters = new JobParametersBuilder().addLong("time", System.currentTimeMillis()).toJobParameters();
        JobExecution execution = jobLauncher.run(job2, jobParameters);
        LOGGER.info(execution.getExitStatus());
    } catch (Exception e) {
        try {
            throw new SystemException("ERROR:: Exception occurred " + e);
        } catch (SystemException e1) {
            LOGGER.error("ERROR:: JOB Launching exception happened " + e);
        }
    }
}
fixedDelay is documented as "the duration between the end of last execution and the start of next execution is fixed", but for me it is triggering with a fixed delay between the starts of the last and next executions.
2018-05-11 **12:48:00.016** INFO 2112 --- [ taskExecutor-5] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=systemStartJob]] launched with the following parameters: [{time=1526023080016}]
2018-05-11 12:48:00.016 INFO 2112 --- [ taskExecutor-5] org.sapient.t1automation.SystemListener : Intercepting system Job Execution - Before Job!
2018-05-11 12:48:00.017 INFO 2112 --- [ taskExecutor-5] o.s.batch.core.job.SimpleStepHandler : Executing step: [systemStartStep]
.
.
.
.
2018-05-11 **12:48:24.721** INFO 2112 --- [ taskExecutor-6] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=sendMailJob]] launched with the following parameters: [{time=1526023104706}]
2018-05-11 12:48:24.737 INFO 2112 --- [ taskExecutor-6] org.sapient.t1automation.MailListener : Intercepting Job Excution - Before Job!
2018-05-11 12:48:24.737 INFO 2112 --- [ taskExecutor-6] o.s.batch.core.job.SimpleStepHandler : Executing step: [sendMailStep1]
.
.
.
2018-05-11 12:48:44.533 INFO 2112 --- [ taskExecutor-6] org.sapient.t1automation.MailListener : Intercepting Job Excution - After Job!
2018-05-11 12:48:44.533 INFO 2112 --- [ taskExecutor-6] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=sendMailJob]] completed with the following parameters: [{time=1526023104706}] and the following status: [COMPLETED]
2018-05-11 12:48:45.001 INFO 2112 --- [ taskExecutor-3] o.s.t.service.mail.MailReader : Mail:: Mails to process. 1
2018-05-11 12:48:45.017 INFO 2112 --- [ taskExecutor-3] org.sapient.t1automation.MailListener : Intercepting Job Excution - After Job!
2018-05-11 12:48:45.017 INFO 2112 --- [ taskExecutor-3] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=sendMailJob]] completed with the following parameters: [{time=1526023044672}] and the following status: [COMPLETED]
Here the time between the starts of two executions is 30 s, instead of between the end of the last and the start of the next execution.
Check the execution time of your code. It may be that your code executes within a second, hence you can't see the time difference.
Sample example:
@Scheduled(fixedDelay = 3000)
private void test() {
    System.out.println("test -> " + new Date());
    try {
        Thread.sleep(2000);
    } catch (Exception e) {
        System.out.println("error");
    }
}
Output
test -> Fri May 11 13:45:35 IST 2018
test -> Fri May 11 13:45:40 IST 2018
test -> Fri May 11 13:45:45 IST 2018
test -> Fri May 11 13:45:51 IST 2018
test -> Fri May 11 13:45:56 IST 2018
Here you can see that the difference between consecutive prints is about 5 seconds (2 s of work plus the 3 s fixed delay) instead of 3 seconds.
For debugging, you can add logs at the start and end of your code, and use Thread.sleep() to simulate the work.
