Azure Spring Cloud AppConfiguration refresh is not working - spring-boot

I'm having problems refreshing anything, be it @ConfigurationProperties or @Value, using the
implementation("com.azure.spring:azure-spring-cloud-appconfiguration-config:2.1.1")
library. From what I could find while debugging, the inner AppConfigurationRefresh class is called and the RefreshEvent is created, reacting correctly to changes made in the Azure App Configuration store. The problem is that when the context is updated, the ContextRefresher should also recognize the new values, which is not the case for me.
Spring Cloud ContextRefresher:
public synchronized Set<String> refreshEnvironment() {
    Map<String, Object> before = extract(this.context.getEnvironment().getPropertySources());
    updateEnvironment();
    Set<String> keys = changes(before, extract(this.context.getEnvironment().getPropertySources())).keySet();
    this.context.publishEvent(new EnvironmentChangeEvent(this.context, keys));
    return keys;
}
The result of that refresh method is always empty, which means no changes were found.
Logs generated by the refresh event:
2021-11-29 19:53:03.543 INFO [] 34820 --- [ task-2] c.a.s.c.config.AppConfigurationRefresh : Configuration Refresh Event triggered by /myprefix/my.config.value
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.719 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : The following profiles are active: messaging,db,dev
2021-11-29 19:53:53.736 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : Started application in 3.347 seconds (JVM running for 158.594)
2021-11-29 19:54:01.265 INFO [] 34820 --- [ task-2] o.s.c.e.event.RefreshEventListener : Refresh keys changed: []
2021-11-29 19:54:03.553 INFO [] 34820 --- [ scheduling-1] d.l.d.a.s.c.AppConfigurationUpdater : All configurations were refreshed.
bootstrap.yml
spring:
  cloud:
    azure:
      appconfiguration:
        stores:
          - connection-string: ${connection-string}
            selects:
              - key-filter: '/myprefix/'
                label-filter: dev
            monitoring:
              enabled: true
              refresh-interval: 1s
              triggers:
                - label: dev
                  key: /myprefix/my.config.value
I noticed only one thing that could be relevant, comparing the log from the start of the application (where everything is loaded properly) with the log at the point of refresh:
2021-11-29 19:51:31.578 INFO [] 34820 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/myprefix/https://my-config-store-stage.azconfig.io/dev'}, BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
It seems that when refreshing, Spring is not able to locate all BootstrapPropertySources, and maybe that's why no changes are found. Am I missing some configuration somewhere to specify these, or does anyone know what the problem is here? Thanks.

The Problem
The changed values in the Azure App Configuration store are triggering the refresh event (either automatically, using the "web" version of the library, or through a manual call to AppConfigurationRefresh.refreshConfigurations), and you can see it in the logs like this:
2021-11-29 19:53:03.543 INFO [] 34820 --- [ task-2] c.a.s.c.config.AppConfigurationRefresh : Configuration Refresh Event triggered by /myprefix/my.config.value
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.719 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : The following profiles are active: messaging,db,dev
2021-11-29 19:53:53.736 INFO [] 34820 --- [ task-2] o.s.boot.SpringApplication : Started application in 3.347 seconds (JVM running for 158.594)
2021-11-29 19:54:01.265 INFO [] 34820 --- [ task-2] o.s.c.e.event.RefreshEventListener : Refresh keys changed: []
However, Spring is unable to locate any changes in the PropertySources, as is evident from:
2021-11-29 19:54:01.265 INFO [] 34820 --- [ task-2] o.s.c.e.event.RefreshEventListener : Refresh keys changed: []
The Research
The deciding factor in finding the issue was indeed the difference between the BootstrapPropertySources located at application startup and at refresh.
2021-11-29 19:51:31.578 INFO [] 34820 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/myprefix/https://my-config-store-stage.azconfig.io/dev'}, BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
2021-11-29 19:53:53.694 INFO [] 34820 --- [ task-2] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-/application/https://my-config-store-stage.azconfig.io/dev'}]
The culprit for the undetected changes is indeed the BootstrapPropertySource that is missing at update time. From my testing it is evident that all configuration properties are tied to the name of the PropertySource they came from; if that source is missing, they retain their old values.
The problem is in the way the appconfiguration library locates/creates the BootstrapPropertySources: it does not differentiate between startup and update.
The following code is from the appconfiguration library; I kept only the part that causes the bug.
public final class AppConfigurationPropertySourceLocator implements PropertySourceLocator {
    ...
    @Override
    public PropertySource<?> locate(Environment environment) {
        ...
        String applicationName = this.properties.getName();
        if (!StringUtils.hasText(applicationName)) {
            applicationName = env.getProperty(SPRING_APP_NAME_PROP);
        }
        ...
    }
    ...
}
The problem here is that env.getProperty(SPRING_APP_NAME_PROP) resolves to the spring.application.name value during startup, because Spring loads all .yml files at once, but it is not available during an update. Also, the AppConfigurationProperties name (properties.getName()) is never mentioned in any Azure documentation, but it is crucial to overcoming this problem.
The Solution
If you are using a custom spring.application.name, also include a name in bootstrap.yml, like this:
spring:
  cloud:
    azure:
      appconfiguration:
        name: your-name # any value will work
        stores:
          - connection-string: ${connection-string}
            selects:
              - key-filter: '/myprefix/'
                label-filter: dev
            monitoring:
              enabled: true
              refresh-interval: 1s
              triggers:
                - label: dev
                  key: /myprefix/my.config.value
This makes the library use the properties' name value at all times and avoids the problematic spring.application.name lookup.

You should be using "com.azure.spring:azure-spring-cloud-appconfiguration-config-web:2.1.1" if you want to enable automatic refresh. Otherwise you have to trigger the refresh manually using AzureCloudConfigRefresh's refreshConfigurations. See: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/appconfiguration/azure-spring-cloud-starter-appconfiguration-config#configuration-refresh
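For illustration, a minimal sketch of the manual variant (the scheduler class name and interval are made up; AppConfigurationRefresh and refreshConfigurations() come from the question's own text and logs, and @EnableScheduling is assumed on some configuration class):

import com.azure.spring.cloud.config.AppConfigurationRefresh;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical sketch: without the -web starter, poll for changes yourself.
// The library checks the configured triggers and publishes a RefreshEvent
// when a watched key has changed.
@Component
public class AppConfigurationRefreshScheduler {

    private final AppConfigurationRefresh appConfigurationRefresh;

    public AppConfigurationRefreshScheduler(AppConfigurationRefresh appConfigurationRefresh) {
        this.appConfigurationRefresh = appConfigurationRefresh;
    }

    @Scheduled(fixedRate = 30000) // example interval, not prescriptive
    public void refresh() {
        appConfigurationRefresh.refreshConfigurations();
    }
}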
As for the ContextRefresher, is it inside of the refresh scope? If not, the values cannot be changed.
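As a hedged illustration (the bean below is made up; the property key mirrors your trigger), a bean typically needs @RefreshScope for its injected values to be rebound after a refresh:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// Hypothetical example: with @RefreshScope the bean is re-created after a
// successful refresh, so the @Value field is re-resolved against the
// updated Environment.
@Component
@RefreshScope
public class MyConfigValueHolder {

    @Value("${my.config.value}")
    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }
}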
Though, looking at your logs, you don't seem to be picking up the changes. Can you take a look at this demo?
https://github.com/Azure-Samples/azure-spring-boot-samples/tree/main/appconfiguration/azure-appconfiguration-refresh-sample
The instructions to run it are here https://github.com/Azure-Samples/azure-spring-boot-samples/pull/106/files

Related

Debezium - Oracle Connector - Service Not Starting

DebeziumEngine is looking for a Kafka topic even though I have not specified KafkaOffsetBackingStore for offset.storage.
Reference: DebeziumEngine Config
Config
Configuration config = Configuration.create()
        .with("name", "oracle_debezium_connector")
        .with("connector.class", "io.debezium.connector.oracle.OracleConnector")
        .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
        .with("offset.storage.file.filename", "/Users/dk/Documents/work/ACET/offset.dat")
        .with("offset.flush.interval.ms", 2000)
        .with("database.hostname", "localhost")
        .with("database.port", "1521")
        .with("database.user", "pravin")
        .with("database.password", "*****")
        .with("database.sid", "ORCLCDB")
        .with("database.server.name", "mServer")
        .with("database.out.server.name", "dbzxout")
        .with("database.history", "io.debezium.relational.history.FileDatabaseHistory")
        .with("database.history.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
        .with("topic.prefix", "cycowner")
        .with("database.dbname", "ORCLCDB")
        .build();
DebeziumEngine
DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
        .using(config.asProperties())
        .using(connectorCallback)
        .using(completionCallback)
        .notifying(record -> {
            System.out.println(record);
        })
        .build();
Error:
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.topic' value is invalid: A value is required
2022-10-29T16:06:16,457 ERROR [pool-2-thread-1] i.d.c.Configuration: The 'schema.history.internal.kafka.bootstrap.servers' value is invalid: A value is required
2022-10-29T16:06:16,458 INFO [pool-2-thread-1] i.d.c.c.BaseSourceTask: Stopping down connector
2022-10-29T16:06:16,463 INFO [pool-3-thread-1] i.d.j.JdbcConnection: Connection gracefully closed
2022-10-29T16:06:16,465 INFO [pool-2-thread-1] o.a.k.c.s.FileOffsetBackingStore: Stopped FileOffsetBackingStore
connector stopped successfully
---------------------------------------------------
success status: false, message : Unable to initialize and start connector's task class 'io.debezium.connector.oracle.OracleConnectorTask' with config: {connector.class=io.debezium.connector.oracle.OracleConnector, database.history.file.filename=/Users/dkuma416/Documents/work/ACET/dbhistory.dat, database.user=pravin, database.dbname=ORCLCDB, offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore, database.server.name=mServer, offset.flush.timeout.ms=5000, errors.retry.delay.max.ms=10000, database.port=1521, database.sid=ORCLCDB, offset.flush.interval.ms=2000, topic.prefix=cycowner, offset.storage.file.filename=/Users/dkuma416/Documents/work/ACET/offset.dat, errors.max.retries=-1, database.hostname=localhost, database.password=********, name=oracle_debezium_connector, database.out.server.name=dbzxout, errors.retry.delay.initial.ms=300, value.converter=org.apache.kafka.connect.json.JsonConverter, key.converter=org.apache.kafka.connect.json.JsonConverter, database.history=io.debezium.relational.history.MemoryDatabaseHistory}, Error: Error configuring an instance of KafkaSchemaHistory; check the logs for details
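Note that the error complains about schema.history.internal.* while the config sets the legacy database.history.* keys. A hedged guess: the Debezium version in use is 2.x, where those properties were renamed and the file-based history moved to the separate debezium-storage-file module. A sketch of the renamed settings, under that assumption (the path mirrors the question):

import io.debezium.config.Configuration;

// Hedged sketch: in Debezium 2.x, database.history.* became
// schema.history.internal.*, and FileDatabaseHistory was replaced by
// FileSchemaHistory from the debezium-storage-file module.
Configuration withFileSchemaHistory = config.edit()
        .with("schema.history.internal", "io.debezium.storage.file.history.FileSchemaHistory")
        .with("schema.history.internal.file.filename", "/Users/dk/Documents/work/ACET/dbhistory.dat")
        .build();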

Quickfix/j doesn't attempt to connect to the specified socket

I am using QuickFIX/J 2.3.1 (same results with 2.3.0). I have a rather straightforward Spring Boot application where a FIX service is one of the beans. It creates an initiator. Until recently everything worked fine. Suddenly I stumbled into the following issue: QuickFIX/J doesn't seem to even attempt to open a connection to the specified host:port. I do suspect this may be something in my code, but so far I don't have a clue how to figure out what is going on.
Here is the initialisation code (Kotlin):
@PostConstruct
override fun start() {
    logger.info("Using config file {}", config.tradingServiceConfig.quickFixConfigFile)
    val sessionSettings = SessionSettings(config.tradingServiceConfig.quickFixConfigFile)
    val messageStoreFactory = FileStoreFactory(sessionSettings)
    val messageFactory = DefaultMessageFactory()
    initiator = SocketInitiator(
        this,
        messageStoreFactory,
        sessionSettings,
        SLF4JLogFactory(sessionSettings),
        messageFactory
    )
    logger.info("Calling initiator start")
    initiator?.start()
    logger.info("Initiator startup finished")
}
Here is the corresponding piece of log:
2021-12-12 22:20:48.962 INFO 94182 --- [ restartedMain] i.s.trading.gateway.service.FixService : Calling initiator start
2021-12-12 22:20:49.157 INFO 94182 --- [ restartedMain] quickfix.DefaultSessionSchedule : [FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT] daily, 08:00:00-UTC - 08:45:00-UTC
2021-12-12 22:20:49.180 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Session FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT schedule is daily, 08:00:00-UTC - 08:45:00-UTC
2021-12-12 22:20:49.181 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Session state is not current; resetting FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT
2021-12-12 22:20:49.185 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Created session: FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT
2021-12-12 22:20:49.186 INFO 94182 --- [ restartedMain] i.s.t.gateway.service.FixServiceBase : New session started: FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT}
2021-12-12 22:20:49.193 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketTcpNoDelay=true
2021-12-12 22:20:49.194 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketSynchronousWrites=false
2021-12-12 22:20:49.194 INFO 94182 --- [ restartedMain] quickfix.mina.NetworkingOptions : Socket option: SocketSynchronousWriteTimeout=30000
2021-12-12 22:20:49.276 INFO 94182 --- [ restartedMain] quickfixj.event : FIX.4.2:XXX_STAGE_UAT->YYY_XXX_STAGE_UAT: Configured socket addresses for session: [localhost/127.0.0.1:10669]
2021-12-12 22:20:49.277 INFO 94182 --- [ restartedMain] quickfix.SocketInitiator : SessionTimer started
2021-12-12 22:20:49.280 INFO 94182 --- [ restartedMain] i.s.trading.gateway.service.FixService : Initiator startup finished
2021-12-12 22:20:49.280 INFO 94182 --- [ssage Processor] quickfix.SocketInitiator : Started QFJ Message Processor
No other FIX-related messages, including from quickfix, appear in the log. And I can see via netstat that not even an attempt is made to connect to the specified socket. I tried stopping the process in a debugger to see what was going on, but couldn't see anything obvious.
As I said before, this used to work just fine a week or so ago when I last tried it, which is why I'm so puzzled.
Any thoughts on how to debug the issue?
You seem to have configured the initiator to connect to the acceptor on a daily basis, between 08:00:00 UTC and 08:45:00 UTC.
Try widening the time range (e.g. 08:00:00 to 18:00:00) and see if you get connected.
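For reference, the schedule lives in the QuickFIX/J session settings file; a hedged sketch of the relevant entries (session identifiers taken from your logs, the widened EndTime is illustrative):

[session]
BeginString=FIX.4.2
SenderCompID=XXX_STAGE_UAT
TargetCompID=YYY_XXX_STAGE_UAT
TimeZone=UTC
StartTime=08:00:00
EndTime=18:00:00

Outside that window the initiator considers the session inactive and will not open the socket, which would match the 22:20 timestamps in your log.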
PS: If you're using QuickFIX/J and Spring, have a look at the QuickFixJ Spring Boot starter on GitHub: https://github.com/esanchezros/quickfixj-spring-boot-starter

Retrieve all files that match a filter once

I'm trying to get the count of files matching my filter from my streaming inbound FTP adapter, so that after I process all the files I can launch a remote shell. Or is there any other way to know that the adapter has finished sending messages?
I already tried a CompositeFileListFilter, overriding the public List<F> filterFiles(F[] files) method, but it never gets called.
For now I'm using a fixed file count, but it should be dynamic.
I overrode this method on the CompositeFileListFilter:
@Override
public List<F> filterFiles(F[] files) {
    log.info("received {} files", files.length);
    return super.filterFiles(files);
}
I have the following integration flow, using an atomic counter hardcoded to 3 for now:
AtomicInteger messageCounter = new AtomicInteger(0);
return IntegrationFlows.from(Ftp.inboundStreamingAdapter(goldv5template())
        .remoteDirectory("/inputFolder")
        .filter(new CompositeFileListFilterWithCount<>() {{
            addFilter(new FtpSimplePatternFileListFilter("pattern1.*"));
            addFilter(new FtpSimplePatternFileListFilter("pattern2.*"));
            addFilter(new FtpSimplePatternFileListFilter("pattern3.*"));
        }})
        , pollerConfiguration)
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE))
        .handle(message -> {
            int numericValue = messageCounter.incrementAndGet();
            log.info("numeric value: {}", numericValue);
            if (numericValue == 3) {
                messageCounter.set(0);
                log.info("launch remote shell here now");
            }
        }, e -> e.advice(after()))
        .get();
If I don't use the counter, I get a remote shell call for every file, and I need it to be called only once, when the whole flow has finished. It is scheduled by a cron job, so I want to call it a single time at the end.
I'm using a 1s fixed delay for testing, but it would really run only three times a day, and I have to fetch all the files on every run.
This is my pollerConfiguration for testing:
sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerFactory -> pollerFactory.fixedRate(1000L))
UPDATE
I tried what Artem suggested, but I'm seeing weird behavior. I'm trying to fetch all the files in a certain FTP folder in one poll, so, reading the docs:
if the max-messages-per-poll is set to 1 (the default), it processes only one file at a time with intervals as defined by your trigger, essentially working as “one-poll === one-file”.
For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files you can for each poll and only then wait for the next poll. If that is the case, set max-messages-per-poll to -1. Then, on each poll, the adapter tries to generate as many messages as it possibly can...
So I have set max-messages-per-poll to -1 so that every poll gives me every file.
I added a filter to take only .xml files and, to prevent duplicates, an AcceptOnceFileListFilter. But the FTP streaming adapter keeps giving me the same files over and over, which doesn't make sense. For this test I used a fixed delay of 10s.
2019-07-23 10:32:04.308 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process2 file sample1.xml
2019-07-23 10:32:04.312 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.315 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.326 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.330 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.333 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
2019-07-23 10:32:04.337 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample4.xml
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.341 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample1.xml
2019-07-23 10:32:04.345 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.347 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.353 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.357 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.358 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
...............................
return IntegrationFlows
        .from(Ftp.inboundStreamingAdapter(testFlowTemplate())
                .remoteDirectory("/inputTestFlow")
                .filter(new CompositeFileListFilter<>() {{
                    addFilter(new AcceptOnceFileListFilter<>());
                    addFilter(new FtpSimplePatternFileListFilter("*.xml"));
                }})
                , sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerConfiguration.maxMessagesPerPoll(-1)))
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> {
            execution.setStartDate(new Date());
            return "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE);
        })
        .handle(Ftp.outboundAdapter(FTPServers.PC_LOCAL.getFactory(), FileExistsMode.REPLACE)
                .useTemporaryFileName(false)
                .fileNameExpression("headers['" + FileHeaders.REMOTE_FILE + "']")
                .remoteDirectory("/output/")
                , e -> e.advice(testFlowAfter())
        )
        .get();
Update 2
I achieved what I needed by creating this custom filter:
.filter(new FileListFilter<>() {
    private final Set<String> seenSet = new HashSet<>();
    private Date lastExecution;

    @Override
    public List<FTPFile> filterFiles(FTPFile[] files) {
        return Arrays.stream(files).filter(ftpFile -> {
            if (lastExecution != null && TimeUnit.MILLISECONDS.toSeconds(new Date().getTime() - lastExecution.getTime()) >= 10L) {
                this.seenSet.clear();
            }
            lastExecution = new Date();
            if (ftpFile.getName().endsWith(".xml")) {
                return this.seenSet.add(ftpFile.getRawListing());
            }
            return false;
        }).collect(Collectors.toList());
    }
})
But I used a handmade 10-second interval, which is okay for my needs. Is there any smarter way to make this code depend on the trigger?
I think a cron trigger is not the right solution here, since you really would like to have a single process for all the fetched files.
I think your logic in filterFiles() is wrong. You really want to set the counter to the number of files it is actually going to process, not the original amount:
@Override
public List<F> filterFiles(F[] files) {
    List<F> filteredFiles = super.filterFiles(files);
    log.info("received {} files", filteredFiles.size());
    return filteredFiles;
}
And here you can indeed set a value into that messageCounter.
UPDATE
There is this functionality on the filter:
/**
 * Indicates that this filter supports filtering a single file.
 * Filters that return true <b>must</b> override {@link #accept(Object)}.
 * Default false.
 * @return true to allow external calls to {@link #accept(Object)}.
 * @since 5.2
 * @see #accept(Object)
 */
default boolean supportsSingleFileFiltering() {
    return false;
}
I think when you override it to an explicit false in your CompositeFileListFilterWithCount, you should be good. Otherwise you are indeed right: only a plain accept() is called for each file by default, because every FtpSimplePatternFileListFilter returns true for it by default, and they all contribute true at the composite level.
Nevertheless, all of that tells us that you are already using Spring Integration 5.2 :-)...
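A minimal sketch of that override, applied to the CompositeFileListFilterWithCount subclass from the question:

import org.springframework.integration.file.filters.CompositeFileListFilter;

// Sketch: returning false forces the adapter back to the bulk
// filterFiles(F[]) call, so the counting override is actually invoked.
public class CompositeFileListFilterWithCount<F> extends CompositeFileListFilter<F> {

    @Override
    public boolean supportsSingleFileFiltering() {
        return false;
    }
}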
UPDATE 2
Try ChainFileListFilter instead. Place an AcceptOnceFileListFilter at the end of the chain. Although it might be better to use an FtpPersistentAcceptOnceFileListFilter instead: it takes the file's lastModified into account. Also consider including in the chain some LastModifiedFileListFilter variant for the FTPFile. You have something similar in your custom one, but as a separate filter; a sketch follows.
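A sketch of that chain (SimpleMetadataStore is an in-memory example; a persistent MetadataStore such as a Redis- or JDBC-backed one would survive restarts):

import org.apache.commons.net.ftp.FTPFile;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

// Sketch: pattern filter first, then a persistent accept-once filter that
// also takes the file's lastModified into account, so a re-uploaded file
// passes again while unchanged files are processed only once.
ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
chain.addFilter(new FtpSimplePatternFileListFilter("*.xml"));
chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftp-"));

Pass chain to .filter(...) in the flow definition in place of the anonymous composite.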
Not sure, though, what you mean about making it based on the trigger. There is just no relationship between filter and trigger. You may, of course, have some common interval property and feed it into the last-modified filter value.
By the way: this story of yours has moved far away from the original "at once" request. An inbound channel adapter is really about one file per message, so you definitely can't have a list of files in one message, as is possible with the FtpOutboundGateway and its LS or MGET commands, as I mentioned in the comments below.
Regarding "how can I achieve either all files in one message or all the messages together?", you can try the max-messages-per-poll property. It means:
"The maximum number of messages that will be produced for each poll. Defaults to infinity (indicated by -1) for polling consumers, and 1 for polled inbound channel adapters."

Deploy JHipster on Clever Cloud

I'm deploying a JHipster application on Clever Cloud.
I have set up some configuration:
war.json
{
  "build": {
    "type": "maven",
    "goal": "package -Pprod -DskipTests"
  },
  "deploy": {
    "goal": "package -Pprod -DskipTests",
    "container": "TOMCAT8",
    "war": [
      {
        "file": "target/myapp-1.0.0.war"
      }
    ]
  }
}
maven.json
{
  "build": {
    "type": "maven",
    "goal": "package -Pprod -DskipTests"
  },
  "deploy": {
    "goal": "package -Pprod -DskipTests"
  }
}
I have modified application-prod.yml to include the URL/username/password of the DB add-on.
When I deploy, the deployment is successful but the application is not running.
On the application page I get a 404 error.
The DB is correctly initialised.
In the logs I have the following messages, which I don't understand or am not able to resolve:
This message appears multiple times:
2017-09-18T09:21:22.701Z: 09:21:21.483 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiTemplate - Looking up JNDI object with name [java:comp/env/logging.exception-conversion-word]
2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiLocatorDelegate - Converted JNDI name [java:comp/env/logging.exception-conversion-word] not found - trying original name [logging.exception-conversion-word]. javax.naming.NameNotFoundException: Name [logging.exception-conversion-word] is not bound in this Context. Unable to find [logging.exception-conversion-word].
2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiTemplate - Looking up JNDI object with name [logging.exception-conversion-word]
2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiPropertySource - JNDI lookup for name [logging.exception-conversion-word] threw NamingException with message: Name [logging.exception-conversion-word] is not bound in this Context. Unable to find [logging.exception-conversion-word].. Returning null.
Then:
2017-09-18T09:21:22.777Z: [09:21:12.705][debug][talledLocalContainer] Connection attempt with socket Socket[unconnected], current time is 1505726472705
2017-09-18T09:21:22.778Z: [09:21:12.705][debug][talledLocalContainer] Socket Socket[unconnected] for port 8009 closed
2017-09-18T09:21:22.778Z: [09:21:13.068][debug][talledLocalContainer] Executing '/usr/x86_64-pc-linux-gnu/lib/icedtea8/jre/bin/java' with arguments:
2017-09-18T09:21:22.778Z: '-version'
2017-09-18T09:21:22.778Z: The ' characters around the executable and arguments are
2017-09-18T09:21:22.778Z: not part of the command.
2017-09-18T09:21:22.779Z: [09:21:13.085][debug][talledLocalContainer] Output appended to /tmp/cargo-jvm-version-4176730048875251522.txt
2017-09-18T09:21:22.779Z: [09:21:13.085][debug][talledLocalContainer] Error appended to /tmp/cargo-jvm-version-4176730048875251522.txt
2017-09-18T09:21:22.779Z: [09:21:13.086][debug][talledLocalContainer] Project base dir set to: /home/bas/app_4b724c3b-6703-474e-9ec4-65d775cd0013
2017-09-18T09:21:22.779Z: [09:21:13.086][debug][talledLocalContainer] Execute:Java13CommandLauncher: Executing '/usr/x86_64-pc-linux-gnu/lib/icedtea8/jre/bin/java' with arguments:
2017-09-18T09:21:22.779Z: '-version'
2017-09-18T09:21:22.779Z: The ' characters around the executable and arguments are
2017-09-18T09:21:22.779Z: not part of the command.
And multiple times:
2017-09-18T09:21:22.793Z: [09:21:13.416][debug][URLDeployableMonitor] Checking URL [http://localhost:8080/cargocpc/index.html] for status using a timeout of [120000] ms...
2017-09-18T09:21:22.794Z: [09:21:13.452][debug][URLDeployableMonitor] URL [http://localhost:8080/cargocpc/index.html] is not responding: -1 java.net.ConnectException: Connection refused (Connection refused)
2017-09-18T09:21:22.794Z: [09:21:13.452][debug][URLDeployableMonitor] Notifying monitor listener [org.codehaus.cargo.container.spi.deployer.DeployerWatchdog#7bd4937b]
Ending with:
2017-09-18T09:21:32.710Z: 2017-09-18 09:21:28.724 INFO 2232 --- [ost-startStop-1] com.bbs.dm.config.WebConfigurer : Web application configuration, using profiles: prod
2017-09-18T09:21:32.711Z: 2017-09-18 09:21:28.735 INFO 2232 --- [ost-startStop-1] com.bbs.dm.config.WebConfigurer : Web application fully configured
2017-09-18T09:21:32.711Z: 2017-09-18 09:21:28.994 DEBUG 2232 --- [ost-startStop-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase synchronously
2017-09-18T09:21:36.985Z: Nothing listening on 8080. Please update your configuration and redeploy
2017-09-18T09:21:52.730Z: 2017-09-18 09:21:47.492 DEBUG 2232 --- [ost-startStop-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Started Liquibase in 18498 ms
2017-09-18T09:21:57.985Z: Application start successful
2017-09-18T09:21:57.985Z: No cron to setup
2017-09-18T09:21:57.986Z: Created symlink /etc/systemd/system/multi-user.target.wants/zabbix-agentd.service → /usr/x86_64-pc-linux-gnu/lib/systemd/system/zabbix-agentd.service.
I have done nothing else except follow the Clever Cloud deployment documentation.
Have I missed something in the configuration?
(For info, the application deploys fine on other platforms like Heroku or Pivotal.)
To deploy a JHipster application on Clever Cloud, here is what worked for me.
I followed the given instructions to create an application and deploy it using the CLI.
Configuration files:
clevercloud/war.json
{
  "build": {
    "type": "maven",
    "goal": "package -Pprod -DskipTests"
  },
  "deploy": {
    "jarName": "target/myapp-1.0.0.war"
  }
}
clevercloud/maven.json
{
  "build": {
    "type": "maven",
    "goal": "package -Pprod -DskipTests"
  },
  "deploy": {
    "goal": "package -Pprod -DskipTests"
  }
}
I modified my application-prod.yml to link the DB.

Why does my FlywayMigrationStrategy call afterMigrate.sql twice?

I work on a Spring Boot project and use Flyway for database migration.
While working in the dev profile I want to fill the database with dummy data.
In order to have exactly the same initial data values, I overrode the FlywayMigrationStrategy bean so that it performs a flyway.clean() before the migration starts:
@Bean
@Profile("dev")
public FlywayMigrationStrategy cleanMigrateStrategy() {
    FlywayMigrationStrategy strategy = new FlywayMigrationStrategy() {
        @Override
        public void migrate(Flyway flyway) {
            flyway.clean();
            flyway.migrate();
        }
    };
    return strategy;
}
My migration folder contains several versioned migration scripts and ONE afterMigrate callback script, which adds data to the created tables.
The problem now is that the afterMigrate.sql script gets called twice, as you can see from the following log:
2017-07-03 13:12:42.332 INFO 23222 --- [ main] o.f.core.internal.command.DbClean : Successfully cleaned schema "PUBLIC" (execution time 00:00.031s)
2017-07-03 13:12:42.397 INFO 23222 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 4 migrations (execution time 00:00.044s)
2017-07-03 13:12:42.413 INFO 23222 --- [ main] o.f.c.i.metadatatable.MetaDataTableImpl : Creating Metadata table: "PUBLIC"."schema_version"
2017-07-03 13:12:42.428 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "PUBLIC": << Empty Schema >>
2017-07-03 13:12:42.430 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1 - create users
2017-07-03 13:12:42.449 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 2 - create address
2017-07-03 13:12:42.464 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 3 - create patient case
2017-07-03 13:12:42.475 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 4 - state machine
2017-07-03 13:12:42.498 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Successfully applied 4 migrations to schema "PUBLIC" (execution time 00:00.086s).
2017-07-03 13:12:42.499 INFO 23222 --- [ main] o.f.c.i.c.SqlScriptFlywayCallback : Executing SQL callback: afterMigrate
2017-07-03 13:12:42.502 INFO 23222 --- [ main] o.f.c.i.c.SqlScriptFlywayCallback : Executing SQL callback: afterMigrate
2017-07-03 13:12:42.917 INFO 23222 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
If I remove the flyway.clean() call, it gets called only once.
Can somebody tell me why it's called twice when I call flyway.clean() and flyway.migrate(), and how to prevent the second call?
It's a known issue which will be fixed in Flyway 5.0. See https://github.com/flyway/flyway/issues/1653
