JpaRepository deleteAllInBatch() not working as expected - Spring

I am trying to delete some rows from my table using Spring Data JPA's deleteAllInBatch(), but when the number of rows to be deleted exceeds a certain threshold, JPA throws an error. I'm not certain of the cause, but I found a JIRA ticket: https://jira.spring.io/browse/DATAJPA-137.
I don't want to use deleteAll() because it deletes the data row by row and will lead to performance issues. Is this a drawback of JPA, or is there a solution? I looked for a workaround but didn't find anything useful. Please help me find an efficient way to perform this operation, or point me to some useful references. Thanks in advance...
DbIssueApplication.java
@SpringBootApplication
public class DbIssueApplication
{
    public static void main(String[] args)
    {
        ApplicationContext context = SpringApplication.run(DbIssueApplication.class, args);
        TestService service = context.getBean(TestService.class);
        long st = System.currentTimeMillis();
        List<Test> testList = new ArrayList<>();
        for(int i=0;i<5000;i++)
        {
            testList.add(new Test(i,(i%2==0)?"field1":"field2"));
        }
        service.insert(testList);
        service.deleteByName("field2");
        System.err.println("The processing took = "+(System.currentTimeMillis()-st)+" ms");
    }
}
Test.java
@Entity
@Table(name="test")
public class Test implements Serializable
{
    private static final long serialVersionUID = -9182756617906316269L;

    @Id
    private Integer id;
    private String name;

    ... getter, setter and constructors
}
TestRepository.java
public interface TestRepository extends JpaRepository<Test, Integer>
{
    List<Test> findByName(String name);
}
TestService.java
public interface TestService
{
    public void insert(List<Test> testList);
    public void deleteByName(String name);
}
TestServiceImpl.java
@Service
public class TestServiceImpl implements TestService
{
    @Autowired
    TestRepository testRepository;

    @Override
    public void insert(List<Test> testList)
    {
        testRepository.deleteAllInBatch();
        testRepository.saveAll(testList);
    }

    @Override
    public void deleteByName(String name)
    {
        System.err.println("The number of rows to be deleted = "+testRepository.findByName(name).size());
        testRepository.deleteInBatch(testRepository.findByName(name));
    }
}
dbSchema
create table test
(
id int,
name varchar(40)
);
ErrorLog
[ main] o.h.e.t.i.TransactionImpl : begin
[ main] o.h.h.i.a.QueryTranslatorImpl : parse() - HQL: delete from com.example.demo.entity.Test x where x = ?1 or x = ?2 or x = ?3 or x = ?4 or ... x = ?2500
[ main] o.h.h.i.a.ErrorTracker : throwQueryException() : no errors
[ main] o.h.e.t.i.TransactionImpl : rolling back
[ Thread-14] o.h.i.SessionFactoryImpl : HHH000031: Closing
[ Thread-14] o.h.t.s.TypeConfiguration$Scope : Un-scoping TypeConfiguration [org.hibernate.type.spi.TypeConfiguration$Scope#6cf001] from SessionFactory [org.hibernate.internal.SessionFactoryImpl#1ad3d8a]
[ Thread-14] o.h.s.i.AbstractServiceRegistryImpl : Implicitly destroying ServiceRegistry on de-registration of all child ServiceRegistries
[ Thread-14] o.h.b.r.i.BootstrapServiceRegistryImpl : Implicitly destroying Boot-strap registry on de-registration of all child ServiceRegistries
=======================================================================================================================================================================================================================================================================================================
[ main] o.h.h.i.QueryTranslatorFactoryInitiator : QueryTranslatorFactory : org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory#5e167a
[ main] o.h.h.i.QueryTranslatorFactoryInitiator : HHH000397: Using ASTQueryTranslatorFactory
[ main] o.h.h.i.a.QueryTranslatorImpl : parse() - HQL: select generatedAlias0 from com.example.demo.entity.Test as generatedAlias0 where generatedAlias0.name=:param0
[ main] o.h.h.i.a.ErrorTracker : throwQueryException() : no errors
[ main] o.h.h.i.a.QueryTranslatorImpl : --- HQL AST ---
\-[QUERY] Node: 'query'
+-[SELECT_FROM] Node: 'SELECT_FROM'
| +-[FROM] Node: 'from'
| | \-[RANGE] Node: 'RANGE'
| | +-[DOT] Node: '.'
| | | +-[DOT] Node: '.'
| | | | +-[DOT] Node: '.'
| | | | | +-[DOT] Node: '.'
| | | | | | +-[IDENT] Node: 'com'
| | | | | | \-[IDENT] Node: 'example'
| | | | | \-[IDENT] Node: 'demo'
| | | | \-[IDENT] Node: 'entity'
| | | \-[IDENT] Node: 'Test'
| | \-[ALIAS] Node: 'generatedAlias0'
| \-[SELECT] Node: 'select'
| \-[IDENT] Node: 'generatedAlias0'
\-[WHERE] Node: 'where'
\-[EQ] Node: '='
+-[DOT] Node: '.'
| +-[IDENT] Node: 'generatedAlias0'
| \-[IDENT] Node: 'name'
\-[COLON] Node: ':'
\-[IDENT] Node: 'param0'
[ main] o.h.h.i.a.HqlSqlBaseWalker : select << begin [level=1, statement=select]
[ main] o.h.h.i.a.t.FromElement : FromClause{level=1} : com.example.demo.entity.Test (generatedAlias0) -> test0_
[ main] o.h.h.i.a.t.FromReferenceNode : Resolved : generatedAlias0 -> test0_.id
[ main] o.h.h.i.a.t.FromReferenceNode : Resolved : generatedAlias0 -> test0_.id
[ main] o.h.h.i.a.t.DotNode : getDataType() : name -> org.hibernate.type.StringType#d003cd
[ main] o.h.h.i.a.t.FromReferenceNode : Resolved : generatedAlias0.name -> test0_.name
[ main] o.h.h.i.a.HqlSqlBaseWalker : select : finishing up [level=1, statement=select]
[ main] o.h.h.i.a.HqlSqlWalker : processQuery() : ( SELECT ( {select clause} test0_.id ) ( FromClause{level=1} test test0_ ) ( where ( = ( test0_.name test0_.id name ) ? ) ) )
[ main] o.h.h.i.a.u.JoinProcessor : Using FROM fragment [test test0_]
[ main] o.h.h.i.a.HqlSqlBaseWalker : select >> end [level=1, statement=select]
[ main] o.h.h.i.a.QueryTranslatorImpl : --- SQL AST ---
\-[SELECT] QueryNode: 'SELECT' querySpaces (test)
+-[SELECT_CLAUSE] SelectClause: '{select clause}'
| +-[ALIAS_REF] IdentNode: 'test0_.id as id1_0_' {alias=generatedAlias0, className=com.example.demo.entity.Test, tableAlias=test0_}
| \-[SQL_TOKEN] SqlFragment: 'test0_.name as name2_0_'
+-[FROM] FromClause: 'from' FromClause{level=1, fromElementCounter=1, fromElements=1, fromElementByClassAlias=[generatedAlias0], fromElementByTableAlias=[test0_], fromElementsByPath=[], collectionJoinFromElementsByPath=[], impliedElements=[]}
| \-[FROM_FRAGMENT] FromElement: 'test test0_' FromElement{explicit,not a collection join,not a fetch join,fetch non-lazy properties,classAlias=generatedAlias0,role=null,tableName=test,tableAlias=test0_,origin=null,columns={,className=com.example.demo.entity.Test}}
\-[WHERE] SqlNode: 'where'
\-[EQ] BinaryLogicOperatorNode: '='
+-[DOT] DotNode: 'test0_.name' {propertyName=name,dereferenceType=PRIMITIVE,getPropertyPath=name,path=generatedAlias0.name,tableAlias=test0_,className=com.example.demo.entity.Test,classAlias=generatedAlias0}
| +-[ALIAS_REF] IdentNode: 'test0_.id' {alias=generatedAlias0, className=com.example.demo.entity.Test, tableAlias=test0_}
| \-[IDENT] IdentNode: 'name' {originalText=name}
\-[NAMED_PARAM] ParameterNode: '?' {name=param0, expectedType=org.hibernate.type.StringType#d003cd}
[ main] o.h.h.i.a.ErrorTracker : throwQueryException() : no errors
[ main] o.h.h.i.a.QueryTranslatorImpl : HQL: select generatedAlias0 from com.example.demo.entity.Test as generatedAlias0 where generatedAlias0.name=:param0
[ main] o.h.h.i.a.QueryTranslatorImpl : SQL: select test0_.id as id1_0_, test0_.name as name2_0_ from test test0_ where test0_.name=?
[ main] o.h.h.i.a.ErrorTracker : throwQueryException() : no errors
[ main] o.h.h.i.a.QueryTranslatorImpl : parse() - HQL: delete from com.example.demo.entity.Test x where x = ?1 or x = ?2 ... or x = ?2500
[ main] o.h.h.i.a.ErrorTracker : throwQueryException() : no errors
The sample code is uploaded on GitHub: https://github.com/Anand450623/Stackoverflow

You could try using a JPQL query to make deleteAll() delete in batch rather than one by one.
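For illustration, a minimal sketch of that approach on the TestRepository from the question; the method name deleteBulkByName and the JPQL text are assumptions, while @Modifying/@Query bulk deletes are standard Spring Data JPA:

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

public interface TestRepository extends JpaRepository<Test, Integer>
{
    List<Test> findByName(String name);

    // Issues a single JPQL bulk delete statement instead of one delete per row.
    @Modifying
    @Transactional
    @Query("delete from Test t where t.name = :name")
    void deleteBulkByName(@Param("name") String name);
}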
However, you might actually want to drop the ORM framework entirely. A common experience is that even though it looks like a good idea in the beginning, it often ends up with issues like the one you describe here. You could read https://www.toptal.com/java/how-hibernate-ruined-my-career. The gist of it: it's hard to debug, you can't avoid writing native SQL in most cases anyway, JPQL limits your expressiveness, and it's extremely invasive in how you model (e.g. you can't do immutability in a lot of cases).
Spring has an excellent JdbcTemplate, but keep in mind that it also has drawbacks, mainly that you have to write the mapping yourself. That said, it's not that much code, and the benefits are huge. So if a JPQL query doesn't work, consider whether using JPA (Hibernate) is the right choice to begin with.
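A rough sketch of the JdbcTemplate alternative, assuming the test table from the question; the TestJdbcService class and its method are hypothetical:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class TestJdbcService
{
    private final JdbcTemplate jdbcTemplate;

    public TestJdbcService(JdbcTemplate jdbcTemplate)
    {
        this.jdbcTemplate = jdbcTemplate;
    }

    // One DELETE statement for all matching rows, no per-entity round trips.
    public int deleteByName(String name)
    {
        return jdbcTemplate.update("delete from test where name = ?", name);
    }
}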

The way I got around this was to implement my own batch delete method with a native SQL IN clause, something like:
@Modifying
@Query(value = "delete from table_x where id in (:ids)", nativeQuery = true)
void deleteBatch(@Param("ids") List<String> ids);
The problem is this piece of code over here: https://github.com/spring-projects/spring-data-jpa/blame/a31c39db7a12113b5adcb6fbaa2a92d97f1b3a02/src/main/java/org/springframework/data/jpa/repository/query/QueryUtils.java#L409
It generates awful SQL (one "x = ?n" predicate per entity), which is not suited for a large number of elements.
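If the id list can grow large, note that many databases cap the number of IN-list expressions (Oracle at 1000), so a small chunking wrapper helps. This is only a sketch around the hypothetical deleteBatch method above, and the chunk size is an assumption:

// 'repository' is the Spring Data repository declaring deleteBatch(...).
private static final int CHUNK_SIZE = 500; // assumed, stays below typical IN-list limits

public void deleteByIds(List<String> ids)
{
    for (int i = 0; i < ids.size(); i += CHUNK_SIZE)
    {
        repository.deleteBatch(ids.subList(i, Math.min(i + CHUNK_SIZE, ids.size())));
    }
}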

Related

How do I start bootRun before starting the Cypress tests in Gradle?

I want to start a Spring Boot application using the Gradle bootRun task. When Spring Boot is up and running, Gradle should run the Cypress API tests.
I have the following build.gradle
plugins { id 'base' }
apply plugin: 'groovy'
apply plugin: 'java-gradle-plugin'
apply from: "$rootDir/gradle/integration-test.gradle"
apply from: "$rootDir/gradle/functional-test.gradle"
apply from: "$rootDir/buildSrc/build.gradle"

repositories {
    jcenter()
}

dependencies {
    localGroovy()
    testCompile('org.codehaus.groovy:groovy-all:2.5.7')
    testCompile('org.spockframework:spock-core:1.3-groovy-2.5')
    testImplementation gradleTestKit()
}

allprojects {
    task printInfo {
        doLast {
            println "This is ${project.name}"
        }
    }
}

task systemtestDevEnv(type: Exec) {
    workingDir 'frontend'
    commandLine 'npm test'
    commandLine 'npm start'
    workingDir 'functionalsystemtest'
    commandLine 'npm run cypress:run'
}
systemtestDevEnv.dependsOn 'backend:runWebServer'

task functionalapitest(type: Exec) {
    workingDir 'funcionalapitest'
    commandLine 'npm run cypress:run'
}
functionalapitest.dependsOn 'backend:runWebServer'
The directory structure of the project is:
JavaProject
-- fuctionalsystemtest
-- functionalapitest
-- backend
-- frontend
-- buildSrc
When I execute gradle functionalapitest, bootRun gets executed, but the execution never continues to the next steps:
workingDir 'funcionalapitest'
commandLine 'npm run cypress:run'
How should I specify the task functionalapitest so that the Cypress tests are executed once the Spring Boot process is up and running?
Task :backend:bootRun
[Spring Boot ASCII art banner]
:: Spring Boot :: (v2.1.6.RELEASE)
2019-07-25 13:48:20.457 INFO 59589 --- [ main] c.s.r.R.RestfulWebServiceApplication : Starting RestfulWebServiceApplication on Steins-MacBook-Air.local with PID 59589 (/Users/steinkorsveien/Development/TestWorkSpace/JavaProject/backend/build/classes/java/main started by steinkorsveien in /Users/steinkorsveien/Development/TestWorkSpace/JavaProject/backend)
2019-07-25 13:48:20.469 INFO 59589 --- [ main] c.s.r.R.RestfulWebServiceApplication : No active profile set, falling back to default profiles: default
2019-07-25 13:48:23.388 INFO 59589 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-07-25 13:48:23.466 INFO 59589 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-07-25 13:48:23.467 INFO 59589 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.21]
2019-07-25 13:48:23.798 INFO 59589 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-07-25 13:48:23.799 INFO 59589 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3188 ms
2019-07-25 13:48:24.356 INFO 59589 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-07-25 13:48:24.977 INFO 59589 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-07-25 13:48:24.989 INFO 59589 --- [ main] c.s.r.R.RestfulWebServiceApplication : Started RestfulWebServiceApplication in 6.301 seconds (JVM running for 7.844)
<=======------> 60% EXECUTING [20s]

Retrieve all files that match a filter once

I'm trying to get the count of files matched by my filter from my streaming inbound FTP adapter, so that after I process all the files I can launch a remote shell. Or is there any other way to know that the adapter has finished sending messages?
I already tried a CompositeFileListFilter overriding the public List<F> filterFiles(F[] files) method, but it never gets called.
For now I'm using a fixed file count, but it should be dynamic.
I overrode this method on the CompositeFileListFilter:
@Override
public List<F> filterFiles(F[] files) {
    log.info("received {} files", files.length);
    return super.filterFiles(files);
}
I have the following integration flow, using an atomic counter that counts up to 3 (there should be 3 files):
AtomicInteger messageCounter = new AtomicInteger(0);

return IntegrationFlows.from(Ftp.inboundStreamingAdapter(goldv5template())
                .remoteDirectory("/inputFolder")
                .filter(new CompositeFileListFilterWithCount<>() {{
                    addFilter(new FtpSimplePatternFileListFilter("pattern1.*"));
                    addFilter(new FtpSimplePatternFileListFilter("pattern2.*"));
                    addFilter(new FtpSimplePatternFileListFilter("pattern3.*"));
                }})
        , pollerConfiguration)
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE))
        .handle(message -> {
            int numericValue = messageCounter.incrementAndGet();
            log.info("numeric value: {}", numericValue);
            if (numericValue == 3) {
                messageCounter.set(0);
                log.info("launch remote shell here now");
            }
        }, e -> e.advice(after()))
        .get();
If I don't use the counter, I would get a remote shell call for every file, and I only need it to be called once, when the whole flow has finished. The flow is scheduled with a cron job, so I want to call it only one time at the end.
I'm using a 1s fixed delay for testing, but the real flow would only run three times a day, and it has to fetch all the files on every run.
this is my pollerConfiguration for test:
sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerFactory -> pollerFactory.fixedRate(1000L))
UPDATE
I tried what Artem suggested, but I'm seeing weird behavior. I'm trying to fetch all the files in a certain FTP folder in one poll, so reading the docs:
if the max-messages-per-poll is set to 1 (the default), it processes only one file at a time with intervals as defined by your trigger, essentially working as “one-poll === one-file”.
For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files you can for each poll and only then wait for the next poll. If that is the case, set max-messages-per-poll to -1. Then, on each poll, the adapter tries to generate as many messages as it possibly can...
So I have set max-messages-per-poll to -1 so that every poll gives me every file.
I added a filter to only take .xml files and, to prevent duplicates, an AcceptOnceFileListFilter. But the FTP streaming adapter keeps giving me the same files over and over, which doesn't make sense. For this test I used a fixed delay of 10s.
2019-07-23 10:32:04.308 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process2 file sample1.xml
2019-07-23 10:32:04.312 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.315 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.326 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.330 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.333 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
2019-07-23 10:32:04.337 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample4.xml
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.341 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample1.xml
2019-07-23 10:32:04.345 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.347 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.353 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.357 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.358 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
...............................
return IntegrationFlows
        .from(Ftp.inboundStreamingAdapter(testFlowTemplate())
                        .remoteDirectory("/inputTestFlow")
                        .filter(new CompositeFileListFilter<>() {{
                            addFilter(new AcceptOnceFileListFilter<>());
                            addFilter(new FtpSimplePatternFileListFilter("*.xml"));
                        }})
                , sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerConfiguration.maxMessagesPerPoll(-1)))
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> {
            execution.setStartDate(new Date());
            return "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE);
        })
        .handle(Ftp.outboundAdapter(FTPServers.PC_LOCAL.getFactory(), FileExistsMode.REPLACE)
                        .useTemporaryFileName(false)
                        .fileNameExpression("headers['" + FileHeaders.REMOTE_FILE + "']")
                        .remoteDirectory("/output/")
                , e -> e.advice(testFlowAfter())
        )
        .get();
Update 2
I achieved what I needed by creating this custom filter:
.filter(new FileListFilter<>() {
    private final Set<String> seenSet = new HashSet<>();
    private Date lastExecution;

    @Override
    public List<FTPFile> filterFiles(FTPFile[] files) {
        return Arrays.stream(files).filter(ftpFile -> {
            if (lastExecution != null && TimeUnit.MILLISECONDS.toSeconds(new Date().getTime() - lastExecution.getTime()) >= 10L) {
                this.seenSet.clear();
            }
            lastExecution = new Date();
            if (ftpFile.getName().endsWith(".xml")) {
                return this.seenSet.add(ftpFile.getRawListing());
            }
            return false;
        }).collect(Collectors.toList());
    }
})
but I used a hand-made 10-second interval, which is okay for my needs. Is there any smarter way to improve this code depending on the trigger?
I think a cron trigger is not the right solution here, since you really want a single process for all the fetched files.
I think your logic in filterFiles() is wrong. You really want to set the counter to the number of files it is actually going to process, not the original amount:
@Override
public List<F> filterFiles(F[] files) {
    List<F> filteredFiles = super.filterFiles(files);
    log.info("received {} files", filteredFiles.size());
    return filteredFiles;
}
and here you can indeed set the value of that messageCounter.
UPDATE
There is this functionality on filter:
/**
 * Indicates that this filter supports filtering a single file.
 * Filters that return true <b>must</b> override {@link #accept(Object)}.
 * Default false.
 * @return true to allow external calls to {@link #accept(Object)}.
 * @since 5.2
 * @see #accept(Object)
 */
default boolean supportsSingleFileFiltering() {
    return false;
}
I think if you override it to return an explicit false in your CompositeFileListFilterWithCount, you should be good. Otherwise you are indeed right: by default only the plain accept() is called for each file, because all your FtpSimplePatternFileListFilter instances support single-file filtering by default and they all contribute true at the composite level.
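A minimal sketch of what that override could look like on your CompositeFileListFilterWithCount; the counter handling is left as a placeholder comment:

import java.util.List;

import org.springframework.integration.file.filters.CompositeFileListFilter;

public class CompositeFileListFilterWithCount<F> extends CompositeFileListFilter<F> {

    @Override
    public boolean supportsSingleFileFiltering() {
        return false; // force filterFiles(F[]) to be called once per poll
    }

    @Override
    public List<F> filterFiles(F[] files) {
        List<F> filteredFiles = super.filterFiles(files);
        // placeholder: update your messageCounter from filteredFiles.size() here
        return filteredFiles;
    }
}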
Nevertheless, all of that tells us that you are already using Spring Integration 5.2 :-)...
UPDATE 2
Try a ChainFileListFilter instead, and place an AcceptOnceFileListFilter at the end of the chain. Although it might be better to use an FtpPersistentAcceptOnceFileListFilter instead: it takes the file's lastModified into account. Also consider including in the chain some LastModifiedFileListFilter variant for the FTPFile. You have something similar in your custom filter, but it belongs in a separate filter; see the sketch below.
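A rough sketch of such a chain for the flow above; the @Bean wrapper and the "ftp-" metadata-store prefix are assumptions:

import org.apache.commons.net.ftp.FTPFile;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

@Bean
public ChainFileListFilter<FTPFile> ftpChainFilter() {
    ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
    // Pattern filter first, accept-once filter last, as suggested above.
    chain.addFilter(new FtpSimplePatternFileListFilter("*.xml"));
    // Remembers already-seen files (name + lastModified) in a metadata store.
    chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftp-"));
    return chain;
}

// then in the flow definition: .filter(ftpChainFilter())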
Not sure, though, what you mean about making it based on the trigger. There is just no relationship between the filter and the trigger. You may, of course, have some common interval property and feed it into the last-modified filter's value.
By the way, this story has gone far away from the original "at once" request. An inbound channel adapter is really about one file per message, so you definitely can't have a list of files in one message, as is possible with the FtpOutboundGateway and its LS or MGET commands, as I mentioned in the comments below.
Regarding "how can I achieve wether all files in one message or all the messages together?" you can try property "max-messages-per-poll". It means:
"The maximum number of messages that will be produced for each poll. Defaults to
infinity (indicated by -1) for polling consumers, and 1 for polled inbound channel adapters.

SQL Error: 900 when running update query with multiple fields

Running a native query with multiple fields from the SQL Developer console works, but the same query results in SQL Error: 900, SQLState: 42000 from the JPA repository.
The query in JPA:
@Query(value = "UPDATE SUBSCRIPTIONFILE SET DESCRIPTION = ?1, FILENAME = ?2, VERSION = ?3 WHERE (PLATFORM = ?4 AND PRODUCTSKU = ?5)", nativeQuery = true)
SUBSCRIPTIONFILE updateUsingEmbdedKey(String DESCRIPTION, String FILENAME, String VERSION, String PLATFORM, String PRODUCTSKU);
And as the debug console shows -
2018-12-03 18:37:02.734 DEBUG 5180 --- [ main] org.hibernate.SQL : UPDATE SUBSCRIPTIONFILE SET DESCRIPTION = ?, FILENAME = ?, VERSION = ? WHERE (PLATFORM = ? AND PRODUCTSKU = ?)
2018-12-03 18:37:04.405 TRACE 5180 --- [ main] o.h.type.descriptor.sql.BasicBinder : binding parameter [1] as [VARCHAR] - [newDescription!]
2018-12-03 18:37:04.427 TRACE 5180 --- [ main] o.h.type.descriptor.sql.BasicBinder : binding parameter [2] as [VARCHAR] - [bla bla bla]
2018-12-03 18:37:04.437 TRACE 5180 --- [ main] o.h.type.descriptor.sql.BasicBinder : binding parameter [3] as [VARCHAR] - [bla]
2018-12-03 18:37:04.445 TRACE 5180 --- [ main] o.h.type.descriptor.sql.BasicBinder : binding parameter [4] as [VARCHAR] - [xyz]
2018-12-03 18:37:04.455 TRACE 5180 --- [ main] o.h.type.descriptor.sql.BasicBinder : binding parameter [5] as [VARCHAR] - [testSave]
My questions:
1. Is the query syntax OK?
2. Is there a better way to do it using a built-in JpaRepository query?
The entire JpaRepository:
public interface SubscriptionRepo extends JpaRepository<SUBSCRIPTIONFILE, SUBSCRIPTIONFILE_KEY> {

    @Query(value = "UPDATE SUBSCRIPTIONFILE SET DESCRIPTION = ?1, FILENAME = ?2, VERSION = ?3 WHERE (PLATFORM = ?4 AND PRODUCTSKU = ?5)", nativeQuery = true)
    SUBSCRIPTIONFILE updateUsingEmbdedKey(String DESCRIPTION, String FILENAME, String VERSION, String PLATFORM, String PRODUCTSKU);
}
Since this is an update, you need a @Modifying annotation to go with your @Query annotation.
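A minimal sketch of the repository with that fix; note that a modifying query returns the affected row count (or void) rather than the entity, and the @Transactional placement here is an assumption about where the transaction boundary lives:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.transaction.annotation.Transactional;

public interface SubscriptionRepo extends JpaRepository<SUBSCRIPTIONFILE, SUBSCRIPTIONFILE_KEY> {

    // Bulk update; returns the number of rows changed instead of the entity.
    @Modifying
    @Transactional
    @Query(value = "UPDATE SUBSCRIPTIONFILE SET DESCRIPTION = ?1, FILENAME = ?2, VERSION = ?3 WHERE (PLATFORM = ?4 AND PRODUCTSKU = ?5)", nativeQuery = true)
    int updateUsingEmbdedKey(String description, String fileName, String version, String platform, String productSku);
}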

Why does my FlywayMigrationStrategy call afterMigrate.sql twice?

I work on a Spring-boot project and use Flyway for database migration.
While working in dev-profile I want to fill the database with dummy data.
In order to have the exact same initial data values I overrode the FlywayMigrationStrategy Bean so that it performs a flyway.clean() before the migration starts:
@Bean
@Profile("dev")
public FlywayMigrationStrategy cleanMigrateStrategy() {
    FlywayMigrationStrategy strategy = new FlywayMigrationStrategy() {
        @Override
        public void migrate(Flyway flyway) {
            flyway.clean();
            flyway.migrate();
        }
    };
    return strategy;
}
My migration folder contains several versioned migration scripts and ONE afterMigrate callback script, which adds data to the created tables.
The problem is that the afterMigrate.sql script gets called twice, as you can see from the following log:
2017-07-03 13:12:42.332 INFO 23222 --- [ main] o.f.core.internal.command.DbClean : Successfully cleaned schema "PUBLIC" (execution time 00:00.031s)
2017-07-03 13:12:42.397 INFO 23222 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 4 migrations (execution time 00:00.044s)
2017-07-03 13:12:42.413 INFO 23222 --- [ main] o.f.c.i.metadatatable.MetaDataTableImpl : Creating Metadata table: "PUBLIC"."schema_version"
2017-07-03 13:12:42.428 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "PUBLIC": << Empty Schema >>
2017-07-03 13:12:42.430 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 1 - create users
2017-07-03 13:12:42.449 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 2 - create address
2017-07-03 13:12:42.464 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 3 - create patient case
2017-07-03 13:12:42.475 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "PUBLIC" to version 4 - state machine
2017-07-03 13:12:42.498 INFO 23222 --- [ main] o.f.core.internal.command.DbMigrate : Successfully applied 4 migrations to schema "PUBLIC" (execution time 00:00.086s).
2017-07-03 13:12:42.499 INFO 23222 --- [ main] o.f.c.i.c.SqlScriptFlywayCallback : Executing SQL callback: afterMigrate
2017-07-03 13:12:42.502 INFO 23222 --- [ main] o.f.c.i.c.SqlScriptFlywayCallback : Executing SQL callback: afterMigrate
2017-07-03 13:12:42.917 INFO 23222 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
If I remove the flyway.clean() call, it gets called only once.
Can somebody tell me why it's called twice when I call flyway.clean() and flyway.migrate(), and how to prevent the second call?
It's a known issue which will be fixed in Flyway 5.0. See https://github.com/flyway/flyway/issues/1653

How to raise the log level for logs stored in the ELK stack

Is it possible to raise the log level for logs stored in the ELK stack? Right now all log levels end up in my ELK stack; I only want warning and error logs to be stored. How can I do that?
I think you're looking for the logstash drop filter, which lets you filter out logs based on some criteria, in your case debug, info and the like. From the docs, a filter might look like:
filter {
    if [loglevel] == "debug" {
        drop { }
    }
}
https://www.elastic.co/guide/en/logstash/current/plugins-filters-drop.html
Also, your question looks similar to this one:
Logstash drop filter for event
If you have a log file test.log like below:
DEBUG | 2008-09-06 10:51:44,817 | DefaultBeanDefinitionDocumentReader.java | 86 | Loading bean definitions
WARN | 2008-09-06 10:51:44,848 | AbstractBeanDefinitionReader.java | 185 | Loaded 5 bean definitions from location pattern [samContext.xml]
INFO | 2008-09-06 10:51:44,848 | XmlBeanDefinitionReader.java | 323 | Loading XML bean definitions from class path resource [tmfContext.xml]
DEBUG | 2008-09-06 10:51:44,848 | DefaultDocumentLoader.java | 72 | Using JAXP provider [com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl]
ERROR | 2008-09-06 10:51:44,848 | BeansDtdResolver.java | 72 | Found beans DTD [http://www.springframework.org/dtd/spring-beans.dtd] in classpath: spring-beans.dtd
ERROR | 2008-09-06 10:51:44,864 | DefaultBeanDefinitionDocumentReader.java | 86 | Loading bean definitions
DEBUG | 2008-09-06 10:51:45,458 | AbstractAutowireCapableBeanFactory.java | 411 | Finished creating instance of bean 'MS-SQL'
You can define an if conditional on the message you want to keep and drop others:
input {
    file {
        path => "/your/path/test.log"
        sincedb_path => "/your/path/test.idx"
        start_position => "beginning"
    }
}
filter {
    if [message] =~ "WARN" or [message] =~ "ERROR" {
    } else {
        drop {}
    }
}
output {
    stdout {
        codec => rubydebug
    }
}
Then, your result will look like:
{
       "message" => "WARN | 2008-09-06 10:51:44,848 | AbstractBeanDefinitionReader.java | 185 | Loaded 5 bean definitions from location pattern [samContext.xml]",
      "@version" => "1",
    "@timestamp" => "2015-09-17T18:30:24.897Z",
          "host" => "MacBook-Pro-de-Alain.local",
          "path" => "/Users/Alain/Workspace/elk/logstash-1.5.4/config/filter/test.log"
}
{
       "message" => "ERROR | 2008-09-06 10:51:44,848 | BeansDtdResolver.java | 72 | Found beans DTD [http://www.springframework.org/dtd/spring-beans.dtd] in classpath: spring-beans.dtd",
      "@version" => "1",
    "@timestamp" => "2015-09-17T18:30:24.898Z",
          "host" => "MacBook-Pro-de-Alain.local",
          "path" => "/Users/Alain/Workspace/elk/logstash-1.5.4/config/filter/test.log"
}
{
       "message" => "ERROR | 2008-09-06 10:51:44,864 | DefaultBeanDefinitionDocumentReader.java | 86 | Loading bean definitions",
      "@version" => "1",
    "@timestamp" => "2015-09-17T18:30:24.899Z",
          "host" => "MacBook-Pro-de-Alain.local",
          "path" => "/Users/Alain/Workspace/elk/logstash-1.5.4/config/filter/test.log"
}
Regards,
Alain
