I am not clear on how executeAsync() works in Camunda version 7.15.0.
Using Java code in a Spring Boot application, I am trying to migrate a few process instances from one process version to another using a migration plan.
In Java code, when I use the execute() method, the migration is obviously executed immediately.
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.migration.MigrationPlan;
import org.camunda.bpm.engine.runtime.ProcessInstanceQuery;

final MigrationPlan migrationPlan = runtimeService
        .createMigrationPlan(fromProcessDefinitionId, toProcessDefinitionId)
        .mapEqualActivities()
        .build();
final ProcessInstanceQuery processQuery = runtimeService
        .createProcessInstanceQuery()
        .processDefinitionId(fromProcessDefinitionId);
runtimeService.newMigration(migrationPlan)
        .processInstanceQuery(processQuery)
        .executeAsync();
But when I use the executeAsync() method, I see the batch job waiting in the Batches section, but it does not complete. How can I know when it will execute?
The issue can be recreated with https://github.com/firstpostt/camunda-demo-migration. It needs a Postgres database, and the credentials need to be given in application.yml.
There is an entry in the ACT_RU_BATCH table, but I don't see any entry in the ACT_RU_JOB table.
Can I configure something in the bpm-platform.xml file to make sure my migration plan runs within the next 15 minutes when I use the executeAsync() method?
Is there any option to force-trigger the batch from Cockpit when needed?
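For reference, executeAsync() returns a Batch handle that can be inspected from code. A minimal sketch (assuming an autowired ManagementService bean alongside the RuntimeService; not verified against 7.15.0):

import org.camunda.bpm.engine.ManagementService;
import org.camunda.bpm.engine.batch.Batch;

// executeAsync() only creates a seed job; the job executor must be running
// for rows to appear in ACT_RU_JOB and for the batch to make progress.
final Batch batch = runtimeService.newMigration(migrationPlan)
        .processInstanceQuery(processQuery)
        .executeAsync();

// Count the jobs of this batch that are still waiting to be executed.
final long waitingSeedJobs = managementService.createJobQuery()
        .jobDefinitionId(batch.getSeedJobDefinitionId())
        .count();
final long waitingBatchJobs = managementService.createJobQuery()
        .jobDefinitionId(batch.getBatchJobDefinitionId())
        .count();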
Found the problem. The issue was with the Postgres database, but I am not sure about the exact root cause because the issue is not easy to recreate. I dropped the database and created it again from scratch, which seems to have resolved the issue. I used the same SQL scripts again via Flyway, so nothing changed with respect to the database schema. The only difference I can think of is that previously I created the unique constraint on the Camunda tables after some data had already been populated, which might have caused the issue; this time I created the unique constraint before populating any data into the tables.
P.S.:
I was using the unique constraint on the Postgres database described at https://docs.camunda.org/manual/7.5/user-guide/process-engine/database/#additional-database-schema-configuration
I figured out that the issue was with the database because I tried an H2 file-based database (camunda.bpm.database.schema-update set to true and spring.datasource.url=jdbc:h2:~/camunda;DB_CLOSE_ON_EXIT=false) and the batch worked fine.
Then I used a Postgres database (without the unique constraint script) and the batch worked fine. When I created a new database schema with the unique constraint script, the batch did not work, and even after I dropped the constraint the batch did not work anymore.
So I dropped the database and created a new database again without the unique constraint, and the batch worked fine. After that I added the unique constraint and the batch still works fine.
I am not able to recreate the issue consistently, but my guess is that it has something to do with the unique constraint. If you are not using this unique constraint, I am confident this problem will never occur at all.
Related
When I tried to upgrade my Oracle JDBC driver from ojdbc8 19.6.0.0 to ojdbc11 21.6.0.0.1, I started to receive the following exception when trying to do a batch insert with IDs being generated in the database:
java.lang.AssertionError: autoKeyInfo is not initialized
I created a reproducer for this.
How can I resolve this problem?
This seems to be a bug in later versions of the Oracle JDBC drivers.
Oracle has the findings in their issue tracker, but I have nothing to link to since it is not public.
The recommended workaround is to use the LTS driver version, which is ojdbc 19.15.0.0.1.
The reproducer is not just doing a batch update; it uses a KeyHolder and tries to return the IDs generated by the database. The problem is not the generation of the values, the problem is the attempt to return them.
https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/Oracle-extensions.html#GUID-9EC82134-1206-4325-A17B-9FA7610F0169 says "DML returning cannot be combined with batch update". "DML returning" refers to the "returning into" clauses used by the driver to return values for out parameters. This looks like a conscious regression in the newer driver versions. Spring uses KeyHolders to handle out parameters.
When the keys are created in the database with identity columns, triggers, or <sequence>.nextval in the SQL statement, and you do not try to return the generated values, batch inserts should work.
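To illustrate that workaround, here is a rough sketch with Spring's JdbcTemplate (the NOTES table and NOTE_SEQ sequence are made-up names): the ID is generated by the sequence inside the statement, and no KeyHolder is passed, so the driver never has to combine DML returning with a batch update.

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

// Batch insert that lets the database generate the IDs without
// returning them to the caller (hypothetical NOTES/NOTE_SEQ schema).
void batchInsert(JdbcTemplate jdbc, List<String> texts) {
    List<Object[]> args = texts.stream()
            .map(t -> new Object[] { t })
            .toList();
    jdbc.batchUpdate("insert into NOTES (ID, TXT) values (NOTE_SEQ.nextval, ?)", args);
}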
I'm using Flyway and Spring Boot to perform migrations of my database on application startup, as well as Testcontainers to test the migration scripts during integration tests. This works great for verifying that the SQL in the migrations works, but it has the catch of always running against a clean database, since the migration is baselined from a blank schema and all data is cleared down between test cases.
How can I reliably test that a script isn't going to break when there is data in the database? For example, let's say I want to change a table's primary key, but it turns out there are duplicates. Or I want to delete some rows which have foreign key dependencies. These changes might work on an empty data set but would cause runtime errors which it would be good to catch during testing.
I do have test and staging environments with more representative data, but using them means a longer feedback cycle while I wait for a deployment.
I had considered adding additional migration scripts under the /src/test/resources path which could be interleaved with the actual migration scripts; is this a well-supported approach?
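For what it's worth, a sketch of that idea (the db/testdata location and the script name are assumptions, not an official Flyway convention): point spring.flyway.locations at both the real scripts and a test-only folder, and give the seed scripts version numbers that slot between the real ones.

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

// Production scripts live under db/migration; a seed script such as
// V1_1__seed_duplicate_rows.sql exists only on the test classpath and
// runs between V1 and V2 because of its version number.
@SpringBootTest(properties =
        "spring.flyway.locations=classpath:db/migration,classpath:db/testdata")
class MigrationAgainstSeededDataTest {

    @Test
    void migrationsApplyOnTopOfSeededData() {
        // Starting the context runs Flyway over both locations; a V2
        // script that breaks on the seeded data fails this test.
    }
}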
I'm seeing Flyway seemingly skip migration "V.*" scripts and fail on a later script. Script V1 creates a table, then script V2 alters it. Except that no table gets created: the database is a fresh DB2 instance, so it legitimately complains when the ALTER is attempted.
I have a related question still open, but it would be helpful to get Spring Boot/Flyway to log the specific SQL that was tried. I'm not looking for a log of the whole SQL migration, but the exact SQL statement in the migration that was attempted.
I tried adding -Dlogging.level.flyway.core.dbsupport.SqlScript=DEBUG but didn't see any change.
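As a debugging aid, Flyway's info API lists every script it discovered together with its state, which often shows why a V* script was skipped (wrong location, wrong naming, or already recorded in the schema history table). A sketch with placeholder datasource credentials:

import org.flywaydb.core.Flyway;
import org.flywaydb.core.api.MigrationInfo;

// Print each discovered migration and its state (PENDING, SUCCESS, ...).
Flyway flyway = Flyway.configure()
        .dataSource(url, user, password) // placeholder credentials
        .load();
for (MigrationInfo info : flyway.info().all()) {
    System.out.printf("%s | %s | %s%n",
            info.getVersion(), info.getDescription(), info.getState());
}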
In my Spring Boot project, when:
javers.sqlSchemaManagementEnabled=true
The Javers tables are created on the first execution (when the tables do not exist in the database) and the code runs as expected; however, from the second execution onwards an exception is thrown saying that the tables cannot be created because they already exist. This is the situation I cannot understand: isn't Javers supposed to know that the tables already exist and not attempt to create them?
javers.sqlSchemaManagementEnabled=false
If the tables were already created in the database, manually or by executing the application with this option set to 'true' at least once, the application executes as expected.
What am I supposed to do?
Is there something wrong with my Spring Boot configuration? Isn't the application supposed to run with 'sqlSchemaManagementEnabled=true' even with the tables already created?
Or am I expected to leave 'sqlSchemaManagementEnabled=false' and create the tables manually?
I had the same problem when using a schema other than public in PostgreSQL.
I solved it by switching to the public schema; now it works correctly with javers.sqlSchemaManagementEnabled=true.
For other schemas, you have to somehow specify the schema name in org.javers.repository.sql.schema.TableNameProvider.
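If you build the repository yourself rather than relying on the starter defaults, SqlRepositoryBuilder exposes withSchema(..), which feeds the schema name into TableNameProvider. A sketch (the 'audit' schema name and the connectionProvider instance are assumptions):

import org.javers.repository.sql.ConnectionProvider;
import org.javers.repository.sql.DialectName;
import org.javers.repository.sql.JaversSqlRepository;
import org.javers.repository.sql.SqlRepositoryBuilder;

// Build a JaVers SQL repository whose tables are qualified with a schema,
// so the "table already exists" check looks in the right place.
JaversSqlRepository repository = SqlRepositoryBuilder.sqlRepository()
        .withConnectionProvider(connectionProvider) // your ConnectionProvider
        .withDialect(DialectName.POSTGRES)
        .withSchema("audit") // assumed schema name
        .build();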
If javers.sqlSchemaManagementEnabled=true, Javers creates the SQL tables if they do not already exist.
It's checked here:
https://github.com/javers/javers/blob/master/javers-persistence-sql/src/main/java/org/javers/repository/sql/schema/JaversSchemaManager.java#L215
It's hard to say why it doesn't work in your case; try to debug this code using the latest Javers version.
In my project, the current approach is to create the database, if it does not already exist, using CreateDatabaseIfNotExists, and to seed the initial data from that initializer as well. I also added Code First Migrations support after upgrading to Entity Framework 4.4, so that in the future, when we change the model/database structure, we can update the client-side database without dropping the existing one.
However, it doesn't seem to be working well. For example, I am now stuck at design time where forms won't load and the error message is something like: The model backing the 'myEntities' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269). But the model and the database are indeed the updated versions; it just seems that Migrations didn't recognize the database generated by CreateDatabaseIfNotExists, even though everything works well at run time.
I also noticed that if I let CreateDatabaseIfNotExists initialize a database, a subsequent Add-Migration will fail, complaining about a pending migration and asking me to run Update-Database. When I try to run Update-Database, it fails as well, because the migration path seems to assume the database is in its initial state and tries to run all the migration scripts, while none should be run, since the database generated by CreateDatabaseIfNotExists is already in sync with the current model and should not be migrated at all.
I discovered that there is a __MigrationHistory table under System Tables; that table always saves the database initialization history, regardless of whether the initializer is CreateDatabaseIfNotExists or MigrateDatabaseToLatestVersion. The difference is that if the database is initialized by CreateDatabaseIfNotExists, the migrationId of that initialization record is different every time the database is initialized, whereas MigrateDatabaseToLatestVersion always saves the same set of migrationIds, one for each migration step. I guess that is how Entity Framework 5.0 works.
So in the end, I gave up and rewrote my DB access code to seed the initial data in another part of my code rather than in CreateDatabaseIfNotExists or the MigrationsConfiguration class, because neither suits my need.