Where can I find the SQL to create, in my database, all the metadata structures needed by the Spring Batch framework?
The application doesn't have the database permissions to auto-generate them.
As suggested in the documentation, I've tried to use the SQL provided under org.springframework.batch.core,
but it seems to be incomplete.
For example, it is missing the column BATCH_JOB_EXECUTION.JOB_CONFIGURATION_LOCATION VARCHAR(2500), which on the other hand is present in the documentation snippet.
I understand that this column comes with the newer version 4.1, but the [migration script](https://github.com/spring-projects/spring-batch/blob/main/spring-batch-core/src/main/resources/org/springframework/batch/core/migration/4.1/migration-oracle.sql) doesn't seem right either, since it assumes that the column is already there:
ALTER TABLE BATCH_JOB_EXECUTION MODIFY JOB_CONFIGURATION_LOCATION VARCHAR2(2500 char);
Where can I find the complete Oracle SQL script that creates the structures needed by framework version 4.3.6?
If you look at https://github.com/spring-projects/spring-batch/blob/4.3.x/spring-batch-core/src/main/resources/org/springframework/batch/core/schema-oracle10g.sql you can see the BATCH_JOB_EXECUTION.JOB_CONFIGURATION_LOCATION column. So you can use /org/springframework/batch/core/schema-oracle10g.sql from the classpath since it is included in spring-batch-core-4.3.6.jar.
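If you need the DDL as a standalone file to hand over to a DBA (since the application itself can't create the tables), a minimal sketch along these lines can copy the packaged script out of the jar; the output file name batch-schema-oracle.sql is just an example:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ExtractBatchSchema {
    public static void main(String[] args) throws Exception {
        // Read the Oracle schema script that ships inside spring-batch-core-4.3.6.jar
        // and write it to a local file that can be reviewed and run by a DBA.
        try (InputStream in = ExtractBatchSchema.class.getResourceAsStream(
                "/org/springframework/batch/core/schema-oracle10g.sql")) {
            Files.copy(in, Paths.get("batch-schema-oracle.sql"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}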
Related
I have taken a DB dump from my dev database to my QA database (Oracle) for testing. My application is a Spring Boot application and uses Hibernate Envers for auditing. I get the above error when trying to insert data into the tables. I tried removing the data from all the audit tables and the revinfo table, but the issue is still there. Does anybody have any idea about this?
I got the same error after updating the version of hibernate-envers for Spring Boot 3+. One quick solution for me was to (after taking a backup) drop all the audit-related tables, the revinfo table, and also the sequence (revinfo_seq, if you created one), and then let the ddl-auto property do the work by setting the following in application.yml:
spring.jpa.hibernate.ddl-auto: update
It recreates all the needed tables and sequences with their required definitions, and after that the inserts were committed.
I'm trying to populate my database with around 150 different values (one for each row).
So far I've found two different ways to implement the inserts, but neither of them seems to be the best way to do it.
Flyway + Postgres: one option is to create a migration file and make use of Postgres's COPY command, but to do so I need to give the user superuser permissions, and that doesn't seem like a good choice.
Spring Boot: place a data.sql file on the classpath with a lot of inserts. If I'm not wrong, I would have to write 150 INSERT INTO ... statements.
In previous projects I have used Liquibase, which has a loadData change that conveniently does exactly what its name says: you just give it the file and the table name, and you end up with your CSV values in your table rows.
Is there a similar way to do that in Flyway? What is the best way to populate the database?
Actually there is a way; you can find more info on the official documentation page.
You need to add some Spring Boot properties too:
spring.flyway.enabled=true
spring.flyway.locations=classpath:/db/migration
spring.flyway.schemas=public
Property details are here.
In my case, I use repeatable scripts for my needs, but take care with the file-name prefixes (R__ for repeatable migrations versus V__ for versioned ones).
Flyway is a direct competitor of Liquibase, so if you need to track the status of migrations, manage distributed migrations (many instances of the same service start simultaneously, and only one instance should actually execute a migration), check on startup which migrations should be applied and execute only the relevant ones, and get all the other benefits you would expect from a "migration management system", then you should use Flyway rather than managing SQL scripts directly.
Spring Boot has integrations with both Flyway and Liquibase, so you can place your migrations in the "resources" folder, define a couple of properties, and Spring Boot will run Flyway automatically.
For example, here you can find a tutorial on Flyway integration with Spring Boot.
Since Flyway's migrations are SQL files, you can put whatever you want in them (even PL/SQL, I believe), and Flyway will even manage a transaction per migration, guaranteeing the migration's atomicity (all or nothing, no partial migration).
So the straightforward approach would be to create a SQL file with 150 inserts and run it via Flyway in Spring, or even via Maven, depending on your actual setup.
If you want more fine-grained control and SQL is not flexible enough, it's possible to implement a migration in Java code; see the official Flyway documentation and the sketch below.
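As a rough illustration of that Java-based option: the table my_table, its two columns, and the /data/values.csv resource below are made-up placeholders, and Flyway 5.2+ with the default db/migration package for Java migrations is assumed. The migration reads the CSV and issues one insert per line:
package db.migration;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.sql.PreparedStatement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

public class V2__Load_reference_data extends BaseJavaMigration {

    @Override
    public void migrate(Context context) throws Exception {
        // Read the CSV from the classpath and insert one row per line,
        // reusing the connection Flyway provides so it runs inside the migration transaction.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                getClass().getResourceAsStream("/data/values.csv"), StandardCharsets.UTF_8));
             PreparedStatement insert = context.getConnection()
                     .prepareStatement("INSERT INTO my_table (id, name) VALUES (?, ?)")) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                insert.setLong(1, Long.parseLong(fields[0].trim()));
                insert.setString(2, fields[1].trim());
                insert.executeUpdate();
            }
        }
    }
}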
I would like to use Liquibase in my Spring Boot app. My requirement is that I have a dummy schema which is populated with tables every time I change the entity classes; this is done by Hibernate's DDL create. There are many schemas identical to the dummy schema, but containing data. I want those schemas to be compared with the dummy schema on update and synced without affecting my data. How can I achieve this? I could not find a tutorial anywhere; if there is one, please give me the link.
I think this tutorial explains what you are looking for:
Baeldung: Maven Liquibase plugin
Section 5.3 describes how to generate a changelog file with the differences between two databases.
I'm quite new to Spring and Hibernate, so I used a feature in MyEclipse called "generate CRUD application" (it uses Spring and Hibernate for the heart of the application and JSF for the presentation layer), which I intend to modify so that I can work with it. After making the application, which works fine by the way, I discovered that there are fields and probably even tables to be added to the database (an Oracle 11g instance). So my questions are the following:
If I create new entity classes and update the existing ones, will the changes be written directly to the database?
If not, is there any way to do it? I don't think a direct update of the database would be a good idea.
Thank you in advance.
If I understand correctly, you want to know whether the database schema can be created/updated automatically from your @Entity classes, and how to enable/disable such creation. Yes, it's possible by setting a property; the name of the property depends on the kind of project. For example, in a default Spring Boot application, you can have
spring.jpa.hibernate.ddl-auto: update
in application.properties. The value update above will have the schema created automatically on the first run and then updated on subsequent runs. Using validate instead of update won't alter the schema, but just validate it.
This Stack Overflow post lists the possible values and their behaviour.
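As a rough sketch of what that looks like in practice (the Customer entity and its fields are hypothetical, and a pre-Spring Boot 3 setup with javax.persistence is assumed), adding a field to an already-mapped entity is enough for ddl-auto=update to add the matching column on the next startup:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // Newly added field: ddl-auto=update adds a matching column on the next startup.
    private String email;

    // getters and setters omitted
}
Note that update only adds missing tables and columns; it never drops or renames existing ones, so removing a field will not remove its column.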
I was using Flyway through the command line to migrate my production DB (MySQL), while using a fixed SQL script to create the DB, tables, etc. in my unit tests with H2. I would now like to integrate Flyway better and create/delete the DB after each unit test.
I have a DB factory and inside its build method I'm using the following code:
flyway.setLocations("filesystem:sql/migrations/common","filesystem:sql/migrations/h2");
flyway.setSchemas("MYSERVER");
flyway.setDataSource(
        p.getProperty(DB_URL.getName()),
        p.getProperty(USERNAME.getName()),
        p.getProperty(PASSWORD.getName()));
flyway.setInitOnMigrate(true);
flyway.migrate();
The migrations seem to apply correctly, as I can see my SQL in the Flyway logs. But when I start using the DB immediately afterwards, I get TABLE NOT FOUND errors. I'm using H2 as an in-memory DB, with the following URL to initialize my client:
jdbc:h2:mem:MYSERVER;MVCC=true
Do you have any idea what I might be doing wrong?
The problem is your JDBC URL.
MYSERVER is the name of the database, not the schema.
The easiest thing to do is to let Flyway use the same URL and not set a schema. That way you'll find everything in the PUBLIC schema of the MYSERVER database (see the sketch below).
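A minimal sketch of that setup, using the same Flyway 3.x-style API as the snippet above (the sa/empty-password credentials are H2's defaults, and adding DB_CLOSE_DELAY=-1 to keep the in-memory database alive between connections is an assumption about your setup):
import org.flywaydb.core.Flyway;

public class TestDbFactory {

    public static void migrate() {
        Flyway flyway = new Flyway();
        flyway.setLocations("filesystem:sql/migrations/common", "filesystem:sql/migrations/h2");
        // Same URL the test client uses; no setSchemas() call, so H2's default PUBLIC schema is used.
        flyway.setDataSource("jdbc:h2:mem:MYSERVER;MVCC=true;DB_CLOSE_DELAY=-1", "sa", "");
        flyway.migrate();
    }
}
The test client should then connect with that same URL so it sees the tables Flyway created.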
I was having the same issue. After digging around, it appears that the H2 database is being recreated between migration scripts somehow.
You can confirm this is happening by putting a SELECT * FROM migrations in the first two migrations: it will succeed in the first and fail in the second.
According to the H2 documentation, specifying a database name and the ;DB_CLOSE_DELAY=-1 option should suffice to allow concurrent and subsequent access to the in-memory database; however, this is not the case.
Upgrading Flyway from 3.1 to 3.2.1 fixed the problem.