We currently have two separate databases/schemas for our application. We have run into data inconsistencies with this setup, so we either need a transaction spanning both databases or to merge the databases. We don't want to use JTA transactions, since we are using a normal Tomcat. So our approach would be to merge the two databases/schemas into one.
Both databases/schemas are currently managed via Liquibase, and we would like to maintain two separate changelogs, since one set of entities is from a third-party tool and the other set is managed by us. We don't have any name conflicts, other than that Liquibase uses its default table names.
So my question is: is using the liquibase.databaseChangeLogTableName and liquibase.databaseChangeLogLockTableName properties to define different table names for Liquibase the best approach for this scenario?
http://forum.liquibase.org/topic/configurable-databasechangelog-table-name
That does look like a reasonable approach - you would just have to make sure that whenever you run any Liquibase commands, you always use the correct liquibase.databaseChangeLogTableName and liquibase.databaseChangeLogLockTableName with the corresponding changelog.
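For example, you could keep one defaults file per changelog so that the right history table is always paired with the right changelog (a sketch; the table names are made up, and this assumes the classic liquibase.properties key names):

# liquibase-thirdparty.properties
changeLogFile=db/thirdparty/changelog.xml
databaseChangeLogTableName=DATABASECHANGELOG_3RDPARTY
databaseChangeLogLockTableName=DATABASECHANGELOGLOCK_3RDPARTY

# liquibase-app.properties
changeLogFile=db/app/changelog.xml
databaseChangeLogTableName=DATABASECHANGELOG_APP
databaseChangeLogLockTableName=DATABASECHANGELOGLOCK_APP

Then run e.g. liquibase --defaultsFile=liquibase-thirdparty.properties update, and each changelog keeps its own history and lock table.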
Related
I have a use case where I need to create the exact same PostgreSQL database in two different regions. Everything is the same in these two databases, i.e. the same schema, the same tables, and the same data.
I need to achieve a distributed transaction: if a request lands in region-a and writes to, say, the Person table in the region-a database, then the exact same record must be written to the Person table in both databases, or, if there is any error, the write attempt must be rolled back in both.
I am trying to figure out if I can attach two different datasources to the same Person entity and CRUD repository in Spring, so the repository.save() method can write to the Person table in both databases.
So far, I have come across AbstractRoutingDataSource, but that is for achieving multi-tenancy across databases. The other solutions I found are slightly different, where the use case is to write different records to different databases (mostly sharding based on various data points).
Does Spring provide any out-of-the-box solution so I can achieve a transactional write to the same table in two different databases?
It depends on your definition of "out of the box" - Spring doesn't itself implement distributed transactions, but it does have support for using libraries that do. It is, however, relatively complicated to get everything working correctly, and it requires additional components to be carefully configured in your runtime environment.
Spring Boot 2.x documentation on distributed transactions is here: https://docs.spring.io/spring-boot/docs/2.7.x/reference/htmlsingle/#io.jta
The Spring Boot 3.x documentation is here: https://docs.spring.io/spring-boot/docs/current/reference/html/io.html#io.jta but it's also worth noting that for 3.x, the Spring Boot team has changed direction and decided that integrated support should be provided by the relevant JTA provider (cf. https://github.com/spring-projects/spring-boot/issues/28589 ), and so there are projects like https://github.com/snowdrop/narayana-spring-boot
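For instance, wiring in the snowdrop Narayana starter looks roughly like this (the coordinates are from memory and may differ between versions, so check the project's README; the version is assumed to be managed elsewhere):

<dependency>
    <groupId>me.snowdrop</groupId>
    <artifactId>narayana-spring-boot-starter</artifactId>
</dependency>

With that on the classpath, the starter should auto-configure a JTA transaction manager so that a single @Transactional method can span both XA-capable datasources.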
I am new to Spring Boot and trying to build a small REST service. We have a DB deployed in different environments (e.g. DEV, TEST). The REST service will make a call to the appropriate database based on a received query param (e.g. ?env=TEST). The schemas of the deployed databases are the same; the difference is only in the connection string. I have some questions related to this task.
I read a few articles on how to work with multiple databases using Spring Data JPA (for example this one: https://www.baeldung.com/spring-data-jpa-multiple-databases). It did work, but in the given example they get different entities from different databases using different queries; in my case the entity and the query are the same, but I still have to duplicate repositories, transaction managers, entity managers, etc. because of the different datasources. And this is just two environments, and I have more of them.
Another thought is that I might need to recreate the repository each time I process a request (i.e. make the repository non-singleton). I am not sure that is good practice.
Maybe it is worth using JdbcTemplate instead of Spring Data JPA in this case?
Could you please suggest how to approach such a task?
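One common approach here is Spring's AbstractRoutingDataSource, mentioned above in the multi-tenancy context: it picks a target DataSource per call from a lookup key, so repositories and entity managers exist only once. A minimal sketch, assuming two configured datasources; the EnvContext holder and all bean names are invented:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Holds the environment chosen for the current request.
public class EnvContext {
    private static final ThreadLocal<String> CURRENT = ThreadLocal.withInitial(() -> "DEV");
    public static void set(String env) { CURRENT.set(env); }
    public static String get() { return CURRENT.get(); }
    public static void clear() { CURRENT.remove(); }
}

// Routes every connection request to the DataSource for the current environment.
public class EnvRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return EnvContext.get();
    }
}

// In a @Configuration class: one routing DataSource wrapping the real ones.
@Bean
public DataSource dataSource(DataSource devDataSource, DataSource testDataSource) {
    Map<Object, Object> targets = new HashMap<>();
    targets.put("DEV", devDataSource);
    targets.put("TEST", testDataSource);
    EnvRoutingDataSource routing = new EnvRoutingDataSource();
    routing.setTargetDataSources(targets);
    routing.setDefaultTargetDataSource(devDataSource);
    return routing;
}

A servlet filter (or HandlerInterceptor) would set EnvContext from the ?env= parameter at the start of each request and clear it in a finally block afterwards.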
I have a multi-module Spring Boot application (for simplicity, just moduleA and moduleB). Both modules access the same DB. I also use Flyway to initialise the tables and populate initial data in the DB.
When I use Flyway's naming scheme, I run into the problem that
V1_0__init.sql in moduleA clashes with V1_0__init.sql in moduleB.
I know that I could rename one of the files to V1_1__init.sql and it works. But the idea is that the modules can co-exist without knowing how the migration scripts are named in the other module. Is this possible with Flyway?
Names cannot conflict, because Flyway creates a historical log of applied migrations, and two files of the same name with different definitions would make execution of that log non-idempotent.
But let's get back to the real problem: why are you writing two modules against one database? That is more problematic than the Flyway naming abuse. I can envision ways around this problem, but I don't want to suggest any of them when the foundation is flawed. If one module needs data from the other, you need to build interfaces between them (or pull the shared parts out into a third module), as you are otherwise violating the modular separation of concerns. Simply put: don't do this.
I'm trying to populate my database with around 150 different values (one for each row).
So far, I've found two different ways to implement the inserts, but neither of them seems to be the best way to do it.
Flyway + Postgres: one option is to create a migration file and make use of Postgres's COPY command, but to do so I need to give superuser permissions to the user, and that doesn't seem like a good choice.
Spring Boot: place a data.sql file on the classpath with a lot of inserts. If I'm not wrong, I would have to write 150 INSERT INTO ... statements.
In previous projects, I have used Liquibase, and it has a loadData command which is very convenient and does what it says it does. You just give it the file and table name, and that's it. You end up with your CSV file's values in your table rows.
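For reference, such a loadData changeSet looks roughly like this (the id, author, file, and table names are placeholders):

<changeSet id="load-values" author="me">
    <loadData file="data/values.csv" tableName="my_values"/>
</changeSet>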
Is there a similar way to do that in Flyway? What is the best way to populate the database?
Actually, there is a way; you can find more info on the official documentation page.
You need to add some Spring Boot properties too:
spring.flyway.enabled=true
spring.flyway.locations=classpath:/db/migration
spring.flyway.schemas=public
Details of these properties are here.
In my case, I use repeatable scripts for my needs, but take care with the prefixes; see the naming sketch below.
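To illustrate the prefixes (the file names are invented): versioned (V) migrations run exactly once, in version order, while repeatable (R) migrations re-run whenever their checksum changes, after all pending versioned ones.

db/migration/V1__create_tables.sql
db/migration/R__load_reference_data.sql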
Flyway is a direct competitor of Liquibase, so if you need to track the status of migrations, manage distributed migration (many instances of the same service start simultaneously, and only one instance should actually execute a migration), check upon startup which migrations should be applied and execute only the relevant ones, and get all the other benefits you have previously expected from a "migration management system", then you should use Flyway rather than managing SQL files directly.
Spring Boot has integrations with both Flyway and Liquibase, so you can place your migrations in the "resources" folder, define a couple of properties, and Spring Boot will run Flyway automatically.
For example, here you can find a tutorial on Flyway integration with Spring Boot.
Since Flyway's migrations are SQL files, you can put whatever you want in them (even PL/SQL, I believe); it will even manage a transaction per migration, guaranteeing the migration's atomicity (all or nothing, no partial migration).
So the straightforward approach would be creating an SQL file with 150 inserts and running it via Flyway in Spring, or even via Maven, depending on your actual setup - for example, see the sketch below.
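A minimal sketch of such a migration (the file, table, and column names are invented), e.g. src/main/resources/db/migration/V2__populate_values.sql:

INSERT INTO my_values (id, name) VALUES (1, 'alpha');
INSERT INTO my_values (id, name) VALUES (2, 'beta');
-- ...one INSERT per remaining value, 150 in total

On databases with transactional DML, such as Postgres, Flyway runs the file in a single transaction, so either all rows land or none do.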
If you want more fine-grained control and SQL is not flexible enough, it's possible to implement a migration in Java code. See the official Flyway documentation.
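A rough sketch of a Java-based migration that loads a CSV from the classpath (the class name, CSV path, table, and columns are invented; BaseJavaMigration is Flyway's API, and the CSV parsing here is deliberately naive, with no quoting support):

package db.migration;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.sql.PreparedStatement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// Flyway derives version 2 and the description from the class name,
// just as it would from a V2__Load_values.sql file.
public class V2__Load_values extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                getClass().getResourceAsStream("/db/data/values.csv"), StandardCharsets.UTF_8));
             PreparedStatement ps = context.getConnection()
                     .prepareStatement("INSERT INTO my_values (id, name) VALUES (?, ?)")) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(",");
                ps.setLong(1, Long.parseLong(cols[0]));
                ps.setString(2, cols[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}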
We're refactoring our app to allow for multiple database support, mainly MySQL (current) and MongoDB (in development). For MySQL we have to shade in the Tomcat JDBC pool code, and for Mongo we shade in the Java driver. I need to produce two builds - one for MySQL, one for Mongo.
I'm trying to find a way to do this with Maven - only shade tomcat-jdbc or the Mongo driver, plus some internal code needed for that database system. However, I'm not seeing much in the way of support for multiple builds, and I have seen several SO answers advising against it.
I've also considered separating the database code into different projects entirely - so we provide the product, plus a second jar with either the MySQL or the Mongo code. It's a bit messier for end users, but cleaner for us.
Suggestions?
There is a very important rule in Maven (or any other dependency management system):
Different results should always have different coordinates,
or the other way round:
Building from the same source-code state should always result in the same artifact, regardless of profiles or target systems.
So, if you want to produce two different results, you need them to have different coordinates. Your two options are:
a) one project creating different classifiers (which is ugly but workable)
b) or create two submodules, myproject-mysql and myproject-mongodb, which in turn shade your main code and the respective DB code (different artifactIds)
There is nothing wrong with shading to create a complete jar; in your case, just create two separate jars, each fully workable by itself.
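A rough sketch of what the myproject-mysql submodule's POM could look like (myproject-core and com.example are invented names for the shared-code module, and versions are assumed to be managed by the parent POM; myproject-mongodb would be identical except for depending on the Mongo driver instead):

<artifactId>myproject-mysql</artifactId>
<dependencies>
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>myproject-core</artifactId>
        <version>${project.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.tomcat</groupId>
        <artifactId>tomcat-jdbc</artifactId>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals><goal>shade</goal></goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Each module then produces its own fully shaded jar under its own artifactId, satisfying the coordinates rule above.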