Multiple data source migrations using Flyway in a Spring Boot application

We use Flyway for database migration in our Spring Boot based app, and now we have a requirement to introduce multi-tenancy support using a multiple data sources strategy. As part of that we also need to support migration of multiple data sources. All data sources should maintain the same structure, so the same migration scripts should be used for migrating all data sources. Migrations should also occur on application startup rather than at build time (the Maven plugin, by contrast, can apparently be configured to migrate multiple data sources). What is the best approach to achieve this? The app already has data source beans defined, but Flyway executes the migration only for the primary data source.

To make @Roger Thomas's answer more the Spring Boot way:
The easiest solution is to annotate your primary data source with @Primary (which you already did) and just let Spring Boot migrate your primary data source the 'normal' way.
For the other data sources, migrate them by hand:
import javax.annotation.PostConstruct;
import javax.sql.DataSource;
import org.flywaydb.core.Flyway;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlywaySlaveInitializer {

    @Autowired private DataSource dataSource2;
    @Autowired private DataSource dataSource3;
    // other data sources

    @PostConstruct
    public void migrateFlyway() {
        Flyway flyway = new Flyway();
        // if the default config is not sufficient, call setters here

        // source 2
        flyway.setDataSource(dataSource2);
        flyway.setLocations("db/migration_source_2");
        flyway.migrate();

        // source 3
        flyway.setDataSource(dataSource3);
        flyway.setLocations("db/migration_source_3");
        flyway.migrate();
    }
}
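Since Flyway 5 the setter-based API above is deprecated in favor of the fluent Flyway.configure() API. As a minimal sketch of the same idea against the newer API (the TenantMigrator helper and the Map of tenant data sources are hypothetical), the same scripts can be applied to every tenant, which matches the requirement that all data sources share one structure:

import java.util.Map;
import javax.sql.DataSource;
import org.flywaydb.core.Flyway;

public class TenantMigrator {

    // Applies the same migration scripts to every tenant data source in turn.
    public static void migrateAll(Map<String, DataSource> tenantDataSources) {
        for (Map.Entry<String, DataSource> tenant : tenantDataSources.entrySet()) {
            Flyway.configure()
                    .dataSource(tenant.getValue())
                    .locations("classpath:db/migration") // shared scripts for all tenants
                    .load()
                    .migrate();
        }
    }
}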

Flyway supports migrations coded in Java, so you can run Flyway during your application startup.
https://flywaydb.org/documentation/migration/java
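For illustration, a minimal sketch of such a Java-based migration, assuming Flyway 6+ (where BaseJavaMigration is available); the class name, table and SQL are hypothetical:

package net.somewhere.flyway;

import java.sql.Statement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// Version and description are derived from the class name: V2__Add_default_tenant.
public class V2__Add_default_tenant extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        try (Statement statement = context.getConnection().createStatement()) {
            statement.execute("INSERT INTO tenant (name) VALUES ('default')"); // hypothetical SQL
        }
    }
}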
I am not sure how you would configure Flyway to target a number of data sources via its config files. My own development is based around using Java to call Flyway once per data source I need to work against. Spring Boot supports the autowiring of beans marked as @FlywayDataSource, but I have not looked into how this could be used.
For an in-Java solution the code can be as simple as:
Flyway flyway = new Flyway();
// Set the data source
flyway.setDataSource(dataSource);
// Where to search for classes to be executed or SQL scripts to be found
flyway.setLocations("net.somewhere.flyway");
flyway.setTarget(MigrationVersion.LATEST);
flyway.migrate();

Having your same problem, I looked into the spring-boot-autoconfigure artifact for v2.2.4, in the org.springframework.boot.autoconfigure.flyway package, and I found the FlywayDataSource annotation.
Annotating ANY data source you want to be used by Flyway should do the trick.
Something like this:
@FlywayDataSource
@Bean(name = "someDatasource")
public DataSource someDatasource(...) {
    // build and return your datasource
}
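As a concrete (hypothetical) variant, assuming Spring Boot 2.x and connection settings bound from a custom second.datasource property prefix:

import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.flyway.FlywayDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SecondDataSourceConfig {

    // "second.datasource" is a hypothetical prefix; it binds url, username,
    // password and driver-class-name from the application properties.
    @FlywayDataSource
    @Bean(name = "someDatasource")
    @ConfigurationProperties("second.datasource")
    public DataSource someDatasource() {
        return DataSourceBuilder.create().build();
    }
}

Note that, as far as I can tell, Spring Boot's auto-configuration still runs a single migration, so @FlywayDataSource points Flyway at a different data source rather than migrating several.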

Found an easy solution for that - I added the migration step during the creation of my EntityManagerFactory:
@Qualifier(EMF2)
@Bean(name = EMF2)
public LocalContainerEntityManagerFactoryBean entityManagerFactory2(
        final EntityManagerFactoryBuilder builder
) {
    final DataSource dataSource = dataSource2();
    Flyway.configure()
            .dataSource(dataSource)
            .locations("db/migration/ds2")
            .load()
            .migrate();
    return builder
            .dataSource(dataSource)
            .packages(Role.class)
            .properties(jpaProperties2().getProperties())
            .persistenceUnit("domain2")
            .build();
}
I disabled spring.flyway.enabled for that.
SQL files live in resources/db/migration/ds1/... and resources/db/migration/ds2/...
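The dataSource2() helper referenced above is not shown in the answer; a minimal sketch of what it could look like (the app.datasource2 property prefix is an assumption):

import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;

// Hypothetical companion bean: binds url, username and password
// from the app.datasource2.* properties.
@Bean
@ConfigurationProperties("app.datasource2")
public DataSource dataSource2() {
    return DataSourceBuilder.create().build();
}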

This worked for me.
import javax.annotation.PostConstruct;
import org.flywaydb.core.Flyway;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlywaySlaveInitializer {

    @Value("${firstDatasource.db.url}")
    String firstDatasourceUrl;
    @Value("${firstDatasource.db.user}")
    String firstDatasourceUser;
    @Value("${firstDatasource.db.password}")
    String firstDatasourcePassword;

    @Value("${secondDatasource.db.url}")
    String secondDatasourceUrl;
    @Value("${secondDatasource.db.user}")
    String secondDatasourceUser;
    @Value("${secondDatasource.db.password}")
    String secondDatasourcePassword;

    @PostConstruct
    public void migrateFlyway() {
        Flyway flywayIntegration = Flyway.configure()
                .dataSource(firstDatasourceUrl, firstDatasourceUser, firstDatasourcePassword)
                .locations("filesystem:./src/main/resources/migration.first")
                .load();

        Flyway flywayPhenom = Flyway.configure()
                .dataSource(secondDatasourceUrl, secondDatasourceUser, secondDatasourcePassword)
                .locations("filesystem:./src/main/resources/migration.second")
                .load();

        flywayIntegration.migrate();
        flywayPhenom.migrate();
    }
}
And in my application.yml this property:
spring:
  flyway:
    enabled: false
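One caveat: the filesystem: locations above resolve relative to the working directory, so they only work when the app is started from the project root. A classpath: location (the directory names here are assumptions) keeps working after the app is packaged as a jar:

// Hedged alternative to the filesystem: locations above.
Flyway flywayIntegration = Flyway.configure()
        .dataSource(firstDatasourceUrl, firstDatasourceUser, firstDatasourcePassword)
        .locations("classpath:migration/first")
        .load();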

Related

Unable to configure head collection and snapshot collection name in Javers framework in spring boot

I am using Spring Boot (v2.6.6), Java 11 and javers-spring-boot-starter-mongo (6.9.1). I tried changing the names of the collections that Javers creates by default in MongoDB, as mentioned in the Javers docs.
This is what my configuration code looks like:
@Configuration
@ComponentScan(basePackages = "org.javers.spring")
@EnableMongoRepositories("org.javers.spring")
@EnableAspectJAutoProxy
public class JaversSpringMongoApplicationConfig {

    @Autowired
    MongoClient mongoClient;

    @Value("${db.name}")
    String dbName;

    @Bean
    public void mongoConfigJavers() {
        MongoRepository mongoRepository = new MongoRepository(getMongoDb(dbName),
                mongoRepositoryConfiguration()
                        .withSnapshotCollectionName("jv_custom_snapshots")
                        .withHeadCollectionName("jv_custom_head_id")
                        .build());
    }

    private MongoDatabase getMongoDb(String dbName) {
        return mongoClient.getDatabase(dbName);
    }
}
On running the code, the collection names are not changing. I have tried dropping the previous collections and running the code again, but the names of the collections still come out as jv_snapshots and jv_head_id instead of jv_custom_snapshots and jv_custom_head_id.
What else do I need to do, or how can I find where I am going wrong?
Fixed it by changing the bean name from mongoConfigJavers to javers. Basically the Javers framework automatically configures this bean, and it can be overridden by defining a new bean with the same name. Reference
New code looks something like this:
@Bean
public Javers javers() {
    MongoRepository mongoRepository = new MongoRepository(getMongoDb(dbName),
            mongoRepositoryConfiguration()
                    .withSnapshotCollectionName("jv_custom_snapshots")
                    .withHeadCollectionName("jv_custom_head_id")
                    .build());
    return JaversBuilder.javers()
            .registerJaversRepository(mongoRepository)
            .build();
}
An example bean configuration of Javers, as defined in javers-spring-boot-starter-mongo. Further details:
@Bean(name = "JaversFromStarter")
@ConditionalOnMissingBean
public Javers javers() {
    logger.info("Starting javers-spring-boot-starter-mongo ...");
    MongoDatabase mongoDatabase = initJaversMongoDatabase();
    MongoRepository javersRepository = createMongoRepository(mongoDatabase);
    JaversBuilder javersBuilder = TransactionalMongoJaversBuilder.javers()
            .registerJaversRepository(javersRepository)
            .withTxManager(mongoTransactionManager.orElse(null))
            .withProperties(javersMongoProperties)
            .withObjectAccessHook(javersMongoProperties.createObjectAccessHookInstance());
    plugins.forEach(plugin -> plugin.beforeAssemble(javersBuilder));
    return javersBuilder.build();
}
Just for information: with the release of 6.10.0, these collection names are also configurable via Spring Boot configuration files (application.yml).
6.10.0, released on 2023-02-15
1254: Added the possibility to disable the schema management in Mongo Repository via application code.
Usage:
MongoRepository mongoRepository = new MongoRepository(getMongoDb(),
        mongoRepositoryConfiguration()
                .withSnapshotCollectionName("jv_custom_snapshots_")
                .withHeadCollectionName("jv_custom_head_id_")
                .withSchemaManagementEnabled(false)
                .build());
See also MongoE2EWithSchemaEnabledTest.groovy
Made Mongo repository configuration parameters (snapshotCollectionName, headCollectionName, schemaManagementEnabled) configurable through Spring Boot configuration files (application.yml).
Usage:
javers:
  snapshotCollectionName: "jv_custom_snapshots"
  headCollectionName: "jv_custom_head_id"
  schemaManagementEnabled: false

Caused by: com.datastax.oss.driver.api.core.InvalidKeyspaceException: Invalid keyspace mykeyspace in Spring Boot Cassandra

I built a project using Spring Boot and Cassandra, which runs in a Docker container.
After I implemented the Cassandra configuration, I ran the project and it threw the error shown below.
Caused by: com.datastax.oss.driver.api.core.InvalidKeyspaceException: Invalid keyspace mykeyspace
How can I fix the issue?
Here is my application.properties file.
spring.cassandra.contactpoints=127.0.0.1
spring.cassandra.port=9042
spring.data.cassandra.keyspace-name=mykeyspace
spring.cassandra.basepackages=com.springboot.cassandra
Here is my Cassandra configuration class:
@Configuration
@EnableCassandraRepositories
public class CassandraConfiguration extends AbstractCassandraConfiguration {

    @Value("${spring.cassandra.contactpoints}")
    private String contactPoint;

    @Value("${spring.cassandra.port}")
    private int port;

    @Value("${spring.data.cassandra.keyspace-name}")
    private String keyspaceName;

    @Value("${spring.cassandra.basepackages}")
    private String basePackages;

    @Override
    protected String getKeyspaceName() {
        return keyspaceName;
    }

    @Override
    protected int getPort() {
        return port;
    }

    @Override
    protected String getContactPoints() {
        return contactPoint;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE_IF_NOT_EXISTS;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[] {basePackages};
    }
}
By default Spring Data will not create or alter the schema for you. This is a good thing for most use cases, as normally you do not want your schema to be created based on a Java class. Altering would be even worse in general, and is also difficult for Cassandra in particular.
If you want Spring to create it, you need:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
I would still recommend not using this in production.
When it comes to keyspaces, however, from my knowledge and from the wording of the documentation, Spring will not create the keyspace even if you use the code above. This makes a lot of sense for Cassandra, as a keyspace needs information such as the replication strategy and the replication factor, the latter being changed for things such as new data centers being added or removed. These are administrative tasks that should not be left up to Spring.
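If you do want to create the keyspace yourself as a one-off administrative step, a minimal sketch with the DataStax driver; the contact point, datacenter name, keyspace name and replication settings are examples only:

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// One-off administrative task: create the keyspace before the app connects.
try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
        .withLocalDatacenter("datacenter1")
        .build()) {
    session.execute("CREATE KEYSPACE IF NOT EXISTS mykeyspace "
            + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
}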
If you delete the - character, I think you can run the code without errors.
Replace this line:
@Value("${spring.data.cassandra.keyspace-name}")
with this one:
@Value("${spring.data.cassandra.keyspacename}")
With Spring Boot there's no need to extend AbstractCassandraConfiguration; it can auto-configure Cassandra for you. You can just remove your configuration class and everything will work just fine. However, even in this case you need to do some work to automatically create the keyspace. I've settled on an auto-configuration added to our company's spring-boot starter, but you can also define it as a regular configuration in your application.
/**
 * Create the configured keyspace before the first CqlSession is instantiated. This is guaranteed
 * by running this auto-configuration before the Spring Boot one.
 */
@ConditionalOnClass(CqlSession.class)
@ConditionalOnProperty(name = "spring.data.cassandra.create-keyspace", havingValue = "true")
@AutoConfigureBefore(CassandraAutoConfiguration.class)
public class CassandraCreateKeyspaceAutoConfiguration {

    private static final Logger logger = LoggerFactory.getLogger(CassandraCreateKeyspaceAutoConfiguration.class);

    public CassandraCreateKeyspaceAutoConfiguration(CqlSessionBuilder cqlSessionBuilder, CassandraProperties properties) {
        // It's OK to mutate cqlSessionBuilder because it has prototype scope.
        try (CqlSession session = cqlSessionBuilder.withKeyspace((CqlIdentifier) null).build()) {
            logger.info("Creating keyspace {} ...", properties.getKeyspaceName());
            session.execute(CreateKeyspaceCqlGenerator.toCql(
                    CreateKeyspaceSpecification.createKeyspace(properties.getKeyspaceName()).ifNotExists()));
        }
    }
}
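Since this class ships as an auto-configuration in a starter, it also has to be registered; a sketch of the META-INF/spring.factories entry (the package name is hypothetical, and Spring Boot 2.7+ uses the AutoConfiguration.imports file instead):

# META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.autoconfigure.CassandraCreateKeyspaceAutoConfiguration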
In my case I've also added a configuration property to control the creation, spring.data.cassandra.create-keyspace; you may leave it out if you don't need the flexibility.
Note that the Spring Boot auto-configuration depends on certain configuration properties; here's what I have in my dev environment:
spring:
  data:
    cassandra:
      keyspace-name: mykeyspace
      contact-points: 127.0.0.1
      port: 9042
      local-datacenter: datacenter1
      schema-action: CREATE_IF_NOT_EXISTS
      create-keyspace: true
More details: spring-boot and Cassandra

getTableMappings method is not present in Hibernate 5

I want to upgrade my Hibernate version from 4.x to 5.x, but after upgrading I cannot find the getTableMappings method in the Configuration class of Hibernate 5. I need this before building the SessionFactory; it was available in earlier Hibernate versions. What is the right solution for this?
I guess you can achieve what you want only by moving away from the legacy bootstrapping:
Configuration is semi-deprecated but still available for use, in a limited form that eliminates these drawbacks. "Under the covers", Configuration uses the new bootstrapping code, so the things available there are also available here in terms of auto-discovery.
As described here, you can bootstrap a Hibernate SessionFactory in the following way:
import org.hibernate.boot.Metadata;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.mapping.PersistentClass;
// ...

StandardServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
        .configure("hibernate.cfg.xml")
        .build();
MetadataSources metadata = new MetadataSources(serviceRegistry);
Metadata meta = metadata.buildMetadata();

// Retrieves the PersistentClass entity metadata representation for all known entities.
for (PersistentClass pClass : meta.getEntityBindings()) {
    // ...
}

sessionFactory = meta.buildSessionFactory();
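If what you needed from getTableMappings() was the table information itself, each PersistentClass exposes its mapped table; a minimal sketch:

import org.hibernate.mapping.PersistentClass;
import org.hibernate.mapping.Table;

// Roughly the information the old Configuration#getTableMappings() iteration provided.
for (PersistentClass pClass : meta.getEntityBindings()) {
    Table table = pClass.getTable();
    System.out.println(table.getName());
}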

Spring data with multiple modules not working

I'm trying to set up a project with two data sources, one is MongoDB and the other is Postgres. I have repositories for each data source in different packages and I annotated my main class as follows:
@Import({MongoDBConfiguration.class, PostgresDBConfiguration.class})
@SpringBootApplication(exclude = {
        MongoRepositoriesAutoConfiguration.class,
        JpaRepositoriesAutoConfiguration.class
})
public class TemporaryRunner implements CommandLineRunner {
    ...
}
MongoDBConfiguration:
@Configuration
@EnableMongoRepositories(basePackages = {
        "com.example.datastore.mongo",
        "com.atlassian.connect.spring"})
public class MongoDBConfiguration {
    ...
}
PostgresDBConfiguration:
@Configuration
@EnableJpaRepositories(basePackages = {
        "com.example.datastore.postgres"
})
public class PostgresDBConfiguration {
    ...
}
And even though I specified the base packages as described in the documentation, I still get these messages in the console:
13:10:44.238 [main] [] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
13:10:44.266 [main] [] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.atlassian.connect.spring.AtlassianHostRepository.
I managed to solve this issue for all my repositories by using MongoRepository and JpaRepository, but AtlassianHostRepository comes from an external lib and is a regular CrudRepository (which totally makes sense, because the consumer of the lib can decide what type of DB he would like to use). Anyway, it looks like the basePackages I specified are completely ignored and not used in any way; even though I specified the com.atlassian.connect.spring package only in @EnableMongoRepositories, Spring Data somehow can't figure out which data module should be used.
Am I doing something wrong? Is there any other way I could tell Spring Data to use Mongo for AtlassianHostRepository without changing the AtlassianHostRepository class itself?
The only working solution I found was to let Spring Data ignore AtlassianHostRepository (because it couldn't figure out which data source to use), then create a separate configuration for it and simply create the repository by hand:
@Configuration
@Import({MongoDBConfiguration.class})
public class AtlassianHostRepositoryConfiguration {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public AtlassianHostRepositoryConfiguration(final MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository() {
        RepositoryFactorySupport factory = new MongoRepositoryFactory(mongoTemplate);
        return factory.getRepository(AtlassianHostRepository.class);
    }
}
This solution works fine for a small or limited number of repositories used from a library; it would be rather cumbersome to create all the repositories by hand if there were more of them. After reading the source code of spring-data, though, I see no way to make it work with basePackages as stated in the documentation (I may be wrong).
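For repositories you own, strict mode can also be satisfied by extending the store-specific interface instead of CrudRepository; a minimal sketch (the Customer document class is hypothetical):

import org.springframework.data.mongodb.repository.MongoRepository;

// Extending the store-specific MongoRepository (instead of CrudRepository)
// lets strict repository configuration mode assign the MongoDB module unambiguously.
public interface CustomerRepository extends MongoRepository<Customer, String> {
}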

Using Elasticsearch java config client in Spring Batch

I am trying to write a custom Elasticsearch writer that would index data in a Spring Batch implementation.
I found the code below as Java config for Elasticsearch.
Can anyone who has used this share where to call this configuration?
@Configuration
@EnableElasticsearchRepositories(basePackages = "org/springframework/data/elasticsearch/repositories")
static class Config {

    @Value("${esearch.port}") int port;
    @Value("${esearch.host}") String hostname;

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(hostname, port);
        client.addTransportAddress(address);
        return client;
    }
}
The code that you listed above is basically the implementation detail of a Transport Client element pointing to an instance of an Elasticsearch server, i.e. it defines your persistence layer using Spring Data.
This code will be used by your Elasticsearch repositories, i.e. repositories that you define by extending ElasticsearchRepository from Spring Data.
You need to edit @EnableElasticsearchRepositories in the code listed by you to point to the package where you keep your repository definitions; no other call is needed.
When you want to write/index data to Elasticsearch, you work with the ElasticsearchRepository interface: you define your own repositories, and these repositories work with instances as per the listing in your code.
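For illustration, a minimal sketch of such a repository (the Product document class and its package are hypothetical):

import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Lives in the package that @EnableElasticsearchRepositories points at.
// A custom Spring Batch ItemWriter can then call save()/saveAll() on it to index items.
public interface ProductRepository extends ElasticsearchRepository<Product, String> {
}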
Hope it helps !!
