Unable to configure head collection and snapshot collection names in the Javers framework in Spring Boot

I am using Spring Boot (v2.6.6), Java 11, and javers-spring-boot-starter-mongo (6.9.1). I tried changing the names of the collections that Javers creates by default in MongoDB, as mentioned in the Javers docs.
This is what my configuration code looks like:
@Configuration
@ComponentScan(basePackages = "org.javers.spring")
@EnableMongoRepositories("org.javers.spring")
@EnableAspectJAutoProxy
public class JaversSpringMongoApplicationConfig {

    @Autowired
    MongoClient mongoClient;

    @Value("${db.name}")
    String dbName;

    @Bean
    public void mongoConfigJavers() {
        MongoRepository mongoRepository = new MongoRepository(getMongoDb(dbName),
                mongoRepositoryConfiguration()
                        .withSnapshotCollectionName("jv_custom_snapshots")
                        .withHeadCollectionName("jv_custom_head_id")
                        .build());
    }

    private MongoDatabase getMongoDb(String dbName) {
        return mongoClient.getDatabase(dbName);
    }
}
On running the code, the collection names are not changing. I have tried dropping the previous collections and running the code again, but the names still come out as jv_snapshots and jv_head_id instead of jv_custom_snapshots and jv_custom_head_id.
What else do I need to do, or how can I find where I am going wrong?

Fixed it by changing the bean name from mongoConfigJavers to javers. Basically, the Javers framework automatically configures this bean, and it can be overridden by defining a new bean with the same name. Reference
The new code looks something like this:
@Bean
public Javers javers() {
    MongoRepository mongoRepository = new MongoRepository(getMongoDb(dbName),
            mongoRepositoryConfiguration()
                    .withSnapshotCollectionName("jv_custom_snapshots")
                    .withHeadCollectionName("jv_custom_head_id")
                    .build());
    return JaversBuilder.javers()
            .registerJaversRepository(mongoRepository)
            .build();
}

For reference, this is how javers-spring-boot-starter-mongo itself configures the Javers bean; the @ConditionalOnMissingBean annotation is why a user-defined bean takes precedence. Further details
@Bean(name = "JaversFromStarter")
@ConditionalOnMissingBean
public Javers javers() {
    logger.info("Starting javers-spring-boot-starter-mongo ...");
    MongoDatabase mongoDatabase = initJaversMongoDatabase();
    MongoRepository javersRepository = createMongoRepository(mongoDatabase);
    JaversBuilder javersBuilder = TransactionalMongoJaversBuilder.javers()
            .registerJaversRepository(javersRepository)
            .withTxManager(mongoTransactionManager.orElse(null))
            .withProperties(javersMongoProperties)
            .withObjectAccessHook(javersMongoProperties.createObjectAccessHookInstance());
    plugins.forEach(plugin -> plugin.beforeAssemble(javersBuilder));
    return javersBuilder.build();
}
Just for information: with the release of 6.10.0, these collection names are also configurable via Spring Boot configuration files (application.yml).
6.10.0, released on 2023-02-15

1254: Added the possibility to disable the schema management in Mongo Repository via application code.
Usage:
MongoRepository mongoRepository = new MongoRepository(getMongoDb(),
        mongoRepositoryConfiguration()
                .withSnapshotCollectionName("jv_custom_snapshots_")
                .withHeadCollectionName("jv_custom_head_id_")
                .withSchemaManagementEnabled(false)
                .build())
See also MongoE2EWithSchemaEnabledTest.groovy
Made the Mongo repository configuration parameters (snapshotCollectionName, headCollectionName, schemaManagementEnabled) configurable through Spring Boot configuration files (application.yml).
Usage:
javers:
  snapshotCollectionName: "jv_custom_snapshots"
  headCollectionName: "jv_custom_head_id"
  schemaManagementEnabled: false
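Assuming Spring Boot's usual relaxed binding (not quoted from the release notes), the application.properties equivalent would be:

javers.snapshotCollectionName=jv_custom_snapshots
javers.headCollectionName=jv_custom_head_id
javers.schemaManagementEnabled=false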

Related

Spring data with multiple modules not working

I'm trying to set up a project with two data sources, one is MongoDB and the other is Postgres. I have repositories for each data source in different packages and I annotated my main class as follows:
@Import({MongoDBConfiguration.class, PostgresDBConfiguration.class})
@SpringBootApplication(exclude = {
        MongoRepositoriesAutoConfiguration.class,
        JpaRepositoriesAutoConfiguration.class
})
public class TemporaryRunner implements CommandLineRunner {
    ...
}
MongoDBConfiguration:
@Configuration
@EnableMongoRepositories(basePackages = {
        "com.example.datastore.mongo",
        "com.atlassian.connect.spring"})
public class MongoDBConfiguration {
    ...
}
PostgresDBConfiguration:
@Configuration
@EnableJpaRepositories(basePackages = {
        "com.example.datastore.postgres"
})
public class PostgresDBConfiguration {
    ...
}
And even though I specified the base packages as described in the documentation, I still get these messages in the console:
13:10:44.238 [main] [] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
13:10:44.266 [main] [] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.atlassian.connect.spring.AtlassianHostRepository.
I managed to solve this issue for all my repositories by using MongoRepository and JpaRepository, but AtlassianHostRepository comes from an external lib and is a regular CrudRepository (which totally makes sense, because the consumer of the lib can decide what type of DB he would like to use). Anyway, it looks like the basePackages I specified are completely ignored and not used in any way; even though I specified the com.atlassian.connect.spring package only in @EnableMongoRepositories, Spring Data somehow can't figure out which data module should be used.
Am I doing something wrong? Is there any other way I could tell Spring Data to use Mongo for AtlassianHostRepository without changing the AtlassianHostRepository class itself?
The only working solution I found was to let Spring Data ignore AtlassianHostRepository (because it couldn't figure out which data source to use), then create a separate configuration for it and simply build the repository by hand:
@Configuration
@Import({MongoDBConfiguration.class})
public class AtlassianHostRepositoryConfiguration {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public AtlassianHostRepositoryConfiguration(final MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository() {
        RepositoryFactorySupport factory = new MongoRepositoryFactory(mongoTemplate);
        return factory.getRepository(AtlassianHostRepository.class);
    }
}
This solution works fine for a small or limited number of repositories used from a library; it would be rather cumbersome to create all the repositories by hand when there are more of them (a sketch of how to keep that manageable follows below). But after reading the source code of spring-data, I see no way to make it work with basePackages as stated in the documentation (I may be wrong though).
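If more library repositories need the same treatment, a sketch along these lines at least keeps the hand-wiring in one place by sharing a single factory (SomeOtherLibraryRepository is a hypothetical second repository, not from the original answer):

@Configuration
@Import({MongoDBConfiguration.class})
public class LibraryRepositoriesConfiguration {

    private final RepositoryFactorySupport factory;

    @Autowired
    public LibraryRepositoriesConfiguration(final MongoTemplate mongoTemplate) {
        this.factory = new MongoRepositoryFactory(mongoTemplate);
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository() {
        return factory.getRepository(AtlassianHostRepository.class);
    }

    // one @Bean method per additional library repository, all sharing the same factory
    @Bean
    public SomeOtherLibraryRepository someOtherLibraryRepository() {
        return factory.getRepository(SomeOtherLibraryRepository.class);
    }
}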

Multi-tenancy: Managing multiple datasources with Spring Data JPA

I need to create a service that can manage multiple datasources.
These datasources do not necessarily exist when first running the app; actually, an endpoint will create new databases, and I would like to be able to switch to them and create data.
For example, let's say that I have 3 databases, A, B and C. I start the app, use the endpoint that creates D, and then I want to use D.
Is that possible?
I know how to switch to other datasources if those exist, but I can't see any solutions for now that would make my request possible.
Have you got any ideas?
Thanks
To implement multi-tenancy with Spring Boot, we can use AbstractRoutingDataSource as the base DataSource class for all 'tenant databases'.
It has one abstract method, determineCurrentLookupKey, that we have to override. It tells the AbstractRoutingDataSource which of the tenant datasources it has to provide at the moment. Because it works in a multi-threading environment, the information about the chosen tenant should be stored in a ThreadLocal variable.
The AbstractRoutingDataSource stores the info about the tenant datasources in its private Map<Object, Object> targetDataSources. The key of this map is a tenant identifier (for example, a String) and the value is the tenant datasource. To put our tenant datasources into this map, we have to use its setter, setTargetDataSources.
The AbstractRoutingDataSource will not work without a 'default' datasource, which we have to set with the method setDefaultTargetDataSource(Object defaultTargetDataSource).
After we set the tenant datasources and the default one, we have to invoke the method afterPropertiesSet() to tell the AbstractRoutingDataSource to update its state.
So our 'MultiTenantManager' class can be like this:
@Configuration
public class MultiTenantManager {

    private final ThreadLocal<String> currentTenant = new ThreadLocal<>();
    private final Map<Object, Object> tenantDataSources = new ConcurrentHashMap<>();
    private final DataSourceProperties properties;
    private AbstractRoutingDataSource multiTenantDataSource;

    public MultiTenantManager(DataSourceProperties properties) {
        this.properties = properties;
    }

    @Bean
    public DataSource dataSource() {
        multiTenantDataSource = new AbstractRoutingDataSource() {
            @Override
            protected Object determineCurrentLookupKey() {
                return currentTenant.get();
            }
        };
        multiTenantDataSource.setTargetDataSources(tenantDataSources);
        multiTenantDataSource.setDefaultTargetDataSource(defaultDataSource());
        multiTenantDataSource.afterPropertiesSet();
        return multiTenantDataSource;
    }

    public void addTenant(String tenantId, String url, String username, String password) throws SQLException {
        DataSource dataSource = DataSourceBuilder.create()
                .driverClassName(properties.getDriverClassName())
                .url(url)
                .username(username)
                .password(password)
                .build();
        // Check that the new connection is 'live'. If not - throw an exception
        try (Connection c = dataSource.getConnection()) {
            tenantDataSources.put(tenantId, dataSource);
            multiTenantDataSource.afterPropertiesSet();
        }
    }

    public void setCurrentTenant(String tenantId) {
        currentTenant.set(tenantId);
    }

    private DriverManagerDataSource defaultDataSource() {
        DriverManagerDataSource defaultDataSource = new DriverManagerDataSource();
        defaultDataSource.setDriverClassName("org.h2.Driver");
        defaultDataSource.setUrl("jdbc:h2:mem:default");
        defaultDataSource.setUsername("default");
        defaultDataSource.setPassword("default");
        return defaultDataSource;
    }
}
Brief explanation:
- the map tenantDataSources is our local tenant datasource storage, which we put into the setTargetDataSources setter;
- DataSourceProperties properties is used to get the database driver class name of the tenant database from spring.datasource.driverClassName in application.properties (for example, org.postgresql.Driver);
- the method addTenant is used to add a new tenant and its datasource to our local tenant datasource storage. We can do this on the fly, thanks to the method afterPropertiesSet();
- the method setCurrentTenant(String tenantId) is used to 'switch' to the datasource of the given tenant. We can use this method, for example, in a REST controller when handling a request to work with the database. The request should contain the tenantId, for example in an X-TenantId header, which we can retrieve and pass to this method (see the sketch after this list);
- defaultDataSource() is built with an in-memory H2 database to avoid using the default database on the working SQL server.
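For illustration, a minimal controller sketch (the endpoint path, Customer, and CustomerRepository are assumptions, not part of the original answer):

@RestController
public class TenantAwareController {

    private final MultiTenantManager tenantManager;
    private final CustomerRepository customerRepository; // hypothetical Spring Data JPA repository

    public TenantAwareController(MultiTenantManager tenantManager,
                                 CustomerRepository customerRepository) {
        this.tenantManager = tenantManager;
        this.customerRepository = customerRepository;
    }

    @GetMapping("/customers")
    public List<Customer> customers(@RequestHeader("X-TenantId") String tenantId) {
        // route all subsequent JPA calls on this thread to the tenant's datasource
        tenantManager.setCurrentTenant(tenantId);
        return customerRepository.findAll();
    }
}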
Note: you must set the spring.jpa.hibernate.ddl-auto parameter to none to prevent Hibernate from making changes to the database schema. You have to create the schema of the tenant databases beforehand.
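In application.properties this is just:

spring.jpa.hibernate.ddl-auto=none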
A full example of this class and more can be found in my repo.
UPDATED
This branch demonstrates an example of using a dedicated database to store the tenant DB properties instead of property files (see the question from @MarcoGustavo below).

How to add Cassandra MaxRequestsPerConnection using properties file in Spring boot

I have a Spring boot project, in which I use Cassandra as a database.
Currently, I am getting a Cassandra instance by auto-wiring CassandraOperations.
My question is:
How can we set MaxRequestsPerConnection using a property file?
spring.data.cassandra.keyspace-name=event
spring.data.cassandra.contact-points=localhost
spring.data.cassandra.port=9042
Currently, I have these properties in my property file, but I didn't find any property for setting MaxRequestsPerConnection.
Spring Boot does not offer configuration of all properties. You can define a ClusterBuilderCustomizer bean to customize Cluster instances.
Try the following code to declare a customizer bean that gets properties injected, which can be provided via a properties file (more generally speaking, any property source available to Spring Boot):
@Configuration
public class MyConfiguration {

    @Bean
    ClusterBuilderCustomizer clusterBuilderCustomizer(
            @Value("${spring.data.cassandra.pool.max-requests-local:10}") int local,
            @Value("${spring.data.cassandra.pool.max-requests-remote:5}") int remote) {
        PoolingOptions options = new PoolingOptions();
        options.setMaxRequestsPerConnection(HostDistance.LOCAL, local);
        options.setMaxRequestsPerConnection(HostDistance.REMOTE, remote);
        return builder -> builder.withPoolingOptions(options);
    }
}
An alternative to @Value is using a configuration class (annotated with @ConfigurationProperties), which gives you IDE support such as property-name auto-completion.
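A minimal sketch of that alternative, assuming hypothetical class and prefix names (register it with @EnableConfigurationProperties(CassandraPoolProperties.class) on a configuration class):

@ConfigurationProperties(prefix = "spring.data.cassandra.pool")
public class CassandraPoolProperties {

    // binds spring.data.cassandra.pool.max-requests-local (default 10)
    private int maxRequestsLocal = 10;

    // binds spring.data.cassandra.pool.max-requests-remote (default 5)
    private int maxRequestsRemote = 5;

    public int getMaxRequestsLocal() { return maxRequestsLocal; }

    public void setMaxRequestsLocal(int maxRequestsLocal) { this.maxRequestsLocal = maxRequestsLocal; }

    public int getMaxRequestsRemote() { return maxRequestsRemote; }

    public void setMaxRequestsRemote(int maxRequestsRemote) { this.maxRequestsRemote = maxRequestsRemote; }
}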
Step 1:
In the application.properties file, declare the local and remote pool sizes (the required size values):
spring.data.cassandra.keyspace-name=event
spring.data.cassandra.contact-points=localhost
spring.data.cassandra.port=9042
spring.data.cassandra.pool.max-requests-local:20
spring.data.cassandra.pool.max-requests-remote:10
Step 2:
In the bean configuration for the ClusterBuilderCustomizer, get the values using the @Value annotation:

@Value("${spring.data.cassandra.pool.max-requests-local}")
private int localPool;

@Value("${spring.data.cassandra.pool.max-requests-remote}")
private int remotePool;

Then use the PoolingOptions class to call setMaxRequestsPerConnection for local and remote hosts: HostDistance.LOCAL with localPool, and HostDistance.REMOTE with remotePool.
As per the Spring Boot 2.3.0 release notes, ClusterBuilderCustomizer has been replaced with DriverConfigLoaderBuilderCustomizer and CqlSessionBuilderCustomizer. As said in this answer, you just need to declare two beans having these types:
@Bean
public CqlSessionBuilderCustomizer cqlSessionBuilderCustomizer() {
    return cqlSessionBuilder -> cqlSessionBuilder
            .withNodeStateListener(new MyNodeStateListener())
            .withSchemaChangeListener(new MySchemaChangeListener());
}

@Bean
public DriverConfigLoaderBuilderCustomizer driverConfigLoaderBuilderCustomizer() {
    return loaderBuilder -> loaderBuilder
            .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(10));
}
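Coming back to the original question: with the 4.x driver, the MaxRequestsPerConnection equivalent is the advanced.connection.max-requests-per-connection option. A minimal sketch (the injected property name here is an assumption, not a standard Spring Boot key):

@Bean
public DriverConfigLoaderBuilderCustomizer maxRequestsCustomizer(
        @Value("${cassandra.pool.max-requests:1024}") int maxRequests) {
    // maps to advanced.connection.max-requests-per-connection in the 4.x driver
    return loaderBuilder -> loaderBuilder
            .withInt(DefaultDriverOption.CONNECTION_MAX_REQUESTS, maxRequests);
}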

Using Elasticsearch java config client in Spring Batch

I am trying to write a custom Elasticsearch writer that would index data in a Spring Batch implementation.
I found the code below as Java config for Elasticsearch.
Can anyone who has used this share where to call this configuration?
@Configuration
@EnableElasticsearchRepositories(basePackages = "org.springframework.data.elasticsearch.repositories")
static class Config {

    @Value("${esearch.port}") int port;
    @Value("${esearch.host}") String hostname;

    @Bean
    public ElasticsearchOperations elasticsearchTemplate() {
        return new ElasticsearchTemplate(client());
    }

    @Bean
    public Client client() {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(hostname, port);
        client.addTransportAddress(address);
        return client;
    }
}
The code that you listed above is basically the implementation detail of a transport client pointing to an instance of an Elasticsearch server, i.e. it defines your persistence layer using Spring Data.
This code will be used by your Elasticsearch repositories, i.e. repositories that you define by extending ElasticsearchRepository from Spring Data.
You need to edit @EnableElasticsearchRepositories in the code listed above to actually point to the package where you keep your repository definitions; no other call is needed.
When you want to write/index data to Elasticsearch, you work with the ElasticsearchRepository interface: you define your own repositories, and these repositories work with the client instance configured as per the listing in your code. A sketch of such a repository and writer follows below.
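For the Spring Batch part of the question, a minimal sketch of a writer that delegates to such a repository (Product and ProductRepository are hypothetical names, and the write signature shown is the pre-Spring-Batch-5 one):

public interface ProductRepository extends ElasticsearchRepository<Product, String> {
}

public class ElasticsearchItemWriter implements ItemWriter<Product> {

    private final ProductRepository repository;

    public ElasticsearchItemWriter(ProductRepository repository) {
        this.repository = repository;
    }

    @Override
    public void write(List<? extends Product> items) {
        // index the whole chunk through the Spring Data repository
        repository.saveAll(items);
    }
}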
Hope it helps!

Multiple datasources migrations using Flyway in a Spring Boot application

We use Flyway for DB migration in our Spring Boot based app, and now we have a requirement to introduce multi-tenancy support using a multiple-datasources strategy. As part of that, we also need to support migration of multiple data sources. All data sources should maintain the same structure, so the same migration scripts should be used for migrating all data sources. Also, migrations should occur upon application startup (as opposed to build time, whereas it seems that the Maven plugin can be configured to migrate multiple data sources). What is the best approach to achieve this? The app already has data source beans defined, but Flyway executes the migration only for the primary data source.
To make @Roger Thomas's answer more the Spring Boot way:
The easiest solution is to annotate your primary datasource with @Primary (which you already did) and just let Spring Boot migrate your primary datasource the 'normal' way.
For the other datasources, migrate those sources by hand:
@Configuration
public class FlywaySlaveInitializer {

    @Autowired private DataSource dataSource2;
    @Autowired private DataSource dataSource3;
    // other datasources

    @PostConstruct
    public void migrateFlyway() {
        Flyway flyway = new Flyway();
        // if the default config is not sufficient, call setters here

        // source 2
        flyway.setDataSource(dataSource2);
        flyway.setLocations("db/migration_source_2");
        flyway.migrate();

        // source 3
        flyway.setDataSource(dataSource3);
        flyway.setLocations("db/migration_source_3");
        flyway.migrate();
    }
}
Flyway supports migrations coded in Java, so you can start Flyway during your application startup.
https://flywaydb.org/documentation/migration/java
I am not sure how you would configure Flyway to target a number of data sources via its config files. My own development is based around using Java to call Flyway once per data source I need to work against. Spring Boot supports the autowiring of beans marked as @FlywayDataSource, but I have not looked into how this could be used.
For an in-Java solution, the code can be as simple as:

Flyway flyway = new Flyway();
// Set the data source
flyway.setDataSource(dataSource);
// Where to search for classes to be executed or SQL scripts to be found
flyway.setLocations("net.somewhere.flyway");
flyway.setTarget(MigrationVersion.LATEST);
flyway.migrate();
Having the same problem... I looked into the spring-boot-autoconfigure artifact for v2.2.4, in the org.springframework.boot.autoconfigure.flyway package, and I found the annotation FlywayDataSource.
Annotating ANY datasource you want to be used by Flyway should do the trick.
Something like this:
@FlywayDataSource
@Bean(name = "someDatasource")
public DataSource someDatasource(...) {
    // build and return your datasource
}
Found an easy solution for that - I added the migration step during the creation of my EntityManagerFactory:
@Qualifier(EMF2)
@Bean(name = EMF2)
public LocalContainerEntityManagerFactoryBean entityManagerFactory2(
        final EntityManagerFactoryBuilder builder
) {
    final DataSource dataSource = dataSource2();
    Flyway.configure()
            .dataSource(dataSource)
            .locations("db/migration/ds2")
            .load()
            .migrate();
    return builder
            .dataSource(dataSource)
            .packages(Role.class)
            .properties(jpaProperties2().getProperties())
            .persistenceUnit("domain2")
            .build();
}
I disabled spring.flyway.enabled for that.
SQL files live in resources/db/migration/ds1/... and resources/db/migration/ds2/...
This worked for me.
import javax.annotation.PostConstruct;
import org.flywaydb.core.Flyway;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlywaySlaveInitializer {

    @Value("${firstDatasource.db.url}")
    String firstDatasourceUrl;

    @Value("${firstDatasource.db.user}")
    String firstDatasourceUser;

    @Value("${firstDatasource.db.password}")
    String firstDatasourcePassword;

    @Value("${secondDatasource.db.url}")
    String secondDatasourceUrl;

    @Value("${secondDatasource.db.user}")
    String secondDatasourceUser;

    @Value("${secondDatasource.db.password}")
    String secondDatasourcePassword;

    @PostConstruct
    public void migrateFlyway() {
        Flyway flywayIntegration = Flyway.configure()
                .dataSource(firstDatasourceUrl, firstDatasourceUser, firstDatasourcePassword)
                .locations("filesystem:./src/main/resources/migration.first")
                .load();

        Flyway flywayPhenom = Flyway.configure()
                .dataSource(secondDatasourceUrl, secondDatasourceUser, secondDatasourcePassword)
                .locations("filesystem:./src/main/resources/migration.second")
                .load();

        flywayIntegration.migrate();
        flywayPhenom.migrate();
    }
}
And in my application.yml this property:
spring:
  flyway:
    enabled: false
