Spring Data with multiple modules not working - spring-boot

I'm trying to set up a project with two data sources, one is MongoDB and the other is Postgres. I have repositories for each data source in different packages and I annotated my main class as follows:
@Import({MongoDBConfiguration.class, PostgresDBConfiguration.class})
@SpringBootApplication(exclude = {
        MongoRepositoriesAutoConfiguration.class,
        JpaRepositoriesAutoConfiguration.class
})
public class TemporaryRunner implements CommandLineRunner {
    ...
}
MongoDBConfiguration:
@Configuration
@EnableMongoRepositories(basePackages = {
        "com.example.datastore.mongo",
        "com.atlassian.connect.spring"})
public class MongoDBConfiguration {
    ...
}
PostgresDBConfiguration:
@Configuration
@EnableJpaRepositories(basePackages = {
        "com.example.datastore.postgres"
})
public class PostgresDBConfiguration {
    ...
}
And even though I specified the base packages as described in the documentation, I still get these messages in the console:
13:10:44.238 [main] [] INFO o.s.d.r.c.RepositoryConfigurationDelegate - Multiple Spring Data modules found, entering strict repository configuration mode!
13:10:44.266 [main] [] INFO o.s.d.r.c.RepositoryConfigurationExtensionSupport - Spring Data MongoDB - Could not safely identify store assignment for repository candidate interface com.atlassian.connect.spring.AtlassianHostRepository.
I managed to solve this issue for all my own repositories by making them extend MongoRepository and JpaRepository, but AtlassianHostRepository comes from an external lib and is a plain CrudRepository (which totally makes sense, because the consumer of the lib can decide what type of DB he would like to use). Anyway, it looks like the basePackages I specified are completely ignored and not used in any way: even though I listed the com.atlassian.connect.spring package only in @EnableMongoRepositories, Spring Data somehow can't figure out which data module should be used.
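For my own repositories the fix looked roughly like this (a minimal sketch with illustrative entity and repository names, not the actual classes from my project; each interface would live in its own file):
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.mongodb.repository.MongoRepository;

// Lives under com.example.datastore.mongo, so only @EnableMongoRepositories scans it;
// extending MongoRepository lets strict mode assign it to the MongoDB module.
public interface CustomerRepository extends MongoRepository<Customer, String> {
}

// Lives under com.example.datastore.postgres; extending JpaRepository assigns it to JPA.
public interface InvoiceRepository extends JpaRepository<Invoice, Long> {
}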
Am I doing something wrong? Is there any other way I could tell Spring Data to use Mongo for AtlassianHostRepository without changing the AtlassianHostRepository class itself?

The only working solution I found was to let Spring Data ignore AtlassianHostRepository (because it couldn't figure out which data source to use), then create a separate configuration for it and simply build the repository by hand:
@Configuration
@Import({MongoDBConfiguration.class})
public class AtlassianHostRepositoryConfiguration {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public AtlassianHostRepositoryConfiguration(final MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public AtlassianHostRepository atlassianHostRepository() {
        RepositoryFactorySupport factory = new MongoRepositoryFactory(mongoTemplate);
        return factory.getRepository(AtlassianHostRepository.class);
    }
}
This solution works fine for a small or limited number of repositories coming from a library; it would be rather cumbersome to create all the repositories by hand if there were more of them. But after reading the spring-data source code I see no way to make it work with basePackages as stated in the documentation (I may be wrong, though).

Related

Quarkus extension using a repository based on PanacheMongoRepository

I'm currently working on a Quarkus extension which is basically a filter that uses a PanacheMongoRepository. Here is a code snippet (this is in the runtime part of the extension):
@Provider
@Priority(Priorities.AUTHORIZATION)
@AuthorizationSecured
public class AuthorizationFilter implements ContainerRequestFilter {

    // Some injection here
    @Inject
    UserRepository userRepository;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Some business logic here...
        UserEntity userEntity = userRepository.findByName(name);
        // Some business logic here...
    }
}
The repository:
@ApplicationScoped
public class UserRepository implements PanacheMongoRepository<UserEntity> {

    public UserEntity findByName(String name) {
        return find("some query...", name).firstResult();
    }
}
When the repository is called, I get the following exception:
org.jboss.resteasy.spi.UnhandledException: java.lang.IllegalStateException: This method is normally automatically overridden in subclasses...
java.lang.IllegalStateException: This method is normally automatically overridden in subclasses
	at io.quarkus.mongodb.panache.common.runtime.MongoOperations.implementationInjectionMissing(MongoOperations.java:765)
	at io.quarkus.mongodb.panache.PanacheMongoRepositoryBase.find(PanacheMongoRepositoryBase.java:119)
The processor:
class AuthorizeProcessor {

    private static final String FEATURE = "authorize";

    @BuildStep
    FeatureBuildItem feature() {
        return new FeatureBuildItem(FEATURE);
    }

    @BuildStep(onlyIf = IsAuthorizeEnabled.class)
    void registerAuthorizeFilter(
            BuildProducer<AdditionalBeanBuildItem> additionalBeanProducer,
            BuildProducer<ResteasyJaxrsProviderBuildItem> resteasyJaxrsProviderProducer
    ) {
        additionalBeanProducer.produce(new AdditionalBeanBuildItem(UserRepository.class));
        additionalBeanProducer.produce(new AdditionalBeanBuildItem(AuthorizationFilter.class));
        resteasyJaxrsProviderProducer.produce(new ResteasyJaxrsProviderBuildItem(AuthorizationFilter.class.getName()));
    }
}
Any idea?
Thanks for your help :)
MongoDB with Panache (and likewise Hibernate with Panache) uses bytecode enhancement at build time. When this enhancement doesn't occur, it leads to the exception you mentioned at runtime: java.lang.IllegalStateException: This method is normally automatically overridden in subclasses.
This can only happen when the repository or entity is not in the Jandex index. Jandex is used to index all the code of your application so that classes can be discovered without reflection and classpath scanning. If your entity/repository is not in the index, it is not part of your application (the application's own classes are indexed automatically), so it must live inside an external JAR.
Usually this is solved by adding the Jandex plugin to index the code of the external JAR (in fact there are multiple ways to do this; see How to Generate a Jandex Index).
An extension suffers from the same issue, as extensions are not indexed by default. But from an extension you can index the needed classes via a build step, which is easier and avoids polluting the index with classes that are not needed.
This can be done by generating a new AdditionalIndexedClassesBuildItem(UserRepository.class.getName()) inside a build step.
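A minimal sketch of such a build step (the method name is mine; it would sit in the extension's deployment module, e.g. alongside the other build steps in AuthorizeProcessor):
@BuildStep
AdditionalIndexedClassesBuildItem indexUserRepository() {
    // Add UserRepository to the Jandex index so Panache's build-time
    // bytecode enhancement can find and process it.
    return new AdditionalIndexedClassesBuildItem(UserRepository.class.getName());
}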

Primary/secondary datasource failover in Spring MVC

I have a Java web application developed on the Spring framework which uses MyBatis. I see that the datasource is defined in beans.xml. Now I want to add a secondary datasource as a backup. For example, if the application is not able to connect to the DB and gets an error, or if the server is down, then it should be able to connect to a different datasource. Is there a configuration in Spring to do this, or will we have to code this manually in the application?
I have seen primary and secondary annotations in Spring Boot but nothing in Spring. I could achieve this in my code where the connection is created/retrieved, by connecting to the secondary datasource if the connection to the primary datasource fails or times out. But I wanted to know whether this can be achieved by making changes in the Spring configuration alone.
Let me clarify things one by one:
Spring (not just Spring Boot) has a @Primary annotation, but there is no @Secondary annotation.
The purpose of the @Primary annotation is not what you have described. Spring does not automatically switch data sources in any way. @Primary merely tells Spring which bean to use when we don't explicitly specify one. For more detail on this, see https://www.baeldung.com/spring-data-jpa-multiple-databases
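For illustration, a minimal sketch of two DataSource beans where one is marked @Primary (bean names and URLs are made up):
@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    public DataSource primaryDataSource() {
        // Injected wherever a DataSource is required and no qualifier is given.
        return DataSourceBuilder.create().url("jdbc:postgresql://primary-host/db").build();
    }

    @Bean
    public DataSource secondaryDataSource() {
        // Only used when explicitly requested, e.g. via @Qualifier("secondaryDataSource");
        // Spring never fails over to it automatically.
        return DataSourceBuilder.create().url("jdbc:postgresql://secondary-host/db").build();
    }
}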
Now, how do we actually switch datasources when one goes down?
Most people don't manage this kind of high availability in code. People usually prefer to run two master database instances in active-passive mode, kept in sync. For auto-failover, something like keepalived can be used. This is also a highly subjective and contentious topic, and there are a lot of things to consider here, like whether we can afford replication lag, whether there are slaves running for each master (because then we have to switch the slaves too, as the old master's slaves would now be out of sync), etc. If you have databases spread across regions, this becomes even more difficult (read: awesome) and requires yet more engineering, planning, and design.
Now, since the question specifically mentions using application code for this, there is one thing you can do. I don't advise using it in production though. EVER. You can create an AspectJ advice around all your primary transactional methods using your own custom annotation. Let's call this annotation @SmartTransactional for our demo.
Sample code (I did not test it, though):
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SmartTransactional {}

public class SomeServiceImpl implements SomeService {

    @SmartTransactional
    @Transactional("primaryTransactionManager")
    public boolean someMethod() {
        // call a common method here for code reusability or create an abstract class
    }
}

public class SomeServiceSecondaryTransactionImpl implements SomeService {

    @Transactional("secondaryTransactionManager")
    public boolean usingTransactionManager2() {
        // call a common method here for code reusability or create an abstract class
    }
}
@Component
@Aspect
public class SmartTransactionalAspect {

    @Autowired
    private ApplicationContext context;

    @Pointcut("@annotation(...SmartTransactional)")
    public void smartTransactionalAnnotationPointcut() {
    }

    @Around("smartTransactionalAnnotationPointcut()")
    public Object methodsAnnotatedWithSmartTransactional(final ProceedingJoinPoint joinPoint) throws Throwable {
        // the intercepted method, taken from the join point's signature
        Method method = ((MethodSignature) joinPoint.getSignature()).getMethod();
        Object result = joinPoint.proceed();
        boolean failure = Boolean.TRUE; // check if result is a failure
        if (failure) {
            String secondaryTransactionManagerBeanName = ""; // get class name from joinPoint and append 'SecondaryTransactionImpl' instead of 'Impl' in the class name
            Object bean = context.getBean(secondaryTransactionManagerBeanName);
            result = bean.getClass().getMethod(method.getName()).invoke(bean);
        }
        return result;
    }
}

Injection of bean inside ClientHeadersFactory doesn't work

I'm building a Quarkus app which handles HTTP requests with RESTEasy and calls another API with the REST client. I need to propagate a header and add another one on the fly, so I added a class that implements ClientHeadersFactory.
Here's the code:
@ApplicationScoped
public abstract class MicroServicesHeaderHandler implements ClientHeadersFactory {

    @Inject
    MicroServicesConfig config;

    @Override
    public MultivaluedMap<String, String> update(MultivaluedMap<String, String> incomingHeaders,
                                                 MultivaluedMap<String, String> clientOutgoingHeaders) {
        // Will be merged with outgoing headers
        return new MultivaluedHashMap<>() {{
            put("Authorization", Collections.singletonList("Bearer " + config.getServices().get(getServiceName()).getAccessToken()));
            put("passport", Collections.singletonList(incomingHeaders.getFirst("passport")));
        }};
    }

    protected abstract String getServiceName();
}
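For context, here is roughly how such a factory is wired onto a REST client interface via @RegisterClientHeaders (the concrete subclass and client interface names are illustrative, not my actual code):
// A concrete per-service subclass of the abstract factory above
@ApplicationScoped
public class SomeServiceHeaderHandler extends MicroServicesHeaderHandler {
    @Override
    protected String getServiceName() {
        return "some-service";
    }
}

// The MicroProfile REST client interface registering it
@Path("/api")
@RegisterRestClient
@RegisterClientHeaders(SomeServiceHeaderHandler.class)
public interface SomeServiceClient {

    @GET
    @Path("/resource")
    String getResource();
}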
My issue is that the injection of the config doesn't work. I tried both @Inject and @Context, as mentioned in the javadoc of ClientHeadersFactory. I also tried making the class non-abstract, but that doesn't change anything.
MicroServicesConfig is a @Startup bean because it needs to be initialized before Quarkus.run() is called; otherwise hot reload no longer works, since the bean is required to handle requests.
Here's the code FYI:
@Getter
@Startup
@ApplicationScoped
public final class MicroServicesConfig {

    private final Map<String, MicroService> services;

    MicroServicesConfig(AKV akv, ABS abs) {
        // some code to retrieve an encrypted file from a secure storage,
        // decrypt it and initialize the map out of it
    }
}
It appears to be an issue with ClientHeadersFactory, because if I inject my bean in my main class (@QuarkusMain), it works. I'm then able to assign the map to a public static field that I can access from my header handler via Application.myPublicStaticMap, but that's ugly, so I would really prefer to avoid it.
I've searched online and seen several people having the same issue, but according to this blog post, and this one, it should work as of Quarkus 1.3 and MicroProfile 3.3 (REST Client 1.4), and I'm using Quarkus 1.5.2.
Even the example in the second link doesn't work for me with the injection of UriInfo, so the issue doesn't come from the bean I'm trying to inject.
I've been struggling with this for weeks and I'd really like to get rid of my workaround now.
I'm probably just missing something but it's driving me crazy.
Thanks in advance for your help.
This issue has finally been solved in Quarkus 1.8.

Spring Data Redis - #Transactional support on Repository

We're using spring-boot-starter-parent 1.4.1 together with spring-boot-starter-redis and spring-boot-starter-data-redis. We use Redis for (a) message passing to an external app and (b) storing some information in a repository. Our Redis config looks like this:
@Configuration
@EnableRedisRepositories
open class RedisConfig {

    @Bean // for message passing
    @Profile("test")
    open fun testRedisChannelProvider(): RedisParserChannelProvider {
        return RedisParserChannelProvider("test_parser:parse.job", "test_parser:parse.joblist")
    }

    @Bean // for message passing
    @Profile("!test")
    open fun productionRedisChannelProvider(): RedisParserChannelProvider {
        return RedisParserChannelProvider("parser:parse.job", "parser:parse.joblist")
    }

    @Bean // for message passing
    open fun parseJobTemplate(connectionFactory: RedisConnectionFactory): RedisTemplate<String, ParseJob> {
        val template = RedisTemplate<String, ParseJob>()
        template.connectionFactory = connectionFactory
        template.valueSerializer = Jackson2JsonRedisSerializer<ParseJob>(ParseJob::class.java)
        return template
    }

    // @Bean // for message passing
    // open fun parseJobListTemplate ...

    // no template for repository
}
With this config the message passing is working nicely, as well as writing to/reading from the repository. Now I am trying to get @Transactional working for communication with the repository, but I have not succeeded so far. I already followed the example config in the docs and manually enabled transaction support on it:
@Bean
open fun redisTemplate(): RedisTemplate<*, *> {
    val template = RedisTemplate<ByteArray, ByteArray>()
    template.setEnableTransactionSupport(true)
    return template
}
...but this is apparently not the way to go. Currently, everything written to the repository (in particular during tests) stays there.
@Transactional use of Redis repositories is not possible, and I doubt it will work at all.
The reason behind this is how Spring Data Redis repository support works:
RedisKeyValueAdapter relies on the results of write and read operations that are issued while persisting an object.
Redis transactions behave more like deferred batches, so it's not possible to wrap Redis repository support inside a transaction; that would require a different approach and impose several limitations.
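To illustrate the deferred-batch behaviour with the plain template API (a sketch in Java, assuming an injected RedisTemplate<String, String>; this is not the repository abstraction):
List<Object> txResults = redisTemplate.execute(new SessionCallback<List<Object>>() {
    public List<Object> execute(RedisOperations operations) throws DataAccessException {
        operations.multi();
        operations.opsForValue().set("key", "value"); // queued, not yet written
        operations.opsForValue().get("key");          // queued, returns null at this point
        return operations.exec();                     // all commands actually run here
    }
});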

Multiple datasources migrations using Flyway in a Spring Boot application

We use Flyway for DB migration in our Spring Boot based app, and now we have a requirement to introduce multi-tenancy support using a multiple-datasources strategy. As part of that we also need to support migration of multiple data sources. All data sources should maintain the same structure, so the same migration scripts should be used for migrating all data sources. Also, migrations should occur upon application startup (as opposed to build time, where it seems the Maven plugin can be configured to migrate multiple data sources). What is the best approach to achieve this? The app already has data source beans defined, but Flyway executes the migration only for the primary data source.
To make @Roger Thomas's answer more the Spring Boot way:
The easiest solution is to annotate your primary datasource with @Primary (which you already did) and just let Spring Boot migrate that primary datasource the 'normal' way.
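The 'normal' way is simply the auto-configured Flyway run against the primary datasource; a minimal sketch of the relevant properties (using the spring.flyway.* keys of Spring Boot 2.x, adjust the location to your project):
spring:
  flyway:
    enabled: true
    locations: classpath:db/migration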
For the other datasources, migrate those sources by hand:
@Configuration
public class FlywaySlaveInitializer {

    @Autowired private DataSource dataSource2;
    @Autowired private DataSource dataSource3;
    // other datasources

    @PostConstruct
    public void migrateFlyway() {
        Flyway flyway = new Flyway();
        // if the default config is not sufficient, call setters here

        // source 2
        flyway.setDataSource(dataSource2);
        flyway.setLocations("db/migration_source_2");
        flyway.migrate();

        // source 3
        flyway.setDataSource(dataSource3);
        flyway.setLocations("db/migration_source_3");
        flyway.migrate();
    }
}
Flyway supports migrations coded within Java and so you can start Flyway during your application startup.
https://flywaydb.org/documentation/migration/java
I am not sure how you would configure Flyway to target a number of data sources via its config files. My own development is based around using Java to call Flyway once per data source I need to work against. Spring Boot supports the autowiring of beans marked as @FlywayDataSource, but I have not looked into how this could be used.
For an in-Java solution the code can be as simple as:
Flyway flyway = new Flyway();
// Set the data source
flyway.setDataSource(dataSource);
// Where to search for classes to be executed or SQL scripts to be found
flyway.setLocations("net.somewhere.flyway");
flyway.setTarget(MigrationVersion.LATEST);
flyway.migrate();
I had the same problem... I looked into the spring-boot-autoconfigure artifact for version 2.2.4, in the org.springframework.boot.autoconfigure.flyway package, and I found the annotation FlywayDataSource.
Annotating ANY datasource you want to be used by Flyway should do the trick.
Something like this:
@FlywayDataSource
@Bean(name = "someDatasource")
public DataSource someDatasource(...) {
    <build and return your datasource>
}
I found an easy solution for that: I added the migration step during the creation of my EntityManagerFactory:
@Qualifier(EMF2)
@Bean(name = EMF2)
public LocalContainerEntityManagerFactoryBean entityManagerFactory2(
        final EntityManagerFactoryBuilder builder
) {
    final DataSource dataSource = dataSource2();
    Flyway.configure()
            .dataSource(dataSource)
            .locations("db/migration/ds2")
            .load()
            .migrate();
    return builder
            .dataSource(dataSource)
            .packages(Role.class)
            .properties(jpaProperties2().getProperties())
            .persistenceUnit("domain2")
            .build();
}
I disabled spring.flyway.enabled for that.
SQL files live in resources/db/migration/ds1/... and resources/db/migration/ds2/...
This worked for me.
import javax.annotation.PostConstruct;
import org.flywaydb.core.Flyway;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlywaySlaveInitializer {

    @Value("${firstDatasource.db.url}")
    String firstDatasourceUrl;
    @Value("${firstDatasource.db.user}")
    String firstDatasourceUser;
    @Value("${firstDatasource.db.password}")
    String firstDatasourcePassword;

    @Value("${secondDatasource.db.url}")
    String secondDatasourceUrl;
    @Value("${secondDatasource.db.user}")
    String secondDatasourceUser;
    @Value("${secondDatasource.db.password}")
    String secondDatasourcePassword;

    @PostConstruct
    public void migrateFlyway() {
        Flyway flywayIntegration = Flyway.configure()
                .dataSource(firstDatasourceUrl, firstDatasourceUser, firstDatasourcePassword)
                .locations("filesystem:./src/main/resources/migration.first")
                .load();

        Flyway flywayPhenom = Flyway.configure()
                .dataSource(secondDatasourceUrl, secondDatasourceUser, secondDatasourcePassword)
                .locations("filesystem:./src/main/resources/migration.second")
                .load();

        flywayIntegration.migrate();
        flywayPhenom.migrate();
    }
}
And in my application.yml this property:
spring:
  flyway:
    enabled: false
