I'm trying to set up a Spring Boot application with two CacheManagers, with the code below:
@SpringBootApplication
@EnableCaching
public class TestApplication {
    ...
}

@Configuration
public class TestGuavaCacheConfig extends CachingConfigurerSupport {
    ...
}

@Configuration
public class TestRedisCacheConfig extends CachingConfigurerSupport {
    ...
}
But when I start the app, it always fails with the following error:
Caused by: java.lang.IllegalStateException: 2 implementations of CachingConfigurer were found when only 1 was expected. Refactor the configuration such that CachingConfigurer is implemented only once or not at all.
	at org.springframework.cache.annotation.AbstractCachingConfiguration.setConfigurers(AbstractCachingConfiguration.java:71) ~[spring-context-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_66]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_66]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_66]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_66]
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:654) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:331) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
	... 59 common frames omitted
It seems that Spring Boot can't support two CacheManagers. Is this true?
TL;DR: CachingConfigurer is meant to configure the default cache settings.
This has nothing to do with Spring Boot, that interface (and the related exception) comes straight from Spring Framework.
CachingConfigurer allows you to specify the default CacheManager that your application should be using. As the exception states, you can't have two of them. That does not mean you can't have two cache managers, of course.
What are you trying to do exactly? If you want to define two cache managers and use the cacheManager attribute of the @CacheConfig or @Cacheable annotations, then your (only) CachingConfigurer implementation should define the default one, and you should create the other like any other bean that you'll reference in the annotation.
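For illustration, here is a minimal sketch of that setup. The bean names, the Guava/Redis split, and the RedisTemplate wiring are assumptions based on your two config classes, not code from your project:

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.guava.GuavaCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.core.RedisTemplate;

@Configuration
@EnableCaching
public class CacheConfig extends CachingConfigurerSupport {

    // the default CacheManager, used when no cacheManager attribute is given
    @Bean
    @Override
    public CacheManager cacheManager() {
        return new GuavaCacheManager();
    }

    // a second, non-default manager, referenced by bean name from the annotations
    @Bean
    public CacheManager redisCacheManager(RedisTemplate<Object, Object> redisTemplate) {
        return new RedisCacheManager(redisTemplate);
    }
}

A caching method can then opt into the second manager explicitly, e.g. with @Cacheable(cacheNames = "books", cacheManager = "redisCacheManager").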
If you want to switch from one cache manager to the other at runtime, consider implementing a CacheResolver instead and wrapping your CacheManager instances in it. Based on a custom annotation and/or a cache name, you'll be able to return the cache(s) to use with some custom code of yours.
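Here is a hedged sketch of that idea; the name-prefix convention is just one example of the "custom code of yours":

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;

public class MultiCacheResolver implements CacheResolver {

    private final CacheManager guavaCacheManager;
    private final CacheManager redisCacheManager;

    public MultiCacheResolver(CacheManager guavaCacheManager, CacheManager redisCacheManager) {
        this.guavaCacheManager = guavaCacheManager;
        this.redisCacheManager = redisCacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        List<Cache> caches = new ArrayList<>();
        for (String cacheName : context.getOperation().getCacheNames()) {
            // illustrative convention: cache names starting with "redis-" live in Redis
            CacheManager manager = cacheName.startsWith("redis-") ? redisCacheManager : guavaCacheManager;
            caches.add(manager.getCache(cacheName));
        }
        return caches;
    }
}

You would then expose it by overriding cacheResolver() in your single CachingConfigurerSupport implementation.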
I have implemented an interface for using MapStruct:
@Mapper(componentModel = "spring")
public interface MapStructMapper {

    MapStructMapper INSTANCE = Mappers.getMapper(MapStructMapper.class);

    MyApiModel myInternalClassToMyApiModel(MyDocument.MyInternalClass myInternalClass);

    MyDocument.MyInternalClass myApiModelToMyInternalClass(MyApiModel myApiModel);
}
When running the Gradle build, I get the following exception when the tests are executed:
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'MapStructMapper' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
In my test class I currently have only:
@Autowired
protected MapStructMapper mapper;
and in my build.gradle
implementation 'org.mapstruct:mapstruct:1.4.2.Final'
annotationProcessor 'org.mapstruct:mapstruct-processor:1.4.2.Final'
How can I solve this problem, and how can I invoke the mapping when using MapStruct via an interface?
Based on the information you provided, it's hard to give a definitive answer.
Please check that you have not only included the MapStruct dependency but also the annotation processor in your build, so that the MapStructMapperImpl is actually generated.
If it is indeed generated, you must make sure that it is included in the application context of your test. If you use @SpringBootTest, you need to make sure the interface is declared in a package that is covered by component scanning. If you construct a dedicated context with @ContextConfiguration, you need to list MapStructMapperImpl.class in the classes parameter, just as you would with other classes annotated with @Component.
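For the dedicated-context variant, a minimal sketch could look like this (JUnit 5 is assumed, and the test class name is made up):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = MapStructMapperImpl.class) // the generated implementation
class MapStructMapperTest {

    @Autowired
    protected MapStructMapper mapper;

    @Test
    void mapperIsInjected() {
        assertNotNull(mapper);
    }
}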
I am porting an existing JBoss JEE application to Quarkus. I am using a number of Hibernate Validator custom validators that require injection.
For that purpose I've defined all custom validators that require injection as beans in my libraries, like this:
@ApplicationScoped
public class SomeValidator implements ConstraintValidator<SomeValidation, AnObject> {

    @Inject
    public BeanUsingEntityManager bean;
Note: it is common code, so it should remain working on JBoss as well.
Next I defined a REST service. The REST service makes use of an application-scoped bean, like this:
@ApplicationScoped
public class ApplicationContext {

    @PersistenceContext(unitName = "A")
    EntityManager em;

    @Produces
    @EntityManagerA // qualifier required to make the datasource unique in a JEE context (there are more)
    public EntityManager produce() {
        return em;
    }

    // NOTE: Quarkus does not allow @Produces on a field, which is allowed in JBoss, hence the method
    @Produces
    public BeanUsingEntityManager createBeanUsingEntityManager() {
        // some logic that requires the entity manager.
    }
}
The problem shown here is simplified, but I keep running into the following error message:
Caused by: javax.enterprise.inject.CreationException: Synthetic bean instance for javax.persistence.EntityManager not initialized yet: javax_persistence_EntityManager_b60c51739990fc921960fc78caeb075a811a91a6
- a synthetic bean initialized during RUNTIME_INIT must not be accessed during STATIC_INIT
- RUNTIME_INIT build steps that require access to synthetic beans initialized during RUNTIME_INIT should consume the SyntheticBeansRuntimeInitBuildItem
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.create(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:167)
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.create(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:190)
at io.quarkus.arc.impl.AbstractSharedContext.createInstanceHandle(AbstractSharedContext.java:96)
at io.quarkus.arc.impl.AbstractSharedContext.access$000(AbstractSharedContext.java:14)
at io.quarkus.arc.impl.AbstractSharedContext$1.get(AbstractSharedContext.java:29)
at io.quarkus.arc.impl.AbstractSharedContext$1.get(AbstractSharedContext.java:26)
at io.quarkus.arc.impl.LazyValue.get(LazyValue.java:26)
at io.quarkus.arc.impl.ComputingCache.computeIfAbsent(ComputingCache.java:69)
at io.quarkus.arc.impl.AbstractSharedContext.get(AbstractSharedContext.java:26)
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.get(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:222)
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.get(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:238)
at nl.bro.gm.gmw.dispatch.resources.ApplicationContext_Bean.create(ApplicationContext_Bean.zig:131)
... 59 more
I'm new to Quarkus, so I'm not sure how to handle this issue, or even whether I'm making the correct assumptions. I can imagine that Quarkus wants to give me a fresh EntityManager for each request (which I understand), but that poses a problem for my application-scoped beans.
What am I doing wrong here?
So, the full answer is that the EntityManager is created at the runtime init phase whereas the ValidatorFactory (and the ConstraintValidators) are created at static init time.
The Quarkus bootstrap goes static init -> runtime init.
So in your case, you can't access a @Singleton bean which uses the EntityManager during static init, as it's not yet available.
Making your bean @ApplicationScoped will create a proxy and avoid this chicken-and-egg problem.
You will have only one BeanUsingEntityManager for your whole application.
The EntityManager is a bit different because we wrap it: you will get one new EntityManager/Session per transaction, which is what is expected, as EntityManagers/Sessions are not thread-safe.
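As a sketch against the simplified ApplicationContext above (the BeanUsingEntityManager constructor is assumed here for illustration, since the body of your producer was elided), scoping the producer method is one way to get that proxy:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@ApplicationScoped
public class ApplicationContext {

    @PersistenceContext(unitName = "A")
    EntityManager em;

    // The scope on the producer makes the produced bean @ApplicationScoped:
    // injection points receive a client proxy, and the EntityManager is only
    // touched on first use, safely after runtime init.
    @Produces
    @ApplicationScoped
    public BeanUsingEntityManager createBeanUsingEntityManager() {
        return new BeanUsingEntityManager(em); // constructor assumed for illustration
    }
}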
Java configuration allows us to manage bean creation within a configuration file. @Component and @Service annotated classes used with component scanning do the same. However, I'm concerned about using these two mechanisms at the same time.
Should Java configuration and annotated component scans be avoided in the same project? I ask because the result is unclear in the following scenario:
@Configuration
public class MyConfig {

    @Bean
    public Foo foo() {
        return new Foo(500);
    }
}

...

@Component
public class Foo {

    private int value;

    public Foo() {
    }

    public Foo(int value) {
        this.value = value;
    }
}

...

public class Consumer {

    @Autowired
    Foo foo;
    ...
}
So, in the above situation, will the Consumer get a Foo instance with value 500 or value 0? I've tested locally, and it appears that the Java-configured Foo (with value 500) is created consistently. However, I'm concerned that my testing isn't thorough enough to be conclusive.
What is the real answer? Using both Java config and component scanning for @Component beans of the same type seems like a bad thing.
I think your concern arises from a use case like the following:
You have a custom Spring starter library that has its own @Configuration classes and @Bean definitions. But if you have @Component/@Service classes in this library, you will need to explicitly @ComponentScan their packages from your service, since the default @ComponentScan (see @SpringBootApplication) performs component scanning from the main class down through all sub-packages of your app, but not through the packages inside the external library. For that reason, you should only have @Bean definitions in your external library, and inject those external configurations via an @EnableSomething annotation used on your app's main class (using @Import(YourConfigurationAnnotatedClass.class)), or via spring.factories in case you always need the external configuration to be applied.
Of course, you CAN have @Components in this library, but the explicit usage of the @ComponentScan annotation may lead to unintended behaviour in some cases, so I would recommend avoiding that.
So, to answer your question: you can have both approaches to defining beans as long as they're inside your app, but bean definitions outside your app (e.g. in a library) should be explicitly defined with @Bean inside a @Configuration class. A sketch of the @EnableSomething pattern follows.
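A minimal sketch of that pattern; all names here are illustrative, including the made-up LibraryService bean:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// shipped inside the library
@Configuration
public class LibraryConfiguration {

    @Bean
    public LibraryService libraryService() { // LibraryService is a made-up example bean
        return new LibraryService();
    }
}

// also shipped inside the library
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import(LibraryConfiguration.class)
public @interface EnableLibrary {
}

The application then opts in on its main class:

@SpringBootApplication
@EnableLibrary
public class MyApplication {
}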
It is perfectly valid to have Java configuration and annotated component scanning in the same project because they serve different purposes.
@Component (@Service, @Repository, etc.) is used to auto-detect and auto-configure beans via classpath scanning.
The @Bean annotation is used to explicitly declare a single bean, instead of letting Spring do it automatically.
You can do the following with @Bean, but it is not possible with @Component:
@Bean
public MyService myService(boolean someCondition) {
    if (someCondition) {
        return new MyServiceImpl1();
    } else {
        return new MyServiceImpl2();
    }
}
I haven't really faced a situation where both Java config and component scanning for a bean of the same type were required.
As per the Spring documentation:
To declare a bean, simply annotate a method with the @Bean annotation. When JavaConfig encounters such a method, it will execute that method and register the return value as a bean within a BeanFactory. By default, the bean name will be the same as the method name.
So, as per this, it is returning the correct Foo (with value 500).
In general, there is nothing wrong with component scanning and explicit bean definitions in the same application context. I tend to use component scanning where possible, and create the few beans that need more setup with @Bean methods.
There is no upside to including classes in the component scan when you create beans of their type explicitly. Component scanning can easily be targeted at certain classes and packages. If you design your packages accordingly, you can component-scan only the packages without "special" bean classes, or else use more advanced filters on scanning, as in the sketch below.
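For example, a hedged sketch using the Foo class from the question (the package name is illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.FilterType;

@Configuration
@ComponentScan(
        basePackages = "com.example.app",
        excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = Foo.class))
public class AppConfig {

    // Foo is now created here only, never picked up by scanning
    @Bean
    public Foo foo() {
        return new Foo(500);
    }
}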
In a quick look I didn't find any clear statement about bean definition precedence in such a case. Typically there is a deterministic and fairly stable order in which these are processed, but if it is not documented, it could change in some future Spring version.
I came across a tutorial that seemed to fit my use case and tried implementing it. I failed, but wasn't sure why, so I looked for another example with similar code in the book "Spring in Action, Fourth Edition" by Craig Walls.
The book describes the same basic approach on page 300: define a JdbcTemplate bean first.
@Bean
NamedParameterJdbcTemplate jdbcTemplate(DataSource dataSource) {
    return new NamedParameterJdbcTemplate(dataSource);
}
Then a repository implementing an interface:
@Repository
public class CustomRepositoryImpl implements CustomRepository {

    private final NamedParameterJdbcOperations jdbcOperations;
    private static final String TEST_STRING = "";

    @Autowired
    public CustomRepositoryImpl(NamedParameterJdbcOperations jdbcOperations) {
        this.jdbcOperations = jdbcOperations;
    }
So I did as the example in the book suggests and wrote a test, but got the error message:
Error creating bean with name 'de.myproject.config.SpringJPAPerformanceConfigTest': Unsatisfied dependency expressed through field 'abc'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'de.myproject.CustomRepository' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
To my understanding, as both the book and the tutorial describe it, the repository should be recognized as a bean definition by the component scan.
To test this, I created a context and asked for all registered beans:
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
context.getBeanDefinitionNames();
As assumed, my repository wasn't among them. So I widened the scope of the search, for test purposes only, to the project's base package. Every other bean was shown, except the repository.
As an alternative to component scanning and autowiring, the book describes the possibility of simply declaring the repository as a bean, which I did:
@Bean
public CustomRepository customRepository(NamedParameterJdbcOperations jdbcOperations) {
    return new CustomRepositoryImpl(jdbcOperations);
}
After that, Spring was able to wire the repository. I looked at the GitHub code for the book hoping for a better understanding, but unfortunately only the bean solution, which runs, is implemented there.
So here are my questions:
1.) What possible reasons are there, in a scenario like this one, for a bean definition not to be recognized by the component scan?
2.) This project already uses Spring Data JPA repositories; are there any reasons not to use both approaches at the same time?
The problem is the naming of your classes. There are a few things to understand here.
You define a repository interface; @Repository is optional provided it extends CrudRepository or one of the other repository interfaces provided by Spring Data. In that interface you can declare query methods (findBy...), and Spring Data will formulate the query based on the underlying database. You can also specify your query explicitly using @Query, as in the sketch below.
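For example (the Abc entity and its fields are illustrative):

import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface AbcRepository extends CrudRepository<Abc, Long> {

    // derived query: Spring Data builds the query from the method name
    List<Abc> findByName(String name);

    // explicit query
    @Query("select a from Abc a where a.name like :pattern")
    List<Abc> findByNameLike(@Param("pattern") String pattern);
}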
Suppose you have a method that involves a complex query, or something Spring Data cannot do out of the box. In such a case you can use the underlying template class, for example JdbcTemplate or MongoTemplate.
The procedure is to create another interface and an Impl class. The interface must be named with a Custom suffix, the implementation class with an Impl suffix, and all of them must live in the same package.
For example, if your repository is named AbcRepository, then your custom repository interface should be named AbcRepositoryCustom and the implementation should be named AbcRepositoryImpl. AbcRepository extends AbcRepositoryCustom (as well as the Spring Data repository interfaces), and AbcRepositoryImpl implements AbcRepositoryCustom, as sketched below.
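A hedged sketch of that layout, using the NamedParameterJdbcOperations from your code (the Abc entity, its constructor, and the SQL are illustrative):

import java.util.Collections;
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations;

public interface AbcRepositoryCustom {
    List<Abc> findByComplexCriteria(String criteria);
}

public class AbcRepositoryImpl implements AbcRepositoryCustom {

    private final NamedParameterJdbcOperations jdbcOperations;

    public AbcRepositoryImpl(NamedParameterJdbcOperations jdbcOperations) {
        this.jdbcOperations = jdbcOperations;
    }

    @Override
    public List<Abc> findByComplexCriteria(String criteria) {
        // hand-written query that Spring Data could not derive
        return jdbcOperations.query(
                "select id, name from abc where name like :name",
                Collections.singletonMap("name", "%" + criteria + "%"),
                (rs, rowNum) -> new Abc(rs.getLong("id"), rs.getString("name")));
    }
}

// Spring Data picks up AbcRepositoryImpl purely by this naming convention:
public interface AbcRepository extends CrudRepository<Abc, Long>, AbcRepositoryCustom {
}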
I was able to "solve" the problem myself.
We also have a front-end class annotated with a @ComponentScan over the same base package:

@EnableWebMvc
@Configuration
@ComponentScan(basePackages = {"de.myproject.*"})
So there were actually two identical @ComponentScan annotations, which I wasn't aware of, and this led to a conflict. It seems the order in which the whole application was loaded changed, but that's only me guessing.
I simply moved my repository and its Impl to a subpackage, and changed the scan to

@ComponentScan(basePackages = {"de.myproject.subpackage.*"})
and now everything works fine, though the exact reason behind this behavior escapes me.
I am using two transaction managers in my code. One is marked as primary. The complete configuration uses Spring annotations.
The table that I am trying to update using the non-primary transaction manager does not get its data saved to the database. If I mark the same transaction manager as primary, it starts inserting data into the database.
In the non-primary case, I am getting the error below:
org.springframework.dao.InvalidDataAccessApiUsageException: no transaction is in progress; nested exception is javax.persistence.TransactionRequiredException: no transaction is in progress
at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:413)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:157)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:417)
at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:59)
at org.springframework.dao.support.DataAccessUtils.translateIfNecessary(DataAccessUtils.java:213)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:147)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:111)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy39.flush(Unknown Source)
at com.ashish.business.CreditCardManagementImpl.addCreditCardUser(CreditCardManagementImpl.java:44)
at com.ashish.business.CreditCardManagementImpl.addCrCardCustomerData(CreditCardManagementImpl.java:30)
My code is available at the following location:
https://github.com/ashismo/repositoryForMyBlog/tree/master/spring/SpringJPARepoDistributedTransaction
In my code, the com.ashish.appConfig package has three classes: AppConfig.java, CreditCardTransactionConfig.java, and DebitCardTransactionConfig.java. AppConfig imports the other two classes, and CreditCardTransactionConfig.java contains the non-primary transaction manager configuration.
I am running this code from a JUnit test. The above-mentioned error appears as soon as I call creditUserDetailRepository.flush();. If I remove that line, I don't get any error, but the data does not get inserted into the table.
Can someone help me please?
In your DebitCardTransactionConfig class, you have annotated the debitEntityManagerFactory method with the @Primary annotation. That annotation means that this instance should be used whenever there are multiple candidates for autowiring by type. You are creating the PlatformTransactionManagers like this:
@Bean(name = "creditTransactionManager")
@Qualifier("creditTransactionManager")
public PlatformTransactionManager creditTransactionManager(EntityManagerFactory emf) {
    // ...
}
This means that Spring chooses the EntityManagerFactory automatically, and because of @Primary, both transaction managers are handed the debit entity manager factory.
For a quick fix, change the declaration of creditTransactionManager like this:
@Bean(name = "creditTransactionManager")
@Qualifier("creditTransactionManager")
// @Primary
public PlatformTransactionManager creditTransactionManager() {
    JpaTransactionManager transactionManager = new JpaTransactionManager();
    transactionManager.setEntityManagerFactory(creditEntityManagerFactory().getObject());
    return transactionManager;
}
As calling a different @Bean method in a @Configuration class means retrieving the instance from scope, this should work without problems. Unfortunately, I was unable to test this; see the side notes.
SIDE NOTES
Here are some off-topic notes about your code.
Your CreditCardTransactionConfig and DebitCardTransactionConfig classes are very similar. Consider extracting a superclass to eliminate most of the boilerplate.
The bean name is taken from the name of the @Bean annotated method; you do not need to specify it in the @Bean and @Qualifier annotations.
While it is a good idea to share a repository so that anyone can clone your code and test it, I was unable to run your test as it expects a MySQL database. Consider using an in-memory database like H2 for experimenting.