Different CacheManagers in different configurations fail on getBean with multiple beans - Spring

I have Two CacheManagers defined in different places (by different teams):
eg:
@Configuration
@EnableCaching
public class ConfigClassOne {

    @Bean
    public CacheManager myCacheManager() {
        return new ConcurrentMapCacheManager("someCacheName");
    }
}
and elsewhere in the code (that my app consumes but is not responsible for):
@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean
    public CacheManager otherCacheManager() {
        return new ConcurrentMapCacheManager(CacheConstants.SOME_CACHE_NAME,
                CacheConstants.SOME_OTHER_NAME);
    }
}
I get an unsatisfied dependency exception when calling getBean because there are two beans of type CacheManager.class.
The problem is that, because the other configuration is external to me, I do not want to rely on them creating the cache, nor set mine as primary, as they may do the same.
I've tried setting a bean name and a bean qualifier, but every attempt results in either no matching bean or two matching beans (both cases throw exceptions).
I tried creating a CompositeCacheManager (marked primary) that combines theirs with ours, but this is not an ideal solution.
Even stranger, when running an integration test I see the following:
In debug I can see that the autowired beans (the classes that use them) on each side (mine and theirs) get their respective managers correctly, even without a primary. However, the test fails because at some point a context refresh runs while autowiring the class that holds the bean instance into another class, and only THEN does the getBean call fail (perhaps something related to Mockito?).
Is there a way to resolve this issue?
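For reference, a rough sketch of the CompositeCacheManager workaround mentioned above, assuming the two managers keep the bean names myCacheManager and otherCacheManager from the configurations shown (this illustrates the non-ideal attempt, not a recommendation):

@Configuration
public class CompositeCacheConfig {

    @Bean
    @Primary // makes the composite win whenever a single CacheManager is resolved by type
    public CacheManager compositeCacheManager(
            @Qualifier("myCacheManager") CacheManager mine,
            @Qualifier("otherCacheManager") CacheManager theirs) {
        // delegates cache lookups to both underlying managers, in order
        return new CompositeCacheManager(mine, theirs);
    }
}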

Related

Hack a @Component bean in the context at runtime and override one of its particular field-injected dependencies (no test scope)

I have a case where a Spring AutoConfiguration class is getting its dependencies through field injection and creating and exposing certain beans after interacting with them.
I would like to override one of its dependencies so the exposed beans are initialized in the way I expect.
Obviously I could disable the AutoConfiguration class and duplicate it completely locally with my desired dependency, but that would not be a maintainable solution, since the amount of behaviour to reproduce is huge and it might break on each Spring update.
Is there any easy mechanism to let the autoconfiguration be loaded, and later on use the BeanFactory or something to re-inject a particular instance into a particular bean?
I cannot guarantee that this is the ideal solution, since it works per topic rather than per class, but in most cases it will do the trick.
The AutoConfiguration can be disabled for one topic, and any bean in that topic can then be initialized by a method in your @Configuration class (as usual).
List of AutoConfiguration classes (= topics)
Syntax (to exclude a class from autoconfiguration):
@Configuration
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class})
public class MyConfiguration {

    @Bean
    public SpecificClass getSpecificClass() {
        // init the instance as you want
        return new SpecificClass();
    }
}

Injecting an EntityManager in an @ApplicationScoped bean

I am porting an existing JBOSS JEE application to Quarkus. I am using a number of Hibernate Validator (HV) custom validators that require injection.
For that purpose I've defined all custom validators that require injection as beans in my libraries, like this:
@ApplicationScoped
public class SomeValidator implements ConstraintValidator<SomeValidation, AnObject> {

    @Inject
    public BeanUsingEntityManager bean;
Note: It is common code, so it should remain working on JBOSS as well
Next I defined a REST service. The REST service makes use of an application scoped bean like this.
@ApplicationScoped
public class ApplicationContext {

    @PersistenceContext(unitName = "A")
    EntityManager em;

    @Produces
    @EnityManagerA // required qualifier to make the datasource unique in the JEE context (there are more)
    public EntityManager produce() {
        return em;
    }

    // NOTE: Quarkus does not allow @Produces on a field, which is allowed in JBOSS, hence the method
    @Produces
    public BeanUsingEntityManager createBeanUsingEntityManager() {
        // some logic that requires the entity manager.
    }
}
The problem shown here is simplified, but I keep running into the following error message:
Caused by: javax.enterprise.inject.CreationException: Synthetic bean instance for javax.persistence.EntityManager not initialized yet: javax_persistence_EntityManager_b60c51739990fc921960fc78caeb075a811a91a6
- a synthetic bean initialized during RUNTIME_INIT must not be accessed during STATIC_INIT
- RUNTIME_INIT build steps that require access to synthetic beans initialized during RUNTIME_INIT should consume the SyntheticBeansRuntimeInitBuildItem
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.create(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:167)
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.create(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:190)
at io.quarkus.arc.impl.AbstractSharedContext.createInstanceHandle(AbstractSharedContext.java:96)
at io.quarkus.arc.impl.AbstractSharedContext.access$000(AbstractSharedContext.java:14)
at io.quarkus.arc.impl.AbstractSharedContext$1.get(AbstractSharedContext.java:29)
at io.quarkus.arc.impl.AbstractSharedContext$1.get(AbstractSharedContext.java:26)
at io.quarkus.arc.impl.LazyValue.get(LazyValue.java:26)
at io.quarkus.arc.impl.ComputingCache.computeIfAbsent(ComputingCache.java:69)
at io.quarkus.arc.impl.AbstractSharedContext.get(AbstractSharedContext.java:26)
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.get(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:222)
at javax.persistence.EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.get(EntityManager_e1903961aa3b05f292293ca76e991dd812f3e90e_Synthetic_Bean.zig:238)
at nl.bro.gm.gmw.dispatch.resources.ApplicationContext_Bean.create(ApplicationContext_Bean.zig:131)
... 59 more
I'm new to Quarkus, so I'm not sure how to handle this issue or even whether I'm making the correct assumptions. I can imagine that Quarkus wants to give me a fresh EntityManager per request (which I understand), but that poses a problem for my application-scoped beans.
What am I doing wrong here?
So, the full answer is that the EntityManager is created at the runtime init phase whereas the ValidatorFactory (and the ConstraintValidators) are created at static init time.
The Quarkus bootstrap goes static init -> runtime init.
So in your case, you can't access a @Singleton bean which uses the EntityManager during static init, as it's not yet available.
Making your bean @ApplicationScoped will create a proxy and avoid this chicken-and-egg problem.
You will have only one BeanUsingEntityManager for your whole application.
The EntityManager is a bit different because we wrap it and you will get one new EntityManager/Session per transaction, which is what is expected as EntityManagers/Sessions are not thread safe.
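If I read the explanation above correctly, a minimal sketch of that change on the producer method from the question would look like this (the added scope annotation is the only difference; the return statement is illustrative):

@Produces
@ApplicationScoped // the produced bean gets a client proxy, so the EntityManager is only resolved lazily at runtime
public BeanUsingEntityManager createBeanUsingEntityManager() {
    // some logic that requires the entity manager
    return new BeanUsingEntityManager(em); // hypothetical constructor; wire the EntityManager however the real bean expects it
}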

GemFire NoSuchBeanDefinitionException autowiring Cache (Spring 5.0.2 / GemFire 9.2.7)

We are migrating from GemFire 8.2.7 to 9.2.1.
As part of GemFire startup, we leverage SpringContextBootstrappingInitializer to initialize the Spring beans which @Autowire the Cache.
The same code, when migrated to GemFire 9.2.1 (along with the rest of the stack), fails on server startup with the error below.
GemFire 8.2.7 --> GemFire 9.2.1
Spring Data GemFire 1.8.4 --> 2.0.2
Spring Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.apache.geode.cache.Cache' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for the GemFire config? Below is our Java config.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
It looks like the component scan is kicking in prior to the Configuration processor. Any idea on controlling this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8) and gets resolved with a @DependsOn option, but I just wanted to understand whether there are any fundamental changes to the ordering of bean initialization in the newer Spring version.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire", excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class))
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (docs here).
I can only assume it is because you are launching your GemFire servers with Gfsh
using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml is defined similar to this. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
If this is the case, why not use the --spring-xml-location option of the Gfsh start server command instead? For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You can simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting a GemFire Cache reference into (e.g. a Region or another application component class, like a DAO, etc.). Providing a snippet of code showing how you injected the Cache reference, i.e. the injection point using the @Autowired annotation, would have been helpful. For example:
@Service
class MyService {

    @Autowired
    private Cache gemfireCache;
    ...
}
3) #2 would have been more apparent if you included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type", which may actually be key; I will come back to this later below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the #Import annotation on #Configuration classes), Java reflection (includes #Bean definitions declared in #Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is 1 reason why I am not a big proponent of "classpath component scanning. While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and the organization of your configuration, for reasons outlined here in addition to other non-apparent limitations. At worst, you should definitely be limiting the scope of the scan.
Ironically, you excluded/filtered the 1 thing that could actually help your organizational concerns... components of type #Configuration:
... excludeFilters = #ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: given the exclusion, are you certain you did not exclude the the 1 #Configuration class containing your CacheFactoryBean definition? I suppose not since you say this worked after including the #DependsOn annotation.
Clearly there is a dependency defined between some application component of yours (??) and a bean of type o.a.g.cache.Cache (using @Autowired), yet Spring is failing to resolve it.
My thinking is, Spring cannot resolve the Cache dependency because 1) the GemFire cache bean either has not been created yet and 2) Spring cannot find an appropriate bean definition of the desired type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire Cache to be created first, or 3) the GemFire Cache bean has been created first but Spring is unable to resolve the type as o.a.g.cache.Cache.
I have encountered both scenarios before and it is not exactly clear to me when each scenario happens because I simply have not traced this through yet. I have simply corrected it and moved on. I have noticed that it is version related though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. So, for example:
@Repository
class MyDataAccessObject {

    @Autowired
    private GemFireCache gemfireCache;
    ...
}
The reason for this is because the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl which indirectly implements both the o.a.g.cache.Cache interface as well as the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1 + 2, which is a bit trickier), then I would suggest you employ a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {
// define GemFire components (e.g. CacheFactoryBean) here
}
Then, your application components, where some are dependent on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {
// define application beans, including beans dependent on GemFire components
}
By importing the GemFireConfiguration you are ensuring the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class-level in cases where you have a large number of application components (services, DAO, etc).
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {
public static void main(String[] args) {
SpringApplication.run(MySpringBootApplication.class, args);
}
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without that data store, then it makes sense to ensure that it is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring creates the depended-on component before the component that expects it.
Based on the fact that the @DependsOn annotation solved your problem, I would say this is an organizational problem that falls under the 1) / 2) category I outlined above.
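For illustration, a rough sketch of the @DependsOn approach, reusing the MyService example from above and assuming the cache bean keeps the name gemfireCache from your configuration:

@Service
@DependsOn("gemfireCache") // forces the CacheFactoryBean-defined cache to be created before this component
class MyService {

    @Autowired
    private Cache gemfireCache;
}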
I am going to dig into this a bit deeper and respond to my answer in comments with what I find.
Hope this helps!
-John

Java Configuration vs Component Scan Annotations

Java configuration allows us to manage bean creation within a configuration file. Annotated @Component and @Service classes used with component scanning do the same. However, I'm concerned about using these two mechanisms at the same time.
Should Java configuration and annotated component scans be avoided in the same project? I ask because the result is unclear in the following scenario:
@Configuration
public class MyConfig {

    @Bean
    public Foo foo() {
        return new Foo(500);
    }
}
...
@Component
public class Foo {

    private int value;

    public Foo() {
    }

    public Foo(int value) {
        this.value = value;
    }
}
...
public class Consumer {

    @Autowired
    Foo foo;
    ...
}
So, in the above situation, will the Consumer get a Foo instance with a 500 value or 0 value? I've tested locally and it appears that the Java configured Foo (with value 500) is created consistently. However, I'm concerned that my testing isn't thorough enough to be conclusive.
What is the real answer? Using both Java config and component scanning for @Component beans of the same type seems like a bad thing.
I think your concern is most likely raised by the following use case:
You have a custom spring-starter library that has its own @Configuration classes and @Bean definitions. But if you also have @Component/@Service classes in this library, you will need to explicitly @ComponentScan those packages from your service, since the default @ComponentScan (see @SpringBootApplication) scans from the main class down through all sub-packages of your app, but not the packages inside the external library. For that reason, you only want @Bean definitions in your external library, and you inject these external configurations via an @EnableSomething annotation used on your app's main class (using @Import(YourConfigurationAnnotatedClass.class)), or via spring.factories in case you always need the external configuration to be used/injected.
Of course, you CAN have @Component classes in this library, but the explicit usage of the @ComponentScan annotation may lead to unintended behaviour in some cases, so I would recommend avoiding that.
So, to answer your question: you can have both approaches to defining beans, but only if they're inside your app; bean definitions outside your app (e.g. in a library) should be explicitly defined with @Bean inside a @Configuration class.
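As a rough sketch of the @Import route described above (MyLibraryConfiguration is a hypothetical @Configuration class shipped by the external library):

@SpringBootApplication
@Import(MyLibraryConfiguration.class) // pulls in the library's @Bean definitions without component scanning its packages
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}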
It is perfectly valid to have Java configuration and annotated component scans in the same project, because they serve different purposes.
@Component (@Service, @Repository, etc.) is used to auto-detect and auto-configure beans.
The @Bean annotation is used to explicitly declare a single bean, instead of letting Spring do it automatically.
You can do the following with @Bean, but this is not possible with @Component:
@Bean
public MyService myService(boolean someCondition) {
    if (someCondition) {
        return new MyServiceImpl1();
    } else {
        return new MyServiceImpl2();
    }
}
I haven't really faced a situation where both Java config and component scanning for a bean of the same type were required.
As per the Spring documentation:
To declare a bean, simply annotate a method with the @Bean annotation. When JavaConfig encounters such a method, it will execute that method and register the return value as a bean within a BeanFactory. By default, the bean name will be the same as the method name.
So, as per this, it is returning the correct Foo (with value 500).
In general, there is nothing wrong with component scanning and explicit bean definitions in the same application context. I tend to use component scanning where possible, and create the few beans that need more setup with #Bean methods.
There is no upside to including classes in the component scan when you create beans of their type explicitly. Component scanning can easily be targeted at certain classes and packages. If you design your packages accordingly, you can component-scan only the packages without "special" bean classes (or else use more advanced filters on scanning).
In a quick look I didn't find any clear information about bean definition precedence in such a case. Typically there is a deterministic and fairly stable order in which these are processed, but if it is not documented, it could change in some future Spring version.
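To illustrate the targeted scanning mentioned above, a small sketch (the package name is hypothetical):

@Configuration
@ComponentScan(basePackages = "com.example.app.services") // scan only packages that contain no explicitly defined bean classes
public class AppConfig {
}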

How can a test 'dirty' a spring application context?

The Spring Framework documentation states:
In the unlikely case that a test may 'dirty' the application context, requiring reloading - for example, by changing a bean definition or the state of an application object - Spring's testing support provides mechanisms to cause the test fixture to reload the configurations and rebuild the application context before executing the next test.
Can someone elaborate this? I am just not getting it. Examples would be nice.
Each JUnit test method is assumed to be isolated, that is, it does not have any side effects that could cause another test method to behave differently. Such side effects can be introduced by modifying the state of beans that are managed by Spring.
For example, say you have a bean managed by Spring of class MySpringBean which has a string property with a value of "string". The following test method testBeanString will have a different result depending on whether it is called before or after the method testModify.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/base-context.xml"})
public class SpringTests {

    @Autowired
    private MySpringBean bean;

    @Test
    public void testModify() {
        // dirties the state of a managed bean
        bean.setString("newString");
    }

    @Test
    public void testBeanString() {
        assertEquals("string", bean.getString());
    }
}
Use the @DirtiesContext annotation to indicate that the test method may change the state of Spring-managed beans.
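A minimal sketch of how that might look on the example above (method-level placement, assuming the same SpringTests class):

@Test
@DirtiesContext // tells Spring's TestContext framework to rebuild the application context after this test
public void testModify() {
    // dirties the state of a managed bean
    bean.setString("newString");
}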
