We are migrating from GemFire 8.2.7 to 9.2.1.
As part of GemFire startup, we leverage SpringContextBootstrappingInitializer to initialize the Spring beans, which @Autowire the Cache.
The same code, when migrated to GemFire 9.2.1 (along with the rest of the stack), fails on server startup with the error below.
GemFire 8.2.7 --> GemFire 9.2.1
Spring Data GemFire 1.8.4 --> 2.0.2
Spring Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No
qualifying bean of type 'org.apache.geode.cache.Cache' available:
expected at least 1 bean which qualifies as autowire candidate.
Dependency annotations:
{@org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for our GemFire config? Below is our Java config.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
It looks like the component scan is kicking in prior to the @Configuration processing. Any idea on controlling this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8), and it gets resolved with a @DependsOn annotation, but we just wanted to understand whether there are any fundamental changes to the ordering of bean initialization in the newer Spring version.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire",
    excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class))
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (see the docs).
I can only assume it is because you are launching your GemFire servers with Gfsh
using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml declares the initializer, similar to the sketch below. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
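A minimal sketch of such a cache.xml (the context file location is a placeholder; check the SDG reference guide for the exact parameter names):

<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns="http://geode.apache.org/schema/cache"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://geode.apache.org/schema/cache
                           http://geode.apache.org/schema/cache/cache-1.0.xsd"
       version="1.0">

  <initializer>
    <class-name>org.springframework.data.gemfire.support.SpringContextBootstrappingInitializer</class-name>
    <parameter name="contextConfigLocations">
      <string>classpath:applicationContext.xml</string>
    </parameter>
  </initializer>

</cache>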
If this is the case, why not use the Gfsh start server command's --spring-xml-location option instead? For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide a cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting the GemFire Cache reference into (e.g. a Region or another application component class, like a DAO). Providing a snippet of code showing the injection point, i.e. the field or setter annotated with @Autowired, would have been helpful. For example:
@Service
class MyService {

    @Autowired
    private Cache gemfireCache;

    ...
}
3) #2 would have been more apparent if you had included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type"; this may actually be key, and I will come back to it later below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor parameter.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
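For example, expressing a dependency as a @Bean method parameter makes the required order explicit (a minimal sketch; A and B are hypothetical types):

@Configuration
class DependencyOrderConfig {

    @Bean
    B b() {
        return new B();
    }

    @Bean
    A a(B b) {            // the parameter declares that a() depends on b()...
        return new A(b);  // ...so Spring creates b first and destroys it after a
    }
}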
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the #Import annotation on #Configuration classes), Java reflection (includes #Bean definitions declared in #Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is one reason why I am not a big proponent of "classpath component scanning". While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and in the organization of your configuration, for the reasons outlined here, in addition to other non-apparent limitations. At the very least, you should be limiting the scope of the scan.
Ironically, you excluded/filtered the one thing that could actually help your organizational concerns... components of type @Configuration:
... excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: given the exclusion, are you certain you did not exclude the one @Configuration class containing your CacheFactoryBean definition? I suppose not, since you say this worked after adding the @DependsOn annotation.
Clearly, there is a dependency defined between some application component of yours (?) and a bean of type o.a.g.cache.Cache (declared using @Autowired), yet Spring is failing to resolve it.
My thinking is, Spring cannot resolve the Cache dependency because either 1) the GemFire cache bean has not been created yet and 2) Spring cannot find an appropriate bean definition of the required type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire cache to be created first, or 3) the GemFire cache bean has been created first but Spring is unable to resolve the type as o.a.g.cache.Cache.
I have encountered both scenarios before, and it is not exactly clear to me when each happens, because I simply have not traced this through yet; I have simply corrected it and moved on. I have noticed that it is version related, though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. For example:
@Repository
class MyDataAccessObject {

    @Autowired
    private GemFireCache gemfireCache;

    ...
}
The reason for this is that the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design, since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl, which indirectly implements both the o.a.g.cache.Cache interface and the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1 and 2 together, which is a bit trickier), then I would suggest you employ a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {

    // define GemFire components (e.g. the CacheFactoryBean) here

}
Then your application components, some of which depend on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {

    // define application beans, including beans dependent on GemFire components

}
By importing GemFireConfiguration, you ensure that the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class level in cases where you have a large number of application components (services, DAOs, etc.).
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySpringBootApplication.class, args);
    }
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without it, then it makes sense to ensure that the data store is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring creates a component before the component that expects it. For example:
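A minimal sketch (the bean name matches the gemfireCache() @Bean definition from your configuration; the DAO type is borrowed from the earlier example):

@Bean
@DependsOn("gemfireCache") // force the GemFire cache bean to be created first
MyDataAccessObject myDataAccessObject() {
    return new MyDataAccessObject();
}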
Given that the @DependsOn annotation solved your problem, I would say this is an organizational problem that falls under the 1) + 2) category I outlined above.
I am going to dig into this a bit deeper and will follow up in the comments on my answer with what I find.
Hope this helps!
-John
Related
I have a case where a Spring auto-configuration class gets its dependencies through field injection, and creates and exposes certain beans after interacting with them.
I would like to override one of its dependencies so that the exposed beans are initialized the way I expect.
Obviously, I could disable the auto-configuration class and duplicate it completely locally with my desired dependency, but that would not be a maintainable solution, since the amount of behaviour to reproduce is huge and it might break on each Spring update.
Is there any easy mechanism to let the auto-configuration be loaded, and later on use the BeanFactory or something to re-inject a particular instance into a particular bean?
I cannot guarantee that this is the ideal solution, since it works on whole auto-configuration classes (topics) rather than individual beans, but for most cases it will do the trick.
An auto-configuration can be disabled per topic, and any bean from that topic can then be initialized by a regular method in your @Configuration class (as usual).
See the list of auto-configuration classes (= topics) in the Spring Boot reference documentation.
Syntax (to exclude a topic from auto-configuration):
@Configuration
@EnableAutoConfiguration(exclude = { DataSourceAutoConfiguration.class })
public class MyConfiguration {

    @Bean
    public SpecificClass getSpecificClass() {
        // init the instance as you want
        return new SpecificClass();
    }
}
I am trying to find a way to programmatically create a bean in Quarkus DI, but without success. Is it possible in this framework? It seems that BeanManager does not implement the needed method yet.
First of all, we should clarify what "programmatically create a bean" exactly means, because in CDI we talk about beans in two senses:
1. Component metadata - describes the component's attributes and how a component instance is created; the SPI is javax.enterprise.inject.spi.Bean
2. Component instance - the real instance used in the application; the spec calls this a "contextual reference"
The metadata is usually derived from the application classes; such metadata is "backed by a class". By "backed by a class" I mean all the kinds described in the spec, that is, class beans, producer methods and producer fields.
Now, if you want to programmatically obtain a component instance (option 2), you can do any of the following (a combined sketch follows this list):
Inject javax.enterprise.inject.Instance; see for example the Weld docs
Make use of CDI.current().select(Foo.class).get()
Make use of quarkus-specific Arc.container().instance(Foo.class).get()
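A minimal sketch of all three lookups (Foo is a hypothetical bean class, and FooClient is assumed to be a bean itself so that @Inject works):

import javax.enterprise.inject.Instance;
import javax.enterprise.inject.spi.CDI;
import javax.inject.Inject;
import io.quarkus.arc.Arc;

public class FooClient {

    @Inject
    Instance<Foo> fooInstance; // lazy handle, resolved on demand

    void useFoo() {
        Foo viaInject = fooInstance.get();                       // injected Instance
        Foo viaCdi = CDI.current().select(Foo.class).get();      // portable CDI lookup
        Foo viaArc = Arc.container().instance(Foo.class).get();  // Quarkus-specific ArC lookup
    }
}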
However, if you want to add/register component metadata that is not backed by a class (option 1), you need to add an extension that makes use of Quarkus-specific SPIs, such as BeanRegistrar.
If you are looking for the Quarkus equivalent of Spring's @Configuration, then you want a "bean producer" (as mentioned in the comments above).
Here is an example (Kotlin) of how to manually register a clock:
import java.time.Clock
import javax.enterprise.context.ApplicationScoped
import javax.enterprise.inject.Produces

@ApplicationScoped
class AppConfig {

    @Produces
    @ApplicationScoped
    fun utcClock(): Clock {
        return Clock.systemUTC()
    }
}
@Produces is actually not required if the method is already annotated with @ApplicationScoped.
@ApplicationScoped at the class level of AppConfig is also not required.
That said, I find those extra annotations useful, especially if you are used to Spring.
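For reference, a direct Java equivalent of the Kotlin producer above would be:

import java.time.Clock;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

@ApplicationScoped
public class AppConfig {

    @Produces
    @ApplicationScoped
    public Clock utcClock() {
        return Clock.systemUTC();
    }
}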
You can inject your beans using Instance:
@Inject
public TestExecutorService(final ManagedExecutor managedExecutor,
                           final Instance<YourTask> yourTask) {
    this.managedExecutor = managedExecutor;
    this.yourTask = yourTask;
}
And if you need to run more than one task instance, you can use the managed executor:
tasks.forEach(task -> managedExecutor.submit(task::execute));
Keep in mind that, depending on the way you create the bean, you may need to destroy it, and only the "creator class" has its reference; that means you have to create and destroy the bean in the same class (you can use something like CDI events to handle that).
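For example, a dependent-scoped task obtained from the injected Instance should be destroyed by the code that created it (a sketch reusing the yourTask field from the constructor above):

YourTask task = yourTask.get();   // creates a new instance (e.g. @Dependent scoped)
try {
    task.execute();
} finally {
    yourTask.destroy(task);       // release the instance this class created
}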
For more information, please check the CDI documentation.
Java configuration allows us to manage bean creation within a configuration file. Annotated @Component and @Service classes used with component scanning do the same. However, I'm concerned about using these two mechanisms at the same time.
Should Java configuration and annotated component scans be avoided in the same project? I ask because the result is unclear in the following scenario:
@Configuration
public class MyConfig {

    @Bean
    public Foo foo() {
        return new Foo(500);
    }
}
...
@Component
public class Foo {

    private int value;

    public Foo() {
    }

    public Foo(int value) {
        this.value = value;
    }
}
...
public class Consumer {

    @Autowired
    Foo foo;

    ...
}
So, in the above situation, will the Consumer get a Foo instance with a value of 500 or 0? I've tested locally, and it appears that the Java-configured Foo (with value 500) is created consistently. However, I'm concerned that my testing isn't thorough enough to be conclusive.
What is the real answer? Using both Java config and component scanning for @Component beans of the same type seems like a bad idea.
I think your concern is really about the following use case:
You have a custom Spring starter library that has its own @Configuration classes and @Bean definitions. But if you have @Component/@Service classes in this library, you will need to explicitly @ComponentScan their packages from your service, since the default @ComponentScan (see @SpringBootApplication) only scans from the main class down through the sub-packages of your app, not the packages inside the external library. For that reason, your external library should only contain @Bean definitions, and these external configurations should be pulled into your app's main class via an @EnableSomething annotation (using @Import(YourConfigurationAnnotatedClass.class)), or via spring.factories in case you always need the external configuration to be applied.
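For the spring.factories route, the registration in a Spring Boot 2.x library looks roughly like this (the auto-configuration class name is hypothetical):

# META-INF/spring.factories in the library jar
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.library.LibraryAutoConfiguration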
Of course, you CAN have @Component classes in such a library, but the required explicit @ComponentScan may lead to unintended behaviour in some cases, so I would recommend avoiding that.
So, to answer your question: you can have both approaches for defining beans, but only inside your app; bean definitions outside your app (e.g. in a library) should be explicitly defined with @Bean inside a @Configuration class.
It is perfectly valid to have Java configuration and annotated component scans in the same project, because they serve different purposes.
@Component (@Service, @Repository, etc.) is used to auto-detect and auto-configure beans.
The @Bean annotation is used to explicitly declare a single bean, instead of letting Spring do it automatically.
You can do the following with @Bean, which is not possible with @Component:
@Bean
public MyService myService(boolean someCondition) {
    if (someCondition) {
        return new MyServiceImpl1();
    } else {
        return new MyServiceImpl2();
    }
}
I haven't really faced a situation where both Java config and component scanning of a bean of the same type were required.
As per the Spring documentation:
"To declare a bean, simply annotate a method with the @Bean annotation. When JavaConfig encounters such a method, it will execute that method and register the return value as a bean within a BeanFactory. By default, the bean name will be the same as the method name."
So, as per this, it is returning the correct Foo (with value 500).
In general, there is nothing wrong with component scanning and explicit bean definitions in the same application context. I tend to use component scanning where possible, and create the few beans that need more setup with #Bean methods.
There is no upside to including classes in the component scan when you create beans of their type explicitly. Component scanning can easily be targeted at certain classes and packages. If you design your packages accordingly, you can component scan only the packages without "special" bean classes (or else use more advanced filters on scanning), as sketched below.
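For example, a narrowly targeted scan (the marker interface and filter choice here are hypothetical):

@Configuration
@ComponentScan(
    basePackageClasses = ServiceMarker.class,  // type-safe anchor for the scanned package
    useDefaultFilters = false,                 // opt out of default @Component detection
    includeFilters = @ComponentScan.Filter(
        type = FilterType.ANNOTATION, classes = Service.class)) // only pick up @Service classes
class ScanConfig {
}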
In a quick look I didn't find any clear information about bean definition precedence in such a case. Typically there is a deterministic and fairly stable order in which these are processed, but if it is not documented, it could change in some future Spring version.
Please consider the following situation with Spring 4.0.7:
For EclipseLink, we use a load-time weaver, so we wanted to experiment with Spring's @Configurable annotation, using @EnableSpringConfigured together with @EnableLoadTimeWeaving.
This is fully functional, and Spring beans are injected perfectly into POJOs during construction. This functionality is helpful for us because we want to keep some code regarding validation of these POJOs inside them, and not somewhere else in a bean.
Some of our Spring contexts contain beans that do not implement any interface, because they are local to some package and used only there. Let's say FooLogicBean is one of them. If it is to be injected into another bean, and some Spring-AOP aspect (not AspectJ), like a performance-measurement aspect, is in the execution path, Spring will create a CGLIB auto-proxy for the FooLogicBean and inject that. This is fully functional and works perfectly too.
Problems arise when we want to actually use a POJO annotated with @Configurable as a parameter in a method of FooLogicBean (like fooLogicBean.doValidate(myPojo);), i.e. on the CGLIB proxy. In this case, some non-trivial magic stops that POJO from being woven through AspectJ (AnnotationBeanConfigurerAspect from spring-aspects). It is in fact never woven anywhere in the code, regardless of whether the aforementioned doValidate() method is called.
If we create that POJO inside FooLogicBean, but don't use it as a method parameter, it gets woven again, due to @Configurable.
Without knowing the auto-proxy creation code, we assume some fancy marking routine hinders a class from being detected by AspectJ once that class has already been used (here, "used" means accessed) in Spring AOP.
Did anyone experiment with such an obscure constellation and know of a solution?
Thanks in advance!
Proxies are created by subclassing, i.e. when you create a proxy of an annotated class Foo:
class Foo {

    @Bar
    void bar() { }
}
the proxy is created by generating a class like:
class Foo$Proxy extends Foo {

    @Override
    void bar() {
        // Proxy logic
    }

    // Overridden methods of Object
}
This makes the Foo$Proxy class a valid Liskov substitute for Foo. However, normal override semantics apply, i.e. non-inherited annotations such as @Bar are no longer present on the overridden methods. Once another annotation-based API processes your beans, all such annotations have disappeared, which leads to your outcome. This is true for all kinds of annotations: those on types, methods and method parameters.
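A self-contained sketch of this effect (FooProxy is a hand-written stand-in for the class cglib would generate):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationLossDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Bar { }

    static class Foo {
        @Bar
        void bar() { }
    }

    // Hand-written stand-in for the generated proxy subclass.
    static class FooProxy extends Foo {
        @Override
        void bar() { /* proxy logic */ }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Foo.class.getDeclaredMethod("bar")
                .isAnnotationPresent(Bar.class));      // true
        System.out.println(FooProxy.class.getDeclaredMethod("bar")
                .isAnnotationPresent(Bar.class));      // false: the override lost @Bar
    }
}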
But is it avoidable? It most certainly is, by using a proxy generator that was written recently and built with knowledge of what annotations are, something that is not true of cglib, which first shipped in the early days of the Java virtual machine. However, as long as Spring does not move away from cglib, you will have to live with today's limitations.
Should I use dependency injection in the following example?
@Scope("prototype")
@Component
public class Order {

    @Autowired
    public Order(User user, List<OrderItem> items, ...) {
        ...
    }
}
Now somewhere else:
@Component
public class PersistOrder {

    @Autowired
    Provider<Order> orderProvider;

    public void prepareOrder() {
        Order order = orderProvider.get();
        ...
    }
}
As JSR-330 states, the get() method of Provider returns a new instance of Order, but what objects are passed to the new order's constructor? As you can see, the order has its own dependencies that have to be injected before the actual order object is returned to the method.
Without DI, I would simply create all the necessary arguments and pass them to the constructor of the new order. So should I use DI here?
EDIT:
This is what the code would look like without DI:
@Component
public class PersistOrder {

    public void prepareOrder() {
        User user = userDao.get(userId);
        List<OrderItem> orderItems = orderItemDao.getAll(orderItemIds);
        Order order = new SmartPhoneOrder(user, orderItems);
        ...
    }
}
As you can see, I have the ids of the user and the order items, and can get their instances from DAOs. Then I pass these instances to the constructor of a subclass of Order to get an instance. The process seems very clear to me. But how can I do the same with DI, so that my code enjoys decoupling between the PersistOrder and Order classes? Or does using DI in this example make the logic more complex?
Without dependency injection, domain objects can be considered anaemic, and this is arguably an anti-pattern. You're losing many of the benefits of OO by having data structures without associated behaviors. For example, to evaluate Order.isValid() you may need to inject some dependency to perform the validation.
You can get dependency injection on classes created outside the context of the Spring container by using @Configurable.
You then declare the recipe for the bean as a prototype, and Spring will use AOP to swizzle the constructor and look up the required dependencies. Even though it is looking up the dependencies, the net effect for you is dependency injection, because you would only have to change one line to inject something else that evaluates Order.isValid().
This way, when you do new Order(...), or get it from your persistence factory, it already has its dependencies "injected".
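A minimal sketch of what this can look like (OrderValidator is a hypothetical dependency, used only for illustration):

@Configurable
public class Order {

    @Autowired
    private OrderValidator validator; // populated by the AspectJ aspect when 'new Order()' runs

    public boolean isValid() {
        return validator.validate(this);
    }
}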
Doing this requires AspectJ weaving, not just the default proxy/CGLIB weaving. You can use either runtime weaving with a Java agent or build-time weaving; I recommend the former for integration tests and the latter for deployment.
Service vs Domain Object:
Now that you have the option of rich domain objects, the question becomes where to put things: service vs. entity? My take is that the service should orchestrate a use-case-specific process based on reusable domain objects. The service is your non-OO gateway for outside subscribers into your system.
Google "Spring @Configurable" for more information and tutorials on AspectJ-based dependency injection of domain classes.