Description
The general use case: in an application package dependency graph, we want a collection defined in the parent package that all child packages can contribute elements to. In other words, child packages extend a list that is executed at a higher level in the parent package.
The goal is to let downstream applications inject elements into this collection predefined by the higher-level application, so that we achieve a federated model for the elements while keeping overall execution control in the parent application package.
Example
Say we have 2 application packages
- parent package
- child/children package(s)
The child packages list the parent package as a build dependency.
In the parent package's Spring configuration XML, we have a list that needs to be injected with instances of the class really.fun.processor:
<util:list id="myProcessors" value-type="really.fun.processor" />
If we host the classes and their instances (beans) in the child package, such as the beans below, is it possible to inject them back into the parent's list?
<bean name="funProcessor1" class="really.fun.processor"/>
<bean name="funProcessor2" class="really.fun.processor"/>
...
<bean name="funProcessorN" class="really.fun.processor"/>
Question
Is this possible in Spring? If so, what are the recommended approaches for this use case?
Figured out the solution:
ComponentScan https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/ComponentScan.html
EnableAutoConfiguration https://docs.spring.io/spring-boot/docs/1.3.8.RELEASE/reference/html/using-boot-auto-configuration.html
Both are designed to solve the above use case very nicely.
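As a rough sketch of how this plays out (the shared base package really.fun, the Processor interface, and all class names below are assumptions for illustration): the parent package scans a common base package, and Spring injects every discovered bean of the element type into the list.

```java
import java.util.List;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

// The element type the parent's list is declared over.
interface Processor {
}

// Parent package: scan a base package that the child packages also live under.
@Configuration
@ComponentScan(basePackages = "really.fun")
class ParentConfig {
}

// Parent package: Spring injects ALL Processor beans found during scanning,
// including the ones contributed by child packages on the classpath.
@Component
class ProcessorRegistry {

    private final List<Processor> processors;

    ProcessorRegistry(List<Processor> processors) {
        this.processors = processors;
    }
}

// Child package: simply declaring the bean makes it show up in the parent's list.
@Component
class FunProcessor1 implements Processor {
    // ...
}
```

With this approach the explicit `<util:list>` becomes unnecessary: Spring autowires the collection by element type, so each child package only has to declare its beans.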
Related
I'm trying to understand how much time component scanning adds to application context creation. Currently it takes ~100 seconds to create the application context, and I suspect that scanning for component definitions is costly. I have a series of questions as follows:
How do I measure the total time spent on component scanning?
Does the number of base context:component-scan entries impact the search space? I'm assuming component scanning uses PathMatchingResourcePatternResolver to scan each entry on the classpath and then find the classes that match the base-package pattern. Is it more efficient to structure the metadata like:
<context:component-scan base-package="foo" />
<context:component-scan base-package="bar" />
<context:component-scan base-package="baz" />
or
<context:component-scan base-package="foo, bar, baz" />
I'm also assuming that the number of classes PathMatchingResourcePatternResolver finds influences component scanning, since the check for the corresponding component annotations requires each class file to be inspected. So is it good practice to keep annotated classes in a well-defined package, to reduce the number of classes to inspect?
Are there known best practices listed somewhere on what design considerations give the most optimal component scanning performance?
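For the measurement question, one low-tech starting point is to time context creation as a whole and compare runs across different component-scan layouts. A minimal plain-Java sketch (the lambda is a placeholder you would swap for e.g. `new ClassPathXmlApplicationContext(...)`):

```java
import java.util.function.Supplier;

public class StartupTimer {

    // Times an arbitrary startup action in milliseconds. Pass a lambda that
    // builds the application context and compare timings across different
    // component-scan layouts (one broad base package vs. several narrow ones).
    public static <T> long timeMillis(Supplier<T> action) {
        long start = System.nanoTime();
        action.get();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> "placeholder for context creation");
        System.out.println("startup took " + elapsed + " ms");
    }
}
```

Enabling DEBUG logging for the org.springframework packages may also surface per-resource scanning activity, which helps attribute where the time goes.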
Auto-scanning requires inspecting all classes in the specified package(s) and can take a long time. If almost all the classes in your package are defined as beans, then you can use a single component scan.
If only some packages contain classes defined as beans, then you should definitely declare multiple component scans covering only those packages, to reduce the auto-scan time:
<context:component-scan base-package="foo" />
<context:component-scan base-package="bar" />
<context:component-scan base-package="baz" />
Otherwise, you can define all beans explicitly in the Spring configuration instead of auto-scanning, but this can greatly increase the size of your configuration file.
I want to make my spring-boot configuration class A dependent on another configuration class B, i.e. A configuration is evaluated only if B configuration is evaluated.
In the real context, I have hundreds of Ai configurations and only one B, and I want to implement a way to exclude all the Ai configs by excluding only B during tests.
I tried the following:
@Configuration
@ConditionalOnBean(type = "org.my.B")
public class A1AutoConfiguration {
    // ...
}
Where B is an unconditional configuration class.
But when I run mvn spring-boot:run -Ddebug=true I see that A is never evaluated because B is missing. While the beans created inside B are in the application context, B itself is not.
I thought I could make the Ai configuration classes dependent on beans created inside B, but I don't like this solution much.
Is there a cleaner (and working) way to implement such a dependency mechanism?
The key is to make sure that things are ordered correctly. It does not make any sense to request A to only apply if B is present if you can't make sure that B is evaluated first.
The hundreds part frightens me a bit. If the As and B are auto-configurations, you can use the following:
@AutoConfigureAfter(B.class)
@ConditionalOnBean(B.class)
public class A123AutoConfiguration { ... }
If As and B are not auto-configuration, you need to make sure B is processed first so you can't rely on regular classpath scanning for those.
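A sketch of the auto-configuration route described above (the com.example package and class names are invented; B stands in for the unconditional configuration from the question):

```java
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// In BAutoConfiguration.java -- the unconditional configuration.
// Both classes must be registered in META-INF/spring.factories under the
// org.springframework.boot.autoconfigure.EnableAutoConfiguration key so
// that Spring Boot treats them as auto-configurations and can order them.
@Configuration
public class BAutoConfiguration {

    @Bean
    public B b() {
        return new B();
    }
}

// In A1AutoConfiguration.java -- evaluated after B's auto-configuration,
// and only applied if a B bean is present. Excluding BAutoConfiguration
// in tests therefore disables every Ai configuration at once.
@Configuration
@AutoConfigureAfter(BAutoConfiguration.class)
@ConditionalOnBean(B.class)
public class A1AutoConfiguration {
    // beans that depend on B ...
}
```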
I would say that such a group of beans is suitable for a separate library or sub-module, so that they stay independent. The inclusion mechanism can then be component scanning on the root package of that library or sub-module.
Another option is to use Spring profiles. Mark your beans with the @Profile annotation and use @ActiveProfiles to enable a certain group of beans during tests.
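The profile option could look like this (the profile name "with-b" and the class names are made up for the sketch):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.test.context.ActiveProfiles;

// Only registered when the "with-b" profile is active.
@Configuration
@Profile("with-b")
class BConfig {

    @Bean
    public B b() {
        return new B();
    }
}

// A test that wants B (and everything grouped under "with-b") opts in;
// tests that omit the annotation get none of these beans.
@ActiveProfiles("with-b")
class SomeIntegrationTest {
    // ...
}
```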
Is there a way to make the component scan configurable externally, or through an intermediate resolver class? My requirement is that a common library should include one or more other smaller facilities (each having its own controllers, services, etc.) depending on whether those are "configured" or needed, e.g. in application properties.
The closest I can see to a design for this is to declare a @Configuration class in the common library and keep it on the component scan class path (always). In this class I need some way to say that the following are the allowed scan paths (based on how downstream projects have configured their application properties).
A custom TypeFilter implementation seems like it should do it. But how do I read application properties from inside the type filter implementation? (The annotation takes only the .class, so Spring must be initializing it.)
Any other ways? Thanks!
This document describes how to create your own auto-configuration. It allows you to read properties and use several variations of the @Conditional annotation.
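As a concrete sketch of that approach (the property name common.facility-a.enabled and all package names below are invented): a @ConditionalOnProperty guard on a configuration class lets each downstream project opt a facility in or out through its application properties, with no custom TypeFilter needed.

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

// Ships inside the common library. The facility's controllers and services
// are scanned only when the downstream application sets
// common.facility-a.enabled=true in its application properties.
@Configuration
@ConditionalOnProperty(prefix = "common.facility-a", name = "enabled",
        havingValue = "true")
@ComponentScan("com.example.common.facilitya")
public class FacilityAConfiguration {
}
```

One such guarded configuration per facility keeps each scan path independently switchable.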
I would like to know whether it is valid practice in Spring to use new to create an object.
You can either use an XML bean definition to create an object, or use annotations.
This question arose while I was working on a project where I want to create an object of a property class (which contains properties and their setter/getter methods).
I am able to create the object using new and it works fine, but if Spring has the capability to create and manage the object lifecycle, which way should I go to create the object, and why?
I think the confusion may arise because of the (over)use of Spring as a DI mechanism. Spring is a framework providing many services; bean or dependency injection is just one of them.
I would say that for POJOs which have just setters and getters without much logic in them, you can safely create objects using the new keyword. For example, in the case of value objects and data classes which do not have much configuration or life-cycle events to worry about, go ahead and create those using new. If you repeatedly create these objects and they have fields which do not change often, then I would use Spring, because it reduces some of the repetitive code, and object creation can be considered externalized, i.e. separated from your object usage.
Classes instantiated through Spring bean definitions (XML or annotations) are "Spring-managed" beans, which mostly means that their life cycle, scope, etc. are managed by Spring. Spring manages objects which are beans, and these may have life-cycle methods and APIs. These beans are dependencies of the classes into which they are injected; the parent objects call the APIs of these dependencies to fulfil business cases.
Hope this helps.
The dependency injection concept in Spring is most useful when we need to construct an object that depends on many other objects, because it saves the time and effort of constructing and instantiating the dependencies.
In your case, since it's a POJO class with only setters and getters, I think it is absolutely safe to instantiate it using the new keyword.
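To make the distinction concrete, here is the kind of plain property holder (a made-up UserProperties class) where new is perfectly fine, with no container involved:

```java
// A plain value holder: no life cycle, no dependencies, no reason to
// involve the Spring container just to construct it.
public class UserProperties {

    private String name;
    private int timeoutSeconds;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getTimeoutSeconds() { return timeoutSeconds; }
    public void setTimeoutSeconds(int timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    public static void main(String[] args) {
        // Direct instantiation with new -- idiomatic for a value object.
        UserProperties props = new UserProperties();
        props.setName("demo");
        props.setTimeoutSeconds(30);
        System.out.println(props.getName() + " / " + props.getTimeoutSeconds());
    }
}
```

The moment such a class needs collaborators, configuration, or a managed scope, that is the signal to promote it to a Spring-managed bean instead.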
I have two Maven projects, one called project-data and the other one call project-rest which has a dependency on the project-data project.
The Maven build is successful in the project-data project but it fails in the project-rest project, with the exception:
Caused by: org.hibernate.DuplicateMappingException: duplicate import: TemplatePageTag refers to both com.thalasoft.learnintouch.data.jpa.domain.TemplatePageTag and com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag (try using auto-import="false")
I could see some explanation here: http://isolasoftware.it/2011/10/14/hibernate-and-jpa-error-duplicate-import-try-using-auto-importfalse/
What I don't understand, is why this message does not occur when building the project-data project and occurs when building the project-rest project.
I tried to look up in the pom.xml files to see if there was something in there that could explain the issue.
I also looked up the way the tests are configured and run on the project-rest project.
But I haven't seen anything yet.
The error is basically due to the fact that the sessionFactory bean maps two entities with the same logical name TemplatePageTag:
One lies under the com.thalasoft.learnintouch.data.jpa.domain package.
The other under the com.thalasoft.learnintouch.data.dao.domain.
Since this is an unusual case, Hibernate complains about it, mostly because you may run into issues with some HQL queries (which are basically entity-oriented queries) and get inconsistent results.
As a solution, you may need either to:
Rename your entity classes with different names to avoid the confusion. I assume this is not a suitable solution in your case, since it would require much refactoring and could hurt your project's compatibility.
Configure your entities to be resolved with different logical names. As you are configuring one entity using XML-based mapping and the other through annotations, the way to define the entity name differs:
For the com.thalasoft.learnintouch.data.jpa.domain.TemplatePageTag entity, you will need to add the name attribute to the @Entity annotation, as below:
@Entity(name = "TemplatePageTag_1")
public class TemplatePageTag extends AbstractEntity {
//...
}
For the com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag, as it is mapped using an hbm.xml declaration, you will need to add the entity-name attribute to your class element as follows:
<hibernate-mapping>
<class name="com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag"
table="template_page_tag"
entity-name="TemplatePageTag_2"
dynamic-insert="true"
dynamic-update="true">
<!-- other attributes declaration -->
</class>
</hibernate-mapping>
As I took a deeper look into your project structure, you may also need to fix the entity names of other beans, since you have followed the same scheme for many other classes, such as com.thalasoft.learnintouch.data.jpa.domain.AdminModule and com.thalasoft.learnintouch.data.dao.domain.AdminModule.
This issue can be fixed by using a combination of the @Entity and @Table annotations. The link below provides a good explanation of the difference between the two.
difference between name-attribute-in-entity-and-table
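For completeness, the annotation-side combination looks like this (the logical name JpaTemplatePageTag is arbitrary, and javax.persistence is assumed; swap for jakarta.persistence on newer stacks):

```java
import javax.persistence.Entity;
import javax.persistence.Table;

// @Entity(name = ...) sets the logical name used in HQL and in Hibernate's
// import registry (which is what the DuplicateMappingException is about),
// while @Table keeps the physical table name unchanged.
@Entity(name = "JpaTemplatePageTag")
@Table(name = "template_page_tag")
public class TemplatePageTag {
    // ...
}
```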