Hibernate and JPA error: duplicate import on dependent Maven project

I have two Maven projects, one called project-data and the other called project-rest, which has a dependency on the project-data project.
The Maven build is successful in the project-data project but it fails in the project-rest project, with the exception:
Caused by: org.hibernate.DuplicateMappingException: duplicate import: TemplatePageTag refers to both com.thalasoft.learnintouch.data.jpa.domain.TemplatePageTag and com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag (try using auto-import="false")
I could see some explanation here: http://isolasoftware.it/2011/10/14/hibernate-and-jpa-error-duplicate-import-try-using-auto-importfalse/
What I don't understand is why this message does not occur when building the project-data project but does occur when building the project-rest project.
I looked in the pom.xml files to see if there was something in there that could explain the issue.
I also looked at the way the tests are configured and run in the project-rest project.
But I haven't seen anything yet.

The error is basically due to the fact that the sessionFactory bean manages two entities with the same logical name TemplatePageTag:
One lies under the com.thalasoft.learnintouch.data.jpa.domain package.
The other under the com.thalasoft.learnintouch.data.dao.domain package.
Since this is an ambiguous situation, Hibernate complains about it, mostly because you may run into issues with HQL queries (which are entity-oriented queries) and get inconsistent results.
As a solution, you can either:
Rename your entity classes to use different names and avoid the confusion, which I assume is not a suitable solution in your case since it may require a lot of refactoring and can hurt your project's compatibility.
Configure your entities to be registered under different logical names. Since you are configuring one entity using XML-based mapping and the other through annotations, the way to define the entity names differs:
For the com.thalasoft.learnintouch.data.jpa.domain.TemplatePageTag entity, you will need to add the name attribute to the @Entity annotation as below:
@Entity(name = "TemplatePageTag_1")
public class TemplatePageTag extends AbstractEntity {
    //...
}
For the com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag, as it is mapped using an hbm.xml declaration, you will need to add the entity-name attribute to your class element as follows:
<hibernate-mapping>
    <class name="com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag"
           table="template_page_tag"
           entity-name="TemplatePageTag_2"
           dynamic-insert="true"
           dynamic-update="true">
        <!-- other attributes declaration -->
    </class>
</hibernate-mapping>
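With the two distinct logical names in place, HQL can target each mapping unambiguously. A hypothetical snippet, assuming a Hibernate Session is at hand:

// queries the XML-mapped entity registered under entity-name="TemplatePageTag_2"
List<?> daoTags = session.createQuery("from TemplatePageTag_2").list();
// queries the JPA-annotated entity named via @Entity(name = "TemplatePageTag_1")
List<?> jpaTags = session.createQuery("from TemplatePageTag_1").list();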
Having taken a deeper look into your project structure, you may also need to fix the entity names of other beans, as you have followed the same pattern for many other classes, such as com.thalasoft.learnintouch.data.jpa.domain.AdminModule and com.thalasoft.learnintouch.data.dao.domain.AdminModule.

This issue could also be fixed by using a combination of the @Entity and @Table annotations. The link below provides a good explanation of the difference between the two:
difference between name-attribute-in-entity-and-table
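In short: the name on @Entity sets the logical name used in JPQL/HQL, while @Table controls the physical database table. A minimal sketch (the names here are made up for illustration):

import javax.persistence.Entity;
import javax.persistence.Table;

@Entity(name = "TemplatePageTagJpa") // logical name: HQL becomes "from TemplatePageTagJpa"
@Table(name = "template_page_tag")   // physical table the entity maps to
public class TemplatePageTag extends AbstractEntity {
    //...
}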

Related

Spring finds a test class in a jar and I get a NoUniqueBeanDefinitionException

I have a project that uses Spring. The project consists of two different parts: the generic part and the specific one. The generic part is compiled as a .jar, defines a set of traits, and is used as a dependency by the specific part, which is the one that implements the methods.
In order to test the generic part, I created a "fake" implementation of one of the traits (let's say "fakeMethodA") under the test directory of the generic project, and I annotated this fake implementation with the @Component annotation. I'm getting the beans from the application context.
The problem comes when I try to use this generic part in the specific project. Since my actual implementation of this trait (let's say "methodAImplementation") also has a @Component annotation, when I run my tests I get:
org.springframework.beans.factory.NoUniqueBeanDefinitionException
expected single matching bean but found 2:
It finds the fakeMethodA from the generic part and methodAImplementation from the implementation. Is there any way to exclude this "fake" implementation from the execution? Is there a better way to define this?
Any help would be greatly appreciated.
The problem was solved by using the @Profile annotation on the generic implementation.
I annotated the fake implementation used in the tests with:
@Profile(value = Array("Test"))
And the right implementation with another profile value. After that, when I select the bean from the context, I can select the correct profile.
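The same arrangement rendered in Java, as a minimal sketch (the trait and bean names are the hypothetical ones from the question):

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

@Component
@Profile("Test")
class FakeMethodA implements MethodA { /* fake used by the generic part's tests */ }

@Component
@Profile("Production")
class MethodAImplementation implements MethodA { /* real implementation */ }

// In the test class, pair this with @ContextConfiguration (or @SpringBootTest):
// @ActiveProfiles("Test") ensures only FakeMethodA is registered.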

Spring Boot: how auto-configuration works and @JsonTest

I've read some stuff about how auto-configuration works behind the scenes (configuration classes with @Conditional, spring.factories inside /META-INF, etc.)
Now I'm trying to understand with an example: @JsonTest
I can see this annotation is annotated with things like @AutoConfigureJson
What does this @AutoConfigureJson do exactly? Does it import some configuration classes with beans inside? How does Spring know how to use this annotation (basically this annotation is almost empty and doesn't say which classes to scan)?
@AutoConfigure... annotations (like @AutoConfigureJson) are the way to allow tests with multiple "slices".
Slices load only a subset of the application into your tests, making them run faster. Let's say you need to test a component that uses the Jackson ObjectMapper; then you would need the @JsonTest slice. (Here is the list of all available slices.)
But you may also need some other part of the framework in your test, not just that single slice; let's say the JPA layer. You may want to annotate the test with both @JsonTest and @DataJpaTest to load both slices. According to the docs, this is not supported.
What you should do instead is choose one of the @...Test annotations and include the other with an @AutoConfigure... annotation.
@JsonTest
@AutoConfigureDataJpa
class MyTests {
    // tests
}
Update:
At a certain point while evaluating the annotation, Spring Boot will hit this line and pass to the SpringFactoriesLoader.loadFactoryNames() method a source, which is the fully qualified name of the annotation (like interface org.springframework.boot.test.autoconfigure.json.AutoConfigureJson, for example).
The loadFactoryNames method will do its magic and read the necessary information from here.
If more details are needed, the best thing is to use a debugger and just follow along all the steps.
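For illustration, the lookup resolves entries declared in META-INF/spring.factories of spring-boot-test-autoconfigure; the entry for @AutoConfigureJson looks roughly like this (abbreviated, and the exact list varies by Boot version):

# Maps the @AutoConfigureJson annotation to the auto-configurations it pulls in
org.springframework.boot.test.autoconfigure.json.AutoConfigureJson=\
org.springframework.boot.autoconfigure.gson.GsonAutoConfiguration,\
org.springframework.boot.autoconfigure.jackson.JacksonAutoConfiguration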

How to use Jackson2RepositoriesPopulatorFactoryBean with child object?

I'm using Jackson2RepositoriesPopulatorFactoryBean to populate my database from JSON files.
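For context, a typical populator setup looks like this; a sketch, assuming the standard Spring Data Commons class (note its actual name is Jackson2RepositoryPopulatorFactoryBean) and a placeholder data.json:

import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;
import org.springframework.data.repository.init.Jackson2RepositoryPopulatorFactoryBean;

@Bean
public Jackson2RepositoryPopulatorFactoryBean repositoryPopulator() {
    Jackson2RepositoryPopulatorFactoryBean populator = new Jackson2RepositoryPopulatorFactoryBean();
    // each JSON document is deserialized and saved through the repository matching its concrete class
    populator.setResources(new Resource[] { new ClassPathResource("data.json") });
    return populator;
}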
It works perfectly but fails to find a repository for objects that are children of another one (I have some objects inheriting from an abstract one).
'An exception occurred while running. null: InvocationTargetException:
No repository found for domain type: x.y.z'
I investigated and found that the populator fetches the repository from the class name of the object.
My question is: is it possible to change that? (And set it to fetch the parent's repository if it fails with the actual class's repository?)
EDIT:
A solution could be to add a repository for each class in a package to the list of repositories in the Spring context...
How to do that without adding a @RepositoryRestResource interface for each of them?
Well, it seems I found a solution using @Document on the parent class instead of on the children.
This avoids creating a collection per child.
Plus, I added one repository (@Repository) per child... this is not the best way to do it, but it is a solution.
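A rough sketch of that workaround, assuming Spring Data MongoDB since @Document is mentioned (all names here are hypothetical):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.stereotype.Repository;

// @Document on the abstract parent: every child is stored in the same collection
@Document(collection = "animals")
public abstract class Animal {
    @Id
    private String id;
}

class Dog extends Animal { }

// one repository per concrete child, so a repository can be resolved for its class
@Repository
interface DogRepository extends MongoRepository<Dog, String> { }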

Gemfire NoSuchBeanDefinitionException Autowiring Cache (Spring 5.0.2 / Gemfire v9.2.7)

We are migrating from Gemfire 8.2.7 to 9.2.1.
As part of Gemfire startup, we leverage SpringContextBootstrappingInitializer to initialize the Spring beans, which @Autowire the Cache.
The same code, when migrated to Gemfire 9.2.1 (along with the rest of the stack), fails on server startup with the below error.
Gemfire 8.2.7 --> Gemfire 9.2.1
Spring-data-Gemfire 1.8.4 --> 2.0.2
Spring-Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No
qualifying bean of type 'org.apache.geode.cache.Cache' available:
expected at least 1 bean which qualifies as autowire candidate.
Dependency annotations:
{#org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for GemfireConfig? Below is our JavaConfig.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
It looks like the ComponentScan is kicking in prior to the Configuration processor. Any idea on controlling this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8) and gets resolved with a @DependsOn option, but I just wanted to understand if there are any fundamental changes to the ordering of bean initialization with the newer Spring version.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire", excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class) )
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (docs here).
I can only assume it is because you are launching your GemFire servers with Gfsh
using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml is defined similar to this. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
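For reference, the initializer declaration in cache.xml typically looks something like this (an illustrative sketch; check the SDG documentation for the exact element names in your version):

<cache>
  <initializer>
    <class-name>org.springframework.data.gemfire.support.SpringContextBootstrappingInitializer</class-name>
    <parameter name="contextConfigLocations">
      <string>classpath:applicationContext.xml</string>
    </parameter>
  </initializer>
</cache>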
If this is the case, why not use the --spring-xml-location option of the Gfsh start server command instead? For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You can simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting a GemFire Cache reference into (e.g. a Region or another application component class, like a DAO, etc.). Providing a snippet of code showing how you injected the Cache reference, i.e. the injection point using the @Autowired annotation, would have been helpful. For example:
@Service
class MyService {

    @Autowired
    private Cache gemfireCache;

    ...
}
3) #2 would have been more apparent if you included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type", which may actually be key; I will come back to this below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the #Import annotation on #Configuration classes), Java reflection (includes #Bean definitions declared in #Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is 1 reason why I am not a big proponent of "classpath component scanning". While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and the organization of your configuration, for reasons outlined here in addition to other non-apparent limitations. At worst, you should definitely be limiting the scope of the scan.
Ironically, you excluded/filtered the 1 thing that could actually help your organizational concerns... components of type @Configuration:
... excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: given the exclusion, are you certain you did not exclude the 1 @Configuration class containing your CacheFactoryBean definition? I suppose not, since you say this worked after including the @DependsOn annotation.
Clearly there is a dependency defined between some application component of yours (??) and a bean of type o.a.g.cache.Cache (using @Autowired), yet Spring is failing to resolve it.
My thinking is that Spring cannot resolve the Cache dependency because either 1) the GemFire Cache bean has not been created yet and 2) Spring cannot find an appropriate bean definition of the desired type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire Cache to be created first, or 3) the GemFire Cache bean has been created first but Spring is unable to resolve the type as o.a.g.cache.Cache.
I have encountered both scenarios before and it is not exactly clear to me when each scenario happens because I simply have not traced this through yet. I have simply corrected it and moved on. I have noticed that it is version related though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. For example:
@Repository
class MyDataAccessObject {

    @Autowired
    private GemFireCache gemfireCache;

    ...
}
The reason for this is because the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl which indirectly implements both the o.a.g.cache.Cache interface as well as the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1) and 2), which is a bit trickier), then I would suggest you employ a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {

    // define GemFire components (e.g. CacheFactoryBean) here

}
Then, your application components, where some are dependent on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {

    // define application beans, including beans dependent on GemFire components

}
By importing the GemFireConfiguration you are ensuring the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class-level in cases where you have a large number of application components (services, DAO, etc).
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySpringBootApplication.class, args);
    }
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without that data store, then it makes sense to ensure that it is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring will create a component before the component that depends on it.
Based on the fact that the @DependsOn annotation solved your problem, I would say this is an organizational problem that falls under the 1) / 2) category I outlined above.
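For completeness, the @DependsOn approach amounts to something like this (a sketch; the bean name "gemfireCache" matches the @Bean method above, and the service class is hypothetical):

@Bean
@DependsOn("gemfireCache") // force the cache bean to be created before this bean
public MyService myService() {
    return new MyService();
}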
I am going to dig into this a bit deeper and respond to my answer in comments with what I find.
Hope this helps!
-John

Neo4j/SDN warning: No identity field found for class of type for exception class

In my Neo4j/Spring Data Neo4j project I have the following exception class:
public class CriterionNotFoundException extends NotFoundDomainException {

    private static final long serialVersionUID = -2226285877530156902L;

    public CriterionNotFoundException(String message) {
        super(message);
    }
}
During application startup I see the following WARN:
WARN o.s.d.n.m.Neo4jPersistentProperty - No identity field found for class of type: com.example.domain.dao.decision.exception.DecisionAlreadyExistsException when creating persistent property for field: null
Why is Neo4j/SDN looking for an identity field in this class? How do I correctly configure my application in order to avoid this warning?
You can ignore this warning: it is produced by SDN when building metadata for the Spring Data REST integration. It should not be doing this for exceptions, of course, and we'll have this fixed.
One way "to correctly configure [your] application" would be to add the @EnableNeo4jRepositories and @EntityScan annotations to your @SpringBootApplication class (or your config bean), as mentioned here, and specify the names of your packages with Neo4j-relevant classes.
I've debugged the SDN/Neo4j code for only 5 minutes, so my guesses may be off, but I believe those warnings are generated when you don't specify the packages to scan for your entities and repositories. I'm guessing that in that case Spring Boot + Neo4j-mapping scans each and every class in your project, and if a class has some fields but nothing resembling an "id" field, it spits out this warning. (So adding a Long id field to the classes with warnings may be another, yes, very ugly, work-around as well.)
I've seen those warnings vanish when I explicitly specified package names in my project using Spring Boot 2.0.6 + spring-data-neo4j 5.0.11.
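A sketch of that configuration (the package names are placeholders):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;

@SpringBootApplication
@EnableNeo4jRepositories(basePackages = "com.example.repository") // packages holding repository interfaces
@EntityScan(basePackages = "com.example.domain")                  // packages holding @NodeEntity classes
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}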
