Using spring-data-mongodb and spring-data-neo4j together - spring-boot

How can I use spring-data-mongodb and spring-data-neo4j in the same spring-boot application?
I can easily use one or the other by following the "getting started" guides, but as soon as I try to add Neo4j to a MongoDB application, I get runtime errors such as:
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'application': Unsatisfied dependency expressed through field 'repository'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'bookRepository': Invocation of init method failed; nested exception is org.springframework.data.mapping.PropertyReferenceException: No property findAll found for type MongoBook!
I've set up a minimal example at https://github.com/afaulconbridge/myspring-mongo-neo

You can try this project, which uses JPA and Neo4j together. The structure should work with Mongo as well. Be aware, though, that MongoDB doesn't support transactions, so you may not have to define an explicit transaction manager for each Spring Data project.

As @manish pointed out, you need to make Spring Data MongoDB and Spring Data Neo4j scan separate packages, i.e.:
@EnableMongoRepositories(basePackageClasses = MongoBook.class)
@EnableNeo4jRepositories(basePackageClasses = NeoAuthor.class)
I've updated the example project at https://github.com/afaulconbridge/myspring-mongo-neo with a solution.
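Putting it together, a minimal sketch of the combined application class (a sketch, assuming MongoBook and NeoAuthor sit in separate, store-specific packages, as in the example project):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;

// MongoBook (and its repositories) live in one package, NeoAuthor (and its
// repositories) in another, so each store only scans its own package.
@SpringBootApplication
@EnableMongoRepositories(basePackageClasses = MongoBook.class)
@EnableNeo4jRepositories(basePackageClasses = NeoAuthor.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}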

You should be able to use the excludeFilters and includeFilters parameters even in the same package (in most cases includeFilters alone is enough):
@EnableMongoRepositories(basePackageClasses = MongoBook.class,
        includeFilters = {@ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
                classes = {MongoRepository.class})})
@EnableNeo4jRepositories(basePackageClasses = NeoAuthor.class,
        includeFilters = {@ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
                classes = {NeoRepository.class})})
From the includeFilters() description:
Specifies which types are eligible for component scanning. Further narrows the set of candidate components from everything in basePackages() to everything in the base packages that matches the given filter or filters.
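Note that for these ASSIGNABLE_TYPE filters to match, each repository interface must extend the type named in the filter. A rough sketch (assuming NeoRepository is your Neo4j base repository interface; the entity ID types are guesses):

import org.springframework.data.mongodb.repository.MongoRepository;

// Matched by the Mongo filter above:
public interface BookRepository extends MongoRepository<MongoBook, String> {
}

// Matched by the Neo4j filter above (NeoRepository and the Long ID are assumptions):
public interface AuthorRepository extends NeoRepository<NeoAuthor, Long> {
}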

Related

MapStruct using interface with Spring Boot causes NoSuchBeanDefinitionException

I have implemented an interface for use with MapStruct:
@Mapper(componentModel = "spring")
public interface MapStructMapper {

    MapStructMapper INSTANCE = Mappers.getMapper(MapStructMapper.class);

    MyApiModel myInternalClassToMyApiModel(MyDocument.MyInternalClass myInternalClass);

    MyDocument.MyInternalClass myApiModelToMyInternalClass(MyApiModel myApiModel);
}
When running the Gradle build, I get the following exception while tests are executed:
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'MapStructMapper' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
In my test class I currently have only:
@Autowired
protected MapStructMapper mapper;
and in my build.gradle
implementation 'org.mapstruct:mapstruct:1.4.2.Final'
annotationProcessor 'org.mapstruct:mapstruct-processor:1.4.2.Final'
How can I solve this problem, and how can I invoke the mapping when using MapStruct with an interface?
Based on the information you provided, it's hard to give a definitive answer.
Please check that you not only included the MapStruct dependency but also the annotation processor in your build, so that MapStructMapperImpl is actually generated.
If it is indeed generated, you must make sure it is included in the application context of your test. If you use @SpringBootTest, you need to make sure the interface is declared in a package that is covered by component scanning. If you construct a dedicated context with @ContextConfiguration, you need to list MapStructMapperImpl.class in the classes parameter, just as you would with other classes annotated with @Component.
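For the @ContextConfiguration route, a minimal test sketch (the generated class name MapStructMapperImpl follows MapStruct's default "Impl" suffix; the test name and assertion are illustrative):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = MapStructMapperImpl.class) // the generated implementation
class MapStructMapperTest {

    @Autowired
    private MapStructMapper mapper; // injected by its interface type

    @Test
    void mapsInternalClassToApiModel() {
        assertNotNull(mapper.myInternalClassToMyApiModel(new MyDocument.MyInternalClass()));
    }
}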

Spring Boot: Why isn't entity being scanned in createQuery when using multiple datasources?

I am using Spring Boot 2.3.0. I have two data sources, one for Oracle and one for H2, defined in application.properties.
I have two @Configuration classes for the data source configurations. Both classes define beans for:
DataSource
PlatformTransactionManager
LocalContainerEntityManagerFactoryBean
In LocalContainerEntityManagerFactoryBean I set up:
setDataSource
setPackagesToScan
setJpaVendorAdapter
The application starts up properly, and I can even call .findAll() on the table in the H2 database. However,
as soon as I start executing custom methods in the repository implementation, such as this:
@Transactional(readOnly = true)
private Optional<List<Foo>> findFooByState(Optional<Integer> id, Foo.State state) {
    CriteriaBuilder cp = em.getCriteriaBuilder();
    CriteriaQuery<Foo> cqFoo = cp.createQuery(Foo.class);
    Root<Foo> fooRoot = cqFoo.from(Foo.class);
    [...]
Spring throws an exception such as:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception
[Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessApiUsageException: Not an entity: class foo.Foo;
nested exception is java.lang.IllegalArgumentException: Not an entity: class foo.Foo] with root cause
Package foo is added in setPackagesToScan as I wrote earlier.
I have tried various things with @Transactional, e.g. removing it, adding the name of the transaction manager defined in the configuration to it, and moving @Transactional to the @GetMapping, but none of it helped.
Does anybody have any clue what I am doing wrong?
Thanks,
I had a similar problem. Most probably you haven't configured the JPA repositories' base packages to pick up different entities for different data sources. You can have a look at my guide on how to configure two data sources in a Spring Boot application. Hope it will help!
OK, I was lame. I had to:
call setPersistenceUnitName in the method that instantiates the LocalContainerEntityManagerFactoryBean
use the proper @PersistenceContext with the proper unitName
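For illustration, a sketch of those two changes (bean, package, and unit names here are made up, not from the question):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class H2JpaConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean h2EntityManagerFactory(
            @Qualifier("h2DataSource") DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setPackagesToScan("foo"); // the package containing the Foo entity
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        emf.setPersistenceUnitName("h2Unit"); // the missing call
        return emf;
    }
}

// ...and in the repository implementation, reference the same unit:
public class FooRepositoryImpl {

    @PersistenceContext(unitName = "h2Unit")
    private EntityManager em;

}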

Gemfire NoSuchBeanDefinitionException Autowiring Cache (Spring 5.0.2 / Gemfirev9.2.7)

We are migrating from Gemfire 8.2.7 to 9.2.1
As part of GemFire startup, we leverage SpringContextBootstrappingInitializer to initialize the Spring beans, which @Autowire the Cache.
The same code, migrated to GemFire 9.2.1 (along with the rest of the stack), fails on server startup with the error below.
GemFire 8.2.7 --> GemFire 9.2.1
Spring Data GemFire 1.8.4 --> 2.0.2
Spring Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by:
org.springframework.beans.factory.NoSuchBeanDefinitionException: No
qualifying bean of type 'org.apache.geode.cache.Cache' available:
expected at least 1 bean which qualifies as autowire candidate.
Dependency annotations:
{@org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for GemfireConfig? Below is our JavaConfig.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
It looks like the ComponentScan is kicking in prior to the Configuration processor. Any idea how to control this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8) and gets resolved with a @DependsOn option, but I just wanted to understand whether there are any fundamental changes to the ordering of bean initialization in the newer Spring version.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire",
        excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class))
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (docs here).
I can only assume it is because you are launching your GemFire servers with Gfsh
using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml is defined similar to this. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
If this is the case, why not use the Gfsh start server command's --spring-xml-location option instead? For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You can simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting a GemFire Cache reference into (e.g. a Region or another application component class, like a DAO, etc.). Providing a snippet of code showing how you injected the Cache reference, i.e. the injection point using the @Autowired annotation, would have been helpful. For example:
@Service
class MyService {

    @Autowired
    private Cache gemfireCache;

    ...
}
3) #2 would have been more apparent if you included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type"; this may actually be key; I will come back to it later below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the @Import annotation on @Configuration classes), Java reflection (including @Bean definitions declared in @Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is 1 reason why I am not a big proponent of "classpath component scanning". While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and the organization of your configuration, for reasons outlined here in addition to other non-apparent limitations. At worst, you should definitely be limiting the scope of the scan.
Ironically, you excluded/filtered the 1 thing that could actually help your organizational concerns... components of type @Configuration:
... excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: given the exclusion, are you certain you did not exclude the 1 @Configuration class containing your CacheFactoryBean definition? I suppose not, since you say this worked after adding the @DependsOn annotation.
Clearly there is a dependency defined between some application component of yours (??) and a bean of type o.a.g.cache.Cache (using @Autowired), yet Spring is failing to resolve it.
My thinking is, Spring cannot resolve the Cache dependency because either 1) the GemFire cache bean has not been created yet and 2) Spring cannot find an appropriate bean definition of the desired type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire cache to be created first, or 3) the GemFire cache bean has been created first but Spring is unable to resolve the type as o.a.g.cache.Cache.
I have encountered both scenarios before and it is not exactly clear to me when each scenario happens because I simply have not traced this through yet. I have simply corrected it and moved on. I have noticed that it is version related though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. So, for example:
@Repository
class MyDataAccessObject {

    @Autowired
    private GemFireCache gemfireCache;

    ...
}
The reason for this is because the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl which indirectly implements both the o.a.g.cache.Cache interface as well as the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1 and 2, which is a bit trickier), then I would suggest you employ a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {

    // define GemFire components (e.g. CacheFactoryBean) here

}
Then, your application components, where some are dependent on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {

    // define application beans, including beans dependent on GemFire components

}
By importing the GemFireConfiguration you are ensuring the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class-level in cases where you have a large number of application components (services, DAO, etc).
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySpringBootApplication.class, args);
    }
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without it, then it makes sense to ensure that it is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring will create the component before the component that expects it.
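For completeness, a minimal sketch of that @DependsOn fallback (the bean name "gemfireCache" matches the @Bean method shown in the question; the service class itself is hypothetical):

import org.apache.geode.cache.GemFireCache;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Service;

@Service
@DependsOn("gemfireCache") // forces the cache bean to be created before this component
class MyGemFireService {

    @Autowired
    private GemFireCache gemfireCache;

}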
Based on the fact that the @DependsOn annotation solved your problem, I would say this is an organizational problem and falls under the 1) / 2) category I outlined above.
I am going to dig into this a bit deeper and respond to my answer in comments with what I find.
Hope this helps!
-John

2 beans with same name but in different packages; how to autowire them?

I have an application that has 2 beans with the same name, but which are in different packages. My Spring application fails because it cannot decide on which bean to take. Is there any solution for this? The beans do not currently implement specific interfaces.
See below an edited example of the exception:
Caused by:
org.springframework.context.annotation.ConflictingBeanDefinitionException:
Annotation-specified bean name 'dataTransferHandler' for bean class
[aaaaa.ws.handler.DataTransferHandler] conflicts with existing,
non-compatible bean definition of same name and class
[bbbbb.ws.handler.DataTransferHandler]
You will have to give your beans different names. If multiple beans are defined with the same name, the one defined later will override the one defined earlier, so in your case only one bean will exist with the name dataTransferHandler.
You can give these two beans different names so that both can exist, and you can inject the correct one using either:
@Autowired @Qualifier("dataTransferHandler")
OR
@Resource(name = "dataTransferHandler")
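For example, a sketch of the rename-plus-qualifier approach (the bean names and the shared TransferHandler interface are illustrative; the question notes the beans do not currently implement one):

// Illustrative shared interface:
public interface TransferHandler {
}

// In aaaaa.ws.handler:
@Component("aaaDataTransferHandler")
public class DataTransferHandler implements TransferHandler {
}

// In bbbbb.ws.handler:
@Component("bbbDataTransferHandler")
public class DataTransferHandler implements TransferHandler {
}

// At the injection point, select one by bean name:
@Service
public class TransferClient {

    @Autowired
    @Qualifier("aaaDataTransferHandler")
    private TransferHandler handler;

}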
You can give the attribute primary="true" to the bean definition you want to take preference when autowired. But the bean names must be different; there is no solution for two beans with the same name.
At run time, the bean marked as primary gets preference for autowiring. Hope this helps you. Cheers.
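In Java config, the equivalent of primary="true" is the @Primary annotation; a sketch (placement illustrative, reusing the made-up names from the sketch above):

import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Component;

@Primary // wins when autowiring by type finds two candidates
@Component("aaaDataTransferHandler")
public class DataTransferHandler implements TransferHandler {
}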
I asked another question regarding the same problem, and there is a solution that doesn't require the @Qualifier annotation: if both of your DataTransferHandler classes have a @Component annotation, you can simply add a String argument to one of them (e.g. @Component("Foo")), and that should solve the problem without needing additional changes.
See User9123's answer on my question for more details.

Spring #Qualifier not working when bean is in another jar file

I have a number of Spring beans, some of which are in a shared library jar. I can't seem to get @Qualifier to work.
I have default-autowire set to "byType"; this is Spring 3.1.0.M2, running as a standalone executable. If I remove TestTwoBean from the shared library, the project executes as expected.
myproj-shared-lib.jar:
@Service
public class TestOneBean implements ITestBean {
}

@Service
public class TestTwoBean implements ITestBean {
}
myproj.jar:
@Service
public class TestConsumerBean {

    @Autowired
    @Qualifier("testOneBean")
    private ITestBean bean;
}
I get the "no unique bean with name" exception at runtime:
org.springframework.beans.factory.UnsatisfiedDependencyException:
Error creating bean with name 'testConsumerBean' defined in file [-]:
Unsatisfied dependency expressed through bean property 'bean': : No
unique bean of type [com.myco.ITestBean] is defined: expected single
matching bean but found 2: [testOneBean, testTwoBean]; nested
exception is
org.springframework.beans.factory.NoSuchBeanDefinitionException: No
unique bean of type [com.myco.TestBean] is defined: expected single
matching bean but found 2: [testOneBean, testTwoBean] at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireByType(AbstractAutowireCapableBeanFactory.java:1167)
...
Does @Qualifier not work in this situation? Is there a known workaround?
Are you sure you want to use autowiring by type AND annotation injection? Autowiring by type means Spring will attempt to inject detected setters and constructor parameters using a by-type lookup even if they aren't annotated for injection.
At the same time, you are trying to inject fields by name. Your @Service-annotated classes produce beans with names defaulting to the class name, "testOneBean" and "testTwoBean" respectively. @Qualifier uses bean names as correct matches. The recommended way of doing "by name" injection, though, is @Resource(name = "testOneBean"). I can only guess Spring tries injection by type because the autowire mode is set to byType (which I doubt you really need).
I would recommend reverting to the default autowire mode and using @Resource for wiring by name.
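A minimal sketch of that suggestion (class and bean names from the question; @Resource comes from javax.annotation on Spring 3.x):

import javax.annotation.Resource;

import org.springframework.stereotype.Service;

@Service
public class TestConsumerBean {

    @Resource(name = "testOneBean")
    private ITestBean bean;

}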
