How to force MongoMappingContext to be created via MongoDataAutoConfiguration.mongoMappingContext()? - spring-boot

I'm having an issue with my @Document classes not being discovered at startup, and it appears to be because the MongoMappingContext isn't created via MongoDataAutoConfiguration (which would scan for the @Document classes), since it has already been defined elsewhere. By setting breakpoints, it appears that MongoMappingContext is instantiated as part of the creation of MongoTemplate by the application context.
I'm puzzled, because it seems like using repositories in any way would cause this to occur, so I don't see how the method would ever be called.
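Not from the question itself, but for context: in recent Spring Boot versions the auto-configured MongoMappingContext determines its initial entity set via @EntityScan (falling back to the auto-configuration packages), so declaring the packages explicitly is one way to make @Document discovery independent of which configuration ends up creating the bean. A minimal sketch; the package name is a placeholder:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;

@SpringBootApplication
@EntityScan("com.example.documents") // hypothetical package holding the @Document classes
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}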

Related

Clean database with Flyway after each test class

We are in a situation where we cannot simply roll back the data after each test, so we decided to use the Flyway Java API like this:

@Autowired
protected Flyway flyway;

@AfterEach
public void restoreDatabase() {
    flyway.clean();
    flyway.migrate();
}

Is it possible to execute clean and migrate after each test class instead of after each test method? I would need to call this in an @AfterAll-annotated method, but such methods have to be static, so I cannot use the autowired Flyway component. Can you advise me on any workaround? Thank you.
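One workaround, assuming JUnit 5 and Spring Boot 2.1+: switch the test lifecycle to one instance per class, which makes a non-static @AfterAll legal, so the autowired Flyway bean is available. A minimal sketch:

import org.flywaydb.core.Flyway;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.TestInstance;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
@TestInstance(TestInstance.Lifecycle.PER_CLASS) // allows a non-static @AfterAll
class CleanupAfterClassTest {

    @Autowired
    private Flyway flyway;

    @AfterAll
    void restoreDatabase() { // runs once, after all tests in this class
        flyway.clean();
        flyway.migrate();
    }
}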
The following solution may help you.
Besides the @Rollback annotation, there is also the possibility to mark a class (or method) as "dirty" with the annotation org.springframework.test.annotation.DirtiesContext. This will provide the test cases with a fresh context. From the Javadoc:
Test annotation which indicates that the ApplicationContext associated with a test is dirty and should therefore be closed and removed from the context cache.
Use this annotation if a test has modified the context — for example, by modifying the state of a singleton bean, modifying the state of an embedded database, etc. Subsequent tests that request the same context will be supplied a new context.
@DirtiesContext may be used as a class-level and method-level annotation within the same class or class hierarchy. In such scenarios, the ApplicationContext will be marked as dirty before or after any such annotated method as well as before or after the current test class, depending on the configured methodMode and classMode.
Let me show you an example:
@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext(classMode = ClassMode.BEFORE_CLASS)
public class SomeTestClass {

    @Autowired
    private ExampleController controller;

    @Test
    public void testSomething() {
        // Do some testing here
    }
}
Now in this case, with an embedded DB (like H2), a fresh DB will be started containing no changes from previous transactions.
Please note that this will most probably slow down your test cases, because creating a new context can be time-consuming.
Edit:
If you watch the log output you'll see that Spring creates a new application context with everything included. So when using an embedded DB for the test cases, Spring will drop the current DB, create a new one, and run all specified migrations against it. It's like restarting the server, which also creates a new embedded DB.
A new DB doesn't contain any commits from previous actions; that's why it works. It's actually not hacky in my opinion, but a proper setup for integration tests, since integration tests mess up the DB and need the same clean starting point every time. However, there are most probably other solutions as well, because creating a new context for every test class can slow down execution. So I would recommend annotating only the classes (or methods) that really need it. On the other hand, unit tests are in most cases not very time-critical, and with newer Spring versions lazy loading will speed up startup time.
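For the literal "after each test class" requirement in the question, the annotation also has an after-class mode, so the example above could instead be declared as:

@DirtiesContext(classMode = ClassMode.AFTER_CLASS) // discard the context once the class finishes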

Spring - how to debug to find specific use of spring's auto-wiring or a request parameter?

Spring is auto-wiring in a request parameter - let's call it "bob".
I don't know where or how it is doing this, so I cannot debug it. What Spring-specific code (using IntelliJ, so I can at least set a conditional breakpoint) would be appropriate to find where the auto-wiring of the request parameter is happening, so I can work out what the system is doing?
I think I understood the question, so I will try to answer it as best I can.
You are facing a dilemma of choosing between managing your instances yourself or letting Spring manage them. If you let Spring manage dependency injection, you will often face situations where you wish you had finer control over the bean lifecycle.
By default, Spring beans are "singletons", which means that only one instance of that object will be created, and every class that demands a dependency injection of that object will receive the same instance.
The first step in a bean's lifecycle is its construction. You can set a breakpoint to catch that moment in any method annotated with @PostConstruct. This article describes the need to run some code on bean initialization and how this annotation solves it. For example:
import javax.annotation.PostConstruct;

public class AnyBean {

    @PostConstruct
    public void init() {
        // Any code or breakpoints placed here run whenever an instance
        // of this bean is created. For a singleton bean, only one instance
        // is created, so @PostConstruct is called exactly once. For a
        // prototype bean, a new instance is created for every injection,
        // and hence @PostConstruct is called once per instance.
    }
}
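Beyond @PostConstruct, one place to break for the original "where is it autowired?" question (an assumption about the Spring version in use, not something from this answer) is Spring's own injection post-processor:

// Spring 5.1+: org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
//     .postProcessProperties(PropertyValues pvs, Object bean, String beanName)
// (older versions expose postProcessPropertyValues instead).
// An IntelliJ conditional breakpoint there, assuming the target bean is named "bob":
"bob".equals(beanName)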

Grails 2.3 strange plug-in eager loading behavior

I've just added a custom local plug-in (via a 'grails.plugin.location...' build config declaration) to a Grails 2.3 project. As soon as I add the plug-in and try to run my application, I see strange behavior whereby all of the beans in my main application suddenly try to eager-load across the board. I.e. if I have:
class FooService {
    BarService barService
}

class BarService {
    FooService fooService
}
Then the application cannot start. There is no code executed in FooService or BarService at init time that should cause the Spring context to need to unwrap either the fooService or the barService instance, but that behavior seems to be happening anyway. In the end the application init fails with an exception like:
Caused by: org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'fooService': org.springframework.beans.factory.FactoryBeanNotInitializedException: FactoryBean is not fully initialized yet
As the fooService requires the barService, which requires the fooService, which is still being created.
As soon as I remove the dependency on the custom plug-in, the problem stops. Obviously something either in the plugin or about the plugin (metadata or something) is causing this behavior, but I can't figure out what exactly.
Rather than a solution, I think what I am looking for here is some troubleshooting techniques or information leading me to understand why this eager-loading behavior is occurring. I just don't know what the right hooks/configs are that would let me get inside the head of the Spring context initialization mechanism.
I found the answer to my question buried deep in the comments of this other SO question: Grails service using a method from another service
Basically, my plugin introduced the Hibernate plug-in into my project and therefore also introduced transactional-by-default services. This transactional bootstrapping was causing the eager load of all dependencies and thus my circular-dependency problem.
The fix was, at least for now, to mark the circularly dependent services as non-transactional. Of course, if in the future I need those services to be transactional, it seems I will need to re-architect my service setup.
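For reference, opting a Grails 2.x service out of the transactional-by-default behavior is a one-line change; a sketch of the fix described above, applied to the question's own service:

class FooService {
    static transactional = false // disables the transactional proxy for this service
    BarService barService
}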

Does ComponentScan order matter?

I'm setting up a very small Spring/REST/JPA project with Boot, using annotations.
I'm getting some "bean not found" errors in my REST controller class, which has an @Autowired repository field, when I move my JPA repository class out to a different package and point a component scan at that package. However, everything was working fine when all my files (5 total) were in the same package.
So I was wondering, however unlikely, whether the component scan order matters. For example, if a class autowires some beans from a package that has not been component-scanned yet, will that cause a "bean not found" error?
No, Spring loads all configuration information, from files and annotations and the environment when appropriate. It then creates beans (instances of classes) according to a dependency tree that it calculates in memory. In order to do this it has to have a good idea of the entire configuration at startup. The whole model derived from all the aggregated configuration information is called the Application Context.
In modern versions of Spring the application context is flexible at runtime, so it's not quite the case that all the configuration is necessarily known up front, but the configuration that is flexible is limited in scope and must be planned for carefully.
Maybe you need to share some code. When you move that stuff, you also need to tell Spring where it went. My guess would be that you haven't defined @EntityScan and @EnableJpaRepositories (which default to the location of @EnableAutoConfiguration).
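A minimal sketch of what that might look like on the Boot application class (recent Boot versions; the package names are placeholders):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@SpringBootApplication
@EnableJpaRepositories("com.example.repositories") // hypothetical repository package
@EntityScan("com.example.entities")                // hypothetical entity package
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}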
There could be several problems:
You moved your class out of a package on which you have @ComponentScan without arguments. That basically means components are scanned only in that package and its children, so the moved class is not scanned and there is no bean to wire.
A wrong package name in the @ComponentScan arguments.
The order doesn't matter at all. There is an @Order annotation, but its purpose is more about loading multiple implementations of something in a particular order.
First, bean definitions are created, and they have nothing to do with wiring. Then, via bean post-processors, autowired beans are injected. If there was no bean definition, there is nothing to inject.
In a well-structured program it doesn't, because first each bean gets instantiated, then autowired, and only then can you actually use them.
However, there could be situations where the order does matter, and I had an issue figuring out what was going on. So this is an example where it would matter:
You have some repository that you want to fill with data initially; call it a SetupData component.
Then you use @PostConstruct to save the default objects.
You have some component that this repository depends on but that isn't managed by Spring, for example a @Converter.
And that @Converter depends on some other component which you inject statically.
In this case the @PostConstruct methods will be executed before the components in your @Converter get autowired, which will result in an exception.
Relying on component scan order is a bad habit, because it's not intuitive, especially when you are working with multiple people who may not know about it, and there might be dependencies such that you can't fix the code by changing the scan order anyway.
The best solution in this case was using a task-executor service that takes care of running the initialization functions.
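A sketch of that idea (not the poster's exact executor) in Spring Boot terms: an ApplicationRunner runs only after the context is fully initialized, so every bean, including statically injected helpers, is already wired. UserRepository and User here are hypothetical:

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
public class SetupData implements ApplicationRunner {

    private final UserRepository repository; // hypothetical repository to seed

    public SetupData(UserRepository repository) {
        this.repository = repository;
    }

    @Override
    public void run(ApplicationArguments args) {
        // Unlike @PostConstruct, this runs after the whole context is ready,
        // so converters and their static dependencies are already injected.
        repository.save(new User("admin")); // hypothetical seed data
    }
}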

Spring3 deserialization - singleton injection

I am developing a web application with JSF 2 and Spring 3 and have a problem with deserialization.
I have some session-scoped beans defined like so:
@Controller(value = "admin")
@Scope(value = "session")
public class AdminBean implements Serializable {
    ...
I also have some singletons defined like so:
@Repository
public class Repo {
Singletons are injected into the session beans like this:

@Resource
private transient Repo repo;
After I added transient, my problems with serialization/deserialization went away. But now I have the problem that after deserialization the dependencies (repo in this case) are null. I searched a lot on this problem and found some workarounds, but I still wonder what the best solution is.
It seems to me that using application-scoped beans in session beans is a quite common case; isn't there a clean solution for this? I came across a solution with @Configurable, but do I really need load-time-weaving machinery? The targets of the injection are already Spring-managed, so it doesn't make sense to me.
Please enlighten me.
Update, 2 years later: you CAN transparently inject session-scoped beans into application-scoped beans (though it might not be a good idea in most cases). I just had to set the proxyMode on @Scope accordingly.
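For reference, that proxyMode setting might look like this (a sketch; Spring then injects a scoped proxy that resolves the current session's instance on every call):

import java.io.Serializable;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.stereotype.Controller;

@Controller("admin")
@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class AdminBean implements Serializable {
    // ...
}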
Try getting the AutowireCapableBeanFactory through applicationContext.getAutowireCapableBeanFactory(); it has methods like autowireBeanProperties, autowireBean, and configureBean that should be able to reconfigure your bean after deserialization. Choose the method that fits best (one of them triggers post-processing, the others do not, etc.).
A second idea is to wrap Repo in a proxy that is serializable. It will serialize with AdminBean and deserialize with it. This proxy should hold the 'real' transient reference, and if that turns out to be null after deserialization, it should look the bean up from the ApplicationContext.
I heard that Spring 3 automatically wraps beans with such a proxy, but I've never managed to make it work.
I faced a similar problem using JSF's @ViewScoped, where the backing bean has to be serializable, which in turn wants all its dependencies to be serializable, which is not achievable in every scenario. As @Peter mentioned about Spring 3, have a look at the linked Stack Overflow discussion of the same issue. I used transient for some (Spring) dependencies, and for the deserialization process I hooked the bean into
private void writeObject(java.io.ObjectOutputStream stream)
and
private void readObject(java.io.ObjectInputStream stream)
in the view-scoped bean and tweaked them to get proper references.
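A minimal sketch of that readObject hook, assuming the Spring web application context is reachable via ContextLoader (an assumption about this JSF setup, not the poster's exact code):

private void readObject(java.io.ObjectInputStream stream)
        throws java.io.IOException, ClassNotFoundException {
    stream.defaultReadObject(); // restore the non-transient fields first
    // Re-inject the transient dependencies from the current web application context.
    org.springframework.web.context.WebApplicationContext ctx =
            org.springframework.web.context.ContextLoader.getCurrentWebApplicationContext();
    ctx.getAutowireCapableBeanFactory().autowireBean(this); // re-runs @Resource/@Autowired injection
}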
