How to Not Mock in JUnit - Spring

We have a class in our code base called LocationService that basically does two things: makes HTTP calls to look up "Locations", and does a lot of work to map the information coming back into a usable format. For various reasons, primarily SRP, we would like to separate this class into two classes: LocationLookupService and LocationFactoryService. We've done that, and now the LocationLookupService does its HTTP work and then calls the factory service. From a src/main perspective, this seems to work fine. However, the unit tests for the lookup service are failing because the factory isn't mocked - and I don't want it to be. I want the lookup service to call the actual factory without having to constantly rely on doCallRealMethod-style mocks (which are discouraged).
How can I correctly wire the actual factory service bean into the lookup bean at test time? I have tried various combinations of @Autowired, calling the constructor, and so on.
@Autowired private LocationFactoryService locationFactoryService;
@InjectMocks private LocationLookupServiceImpl locationService;

After speaking with a few more people, I think I've found the answer: using spy functionality, either through the @Spy annotation or the Mockito.spy() method. However, I've also been warned that this is a potential slippery slope to bad test code.
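For reference, a minimal sketch of that approach (JUnit 5 shown; with JUnit 4, MockitoJUnitRunner works the same way). It assumes LocationFactoryService has a no-arg constructor and that LocationLookupServiceImpl exposes the factory as an injectable field or constructor parameter; the test names are hypothetical:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Spy;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class LocationLookupServiceImplTest {

    // A real factory wrapped in a spy: calls run the actual mapping code,
    // but interactions can still be verified when needed.
    @Spy
    private LocationFactoryService locationFactoryService = new LocationFactoryService();

    // Mockito injects the spy (plus any @Mock fields) into the service under test.
    @InjectMocks
    private LocationLookupServiceImpl locationService;

    @Test
    void lookupUsesTheRealFactory() {
        // stub the HTTP collaborator here, then call locationService and
        // assert on the fully mapped result produced by the real factory
    }
}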

Related

Kotlin instance variable is null when accessed by Spring proxied class

I have a service class that is being proxied by Spring, like so:
@Service
@Transactional
open class MyService { ... }
If I remove the open modifier, Spring complains that it needs to proxy the class to apply the @Transactional annotation tweaks.
However, this is causing issues when calling a function on the proxied service, which attempts to access a variable:
@Service
@Transactional
open class MyService {
    protected val internalVariable = ...
    fun doWork() {
        internalVariable.execute() // NullPointerException
    }
}
The internalVariable is assigned as part of its declaration, does not have any annotations (like @Autowired, etc.), and works fine when I remove the @Transactional annotation and the requirement for Spring to proxy the class.
Why is this variable null when Spring is proxying/subclassing my service class?
I hit a similar issue, and the above comments by Rafal G & Craig Otis helped me, so I'd like to propose that the following write-up be accepted as an answer (or that the comments above be converted to an answer and accepted).
The solution: open the method/field.
(I hit a similar case where it was a closed method that caused the problem. But whether it is a field or a method, the solution is the same, and I think the general cause is the same...)
Explanation:
Why this is the solution is more complicated and definitely has to do with Spring AOP, final fields/methods, CGLIB proxies, and how Spring+CGLIB attempts to deal with final methods (or fields).
Spring uses proxies to stand in for certain objects so it can handle concerns dealt with by Aspect Oriented Programming. This happens with services & controllers (especially when @Transactional or other advice is applied that requires AOP solutions).
So a proxy/wrapper is needed for these beans, and Spring has two choices: JDK dynamic proxies or CGLIB. Only CGLIB is available when the class does not implement an interface.
When using CGLIB to proxy classes, Spring will create a subclass called something like myService$EnhancerByCGLIB. This enhanced class will override some, if not all, of your business methods to apply cross-cutting concerns around your actual code.
Here comes the real surprise. This extra subclass does not call super methods of the base class. Instead it creates a second instance of myService and delegates to it. This means you have two objects now: your real object and the CGLIB enhanced object pointing to (wrapping) it.
From: spring singleton bean fields are not populated
Referenced By: Spring AOP CGLIB proxy's field is null
In Kotlin, classes & methods are final unless explicitly opened.
How and when Spring/CGLIB chooses to wrap a bean in an EnhancerByCGLIB object with a target delegate (so that it can cope with finalized methods/fields) is magic I don't claim to understand. For my case, however, the debugger showed me the two different structures. When the parent's methods are open, Spring does not create a delegate (it uses plain subclassing instead) and everything works without an NPE. However, when a particular method is closed, Spring/CGLIB uses a wrapper object that delegates to a properly initialized target. For some reason, the actual invocation of the method is done with the wrapper as its context, and the wrapper's own fields are uninitialized (null), causing the NPE. (Had the method been called on the actual target/delegate, there would not have been a problem.)
Craig was able to solve the problem by opening the property (not the method), which I suspect had a similar effect of allowing Spring/CGLIB to either not use a delegate, or to somehow use the delegate correctly.
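For completeness, a minimal sketch of the fix in Kotlin (WorkItem is a hypothetical stand-in for the real type of internalVariable). Note that the kotlin-spring/all-open compiler plugin can apply these open modifiers automatically to classes carrying Spring annotations such as @Transactional:

import org.springframework.stereotype.Service
import org.springframework.transaction.annotation.Transactional

// Hypothetical stand-in for whatever internalVariable really is
class WorkItem {
    fun execute() { /* ... */ }
}

@Service
@Transactional
open class MyService {
    // open, so CGLIB's generated subclass no longer needs a separate,
    // uninitialized wrapper around the real instance
    protected open val internalVariable = WorkItem()

    open fun doWork() {
        internalVariable.execute() // no NullPointerException anymore
    }
}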

Spring autowiring based on service availability

I have a need of conditionally creating one of three possible implementations of a service depending upon the environment detected by a Spring application at runtime. If Service A is available, then I want to create a concrete implementation class that uses Service A as a dependency. If Service A is not available, then I want to create an implementation using Service B as a dependency. And so on.
Classes which depend on the implementation will Autowire the Interface and not care what the underlying Service was that got selected for the particular environment.
My first stab at this was to implement multiple @Bean methods which either return a bean or null, depending on whether the service is available, and to then have a separate @Configuration class which @Autowire(required=false)s the two possible services, conditionally creating the implementation depending on which of the @Autowired fields was not null.
The problem here is that when required=false, Spring doesn't appear to care whether it waits around for candidates to be constructed; that is to say, the class which tries to pick the implementation might be constructed before one or both of the required=false beans get constructed, thus ensuring that one or both might always be null, regardless of whether they would have initialized correctly.
It kind of feels like I'm going against the grain at this point, so I'm looking for advice on the "right" way to do this sort of thing, where a whole set of beans might get switched out based on the availability of some outside service or environment.
Profiles don't look like the right answer, because I won't know which implementation I want to choose until after my service beans try to initialize; I certainly won't know it at the time I create the context.
@Order doesn't achieve the goal either. Nor does @Conditional with a test on the existence of the bean (because it still might not be constructed yet). Same problem with FactoryBean: it does no good to check for the existence of beans that might not have been constructed at the time the FactoryBean is asked to create an instance.
What I really need to do is create a Bean based on the availability of other beans, but only AFTER those beans have at least had a chance to try to initialize.
Spring Profiles is your friend. You can set the current profile by way of an environment variable, a command-line argument, and other methods. You can annotate a Spring-managed component so that it's only created for a certain profile.
Spring Profiles from the Spring Documentation
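For illustration, a minimal sketch of profile-based selection, reusing the history types that appear in the accepted answer below (the profile names and the config class are hypothetical); the active profile is picked with e.g. -Dspring.profiles.active=redis:

import com.hazelcast.core.IMap; // Hazelcast 3.x package
import org.redisson.api.RLocalCachedMap;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class RequestHistoryConfig {

    // Only created when the "redis" profile is active; the map is injected
    // as a method parameter rather than an @Autowired field.
    @Bean
    @Profile("redis")
    public RequestHistory redisRequestHistory(RLocalCachedMap<String, History> redisHistoryMap) {
        return new RedisClusteredHistory(redisHistoryMap);
    }

    @Bean
    @Profile("hazelcast")
    public RequestHistory hazelcastRequestHistory(IMap<String, History> hazelcastHistoryMap) {
        return new HazelcastClusteredHistory(hazelcastHistoryMap);
    }

    // Fallback when no explicit profile is set
    @Bean
    @Profile("default")
    public RequestHistory localRequestHistory() {
        return new LocalRequestHistory();
    }
}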
Well, in this case it turned out to be a tangential mistake that caused the whole wrong behavior.
To give some background, my first, naive (but workable) approach looked like this:
@Autowired(required = false)
@Qualifier(RedisConfig.HISTORY)
private RLocalCachedMap<String, History> redisHistoryMap;

@Autowired(required = false)
@Qualifier(HazelcastConfig.HISTORY)
private IMap<String, History> hazelcastHistoryMap;

// RequestHistory is an interface
@Bean
public RequestHistory requestHistory() {
    if (redisHistoryMap != null) {
        return new RedisClusteredHistory(redisHistoryMap);
    } else if (hazelcastHistoryMap != null) {
        return new HazelcastClusteredHistory(hazelcastHistoryMap);
    } else {
        return new LocalRequestHistory(); // dumb hashmap
    }
}
In other @Configuration classes, if the beans that get @Autowired here aren't available (due to missing configuration, exceptions during initialization, etc.), the @Bean methods that create them return null.
The observed behavior was that this @Bean method was getting called after the RLocalCachedMap<> @Bean method, but before Spring attempted to create the IMap<> by calling its @Bean method. I had incorrectly thought that this had something to do with required=false, but in fact that had nothing to do with it.
What actually happened was that I accidentally used the same constant for both @Bean names (and consequently both @Qualifiers), so presumably Spring couldn't tell the difference when it was calculating its dependency graph for this @Configuration class, because the two @Autowired beans appeared to be the same thing (they had the same name).
(There's a secondary reason for using @Qualifier in this case, which I won't go into here, but suffice it to say it's possible to have many maps of the same type.)
Once I qualified the names better, the code did exactly what I wanted it to, albeit in a way that's somewhat inelegant/ugly.
At some point I'll go back and see if it looks more elegant / less ugly, and works just as well, to use @Conditional and @Primary instead of the if/else foulness.
The lesson here is that if you explicitly name beans, make absolutely sure your names are unique across your application, even if you plan to swap things around like this.
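Concretely, the fix amounted to making the two name constants distinct (the constant values here are hypothetical), e.g.:

public class RedisConfig {
    // Bean/qualifier names must be unique across the whole application
    public static final String HISTORY = "redisHistoryMap";
    // ...
}

public class HazelcastConfig {
    public static final String HISTORY = "hazelcastHistoryMap";
    // ...
}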

Autowire two Neo4j GraphRepository in Spring

I'm new to using Spring with Neo4j and I have a question about @Autowired for a GraphRepository.
Most examples I've seen use one @Autowired per controller, but I have two nodes I need to modify at the same time when a particular method is called in the controller. Should I simply @Autowire the repositories for both nodes (e.g. per the code below)? Is there any impact if I do this in a second controller with the same repositories as well (so if I had a ChatSessionController which also @Autowired ChatMessageService and ChatSessionService)?
ChatMessageController.java
@Controller
public class ChatMessageController {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @RequestMapping(value = "/message/add/{chatSessionId}", method = RequestMethod.POST)
    @ResponseBody
    @Transactional
    public void addMessage(@RequestBody ChatMessagePack chatMessagePack,
                           @PathVariable("chatSessionId") Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // TODO: Make some modifications to the ChatSession as well
    }
}
Any help would be much appreciated! I've been googling and looking through Stack Overflow to understand this better, but I haven't found anything yet. Any pointers in the right direction would be great.
Another underlying question is: should I be (and can I be) modifying other nodes in a GraphRepository that handles a particular node type? E.g. should the GraphRepository for one node type be able to modify nodes managed by another GraphRepository?
Thanks!
I'm not convinced that this is an SO question; it's not really a Neo4j or Spring question either, it is more about the architecture of your application. However, assuming that you understand the negatives of class fan-out, and how to use the @Transactional annotation to achieve what you want, then the answer to your question is that it is just fine to have many repositories (Neo4j or otherwise, autowired or otherwise) in your class, and in as many classes as you want.
Neo4j transactions default to isolation level READ_COMMITTED, and if you need anything else you need to add the guards/locks yourself. Nested transactions are considered to be part of the same transaction. The Spring @Transactional annotation relies on proxies, which you should be aware of, as they have implications when calling methods from within the same class.
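To illustrate that caveat: a @Transactional method invoked from another method of the same class is a plain call on this, not on the Spring proxy, so the transactional advice is skipped. A minimal sketch (the saveAll method is hypothetical; ChatMessage and ChatMessageService come from the question above):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatMessageService {

    @Transactional
    public void save(ChatMessage message) {
        // ... persist the node ...
    }

    public void saveAll(List<ChatMessage> messages) {
        for (ChatMessage message : messages) {
            // Self-invocation: this call bypasses the Spring proxy, so
            // save()'s @Transactional settings are NOT applied here.
            save(message);
        }
    }
}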
I would go through this tutorial over at Spring Data and get your head around how real-world vs domain vs node models differ; there will be cases where one repository impacts another node type, but I would think it is often transparent to you (i.e. adding relationships). You can do what you like in each repository (their generic nature is largely confined to the built-in CRUD operations and to queries derived from finder-method names; see the documentation). Using the @Query annotation, some queries can have side effects, but largely you should avoid that.
As you start adding multiple repositories to multiple controllers, I think your code will begin to smell bad, and you should consider encapsulating this business logic off on its own somewhere, neatly unit tested (as sketched after the links below). I also wouldn't tie myself to one controller per data object; it would be fine to have a single ChatController with a POST /chat/ to create a new session and a POST /chat/{sessionId} to add a message. Interesting questions on Programmers:
How accurate is "Business logic should be in a service, not in a model?"
Best Practices for MVC Architecture
MVC Architecture — How many Controllers do I need?
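A minimal sketch of that encapsulation (the service name and method shapes are hypothetical): a single @Service owns both collaborators, and the controllers stay thin:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatService {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    // One transactional unit covering both node types, so the message
    // and the session modification commit or roll back together.
    @Transactional
    public void addMessage(Long chatSessionId, ChatMessagePack chatMessagePack) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // ... look up and modify the ChatSession here as well ...
    }
}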

Spring Weaving: @Configurable objects do not get woven correctly if used as a Parameter inside Methods of Autoproxies

Please consider the following situation with Spring 4.0.7:
For EclipseLink, we use a load-time weaver. So we wanted to experiment with Spring's @Configurable annotation, using @EnableSpringConfigured with @EnableLoadTimeWeaving at the same time.
This is fully functional, and Spring beans are injected perfectly into POJOs during construction. This functionality is helpful for us, because we want to keep some code regarding validation of these POJOs inside them, and not somewhere else in a bean.
Some of our Spring contexts contain beans that do not implement any interface, because they are local to some package and used only there. Let's say FooLogicBean is one of them. If it is to be injected into another bean, and some Spring-AOP aspect (not AspectJ), like a performance measurement aspect, is in the execution path, Spring will create a CGLIB autoproxy for the FooLogicBean and inject that. This is fully functional and works perfectly too.
Problems arise when we want to actually use a POJO that is annotated with @Configurable as a parameter in a method of FooLogicBean (like fooLogicBean.doValidate(myPojo);), or rather of its CGLIB proxy. In this case, some non-trivial magic stops that POJO from being woven through AspectJ (AnnotationBeanConfigurerAspect from spring-aspects). It is never woven anywhere in the code, regardless of whether the aforementioned doValidate() method is called.
If we create that POJO inside FooLogicBean, but don't use it as a method parameter, it gets woven again due to @Configurable.
Without knowing the autoproxy creation code, we assume some fancy marking routine hinders a class from being detected by AspectJ if that class was already used in Spring AOP ('used' in this case meaning accessed).
Did anyone experiment with such an obscure constellation and know a solution for it?
Thanks in advance!
Proxies are created by subclassing, i.e. when you create a proxy of an annotated class Foo:
class Foo {
    @Bar
    void bar() { }
}
the proxy is created by generating a class
class Foo$Proxy extends Foo {
    @Override
    void bar() {
        // Proxy logic
    }
    // Overridden methods of Object
}
This makes the Foo$Proxy class a valid Liskov substitute for Foo. However, normal override semantics apply, i.e. non-inherited annotations such as @Bar are no longer present on the overridden methods. Once another annotation-based API processes your beans, all annotations have disappeared, which leads to your outcome. This is true for all kinds of annotations: those on types, methods and method parameters.
But is it avoidable? It most certainly is, by using a proxy generator that was written recently and built with knowledge of what annotations are, something that is not true for cglib, which first shipped in the early days of the Java virtual machine. However, as long as Spring does not move away from cglib, you will have to live with today's limitations.
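That override semantics is easy to demonstrate outside any framework. This self-contained snippet (all names hypothetical) mimics the Foo/Foo$Proxy pair from above; it prints true for the base method and false for the override:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationLossDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Bar { }

    static class Foo {
        @Bar
        void bar() { }
    }

    // Stands in for the generated Foo$Proxy subclass
    static class FooProxy extends Foo {
        @Override
        void bar() { /* proxy logic would go here */ }
    }

    public static void main(String[] args) throws Exception {
        // true: the annotation is present on the original method
        System.out.println(Foo.class
                .getDeclaredMethod("bar").isAnnotationPresent(Bar.class));
        // false: normal override semantics, the override carries no annotations
        System.out.println(FooProxy.class
                .getDeclaredMethod("bar").isAnnotationPresent(Bar.class));
    }
}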

How does Mockito @InjectMocks work?

Here's my question:
I have several web service classes to test that all inherit their methods from a generic service. Rather than write a unit test for each, I figure I can break the test suite down by functional areas (i.e. three groups of test methods, each relying on a different underlying DAO method call).
What I propose to do is:
@Mock StateDAO mockedStateDao;
@Mock CountyDAO mockedCountyDao;
@Mock VisitorDAO mockedVisitorDao;
then call:
@InjectMocks CountyServiceImpl<County> countyService = new CountyServiceImpl<County>();
@InjectMocks StateServiceImpl<State> stateService = new StateServiceImpl<State>();
@InjectMocks VisitorServiceImpl<Visitor> visitorService = new VisitorServiceImpl<Visitor>();
How can I be sure that each mocked DAO will be injected into the correct service?
Would it be easier to autowire all three (rather than use @InjectMocks)?
I'm using Spring, Hibernate, and Mockito...
Well, nicholas' answer is almost correct, but instead of guessing just look at the javadoc of @InjectMocks; it contains more details ;)
To me it's weird to have so many services in a single test; it doesn't feel right, either as a unit test or as an integration test. In a unit test it's wrong because you have way too many collaborators, which doesn't look object-oriented (or SOLID). In an integration test it's weird because you test the integration with the DB, you don't mock it.
For a rapid reference, in 1.9.5 you have:
Mark a field on which injection should be performed.
Allows shorthand mock and spy injection.
Minimizes repetitive mock and spy injection.
Mockito will try to inject mocks only either by constructor injection, setter injection, or property injection, in order, as described below. If any of the following strategies fail, then Mockito won't report a failure; i.e. you will have to provide the dependencies yourself.
Constructor injection; the biggest constructor is chosen, then arguments are resolved with mocks declared in the test only.
Note: If arguments cannot be found, then null is passed. If non-mockable types are wanted, then constructor injection won't happen. In these cases, you will have to satisfy the dependencies yourself.
Property setter injection; mocks will first be resolved by type, then, if there are several properties of the same type, by the match of the property name and the mock name.
Note 1: If you have properties with the same type (or same erasure), it's better to name all @Mock annotated fields to match the properties, otherwise Mockito might get confused and injection won't happen.
Note 2: If the @InjectMocks instance wasn't initialized before and has a no-arg constructor, then it will be initialized with this constructor.
Field injection; mocks will first be resolved by type, then, if there are several fields of the same type, by the match of the field name and the mock name.
Note 1: If you have fields with the same type (or same erasure), it's better to name all @Mock annotated fields to match the fields, otherwise Mockito might get confused and injection won't happen.
Note 2: If the @InjectMocks instance wasn't initialized before and has a no-arg constructor, then it will be initialized with this constructor.
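For instance, a minimal sketch of how those strategies play out for one of the services from the question (assuming CountyServiceImpl has a no-arg constructor and a CountyDAO field; JUnit 4 style):

import org.junit.Before;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class CountyServiceImplTest {

    // Created by initMocks(); later matched to the DAO field by type
    @Mock
    CountyDAO mockedCountyDao;

    // Instantiated via its no-arg constructor (since it isn't assigned here),
    // then mockedCountyDao is injected by type, with name matching on ties.
    @InjectMocks
    CountyServiceImpl<County> countyService;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }
}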
If you have multiple services and would like to replace the DAOs with mock objects in a Spring-based environment, I would recommend using Springockito: https://bitbucket.org/kubek2k/springockito/wiki/Home
which is also mentioned here:
Injecting Mockito mocks into a Spring bean
Your test class then might look like this:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = SpringockitoContextLoader.class,
        locations = {"classpath:/org/example/package/applicationContext.xml"})
public class NameOfClassTest {

    @Autowired
    @ReplaceWithMock
    StateDAO mockedStateDao;

    @Autowired
    @ReplaceWithMock
    CountyDAO mockedCountyDao;

    @Autowired
    @ReplaceWithMock
    VisitorDAO mockedVisitorDao;

    // @Before / @Test methods go here (see below)
}
In your @Test or @Before method you can set up your mocks the standard Mockito way:
Mockito.doReturn(null).when(mockedCountyDao).selectFromDB();
Well, the static method MockitoAnnotations.initMocks(Object) is used to bootstrap the whole process.
I don't know for sure how it works, as I haven't browsed the source code, but I would implement it something like this:
1. Scan the passed object's class for member variables with the @Mock annotation.
2. For each one, create a mock of that class, and set it to that member.
3. Scan the passed object's class for member variables with the @InjectMocks annotation.
4. Scan the class of each found member for fields that can be injected with one of the mock objects created in (2) (that is, where the field's type is a parent class/interface of, or the same as, the mock's declared class), and set each such field to the matching mock.
Never mind, I looked online: the @InjectMocks annotation treats anything with the @Mock annotation as a field and is static-scoped (class-wide), so I really couldn't guarantee that the mocks would go to the correct service. This was somewhat a thought experiment for trying to unit test at the feature level rather than the class level. Guess I'll just autowire this stuff with Spring...
