Maybe this is counterproductive, I don't know, but right now I need a debugger in IntelliJ that is aware of EasyMock mocks, and especially of what the mocked methods actually return.
For example, I have a transport interface ITransport with some methods that have to be mocked, where I only want some of the methods to return something. E.g.
ITransport myTransport = createMock(ITransport.class);
I want myTransport.getID() to return a mocked ID 10.
expect(myTransport.getID()).andReturn(10);
With ID 10 I want a method to be invoked once,
myTransport.publish(any(...));
expectLastCall().once();
Something in the transport class breaks and myTransport isn't called, so my test fails. Now I just want to step through the code with the debugger to check why the test fails. So I add a breakpoint to verify the values of the mocked myTransport object. But they all say "null", even the ID. After some brief investigation, I assume the cause is the EasyMock mock class: it doesn't really update the object with values (which sounds reasonable), and instead returns the mocked value at runtime when the method is called.
So, are there any mock-aware debuggers for IntelliJ that let me see which value a method will eventually return?
Yes, and before I receive responses saying that "the debugger is not required if you write unit tests for everything", I just want to state that I know about that. And this is legacy code, or at least code that wasn't written with testing in mind.
This may not be what you're looking for... but it feels like the problem is more on the debugging approach.
A mock object is really just that - a mock - meaning it's a fake empty object that doesn't do anything unless you specifically tell it. When your debugger inspects the mock object, it won't find any values that you did not specifically program it to return. It's not meant to hold values.
EasyMock has an argument capture feature, but since you just want it for debugging, this is probably the wrong approach. Mockito has a spying feature that could be suitable for what you want, but it would involve additional mock-programming statements.
I would say the easiest approach would be to implement your own ITransport just for use in your test class. That way you can implement getID() to always return 10 and put in an assert statement inside your publish(). And you can implement whatever other methods you need in order to capture additional data for debugging purposes. And you get to keep this test-only ITransport for either shared use or future debugging needs.
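If it helps, a minimal sketch of such a hand-rolled double (getID() and publish() are assumed from the question; the parameter type of publish() is a guess):

import java.util.ArrayList;
import java.util.List;

public class TestTransport implements ITransport {

    private final List<Object> published = new ArrayList<>();

    @Override
    public int getID() {
        return 10; // the canned ID the test expects
    }

    @Override
    public void publish(Object message) {
        published.add(message); // record the call for later inspection
    }

    // Unlike a generated mock, this state is visible at a breakpoint.
    public List<Object> getPublished() {
        return published;
    }
}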
Indeed, the methods are mocked, but the internal state of the class is left untouched; a mock records what to return, it doesn't maintain state.
Usually, you don't need to know what is returned since you're the one who recorded it in the first place.
You can also evaluate myTransport.getID() in your debugger. But doing this will consume the expectations.
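If you do need to evaluate it safely, one sketch of a workaround (when you don't need to verify the call count) is to record the value as a stub return, which allows any number of calls:

// A stub return is not consumed, so debugger evaluations won't exhaust it.
expect(myTransport.getID()).andStubReturn(10);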
However, it seems like a good idea to be able to list all the currently pending expectations on a mock, and maybe to have a peek function. You can request such features on the EasyMock bug tracker: http://jira.codehaus.org/browse/EASYMOCK
I was just reading Martin Fowler's post Mocks Aren't Stubs. He defines the different test doubles (or rather references Gerard Meszaros's xUnit patterns book):
Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.

Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).

Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.

Mocks are ... objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
Part one of my question would be, is this even authoritative? Is this language widely used and understood?
The second part of my question is that it seems that my mocking framework of choice, Mockito, makes it easy to blur the line, certainly between mocks and stubs.
Everything is called "mock". Whether by calling the Mockito.mock() method or by using the @Mock annotation, you use the word "mock" to create mocks, stubs, and sometimes dummies (if a simple "new" won't do). The exception is a "spy", which might be used to make something like a "fake", but can also be used to wrap your system under test.
Even if you didn't care about the name of the method that creates a test double, the double can be verified (or not), and you can include a capture in the verification step, which seems to cover some of what a stub does (remembering calls that were made) and some of what a mock does (verifying that certain expected calls were made).
The reason I ask is that I try to name my doubles according to the four kinds I see above, but then sometimes get confused whether something really has the role of a stub or of a mock. So, is this a deficiency of Mockito, or is this just how things have evolved, with the distinction no longer really important?
Actually, it's a strength of Mockito. A Mockito mock is an object on which you can either "stub" the methods, or "verify" the methods, or both. (Doing both for the same method is kind of a code smell, but that's another topic). So a Mockito mock is both a "stub" (in the Martin Fowler sense) and a "mock" (in the Martin Fowler sense); but you don't have to use it as both. Usually, a Mockito mock will act as EITHER a "stub", OR as a "mock"; less often as both.
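To make the distinction concrete, a minimal sketch (using List purely for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import java.util.List;
import org.junit.Test;

public class StubVersusMockTest {
    @Test
    public void stubVersusMock() {
        // Used as a "stub" (Fowler's sense): program a canned answer, never verify.
        List<String> stubbedList = mock(List.class);
        when(stubbedList.get(0)).thenReturn("canned");
        assertEquals("canned", stubbedList.get(0));

        // Used as a "mock" (Fowler's sense): exercise the code, then verify the call.
        List<String> verifiedList = mock(List.class);
        verifiedList.add("one");
        verify(verifiedList).add("one");
    }
}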
In some other mocking frameworks, you don't have quite this level of flexibility. If you want to do "verifying" on a mock, you also have to do "stubbing". In these frameworks, the mocks MUST act as both a "stub" and as a "mock". As far as I understand, one of the factors that motivated Szczepan Faber to create Mockito was a desire to be able to separate "stub" behaviour, and "mock" behaviour (in the strict Martin Fowler senses of both words).
The English word "mock" means "an imitation of lesser quality than the original". This is why even hand-rolled mocks (written without the aid of a framework like Mockito) are sometimes called mocks.
The language which Martin used is now a little bit out of date. He wrote it in the context of old mocking frameworks like JMock, before the "nice mocks" came along. In that era, mocks used to be strict; any interactions which hadn't been set up and weren't expected would fail.
Nowadays we tend to think of it a different way. If I'm a class, I have some other classes that I need to help me. They're either providing information, or doing some work for me - for instance, a repository might provide a list of employees, or save a new employee.
Mocks stand in for these collaborators, and we don't tend to use expectations on mocks any more. Instead, we set up mocks to provide information, then verify that they were asked to do any jobs that need to be done. Mockito was the first framework to work this way, and that's why the distinction is blurred - because whatever you're doing, you're mocking out a collaborator, and you no longer need to set up expectations. Moq works the same way in .NET.
By default, Mockito's mocks don't even care whether you use them, and don't check (although you'll need to set up beforehand any information that they have to provide, the equivalent of a "stub").
Additionally, because Mockito provides "nice" mocks, you don't need to worry about setting expectations in case a dummy object is used somewhere - you can just use Mockito to create those, as well. And, just in case you want to add some simple behavior, Mockito lets you do callbacks easily on the arguments which are passed to it, so you can create "Fakes".
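For instance, a sketch of a map-backed "fake" built from a Mockito mock (the Repository interface here is hypothetical, invented for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class MapBackedFakeTest {
    // Hypothetical interface, purely for illustration.
    interface Repository {
        void save(int id, String value);
        String find(int id);
    }

    @Test
    public void mockWithCallbacksActsAsFake() {
        Map<Integer, String> store = new HashMap<>();
        Repository repo = mock(Repository.class);

        // Callbacks on the passed arguments give the mock working behavior.
        doAnswer(inv -> store.put(inv.getArgument(0), inv.getArgument(1)))
                .when(repo).save(anyInt(), anyString());
        when(repo.find(anyInt())).thenAnswer(inv -> store.get(inv.getArgument(0)));

        repo.save(1, "hello");
        assertEquals("hello", repo.find(1));
    }
}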
It doesn't really matter what they are - you're just mocking out a collaborating class, and the flexibility means that you don't need to worry about how you do that.
Early frameworks didn't provide this flexibility, hence Martin's differentiation, intended to help you use mocks appropriately. Hope this helps clarify things and explain why Mockito's flexibility isn't a deficiency, but - as David Wallace pointed out - a strength.
According to what I understand from Gerard Meszaros' xUnit Test Patterns, a mock is a test double that includes the functionality of a dummy, a stub, and a spy. You can check out the actual comparison he draws between them on p. 742 of that book.
Also, this article might throw some light on your question. It clearly states that
"A mock is dynamically created by a mock library (the others are
typically produced by a test developer using code). The test developer
never sees the actual code implementing the interface or base class,
but can configure the mock to provide return values, expect particular
members to be invoked, and so on. Depending on its configuration, a
mock can behave like a dummy, a stub, or a spy."
The quote was taken from this article. IMHO, a mock is intended to blur the line between a dummy, a stub, and a spy.
In TDD, how should you proceed when you know what your final outcome should be, but not the processing steps you need to get there?
For example, your class is being passed an object whose API is completely new to you. You know the object holds the information you need, but you don't yet know how to retrieve it. How would you go about testing this?
Do you just focus on the desired result ignoring the steps?
Edit 1
package com.wesley_acheson.codeReview.annotations;

import com.sun.mirror.apt.AnnotationProcessor;
import com.sun.mirror.apt.AnnotationProcessorEnvironment;

public class AnnotationPresenceWarner implements AnnotationProcessor {

    private final AnnotationProcessorEnvironment environment;

    public AnnotationPresenceWarner(AnnotationProcessorEnvironment env) {
        environment = env;
    }

    public void process() {
        // This is what I'm testing
    }
}
I'm trying to test this incomplete class. I want to test that I have the right interactions with the AnnotationProcessorEnvironment within the process method. However, I'm unsure from the API docs what the right interaction is.
This will produce a file that contains details on the occurrence of each annotation within a source tree.
The actual file writing will probably be delegated to another class, however. So this class' responsibility is to create a representation of the annotation occurrences and pass it to whatever classes need to move it.
In non-TDD I'd probably invoke a few methods, set a breakpoint, and see what they return.
Anyway, I'm not looking for a solution to this specific example; the broader question is what to do when you don't know how to get from A to B but would still like your code to be test driven.
I'm basing my answer on this video:
http://misko.hevery.com/2008/11/11/clean-code-talks-dependency-injection/
If you have a model/business logic class that's supposed to get some data from a service, then I'd go about it this way:
1. Have your model class take the data that it needs in the constructor, rather than the service itself. You can then mock the data and unit test your class (see the sketch after this list).
2. Create a wrapper for the service; you can then unit test the wrapper.
3. Perform a fuller test where you actually pass the data from the wrapper to the model class.
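As a sketch of step 1, with entirely hypothetical names:

import static org.junit.Assert.assertEquals;
import java.util.List;
import org.junit.Test;

public class EmployeeReportTest {
    // Hypothetical model class: it is handed the data itself, not the service.
    static class EmployeeReport {
        private final List<String> employeeNames;

        EmployeeReport(List<String> employeeNames) {
            this.employeeNames = employeeNames;
        }

        int headcount() {
            return employeeNames.size();
        }
    }

    // The unit test supplies the data directly, with no service involved.
    @Test
    public void headcountComesFromConstructorData() {
        EmployeeReport report = new EmployeeReport(List.of("Ann", "Bob"));
        assertEquals(2, report.headcount());
    }
}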
General Answer
TDD can be used to solve a number of issues. The first and foremost is to ensure that code changes do not break existing code with respect to its expected behavior. Thus, if you've written a class with TDD, you write a test first, see that it fails, then write the behavior that makes it green without turning other tests red.
The side effect of writing the test cases is that you now have documentation. This means that TDD actually provides answers to two distinct problems with code. When learning a new API, regardless of what it is, you can use TDD to explore its behavior (granted, in some frameworks this can be very difficult). So, when you are exploring an API, it's OK to write some tests to document its use. You can consider this a prototyping step as well, except that prototyping assumes you throw the result away when complete. With the TDD approach you keep it, so you can always return to it long after you've learned the API.
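For instance, such a "learning test" can be as small as this (plain JUnit; String.substring is just a stand-in for whatever API you are exploring):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SubstringLearningTest {
    // Documents (and guards) our understanding: the end index is exclusive.
    @Test
    public void endIndexIsExclusive() {
        assertEquals("bc", "abcd".substring(1, 3));
    }
}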
Specific Answer to the Example Given
There are a number of approaches which attempt to solve the problem with the AnnotationProcessor. There is an assertion framework which addresses the issue by loading the Java code during the test and asserting the line on which the error/warning occurs, and there is related discussion here on Stack Overflow.
I would create a prototype, without the testing, to learn how the API works. Once I had that understanding, I would continue the TDD cycle on my project.
I agree with Bassetassen. First do a spike to understand what this external API call does and what you need for your method. Once you are comfortable with the API, you know how to proceed with TDD.
Never ever unit test against an unknown API. Follow the same principle as if you didn't own the code: isolate everything you are writing from the unknown or unowned.
Write your unit tests as if the environment processor were code that you were going to TDD later.
Now you can follow @Tom's advice, except drop step 1. Step 2's unit tests are now just a matter of mapping the outputs of the wrapper class to calls on the API of the unknown; that step is really more along the lines of an integration test.
I firmly believe changing your flow from TDD to Prototyping to TDD is a loss in velocity. Stay with the TDD until you are done, then prototype.
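To make that isolation concrete for the example above, a sketch (every name beyond the question's own class is hypothetical):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A seam we own, wrapping whatever we need from the unowned
// AnnotationProcessorEnvironment.
interface AnnotationSource {
    List<String> annotationTypeNames();
}

// Hypothetical collaborator that receives the finished occurrence report.
interface ReportSink {
    void accept(Map<String, Long> occurrenceCounts);
}

class AnnotationPresenceWarner {
    private final AnnotationSource source;
    private final ReportSink sink;

    AnnotationPresenceWarner(AnnotationSource source, ReportSink sink) {
        this.source = source;
        this.sink = sink;
    }

    void process() {
        // Build the representation and hand it off; file writing lives elsewhere.
        sink.accept(source.annotationTypeNames().stream()
                .collect(Collectors.groupingBy(n -> n, Collectors.counting())));
    }
}

A test can then drive process() through plain lambdas (or mocks of the two small interfaces) without ever touching com.sun.mirror; only the thin adapter implementing AnnotationSource against the real environment needs an integration test.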
First of all, I have to say, I'm new to mocking. So maybe I'm missing a point.
I'm also just starting to get used to the TDD approach.
So, in my actual project I'm working on a class in the business layer, while the data layer has yet to be deployed. I thought, this would be a good time to get started with mocking. I'm using Rhino Mocks, but I've come to the problem of needing to know the implementation details of a class before writing the class itself.
Rhino Mocks checks whether all the methods expected to be called are actually called. So I often need to know which mocked method is called first by the tested method, even though they could be called in any order. Because of that, I often end up writing complicated methods before I test them, because only then do I already know in which order the methods are called.
simple example:
public void CreateAandB(bool arg1, bool arg2)
{
    if (arg1)
        daoA.Create();
    else
        throw new Exception();

    if (arg2)
        daoB.Create();
    else
        throw new Exception();
}
If I want to test the error handling of this method, I'd have to know which method is called first. But I don't want to be bothered with implementation details when writing the test first.
Am I missing something?
You have two choices. If the method should result in some change in your class, then you can test the results of your method instead: call CreateAandB(true, false) and then call some other method to see whether the correct thing was created. In this situation your mock objects will probably be stubs which just provide some data.
If daoA and daoB are objects which are injected into your class and actually create data in the DB or similar, and you can't verify the results in the test, then you want to test the interaction with them. In that case you create the mocks and set the expectations, then call the method and verify that the expectations are met. In this situation your mock objects will be mocks and will verify the expected behaviour.
Yes, you are testing implementation details, but you are testing whether your method uses its dependencies correctly, which is what you want to test, not how it uses them, which are the details you are not really interested in.
EDIT
IDao daoA = MockRepository.GenerateMock<IDao>(); //create mock
daoA.Expect(dao => dao.Create()); //set expectation
...
daoA.VerifyAllExpectations(); //check that the Create method was called
You can ensure that the expectations happen in a certain order, but not using the AAA syntax, I believe (source from 2009, might have changed since; EDIT: see here for an option which might work), but it seems someone has developed an approach which might allow this here. I've never used that and can't verify it.
As for needing to know which method was called first so you can verify the exception, you have a couple of choices:
Have a different message in your exception and check that to determine which exception was raised.
Expect a call to daoA in addition to expecting the exception. If you don't get the call to daoA then the test fails as the exception must have been the first one.
Oftentimes you just need fake objects, not mocks. Mock objects are meant to test component interaction, and often you can avoid that by querying the state of the SUT directly. Most practical uses of mocks are for testing interaction with some external system (DB, file system, web service, etc.); for everything else you should be able to query system state directly.
I've been told not to test implementation details. A dependency seems like an implementation detail; however, I could also phrase it as a behavior.
Example: a LinkList depends on a storage engine to store its links (e.g. LinkStorageInterface). The constructor needs to be passed an instance of an implementation of LinkStorageInterface to do its job.
I can't say 'shouldUseLinkStorage'. But maybe I can say 'shouldStoreLinksInStorage'.
What is correct to 'test' in this case? Should I test that it stores links in a store (behavior), or not test this at all?
The dependency itself is not an expected behavior, but the actions called on the dependency most certainly are. You should test the stuff you (the caller) know about, and avoid testing the stuff that requires you to have intimate knowledge of the inner workings of the SUT.
Expanding your example a little, let's imagine that our LinkStorageInterface has the following definition (pseudo-code):
Interface LinkStorageInterface
    void WriteListToPersistentMedium(LinkList list)
End Interface
Now, since you (the caller) are providing the concrete implementation for that interface it is perfectly reasonable for you to test that WriteListToPersistentMedium() gets called when you invoke the Save() method on your LinkList.
A test might look like this, again using pseudo-code:
void ShouldSaveLinkListToPersistentMedium()
    define da = new MockLinkListStorage()
    define list = new LinkList(da)
    list.Save()
    Assert.Method(da.WriteListToPersistentMedium).WasCalledWith(list)
end method
You have tested expected behavior without testing implementation-specific details of either your SUT or your mock. What you want to avoid testing (mostly) are things like:
Order in which methods were called
Making a method, or property public just so you can check it
Anything that does not directly involve the expected behavior you are testing
Again, a dependency is something that you as the consumer of the class are providing, so you expect it to be used. Otherwise there is no point in having that dependency in the first place.
LinkStorageInterface is not an implementation detail; its name suggests an interface to an engine. In which case the name shouldUseLinkStorage has more value than shouldStoreLinksInStorage.
That's my 2 pennies worth!
I know I can figure out the name of the method as it's being executed; I'm just wondering if there is a way to get it from the setup method. I guess an attribute would work, but getting it from the setup method would be best.
EDIT: NUnit
I know this is going to sound negative, but don't do it! :-)
The idea behind the setup method is that it executes something required by every test, which means that it doesn't matter which test is being executed, so you don't need to know the name of the method.
If you are after different data used in initialisation, then call a separate method with the data passed as a parameter from your test method.
If you really want what you are asking for, then you may need a separate method that takes the name of the current method as a parameter and is called from your test method.