TDD and mocking

First of all, I have to say, I'm new to mocking. So maybe I'm missing a point.
I'm also just starting to get used to the TDD approach.
In my current project I'm working on a class in the business layer, while the data layer has yet to be deployed. This seemed like a good time to get started with mocking. I'm using Rhino Mocks, but I've run into the problem of needing to know the implementation details of a class before writing the class itself.
Rhino Mocks checks whether all the methods that are expected to be called are actually called. So I often need to know which mocked method the tested method calls first, even though the calls could happen in any order. Because of that, I often end up writing a complicated method before its test, because only then do I know in which order the mocked methods are called.
A simple example:
public void CreateAandB(bool arg1, bool arg2)
{
    if (arg1)
        daoA.Create();
    else
        throw new Exception();

    if (arg2)
        daoB.Create();
    else
        throw new Exception();
}
If I want to test the error handling of this method, I'd have to know which dao method is called first. But I don't want to be bothered with implementation details when writing the test first.
Am I missing something?

You have two choices. If the method should result in some change in your class, you can test the result of the method instead: call CreateAandB(true, false) and then call some other method to see whether the correct thing was created. In this situation your mock objects will probably be stubs which just provide some data.
If daoA and daoB are objects injected into your class which actually create data in the DB or similar, and you can't verify the results in the test, then you want to test the interaction with them. In that case you create the mocks and set the expectations, then call the method and verify that the expectations were met. In this situation your mock objects are true mocks and will verify the expected behaviour.
Yes, you are testing implementation details, but you are testing whether your method uses its dependencies correctly, which is what you want to test, not how it uses them, which is the detail you are not really interested in.
EDIT
IDao daoA = MockRepository.GenerateMock<IDao>(); //create the mock
daoA.Expect(dao => dao.Create()); //set the expectation
...
daoA.VerifyAllExpectations(); //check that the Create method was called
You can ensure that the expectations happen in a certain order, but I believe not with the AAA syntax (source from 2009, so this might have changed since; EDIT: see here for an option which might work). It also seems someone has developed an approach here which might allow this, but I've never used it and can't vouch for it.
As for needing to know which method was called first so you can verify the exception, you have a couple of choices:
Have a different message in each exception and check it to determine which exception was raised.
Expect a call to daoA in addition to expecting the exception. If the call to daoA never happens, the test fails, because the exception must have been the first one.
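For the first option, a minimal NUnit/Rhino Mocks sketch might look like the following. The Creator class, the IDao interface, and the per-argument exception messages are assumptions for illustration, not from the original post:
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class CreateAandBTests
{
    [Test]
    public void CreateAandB_WithBadArg2_ReportsArg2()
    {
        IDao daoA = MockRepository.GenerateMock<IDao>();
        IDao daoB = MockRepository.GenerateMock<IDao>();
        var sut = new Creator(daoA, daoB); // hypothetical class hosting CreateAandB

        var ex = Assert.Throws<Exception>(() => sut.CreateAandB(true, false));

        // The message tells us which check failed, so the test doesn't care
        // whether daoA.Create() happened to run before the throw.
        StringAssert.Contains("arg2", ex.Message);
    }
}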

Often you just need fake objects, not mocks. Mock objects are meant to test component interaction, and often you can avoid that by querying the state of the SUT directly. The most practical use of mocks is to test interaction with some external system (DB, file system, web service, etc.); for everything else you should be able to query the system state directly.
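As a rough sketch of that fake-object style, reusing the IDao shape from the question above (the Creator class and FakeDao are invented for illustration): instead of scripting expectations up front, you hand-roll a trivial fake and make plain assertions afterwards.
public class FakeDao : IDao
{
    public int CreateCalls { get; private set; }
    public void Create() { CreateCalls++; }
}

[Test]
public void CreateAandB_WithValidArgs_CreatesBoth()
{
    var daoA = new FakeDao();
    var daoB = new FakeDao();
    var sut = new Creator(daoA, daoB); // hypothetical class under test

    sut.CreateAandB(true, true);

    // Plain state assertions after the fact; no mocking framework involved.
    Assert.AreEqual(1, daoA.CreateCalls);
    Assert.AreEqual(1, daoB.CreateCalls);
}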

Related

Ruby test failure with exception

I have some tests in Ruby which call some framework methods and classes. The problem I'm facing is that some methods can throw exceptions, since they contact services outside my control. In the teardown method of the test I want to have the actual result of the test (success, or failed with XXX) so I can do some things based on it. Is there a way to do that, other than wrapping the whole test in a begin/rescue block?
A code example of what you're trying to test might help, but depending on the framework API you're working with, you can stub out the method that tries to contact the service and would otherwise raise the exception. If the one method you are calling and trying to test is also the one that raises the exception (not very DRY/SRP code), then it gets trickier. In that case you can either do a begin/rescue around that method call, or stub out the call to that larger method. The problem with the latter is that it can obscure the value of your test.

EasyMock-aware debugger in IntelliJ?

Maybe this is counterproductive, I don't know, but right now I need a debugger in IntelliJ that is aware of EasyMock mocks, and especially of what the mocked methods actually return.
For example, I have a transport interface, ITransport, which has some methods that have to be mocked and where I only want some of the methods to return something. E.g.
ITransport myTransport = createMock(ITransport.class);
I want myTransport.getID() to return a mocked ID 10.
expect(myTransport.getID()).andReturn(10);
With ID 10, I want a method to be invoked once:
myTransport.publish(any(...));
expectLastCall().once();
Something in the transport class breaks, myTransport isn't called, and my test fails. Now I just want to step through the code with the debugger to check why my test fails, so I add a breakpoint and inspect the values of the mocked myTransport object. But they are all "null", even the ID. After some brief investigation I assume the cause is the EasyMock mock class: it doesn't really update the object with values (which sounds reasonable) and instead returns the mocked value at runtime when the method is called.
So, are there any mock-aware debuggers for IntelliJ that let me see which value a method will eventually return?
Yes, and before I receive responses saying that "the debugger is not required if you write unit tests for everything", I just want to state that I know about that. This is legacy code, or at least code that wasn't written with testing in mind.
This may not be what you're looking for... but it feels like the problem is more on the debugging approach.
A mock object is really just that - a mock - meaning it's a fake, empty object that doesn't do anything unless you specifically tell it to. When your debugger inspects the mock object, it won't find any values that you did not specifically program it to return. It's not meant to hold values.
EasyMock has an argument capture feature, but since you just want it for debugging, this is probably the wrong approach. Mockito has a spying feature that could be suitable for what you want, but it would involve additional mock-programming statements.
I would say the easiest approach would be to implement your own ITransport just for use in your test class. That way you can implement getID() to always return 10 and put an assert statement inside your publish(). You can implement whatever other methods you need in order to capture additional data for debugging purposes, and you get to keep this test-only ITransport for either shared use or future debugging needs.
Indeed, the methods are mocked but the internal implementation of the class is left to itself.
Usually, you don't need to know what is returned since you're the one who recorded it in the first place.
You can also evaluate myTransport.getID() in your debugger. But doing this will consume the expectations.
However, it seems like a good idea to be able to list all the currently pending expectations on a mock, and maybe to have a peek function. You can request such features on the EasyMock bug tracker: http://jira.codehaus.org/browse/EASYMOCK

NMock2.0 - how to stub a non-interface call?

I have a class API which has full code coverage and uses DI to mock out all the logic in the main class function (Job.Run) which does all the work.
I found a bug in production where we weren't doing some validation on one of the data input fields.
So I added a stub function called ValidateFoo() and wrote a unit test against it that expects a JobFailedException. I ran the test - it failed, obviously, because that function was empty. I added the validation logic, and now the test passes.
Great, now we know the validation works. The problem is: how do I write a test to make sure that ValidateFoo() is actually called inside Job.Run()? ValidateFoo() is a private method of the Job class, so it's not on an interface...
Is there any way to do this with NMock2.0? I know TypeMock supports faking non-interface types, but changing mock libraries right now is not an option. At this point, if NMock can't support it, I will simply add the ValidateFoo() call to the Run() method and test things manually - which I'd obviously prefer not to do, considering my Job.Run() method has 100% coverage right now. Any advice? Thanks very much, it is appreciated.
EDIT: The other option I have in mind is to just create an integration test for my Job.Run functionality (injecting true implementations of the composite objects instead of mocks). I will give it a bad input value for that field and then validate that the job failed. This works and covers my case - but it's not really a unit test; it's an integration test that tests one unit of functionality... hmm.
EDIT2: Is there any way to do this? Anyone have ideas? Maybe TypeMock - or a better design?
The current version of NMock2 can mock concrete types (I don't remember exactly which version added this, but we're using version 2.1) using the mostly familiar syntax:
Job job = mockery.NewMock<Job>(MockStyle.Transparent);
Stub.On(job).Method("ValidateFoo").Will(Return.Value(true));
MockStyle.Transparent specifies that anything you don't stub or expect should be handled by the underlying implementation - so you can stub and set expectations for methods on an instance you're testing.
However, you can only stub and set expectations on public methods (and properties), which must also be virtual or abstract. So to avoid relying on integration testing, you have two options:
Make Job.ValidateFoo() public and virtual.
Extract the validation logic into a new class and inject an instance into Job.
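A rough sketch of the second option; the interface and member names here are guesses for illustration, not the poster's actual code:
public interface IFooValidator
{
    void ValidateFoo(JobInput input); // throws JobFailedException on bad data
}

public class Job
{
    private readonly IFooValidator _validator;

    public Job(IFooValidator validator)
    {
        _validator = validator;
    }

    public void Run(JobInput input)
    {
        _validator.ValidateFoo(input); // now an interface call, easy to mock or stub
        // ... the rest of the existing Run() logic ...
    }
}
Your existing ValidateFoo() tests then move to the concrete validator class, and the Job.Run() test only needs to assert that ValidateFoo() was called on the mocked validator.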
Since all private methods are called by public methods (unless they're invoked via reflection at runtime), they end up being executed when the public methods run. Those private methods cause changes to the object beyond simply executing code, such as setting class fields or calling into other objects. I'd find a way to get at those "results" of calling the private method (or mock the things that shouldn't be executed inside the private methods).
I can't see the class under test, but another problem that could be pushing you to want access to the private methods is that it's a very big class with a boatload of private functionality. Such classes may need to be broken down into smaller classes, and some of those private methods may turn into simpler public ones.

BDD/TDD: can dependencies be a behavior?

I've been told not to test implementation details. A dependency seems like an implementation detail; however, I could also phrase it as a behavior.
Example: a LinkList depends on a storage engine to store its links (e.g. a LinkStorageInterface). The constructor needs to be passed an instance of an implementation of LinkStorageInterface to do its job.
I can't say 'shouldUseLinkStorage'. But maybe I can say 'shouldStoreLinksInStorage'.
What is correct to 'test' in this case? Should I test that it stores links in the storage (behavior), or not test this at all?
The dependency itself is not an expected behavior, but the actions called on the dependency most certainly are. You should test the stuff you (the caller) know about, and avoid testing the stuff that requires you to have intimate knowledge of the inner workings of the SUT.
Expanding your example a little, let's imagine that our LinkStorageInterface has the following definition (pseudo-code):
Interface LinkStorageInterface
    void WriteListToPersistentMedium(LinkList list)
End Interface
Now, since you (the caller) are providing the concrete implementation for that interface it is perfectly reasonable for you to test that WriteListToPersistentMedium() gets called when you invoke the Save() method on your LinkList.
A test might look like this, again using pseudo-code:
void ShouldSaveLinkListToPersistentMedium()
    define da = new MockLinkListStorage()
    define list = new LinkList(da)
    list.Save()
    Assert.Method(da.WriteListToPersistentMedium).WasCalledWith(list)
end method
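Since the rest of this page leans on Rhino Mocks, roughly the same test in its AAA syntax could look like this (a sketch only; the type names are taken from the pseudo-code above, and in C# the storage type would be an interface):
[Test]
public void ShouldSaveLinkListToPersistentMedium()
{
    var storage = MockRepository.GenerateMock<LinkStorageInterface>();
    var list = new LinkList(storage);

    list.Save();

    // Verify the expected interaction after the fact.
    storage.AssertWasCalled(s => s.WriteListToPersistentMedium(list));
}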
You have tested expected behavior without testing implementation-specific details of either your SUT or your mock. What you want to avoid testing (mostly) are things like:
Order in which methods were called
Making a method, or property public just so you can check it
Anything that does not directly involve the expected behavior you are testing
Again, a dependency is something that you as the consumer of the class are providing, so you expect it to be used. Otherwise there is no point in having that dependency in the first place.
LinkStorageInterface is not an implementation detail - its name suggests an interface to a storage engine. In which case the name shouldUseLinkStorage has more value than shouldStoreLinksInStorage.
That's my 2 pennies worth!

TDD and DI: dependency injections becoming cumbersome

C#, nUnit, and Rhino Mocks, if that turns out to be applicable.
My quest with TDD continues as I attempt to wrap tests around a complicated function. Let's say I'm coding a form that, when saved, also has to save dependent objects within the form...answers to form questions, attachments if available, and "log" entries (such as "blahblah updated the form." or "blahblah attached a file."). This save function also fires off emails to various people, depending on how the state of the form changed during the save.
This means that, in order to fully test the form's save function with all of its dependencies, I have to inject five or six data providers to test this one function and make sure everything fired off in the right way and order. This is cumbersome when writing the multiple chained constructors for the form object to insert the mocked providers. I think I'm missing something, either in the way of refactoring or simply a better way to set up the mocked data providers.
Should I study refactoring methods further to see how this function can be simplified? How does the observer pattern sound, so that the dependent objects detect when the parent form is saved and handle themselves? I know people say to split out the function so it can be tested...meaning I'd test the individual save functions of each dependent object, but not the save function of the form itself, which dictates how each should save itself in the first place?
First, if you are following TDD, then you don't wrap tests around a complicated function. You wrap the function around your tests. Actually, even that's not right. You interweave your tests and functions, writing both at almost exactly the same time, with the tests just a little ahead of the functions. See The Three Laws of TDD.
When you follow these three laws, and are diligent about refactoring, then you never wind up with "a complicated function". Rather you wind up with many, tested, simple functions.
Now, on to your point. If you already have "a complicated function" and you want to wrap tests around it then you should:
Add your mocks explicitly, instead of through DI (e.g. something horrible like a 'test' flag and an 'if' statement that selects the mocks instead of the real objects).
Write a few tests in order to cover the basic operation of the component.
Refactor mercilessly, breaking up the complicated function into many little simple functions, while running your cobbled together tests as often as possible.
Push the 'test' flag as high as possible. As you refactor, pass your data sources down to the small simple functions. Don't let the 'test' flag infect any but the topmost function.
Rewrite tests. As you refactor, rewrite as many tests as possible to call the simple little functions instead of the big top-level function. You can pass your mocks into the simple functions from your tests.
Get rid of the 'test' flag and determine how much DI you really need. Since you now have tests written at the lower levels that can insert mocks through arguments, you probably don't need to mock out many data sources at the top level anymore.
If, after all this, the DI is still cumbersome, then think about injecting a single object that holds references to all your data sources. It's always easier to inject one thing rather than many.
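As a sketch of that last idea, on the assumption that the form's save path needs a few stores and an email sender (all of these names are invented for illustration):
// Bundle the data sources into one object so only one thing gets injected.
public class FormServices
{
    public IAnswerStore Answers { get; set; }
    public IAttachmentStore Attachments { get; set; }
    public IAuditLog Log { get; set; }
    public IEmailSender Email { get; set; }
}

public class FormSaver
{
    private readonly FormServices _services;

    public FormSaver(FormServices services)
    {
        _services = services; // one constructor parameter instead of five or six
    }

    // Save() reaches into _services for whatever it needs.
}
In a test you build a single FormServices populated with mocks, which keeps the constructor chains short.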
Use an AutoMocking container. There is one written for RhinoMocks.
Imagine you have a class with a lot of dependencies injected via constructor injection. Here's what it looks like to set it up with RhinoMocks, no AutoMocking container:
private MockRepository _mocks;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
private IAddNewBroadcastEventBroker _addNewBroadcastEventBroker;
private IBroadcastService _broadcastService;
private IChannelService _channelService;
private IDeviceService _deviceService;
private IDialogFactory _dialogFactory;
private IMessageBoxService _messageBoxService;
private ITouchScreenService _touchScreenService;
private IDeviceBroadcastFactory _deviceBroadcastFactory;
private IFileBroadcastFactory _fileBroadcastFactory;
private IBroadcastServiceCallback _broadcastServiceCallback;
private IChannelServiceCallback _channelServiceCallback;
[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _addNewBroadcastEventBroker = _mocks.DynamicMock<IAddNewBroadcastEventBroker>();
    _broadcastService = _mocks.DynamicMock<IBroadcastService>();
    _channelService = _mocks.DynamicMock<IChannelService>();
    _deviceService = _mocks.DynamicMock<IDeviceService>();
    _dialogFactory = _mocks.DynamicMock<IDialogFactory>();
    _messageBoxService = _mocks.DynamicMock<IMessageBoxService>();
    _touchScreenService = _mocks.DynamicMock<ITouchScreenService>();
    _deviceBroadcastFactory = _mocks.DynamicMock<IDeviceBroadcastFactory>();
    _fileBroadcastFactory = _mocks.DynamicMock<IFileBroadcastFactory>();
    _broadcastServiceCallback = _mocks.DynamicMock<IBroadcastServiceCallback>();
    _channelServiceCallback = _mocks.DynamicMock<IChannelServiceCallback>();
    _presenter = new BroadcastListViewPresenter(
        _addNewBroadcastEventBroker,
        _broadcastService,
        _channelService,
        _deviceService,
        _dialogFactory,
        _messageBoxService,
        _touchScreenService,
        _deviceBroadcastFactory,
        _fileBroadcastFactory,
        _broadcastServiceCallback,
        _channelServiceCallback);
    _presenter.View = _view;
}
Now, here's the same thing with an AutoMocking container:
private MockRepository _mocks;
private AutoMockingContainer _container;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _container = new AutoMockingContainer(_mocks);
    _container.Initialize();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _presenter = _container.Create<BroadcastListViewPresenter>();
    _presenter.View = _view;
}
Easier, yes?
The AutoMocking container automatically creates mocks for every dependency in the constructor, and you can access them for testing like so:
using (_mocks.Record())
{
    _container.Get<IChannelService>().Expect(cs => cs.ChannelIsBroadcasting(channel)).Return(false);
    _container.Get<IBroadcastService>().Expect(bs => bs.Start(8));
}
Hope that helps. I know my testing life has been made a whole lot easier with the advent of the AutoMocking container.
You're right that it can be cumbersome.
Proponents of the mocking methodology would point out that the code is written improperly to begin with. That is, you shouldn't be constructing dependent objects inside this method. Rather, the injection APIs should have functions that create the appropriate objects.
As for mocking up six different objects, that's true. However, if you are also unit-testing those systems, those objects should already have mocking infrastructure you can use.
Finally, use a mocking framework that does some of the work for you.
I don't have your code, but my first reaction is that your test is trying to tell you that your object has too many collaborators. In cases like this, I always find that there's a missing construct in there that should be packaged up into a higher level structure. Using an automocking container is just muzzling the feedback you're getting from your tests. See http://www.mockobjects.com/2007/04/test-smell-bloated-constructor.html for a longer discussion.
In this context, I usually find statements along the lines of "this indicates that your object has too many dependencies" or "your object has too many collaborators" to be a fairly specious claim. Of course a MVC controller or a form is going to be calling lots of different services and objects to fulfill its duties; it is, after all, sitting at the top layer of the application. You can smoosh some of these dependencies together into higher-level objects (say, a ShippingMethodRepository and a TransitTimeCalculator get combined into a ShippingRateFinder), but this only goes so far, especially for these top-level, presentation-oriented objects. That's one less object to mock, but you've just obfuscated the actual dependencies via one layer of indirection, not actually removed them.
One blasphemous piece of advice is to say that if you are dependency injecting an object and creating an interface for it that is quite unlikely to ever change (Are you really going to drop in a new MessageBoxService while changing your code? Really?), then don't bother. That dependency is part of the expected behavior of the object and you should just test them together since the integration test is where the real business value lies.
The other blasphemous piece of advice is that I usually see little utility in unit testing MVC controllers or Windows Forms. Every time I see someone mocking the HttpContext and testing to see if a cookie was set, I want to scream. Who cares if the AccountController set a cookie? I don't. The cookie has nothing to do with treating the controller as a black box; an integration test is what is needed to test its functionality (hmm, a call to PrivilegedArea() failed after Login() in the integration test). This way, you avoid invalidating a million useless unit tests if the format of the login cookie ever changes.
Save the unit tests for the object model, save the integration tests for the presentation layer, and avoid mock objects when possible. If mocking a particular dependency is hard, it's time to be pragmatic: just don't do the unit test and write an integration test instead and stop wasting your time.
The simple answer is that the code you are trying to test is doing too much. I think sticking to the Single Responsibility Principle might help.
The Save button method should only contain top-level calls that delegate the work to other objects. Those objects can then be abstracted through interfaces. Then, when you test the Save button method, you only test the interaction with the mocked objects.
The next step is to write tests for these lower-level classes, but things should get easier since you test each of them in isolation. If you need complex test setup code, that is a good indicator of a bad design (or a bad testing approach).
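In other words, the test of the Save method ends up only checking the delegation. A rough Rhino Mocks sketch; FormPresenter, the store interfaces, and Form are invented for illustration:
[Test]
public void Save_DelegatesToEachCollaborator()
{
    var answers = MockRepository.GenerateMock<IAnswerStore>();
    var attachments = MockRepository.GenerateMock<IAttachmentStore>();
    var presenter = new FormPresenter(answers, attachments);
    var form = new Form();

    presenter.Save(form);

    // Only the delegation is asserted; how answers or attachments are
    // actually persisted is covered by the lower-level tests.
    answers.AssertWasCalled(a => a.SaveAnswers(form));
    attachments.AssertWasCalled(a => a.SaveAttachments(form));
}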
Recommended reading:
Clean Code: A Handbook of Agile Software Craftsmanship
Google's guide to writing testable code
Constructor DI isn't the only way to do DI. Since you're using C#, if your constructor does no significant work you could use property DI. That simplifies things greatly in terms of your object's constructors, at the expense of complexity in your function: the function must check any dependent properties for null and throw InvalidOperationException if they're null before it begins work.
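A minimal sketch of that trade-off (the names are invented; the point is the null guard the method now has to carry):
public class FormSaver
{
    // Set by the container, or directly by a test; no constructor parameter needed.
    public IAnswerStore Answers { get; set; }

    public void Save(Form form)
    {
        if (Answers == null)
            throw new InvalidOperationException("Answers has not been set.");

        Answers.SaveAnswers(form);
    }
}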
When something is hard to test, it is usually a symptom of code quality: the code is not written to be testable (mentioned in this podcast, IIRC). The recommendation is to refactor the code so that it becomes easy to test. Some heuristics for deciding how to split the code into classes are the SRP and OCP. For more specific advice, it would be necessary to see the code in question.
