Recently I started reading about BDD and TDD and got hooked. I also got lost in the mass of unorganized sources of information and differing opinions about what is best and what isn't. In the end I settled on xBehave and xUnit; I like the fluent syntax and the ease of defining behaviors with FluentAssertions and FluentValidation.
I'm also trying to implement the onion architecture in a test project I'm working on for learning. Here's my scenario: the project, to keep it simple, is a product tracker. I can create products and track who owns them. I want to implement two specs:
when a new product is created without a name then an error should be displayed
when a new product is created without an owner assigned then an error should be displayed.
I created the spec, which instantiates a new Product and a new ProductService, which in turn creates the Product. The spec passes and the validation is occurring. Now the questions are:
How do I test my ProductRepository class? Do I test it next, or do I mock it, finish all the specs first, and then come back and test the repository classes?
Should I have mocked the ProductService class in the first spec?
Is that done at the unit-test level? Should I create a separate unit test class?
Wouldn't testing the repository make it an integration test?
So far I don't have a UI, and I'm writing my specs against the domain, service, and infrastructure layers.
Do I need to use WatiN for my UI tests?
Would switching to WatiN/SpecFlow make more sense and save effort in getting fully tested layers from top to bottom?
Here's one of the specs I worked on:
[Scenario]
public void creating_new_product_without_a_name_should_throw_error()
{
var productService = default(IProductService);
var action = default(Action);
_
.Given("a new product", () =>
productService = new ProductService() as IProductService)
.When("creating the new product without a name", () =>
action = () => productService.Create(new Product()))
.Then("it should display an error", () =>
action.ShouldThrow<ValidationException>().WithMessage("Name is required."));
}
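For context, the service behind that spec is shaped roughly like this (a simplified sketch rather than my exact code; it assumes FluentValidation's ValidateAndThrow and a ProductValidator class):
public class ProductValidator : AbstractValidator<Product>
{
    public ProductValidator()
    {
        RuleFor(p => p.Name).NotEmpty().WithMessage("Name is required.");
        RuleFor(p => p.Owner).NotNull().WithMessage("Owner is required.");
    }
}

public class ProductService : IProductService
{
    public void Create(Product product)
    {
        // Throws FluentValidation's ValidationException when a rule fails.
        new ProductValidator().ValidateAndThrow(product);
        // ...hand the product off to a repository here...
    }
}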
Thank you in advance for your replies, and please back up your answer with materials/articles/sample code explaining why your suggestion is the better one to follow.
It sounds like you are testing small parts and then plan to glue them together into something big. This is (IMO, again) not TDD (and certainly not BDD): you are not letting the tests drive the design/architecture.
To start with, don't think that much about the design. Don't think onion architecture, don't think repositories, don't think services.
Start by writing a test that verifies the whole solution, from end to end. Make that test as small as possible. To start with, just verify that e.g. a label is displayed. Then write the smallest solution that you can think of to make the test pass. Then write another test and the solution for that.
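For example, in the same xBehave style you are already using, a first end-to-end scenario might be nothing more than this (a sketch; ProductTrackerApp is a made-up top-level entry point, since there is no UI yet):
[Scenario]
public void creating_a_product_makes_it_appear_in_the_product_list()
{
    var app = default(ProductTrackerApp);
    _
    .Given("a running product tracker", () =>
        app = new ProductTrackerApp())
    .When("I create a product named 'Widget' owned by 'Alice'", () =>
        app.CreateProduct("Widget", "Alice"))
    .Then("the product list contains 'Widget'", () =>
        app.ListProducts().Should().Contain(p => p.Name == "Widget"));
}
Everything the scenario needs (validation, persistence, services) grows behind that one entry point and only gets split out when the code starts asking for it.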
After a while, have a look at the code (production, but also test) to find responsibilities. Is there an embryo of a service in there somewhere? Extract it! But don't do it prematurely. Let the code tell you what it wants to look like.
Start thinking about the domain (Product, Owner, etc.) up front and include it early in your code. Wait a little longer with persistence (repositories), but not too long.
Keep testing at this level (end-to-end). Add micro tests when necessary. So my answer to the question "how do I test my ProductRepository/Service class" is a question to you: do you need to? Or is it sufficiently covered by the end-to-end tests? If not, why?
A simple question: How do you differentiate between a feature, unit and integration test?
There are a lot of differing opinions, but I'm specifically trying to determine how to organise a Laravel test which touches a model's relationship. Here is an example of some PHP code that would require testing:
public function prices()
{
return $this->hasMany(Prices::class);
}
public function getPriceAttribute()
{
return $this->prices()->first() * 2;
}
The test descriptions as I understand them (feel free to correct me):
Unit test
Tests the smallest part of your code
Does not touch the database
Does not interact with any other part of the system
Integration test
Tests part of the system working together
e.g. controllers that call helper functions, which need to be tested together
Feature test
Blackbox test
e.g. Call an api end point, see that it has returned the correct JSON response
Here is my issue given those descriptions:
My Laravel model test needs to test the smallest unit of code - the calculated accessor of a model, which makes it feel like a Unit test
But, it touches the database when it loads the model's relationship
It doesn't feel like an Integration test, because it is only touching other related models, not internal or external services
Other property accessor tests in Laravel would fall under Unit tests when they do not touch the database or the model's relationships
Separating these types of tests into integration tests would mean that a single model's tests against its properties are fragmented between integration and unit tests
So, without mocking relationships between models, where would my test belong?
If I’m interpreting your original question correctly, I think the killer constraint here is:
So, without mocking relationships between models, where would my test belong?
If mocking isn't allowed and you're required to touch a DB then, by your (and Google's) definition, it has to be an integration/medium-size test :)
The way I think of this is that the get-price-attribute functionality is separate from the DB. Even though it lives in the model, the prices could come from anywhere. Right now it's an RDBMS, but what if your org got really big and it was split out into another service? Basically, I believe the capability of getPriceAttribute is distinct from the storage of the attributes:
public function getPriceAttribute()
{
return $this->prices()->first() * 2;
}
If you buy into this reasoning, it creates a logical separation that supports unit tests. prices() can be mocked to return a collection of 0, 1, and many (2) results. These tests can then be executed as unit tests, for orders-of-magnitude faster execution (on the order of 1 ms vs. potentially 10s or 100s of ms talking to a local DB).
I am not familiar with the PHP test ecosystem, but one way to do this could be with a test-specific subclass (not sure if the following is valid PHP :p):
class PricedModel extends YourModel {
    private $stub_prices;

    public function __construct($stub_prices_supporting_first) {
        $this->stub_prices = $stub_prices_supporting_first;
    }

    public function prices() {
        return $this->stub_prices;
    }
}
tests
function test_priced_model_0_prices() {
    $p = new PricedModel(new Prices(array()));
    $this->assertEquals(null, $p->getPriceAttribute());
}

function test_priced_model_1_price() {
    $p = new PricedModel(new Prices(array(1)));
    $this->assertEquals(2, $p->getPriceAttribute());
}

function test_priced_model_2_prices() {
    $p = new PricedModel(new Prices(array(5, 1)));
    $this->assertEquals(10, $p->getPriceAttribute());
}
The above should hopefully allow you to fully control the input into the getPriceAttribute method to support direct, IO-free unit testing.
——
Also, all the unit tests above can tell you is that you're able to process prices correctly; they don't give you any feedback on whether you're able to query prices!
What distinguishes the tests is their respective goal:
Unit testing aims at finding those bugs that can be found in isolated, small parts of the software. (Note that this does not say you must isolate - it only means your focus is on the isolated code. Isolation and mocking are often not needed to reach this goal: think of a call to a sin function - you almost never need to mock it; you let your system under test just call the original one.)
Integration testing aims at finding bugs in the interaction of two or more components, for example mutual misconceptions about an interface. These bugs cannot be found in the isolated software: if you test code in isolation, you also write your tests based on your (possibly wrong) understanding of the other components.
Feature tests as you describe them will then have the goal of finding further bugs, which the other tests so far could not detect. One example of such a bug could be that an old version of the feature was integrated (which was correct at that time, but lacked some functionality).
The conclusion, although it may be surprising, is that it is not, strictly speaking, forbidden to make database accesses in unit testing. Consider the following scenario: you start writing unit tests and mock the database accesses. Later, you realize you can be lazier and just use the database without mocking - but otherwise leave all the tests as they are. Your tests have not changed, and they will continue finding the bugs in the isolated code as before. They may run a bit slower now, and the setup may be more complex than with the mocked database, but the goal of the test suite is the same - with and without mocking the database.
This scenario simplifies things a bit, because there may be test cases that can only be done with a mock: for example, testing the case where the database gets corrupted in a specific way and your code handles it properly. With the real database such test cases may be practically impossible to set up.
I have read, or tried to read, far too many "how-to"s on this and have gotten exactly nowhere. Unity? System.Web.Http.Dependencies? Ninject? StructureMap? Ugh. I just want something simple that works! I just can't figure out what the current state of this is. There are wildly different approaches, and the examples appear to be incomplete. Heck, the best lead had a sample project with it ... that I can't load in VS2010 or 2012. ARG! I wasted 3/4 of the day on something that I feel should have taken half an hour at most before moving on. It's just plumbing!
I have a repository that's based on generics to process a number of data sets that all support the same operations.
IRepository<T>
I want to control which repository each data set is bound to. This will allow me to bind everything to a test XML repository, transitioning them over to a SQL repository as the project advances.
I sure would appreciate some help getting this going! Thank you!
Sounds like you are at the state I was a couple of years ago.
Note: if you need any further help I can send you some code. It's just hard to put all the code in here.
I'll try to explain the current architecture in the project I am working on. This post is a bit long-winded, but I am trying to give you a big picture of how using IoC can help you in many ways.
So I use Ninject. After trying to use Castle Windsor for a while I found Ninject easy to get up and running. Ninject has a cool website that will get you started.
First of all my project structure is as follows: (top down and it's MVC)
View - razor
ViewModel - I use 1 view model per view
ViewModelBuilder - Builds my view models for my views (used to abstract code away from my controller so my controller stays neat and tidy)
AutoMapper - to map domain entities to my view models
Controller - calls my service layer to get Domain Entities
Domain Entities - representations of my domain
ServiceLayer (business layer) - Calls my repository layer to get Domain entities or collections of these
AutoMapper again - to map custom types from my 3rd party vendors into my domain entities
RepositoryLayer - does CRUD operations to my data stores
This is a hierarchy, but Domain Entities kind of sit alongside and are used in a few different layers.
Note: some extra tools mentioned in this post are:
AutoMapper - maps entities to other entities - eliminates the need to write loads of mapping code
Moq - Allows you to mock stuff for unit testing. This is mentioned later.
Now, regarding Ninject.
Each layer is marked with an interface. This must be done so that Ninject can say to itself:
"When I find IVehicleRepository, inject a real VehicleRepository, or even a FakeVehicleRepository if I need a fake."
(this relates to your comment - "This will allow me to bind everything to a test XML repository")
Now every layer has a constructor so that Ninject (or any other IoC container) can inject what it needs to:
public VehicleManager(IVehicleRepository vehicleRepository)
{
this._vehicleRepository = vehicleRepository;
}
VehicleManager is in my service layer (not to be confused with anything to do with web services). The service layer is really what we would call the business layer; it seems a lot of people use the word "service" (even though I find it annoying, as it makes me think of web services or WCF instead of just a business layer... anyway).
Now, without getting into the nitty-gritty of Ninject setup, the following line of code in my NinjectWebCommon.cs tells Ninject what to do:
kernel.Bind<IVehicleRepository>().To<VehicleRepository>().InRequestScope();
This says:
"Hey Ninject, when I ask for IVehicleRepository, give me a concrete implementation of VehicleRepository."
As mentioned before, I could replace VehicleRepository with FakeVehicleRepository so that I didn't have to read from a real database.
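For example (a sketch; it assumes a Vehicle entity with an Id and a Type, and that IVehicleRepository only exposes GetVehicleById, so the names are illustrative). This also covers your "test XML repository" idea, since the fake could just as easily read from an XML file as hold an in-memory list:
// In NinjectWebCommon.cs, point the interface at the fake while the real database isn't ready:
kernel.Bind<IVehicleRepository>().To<FakeVehicleRepository>().InRequestScope();

public class FakeVehicleRepository : IVehicleRepository
{
    // In-memory data instead of a database.
    private readonly List<Vehicle> _vehicles = new List<Vehicle>
    {
        new Vehicle { Id = 3, Type = VehicleType.Car }
    };

    public Vehicle GetVehicleById(int id)
    {
        return _vehicles.FirstOrDefault(v => v.Id == id);
    }
}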
So, as you can now imagine, every layer is only dependent on interfaces.
I don't know how much unit testing you have done, but you can also imagine that if you wanted to unit test your service layer and it had concrete references to your repository layer, you would not be able to unit test it, as you would ACTUALLY be hitting your repository and hence reading from a real database.
Remember unit testing is called unit testing because it tests ONE thing only. Hence the word UNIT. So because everything only knows about interfaces, it means that you can test a method on your service layer and mock the repository.
So if your service layer has a method like this:
public bool ThisIsACar(int id)
{
    bool isCar = false;
    var vehicle = vehicleRepository.GetVehicleById(id);
    if (vehicle.Type == VehicleType.Car)
    {
        isCar = true;
    }
    else
    {
        isCar = false;
    }
    return isCar;
}
You would not want the vehicleRepository to actually be called, so you Moq what the VehicleRepository gives you back. You can mostly only mock stuff if it implements an interface.
So your unit test would look something like this (simplified):
[TestMethod]
public void ServiceMethodThisIsACar_Returns_True_When_VehicleIsACar()
{
    // Arrange
    var mockVehicleRepository = new Mock<IVehicleRepository>();
    mockVehicleRepository
        .Setup(x => x.GetVehicleById(It.IsAny<int>()))
        .Returns(new Vehicle { Type = VehicleType.Car });

    // Act
    var vehicleManager = new VehicleManager(mockVehicleRepository.Object);
    var isCar = vehicleManager.ThisIsACar(3);

    // Assert
    Assert.IsTrue(isCar);
}
So as you can see, at this point (and it is very simplified), you only want to test the if statement in your service layer to make sure the outcome is correct.
You would test your repository in its own unit tests and possibly mock Entity Framework if you were using it.
So, overall, I would say: use whatever IoC container gets you up and running the fastest with the least amount of pain.
I would also say, try to unit test all that you can. This is great for a few different reasons. Obviously it tests the code you have written, but it will also immediately show you if you have done something stupid like new-ing up a concrete repository. You will quickly see that you won't have an interface to mock in your unit tests, and this will lead you to go back and refactor your code.
I found that with IoC, it takes a while to just get it. It confuses the crap out of you until one day it just clicks. After that it is so easy you wonder how you ever lived without it.
Here is a list of things I can't live without:
Automapper
Moq
Fluent Validation
Resharper - some hate it, I love it, mostly for its unit testing UI.
Anyway, this is getting too long. Let me know what you think.
thanks
RuSs
In TDD how should you continue when you know what your final outcome should be, but not the processing steps you need to get there?
For example, your class is being passed an object whose API is completely new to you. You know the object has the information you need, but you don't know how to retrieve it yet. How would you go about testing this?
Do you just focus on the desired result, ignoring the steps?
Edit 1
package com.wesley_acheson.codeReview.annotations;
import com.sun.mirror.apt.AnnotationProcessor;
import com.sun.mirror.apt.AnnotationProcessorEnvironment;
public class AnnotationPresenceWarner implements AnnotationProcessor {
private final AnnotationProcessorEnvironment environment;
public AnnotationPresenceWarner(AnnotationProcessorEnvironment env) {
environment = env;
}
public void process() {
//This is what I'm testing
}
}
I'm trying to test this incomplete class. I want to test that I have the right interactions with AnnotationProcessorEnvironment within the process method. However, I'm unsure from the API docs what the right interaction is.
This will produce a file that contains details on the occurrence of each annotation within a source tree.
The actual file writing will probably be delegated to another class, however. So this class's responsibility is to create a representation of the annotation occurrences and pass it to whatever class needs to write it out.
In non-TDD code I'd probably invoke a few methods, set a breakpoint, and see what they return.
Anyway, I'm not looking for a solution to this specific example; it's more that sometimes you don't know how to get from A to B, yet you'd still like your code to be test-driven.
I'm basing my answer on this video:
http://misko.hevery.com/2008/11/11/clean-code-talks-dependency-injection/
If you have a model/business logic class that's supposed to get some data from a service, then I'd go about it this way:
Have your model class take the data that it needs in the constructor, rather than the service itself. You could then mock the data and unit test your class.
Create a wrapper for the service; you can then unit test the wrapper.
Perform a fuller test where you actually pass the data from the wrapper to the model class.
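In C# terms, the shape I have in mind is something like this (a sketch; all of the names are invented):
// 1. The model takes the data it needs, not the service, so it can be
//    unit tested without touching the unfamiliar API at all.
public class PriceCalculator
{
    private readonly decimal _basePrice;

    public PriceCalculator(decimal basePrice)
    {
        _basePrice = basePrice;
    }

    public decimal FinalPrice()
    {
        return _basePrice * 2;
    }
}

// 2. A thin wrapper that you own hides the unfamiliar service and gets its
//    own small tests; the mysterious API is confined to one place.
public interface IPriceSource
{
    decimal GetBasePrice(int productId);
}

// 3. The fuller test wires a real IPriceSource implementation to the model
//    and checks the two of them together.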
General Answer
TDD can be used to solve a number of issues; the first and foremost is to ensure that code changes do not break existing code with regard to its expected behavior. Thus, if you've written a class with TDD, you write a test first, see that it fails, then write the behavior to make it green without causing other tests to become red.
The side effect of writing the test cases is that now you have documentation. This means that TDD actually provides answers to two distinct problems with code. When learning a new API, regardless of what it is, you can use TDD to explore its behavior (granted, in some frameworks this can be very difficult). So, when you are exploring an API, it's OK to write some tests to provide documentation of its use. You can consider this a prototyping step as well, except that prototyping assumes you throw the result away when complete. With the TDD approach, you keep it, so you can always return to it long after you've learned the API.
Specific Answer to the Example Given
There are a number of approaches that attempt to solve the problem with the AnnotationProcessor. There is an assertion framework that addresses the issue by loading the Java code during the test and asserting the line on which the error/warning occurs, and there are related discussions here on Stack Overflow.
I would create a prototype without the testing to get knowledge of how the API works. Once I had that understanding, I would continue the TDD cycle on my project.
I agree with Bassetassen. First do a spike to understand what this external API call does and what you need for your method. Once you are comfortable with the API, you know how to proceed with TDD.
Never, ever unit test against an unknown API. Follow the same principle as if you didn't own the code: isolate all the code you are writing from the unknown or unowned.
Write your unit tests as if the environmental processor was going to be code that you were going to TDD later.
Now you can follow #Tom's advice, except drop step 1. Step 2's unit tests now are just a matter of mapping the outputs of the wrapper class to calls on the API of the unknown. Step two is more along the lines of an integration test.
I firmly believe changing your flow from TDD to Prototyping to TDD is a loss in velocity. Stay with the TDD until you are done, then prototype.
First, please bear with me and all my questions. I have never used TDD before, but more and more I have come to realize that I should. I have read a lot of posts and how-to guides on TDD, but some things are still not clear. Most examples used for demonstration are either math calculations or other simple operations. I also started reading Roy Osherove's book about TDD. Here are some questions I have:
If you have an object in your solution, for instance an Account class, what is the benefit of testing that setting a property on it, for example the account name, and then asserting that whatever you set is right? Would this ever fail?
Another example: an account balance. You create an object with a balance of 300, then assert that the balance is actually 300. How would that ever fail? What would I be testing here? I can see that testing a subtraction operation with different input parameters would be more of a real test.
What should I actually test my objects for? Methods or properties? Sometimes you also have objects acting as services in an infrastructure layer. In the case of methods, if you have a three-tier app and the business layer is calling the data layer for some data, what gets tested in that case? The parameters? The data object not being null? What about in the case of services?
Then on to my question regarding a real-life project: if you have a greenfield project and you want to start it with TDD, what do you start with first? Do you divide your project into features and TDD each one, or do you pick arbitrarily and go from there?
For example, I have a new project and it requires a login capability. Do I start with creating User tests, Account tests, or Login tests? Which one do I start with first? What do I test in that class first?
Let's say I decide to create a User class that has a username and password and some other properties. I'm supposed to create the test first, fix all build errors, run the test and watch it fail, then fix again to get a green light, then refactor. So what are the first tests I should create for that class? For example, is it:
Username_Length_Greater_Than_6
Username_Length_Less_Than_12
Password_Complexity
If I assert that the length is greater than 6, how is that testing the code? Do we test that an error is thrown if it's less than 6?
I am sorry if I was repetitive with my questions. I'm just trying to get started with TDD and I have not been able to make the mindset change. Thank you, and hopefully someone can help me figure out what I am missing here. By the way, does anyone know of any discussion groups or chats about TDD that I can join?
Have a look at low-level BDD. This post by Dan North introduces it quite well.
Rather than testing properties, think about the behavior you're looking for. For instance:
Account Behavior:
should allow a user to choose the account name
should allow funds to be added to the account
User Registration Behavior:
should ensure that all usernames are between 6 and 12 characters
should ask the password checker if the password is complex enough <-- you'd use a mock here
These would then become tests for each class, with the "should" becoming the test name. Each test is an example of how the class can be used valuably. Instead of testing methods and properties, you're showing someone else (or your future self) why the class is valuable and how to change it safely.
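For example, in C# with NUnit and Moq, the user-registration behaviors might turn into something like this (a sketch; UserRegistration and IPasswordChecker are invented names):
[TestFixture]
public class UserRegistrationBehaviour
{
    private Mock<IPasswordChecker> _checker;
    private UserRegistration _registration;

    [SetUp]
    public void SetUp()
    {
        _checker = new Mock<IPasswordChecker>();
        _checker.Setup(c => c.IsComplexEnough(It.IsAny<string>())).Returns(true);
        _registration = new UserRegistration(_checker.Object);
    }

    [Test]
    public void Should_ensure_that_all_usernames_are_between_6_and_12_characters()
    {
        Assert.Throws<ArgumentException>(() => _registration.Register("bob", "a complex password"));
    }

    [Test]
    public void Should_ask_the_password_checker_if_the_password_is_complex_enough()
    {
        _registration.Register("alice_smith", "a complex password");
        _checker.Verify(c => c.IsComplexEnough("a complex password"));
    }
}
The second test is where the mock earns its keep: the behavior is "it asks the checker", so that collaboration is exactly what gets asserted.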
We also do something in BDD called "outside-in". So start with the GUI (or normally the controller / presenter, since we don't often unit-test the GUI).
You already know how the GUI will use the controller. Now write an example of that. You'll probably have more than one aspect of behavior, so write more examples until the controller works. The controller will have a number of collaborating classes that you haven't written yet, so mock those out - just dependency inject them via an interface. You can write them later.
When you've finished with the controller, replace the next thing you've mocked out in the real system by real code, and test-drive that. Oh, and don't bother mocking out domain objects (like Account) - it'll be a pain in the neck - but do inject any complex behavior into them and mock that out instead.
This way, you're always writing the interface that you wish you had - something that's easy to use - for every class. You're describing the behavior of that class and providing some examples of how to use it. You're making it safe and easy to change, and the appropriate design will emerge (feel free to be guided by patterns, thoughtful common sense and experience).
BTW, with Login, I tend to work out what the user wants to log in for, then code that first. Add Login later - it's usually not very risky and doesn't change much once it's written, so you may not even need to unit-test it. Up to you.
Test until fear is replaced by boredom. Property accessors and constructors have a high cost-to-benefit ratio for writing tests against, so I usually test them indirectly as part of some other (higher-level) test.
For a new project, I'd recommend looking at ATDD. Find a user-story that you want to pick first (based on user value). Write an acceptance test that should pass when the user story is done. Now drill down into the types that you'd need to get the AT to pass -- using TDD. The acceptance test will tell you which objects and what behaviors are required. You then implement them one at a time using TDD. When all your tests (incl your acc. test) pass - you pick up the next user story and repeat.
Let's say you pick 'Create user' as your first story. Then you write examples of how that should work. Turn them into automated acceptance tests.
create valid user -> account should be created
create invalid user ( diff combinations that show what is invalid ) -> account shouldn't be created, helpful error shown to the user
AccountsVM.CreateUser(username, password)
AccountsVM.HasUser(username)
AccountsVM.ErrorMessage
The test would show that you need the above. You then go and test-drive them out.
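As a sketch (NUnit-style; it assumes AccountsVM is the view model the acceptance tests drive, as above):
[Test]
public void Creating_a_valid_user_creates_the_account()
{
    var accounts = new AccountsVM();

    accounts.CreateUser("alice_smith", "S3cure!pass");

    Assert.IsTrue(accounts.HasUser("alice_smith"));
}

[Test]
public void Creating_an_invalid_user_shows_a_helpful_error()
{
    var accounts = new AccountsVM();

    accounts.CreateUser("bob", "");   // username too short, password missing

    Assert.IsFalse(accounts.HasUser("bob"));
    Assert.IsNotEmpty(accounts.ErrorMessage);
}
Each acceptance test then pulls the smaller, TDD'd pieces (validation, persistence, and so on) into existence as you make it pass.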
Don't test what is too simple to break.
Getters and setters are too simple to be broken; that is, the code is so simple that an error cannot happen.
You test the public methods and assert that the response is as expected. If a method returns void, you have to test its "collateral consequences" (sometimes this is not easy, e.g. testing that an email was sent). When this happens you can use mocks to test not the response but how the method executes (you ask the mock whether the class under test called it in the desired way).
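For example, with Moq in C# (the same idea applies with Mockito or EasyMock; IMailer and RegistrationService are invented names):
[Test]
public void Registering_a_user_sends_a_welcome_email()
{
    var mailer = new Mock<IMailer>();
    var service = new RegistrationService(mailer.Object);

    service.Register("alice@example.com");   // returns void, so there is no response to assert on

    // Assert on the collaboration instead of on a return value.
    mailer.Verify(m => m.Send("alice@example.com", It.IsAny<string>()), Times.Once());
}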
I would start by doing katas to learn the basics: JUnit and TestNG, then Hamcrest, then reading the EasyMock or Mockito documentation.
Look for katas on GitHub, or here:
http://codekata.pragprog.com
http://codingdojo.org/
The first test should be the easiest one! Maybe one that just forces you to create the CUT (class under test).
But again, try katas!
http://codingdojo.org/cgi-bin/wiki.pl?KataFizzBuzz
C#, NUnit, and Rhino Mocks, if that turns out to be applicable.
My quest with TDD continues as I attempt to wrap tests around a complicated function. Let's say I'm coding a form that, when saved, has to also save dependent objects within the form...answers to form questions, attachments if available, and "log" entries (such as "blahblah updated the form." or "blahblah attached a file."). This save function also fires off emails to various people depending on how the state of the form changed during the save function.
This means that in order to fully test out the form's save function with all of its dependencies, I have to inject five or six data providers to test this one function and make sure everything fired off in the right way and order. This is cumbersome when writing the multiple chained constructors for the form object to insert the mocked providers. I think I'm missing something, either in the way of refactoring or simply a better way to set up the mocked data providers.
Should I study refactoring methods further to see how this function can be simplified? How does the observer pattern sound, so that the dependent objects detect when the parent form is saved and handle themselves? I know people say to split out the function so it can be tested... meaning I test the individual save functions of each dependent object, but not the save function of the form itself, which dictates how each should save itself in the first place?
First, if you are following TDD, then you don't wrap tests around a complicated function. You wrap the function around your tests. Actually, even that's not right. You interweave your tests and functions, writing both at almost exactly the same time, with the tests just a little ahead of the functions. See The Three Laws of TDD.
When you follow these three laws, and are diligent about refactoring, then you never wind up with "a complicated function". Rather you wind up with many, tested, simple functions.
Now, on to your point. If you already have "a complicated function" and you want to wrap tests around it then you should:
Add your mocks explicitly, instead of through DI. (e.g. something horrible like a 'test' flag and an 'if' statement that selects the mocks instead of the real objects).
Write a few tests in order to cover the basic operation of the component.
Refactor mercilessly, breaking up the complicated function into many little simple functions, while running your cobbled together tests as often as possible.
Push the 'test' flag as high as possible. As you refactor, pass your data sources down to the small simple functions. Don't let the 'test' flag infect any but the topmost function.
Rewrite tests. As you refactor, rewrite as many tests as possible to call the simple little functions instead of the big top-level function. You can pass your mocks into the simple functions from your tests.
Get rid of the 'test' flag and determine how much DI you really need. Since you have tests written at the lower levels that can insert mocks through arguments, you probably don't need to mock out many data sources at the top level anymore.
If, after all this, the DI is still cumbersome, then think about injecting a single object that holds references to all your data sources. It's always easier to inject one thing rather than many.
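Something along these lines (a sketch; the names are placeholders for your real data sources):
// One aggregate dependency instead of five or six constructor parameters.
public class FormDataSources
{
    public IAnswerStore Answers { get; set; }
    public IAttachmentStore Attachments { get; set; }
    public IAuditLog Log { get; set; }
    public IEmailSender Email { get; set; }
}

public class TrackedForm
{
    private readonly FormDataSources _sources;

    public TrackedForm(FormDataSources sources)
    {
        _sources = sources;
    }

    // Save(...) pulls whatever it needs from _sources.
}
In tests you build one FormDataSources full of mocks and pass it in, instead of threading six constructor arguments through every fixture.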
Use an AutoMocking container. There is one written for RhinoMocks.
Imagine you have a class with a lot of dependencies injected via constructor injection. Here's what it looks like to set it up with RhinoMocks, no AutoMocking container:
private MockRepository _mocks;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
private IAddNewBroadcastEventBroker _addNewBroadcastEventBroker;
private IBroadcastService _broadcastService;
private IChannelService _channelService;
private IDeviceService _deviceService;
private IDialogFactory _dialogFactory;
private IMessageBoxService _messageBoxService;
private ITouchScreenService _touchScreenService;
private IDeviceBroadcastFactory _deviceBroadcastFactory;
private IFileBroadcastFactory _fileBroadcastFactory;
private IBroadcastServiceCallback _broadcastServiceCallback;
private IChannelServiceCallback _channelServiceCallback;
[SetUp]
public void SetUp()
{
_mocks = new MockRepository();
_view = _mocks.DynamicMock<IBroadcastListView>();
_addNewBroadcastEventBroker = _mocks.DynamicMock<IAddNewBroadcastEventBroker>();
_broadcastService = _mocks.DynamicMock<IBroadcastService>();
_channelService = _mocks.DynamicMock<IChannelService>();
_deviceService = _mocks.DynamicMock<IDeviceService>();
_dialogFactory = _mocks.DynamicMock<IDialogFactory>();
_messageBoxService = _mocks.DynamicMock<IMessageBoxService>();
_touchScreenService = _mocks.DynamicMock<ITouchScreenService>();
_deviceBroadcastFactory = _mocks.DynamicMock<IDeviceBroadcastFactory>();
_fileBroadcastFactory = _mocks.DynamicMock<IFileBroadcastFactory>();
_broadcastServiceCallback = _mocks.DynamicMock<IBroadcastServiceCallback>();
_channelServiceCallback = _mocks.DynamicMock<IChannelServiceCallback>();
_presenter = new BroadcastListViewPresenter(
_addNewBroadcastEventBroker,
_broadcastService,
_channelService,
_deviceService,
_dialogFactory,
_messageBoxService,
_touchScreenService,
_deviceBroadcastFactory,
_fileBroadcastFactory,
_broadcastServiceCallback,
_channelServiceCallback);
_presenter.View = _view;
}
Now, here's the same thing with an AutoMocking container:
private MockRepository _mocks;
private AutoMockingContainer _container;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
[SetUp]
public void SetUp()
{
_mocks = new MockRepository();
_container = new AutoMockingContainer(_mocks);
_container.Initialize();
_view = _mocks.DynamicMock<IBroadcastListView>();
_presenter = _container.Create<BroadcastListViewPresenter>();
_presenter.View = _view;
}
Easier, yes?
The AutoMocking container automatically creates mocks for every dependency in the constructor, and you can access them for testing like so:
using (_mocks.Record())
{
_container.Get<IChannelService>().Expect(cs => cs.ChannelIsBroadcasting(channel)).Return(false);
_container.Get<IBroadcastService>().Expect(bs => bs.Start(8));
}
Hope that helps. I know my testing life has been made a whole lot easier with the advent of the AutoMocking container.
You're right that it can be cumbersome.
Proponents of the mocking methodology would point out that the code is written improperly to begin with. That is, you shouldn't be constructing dependent objects inside this method. Rather, the injection APIs should have functions that create the appropriate objects.
As for mocking up six different objects, that's true. However, if you were also unit-testing those systems, those objects should already have mocking infrastructure you can use.
Finally, use a mocking framework that does some of the work for you.
I don't have your code, but my first reaction is that your test is trying to tell you that your object has too many collaborators. In cases like this, I always find that there's a missing construct in there that should be packaged up into a higher level structure. Using an automocking container is just muzzling the feedback you're getting from your tests. See http://www.mockobjects.com/2007/04/test-smell-bloated-constructor.html for a longer discussion.
In this context, I usually find statements along the lines of "this indicates that your object has too many dependencies" or "your object has too many collaborators" to be fairly specious claims. Of course an MVC controller or a form is going to call lots of different services and objects to fulfill its duties; it is, after all, sitting at the top layer of the application. You can smoosh some of these dependencies together into higher-level objects (say, a ShippingMethodRepository and a TransitTimeCalculator get combined into a ShippingRateFinder), but this only goes so far, especially for these top-level, presentation-oriented objects. That's one less object to mock, but you've just obfuscated the actual dependencies via one layer of indirection, not actually removed them.
One blasphemous piece of advice is to say that if you are dependency injecting an object and creating an interface for it that is quite unlikely to ever change (Are you really going to drop in a new MessageBoxService while changing your code? Really?), then don't bother. That dependency is part of the expected behavior of the object and you should just test them together since the integration test is where the real business value lies.
The other blasphemous piece of advice is that I usually see little utility in unit testing MVC controllers or Windows Forms. Every time I see someone mocking the HttpContext and testing to see if a cookie was set, I want to scream. Who cares if the AccountController set a cookie? I don't. The cookie has nothing to do with treating the controller as a black box; an integration test is what is needed to test its functionality (hmm, a call to PrivilegedArea() failed after Login() in the integration test). This way, you avoid invalidating a million useless unit tests if the format of the login cookie ever changes.
Save the unit tests for the object model, save the integration tests for the presentation layer, and avoid mock objects when possible. If mocking a particular dependency is hard, it's time to be pragmatic: just don't do the unit test and write an integration test instead and stop wasting your time.
The simple answer is that the code you are trying to test is doing too much. I think sticking to the Single Responsibility Principle might help.
The Save button method should only contain top-level calls that delegate things to other objects. These objects can then be abstracted behind interfaces. Then, when you test the Save button method, you only test the interaction with the mocked objects.
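As a sketch of that shape (the interfaces are invented; the point is only that Save delegates and does nothing else):
public class FormSaver
{
    private readonly IAnswerStore _answers;
    private readonly IAttachmentStore _attachments;
    private readonly IAuditLog _log;
    private readonly INotifier _notifier;

    public FormSaver(IAnswerStore answers, IAttachmentStore attachments,
                     IAuditLog log, INotifier notifier)
    {
        _answers = answers;
        _attachments = attachments;
        _log = log;
        _notifier = notifier;
    }

    public void Save(Form form)
    {
        // Top-level delegation only; each collaborator owns its own logic and its own tests.
        _answers.Save(form.Answers);
        _attachments.Save(form.Attachments);
        _log.Record("Form updated", form);
        _notifier.NotifyOfChanges(form);
    }
}
The test for Save then only verifies that each collaborator was called, while the details of saving answers, attachments, log entries, and emails live in their own, smaller fixtures.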
The next step is to write tests for these lower-level classes, but things should get easier since you only test them in isolation. If you need complex test setup code, that is a good indicator of a bad design (or a bad testing approach).
Recommended reading:
Clean Code: A Handbook of Agile Software Craftsmanship
Google's guide to writing testable code
Constructor DI isn't the only way to do DI. Since you're using C#, if your constructor does no significant work you could use property DI. That simplifies things greatly in terms of your object's constructors, at the expense of complexity in your function: the function must check the dependent properties for null and throw InvalidOperationException if any of them are, before it begins work.
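Roughly like this (a sketch; the guard clause is the price you pay for the simpler constructor):
public class FormSaver
{
    // Property DI: set by the container (or by the test) after construction.
    public IAnswerStore Answers { get; set; }

    public void Save(Form form)
    {
        if (Answers == null)
            throw new InvalidOperationException("Answers must be set before calling Save.");

        Answers.Save(form.Answers);
    }
}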
When it is hard to test something, it is usually a symptom of the code's quality: the code is not testable (mentioned in this podcast, IIRC). The recommendation is to refactor the code so that it becomes easy to test. Some heuristics for deciding how to split the code into classes are the SRP and OCP. For more specific advice, it would be necessary to see the code in question.