How to Inject Controller for MVC4/VS2012/Web API - asp.net-web-api

I have read or tried to read far too many "how to"s on this and have gotten exactly nowhere. Unity? System.Web.Http.Dependencies? Ninject? StructureMap? Ugh. I just want something simple that works! I just can't figure out what the current state of this is. There are wildly different approaches and the examples appear to be incomplete. Heck, the best lead had a sample project with it ... that I can't load in VS2010 or 2012. ARG! I wasted 3/4 of the day on something that I feel should have taken half an hour at most! It's just plumbing!
I have a repository, IRepository, that's based on generics to process a number of data sets that all support the same operations.
I want to control which repository each data set is bound to. This will allow me to bind everything to a test XML repository, transitioning them over to a SQL repository as the project advances.
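Roughly, the shape is something like this (a simplified sketch, not my exact code):

    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> GetAll();
        void Save(T entity);
    }

with an XML-backed implementation now, and a SQL-backed one later.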
I sure would appreciate some help getting this going! Thank you!

Sounds like you are at the state I was a couple of years ago.
Note: if you need any further help I can send you some code. It's just hard to put all the code in here.
I'll try to explain the current architecture in the project I am working on. This post is a bit long-winded, but I am trying to give you a big picture of how using IoC can help you in many ways.
So I use Ninject. After trying to use Castle Windsor for a while, I found Ninject easy to get up and running. Ninject has a cool website that will get you started.
First of all my project structure is as follows: (top down and it's MVC)
View - razor
ViewModel - I use 1 view model per view
ViewModelBuilder - Builds my view models for my views (used to abstract code away from my controller so my controller stays neat and tidy)
AutoMapper - to map domain entities to my view models
Controller - calls my service layer to get Domain Entities
Domain Entities - representations of my domain
ServiceLayer (business layer) - Calls my repository layer to get Domain entities or collections of these
AutoMapper again - to map custom types from my 3rd party vendors into my domain entities
RepositoryLayer - does CRUD operations to my data stores
This is a hierarchy, but Domain Entities kind of sit alongside it and are used in a few different layers.
Note: some extra tools mentioned in this post are:
AutoMapper - maps entities to other entities - eliminates the need to write loads of mapping code
Moq - Allows you to mock stuff for unit testing. This is mentioned later.
Now, regarding Ninject.
Each layer is marked with an interface. This must be done so Ninject can say to itself:
When I find IVehicleRepository, inject it with a real VehicleRepository, or even inject it with a FakeVehicleRepository if I need a fake.
(this relates to your comment - "This will allow me to bind everything to a test XML repository")
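For example, the repository contract could look like this (a minimal sketch; GetVehicleById is the method used in the snippets below, and both VehicleRepository and FakeVehicleRepository would implement this same interface):

    public interface IVehicleRepository
    {
        Vehicle GetVehicleById(int id);
    }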
Now every layer has a constructor so that Ninject (or any other IoC container) can inject what it needs to:
private readonly IVehicleRepository _vehicleRepository;

public VehicleManager(IVehicleRepository vehicleRepository)
{
    this._vehicleRepository = vehicleRepository;
}
VehicleManager is in my service layer (not to be confused with anything to do with web services). The service layer is really what we would call the business layer. It seems that a lot of people are using the word service, even though I find it annoying because it makes me think about web services or WCF instead of just a business layer... anyway...
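At the very top of the chain the controller gets the service injected the same way. A minimal sketch (IVehicleManager is an assumed interface over VehicleManager, and the view model mapping is illustrative):

    public class VehicleController : Controller
    {
        private readonly IVehicleManager _vehicleManager;

        // Ninject sees IVehicleManager and injects the concrete VehicleManager.
        public VehicleController(IVehicleManager vehicleManager)
        {
            _vehicleManager = vehicleManager;
        }

        public ActionResult Details(int id)
        {
            var vehicle = _vehicleManager.GetVehicleById(id);
            // Map the domain entity to a view model (e.g. with AutoMapper).
            var viewModel = Mapper.Map<VehicleViewModel>(vehicle);
            return View(viewModel);
        }
    }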
Now, without getting into the nitty gritty of Ninject setup, the following line of code in my NinjectWebCommon.cs tells Ninject what to do:
kernel.Bind<IVehicleRepository>().To<VehicleRepository>().InRequestScope();
This says:
Hey Ninject, when I ask for IVehicleRepository, give me a concrete implementation of VehicleRepository.
As mentioned before, I could replace VehicleRepository with FakeVehicleRepository so that I didn't have to read from a real database.
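A sketch of what that swap looks like; FakeVehicleRepository is assumed to be your own canned-data implementation of the same interface:

    // Production binding:
    kernel.Bind<IVehicleRepository>().To<VehicleRepository>().InRequestScope();

    // Swap a single line to run the whole app against canned data instead:
    // kernel.Bind<IVehicleRepository>().To<FakeVehicleRepository>().InRequestScope();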
So, as you can now imagine, every layer is only dependent on interfaces.
I don't know how much unit testing you have done, but you can also imagine that if you wanted to unit test your service layer and it had concrete references to your repository layer, then you would not be able to unit test it: you would ACTUALLY be hitting your repository and hence reading from a real database.
Remember, unit testing is called unit testing because it tests ONE thing only. Hence the word UNIT. So because everything only knows about interfaces, you can test a method on your service layer and mock the repository.
So if your service layer has a method like this:
public bool ThisIsACar(int id)
{
    bool isCar = false;
    var vehicle = _vehicleRepository.GetVehicleById(id);
    if (vehicle.Type == VehicleType.Car)
    {
        isCar = true;
    }
    return isCar;
}
You would not want the vehicleRepository to actually be called, so you Moq what the VehicleRepository gives you back. You can mostly only mock stuff if it implements an interface.
So your unit test would look something like this:
[TestMethod]
public void ServiceMethodThisIsACar_Returns_True_When_VehicleIsACar()
{
    // Arrange
    var mockVehicleRepository = new Mock<IVehicleRepository>();
    mockVehicleRepository
        .Setup(x => x.GetVehicleById(It.IsAny<int>()))
        .Returns(new Vehicle { Type = VehicleType.Car });

    // Act
    var vehicleManager = new VehicleManager(mockVehicleRepository.Object);
    var isCar = vehicleManager.ThisIsACar(3);

    // Assert
    Assert.IsTrue(isCar);
}
So as you can see, at this point (and it is very simplified), you only want to test the IF statement in your service layer to make sure the outcome is correct.
You would test your repository in its own unit tests, and possibly mock the Entity Framework if you were using it.
So, overall, I would say: use whatever IoC container gets you up and running the fastest with the least amount of pain.
I would also say: try to unit test all that you can. This is great for a few different reasons. Obviously it tests the code you have written, but it will also immediately show you if you have done something stupid like new-ing up a concrete repository. You will quickly see that you won't have an interface to mock in your unit tests, and this will lead you to go back and refactor your code.
I found that with IoC, it takes a while to just get it. It confuses the crap out of you until one day it just clicks. After that it is so easy you wonder how you ever lived without it.
Here is a list of things I can't live without:
Automapper
Moq
Fluent Validation
Resharper - some hate it, I love it, mostly for its unit testing UI.
Anyway, this is getting too long. Let me know what you think.
thanks
RuSs

Related

first time TDD help needed

It's (almost) my first time trying to create code by TDD principles, but I'm having trouble figuring out how to start.
In example: I want to mutate some information about a person.
To make it easy, a person has only these values:
- FirstName
- LastName
- Email
What I need at the end:
- A person DTO
- A person entity (Nhibernate)
- Functionality to store the DTO values in the database. At the end I need to return a success or an error (possibly a boolean).
With the given information, how do I start at all? It's a broad question, but that's because I have no clue how to start. I've read many articles but somehow I get stuck right away.
Edit:
- I'm using MVC: MVC will give a DTO (filled from form fields) back.
So the MVC start call could be something like this:
public JsonResult MutatePerson(PersonDto person)
{
    // Call functions built by TDD here
    return Json(true);
}
You've described the objects involved, but not the operations.
Presumably you need a read() operation and a write() operation. Perhaps a list() operation?
All of the above should have tests associated with them e.g.
testCanReadViaId();
testCanThrowExceptionOnReadingInvalidId();
testCanWrite();
etc.
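To make that concrete, here is a minimal sketch of the first two in C# with NUnit, assuming a hypothetical IPersonRepository and an in-memory fake seeded with canned data (all names here are illustrative):

    [TestFixture]
    public class PersonRepositoryTests
    {
        private IPersonRepository _repository; // assumed interface

        [SetUp]
        public void SetUp()
        {
            // In-memory fake standing in for the database.
            _repository = new InMemoryPersonRepository(new[]
            {
                new Person { Id = 1, FirstName = "Jan" }
            });
        }

        [Test]
        public void CanReadViaId()
        {
            Assert.AreEqual("Jan", _repository.GetById(1).FirstName);
        }

        [Test]
        public void ThrowsExceptionOnReadingInvalidId()
        {
            Assert.Throws<KeyNotFoundException>(() => _repository.GetById(-1));
        }
    }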
For a lot of work you should mock your datasources etc. and position (say) hardcoded data under your persistence layer so that you don't rely on a database. However, for something like the above I would definitely test the basic database interaction.
As such you may need to (initially) point a test suite against a database with canned data. If you want to be more flexible, then your test setup code could write the entities into the database first, prior to running the tests.
Your tests should test different permutations of data and operations e.g. in the above I've suggested a test to read an object via a valid id (say, 1) and a similar operation against an invalid id (say, -1). You may also want to check different data combinations (e.g. does everything work if the email address isn't populated - this may be valid if the database column is nullable)
Using TDD, you should use interfaces as arguments. Interfaces can be mocked, and with a mock you test the MutatePerson method, and ONLY that. In a unit test, you only want to test how a method reacts to input, not how the object reacts to the method. If you test how the DTO object behaves as well, you are writing an integration test.
So, use the interface of PersonDto (create one if it doesn't exist). And use that as method argument instead of the concrete class.
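A minimal sketch of that idea (the interface and its members are assumptions based on the fields listed in the question):

    public interface IPersonDto
    {
        string FirstName { get; set; }
        string LastName { get; set; }
        string Email { get; set; }
    }

    // In a unit test, the argument can now be a Moq mock:
    var personMock = new Mock<IPersonDto>();
    personMock.SetupGet(p => p.Email).Returns("jan@example.com");

Note that to bind an interface from form fields, MVC needs a custom model binder; the concrete PersonDto can still implement IPersonDto for that purpose.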
It might be just me, but I have the feeling that you're starting off with little idea of what your global system should look like in terms of layers, modules and the dependencies and interactions between them.
TDD's emergent design sure works at the level of a small object graph, but you won't get away without doing some amount of overall architectural design first (not big upfront design, but enough to get you started).
With that in mind, I think you'll have a far better idea of what to test.
Once you've figured that out, I think you should:
Learn about object-oriented unit testing techniques, and by unit testing I mean testing things in isolation. Roy Osherove's Art Of Unit Testing is an excellent place to start for a .NET developer.
Learn about architecture-level TDD strategies. With the articles you read you certainly got an idea of how to do TDD in the small, but you need a more global approach : what should you TDD first, in what order, etc. A book like GOOS might help you in that department.

What are the possible problems with unit testing ASP.NET MVC code in the following way?

I've been looking at the way unit testing is done in the NuGetGallery. I observed that when controllers are tested, service classes are mocked. This makes sense to me because, while testing the controller logic, I don't want to be worried about the architectural layers below. After using this approach for a while, I noticed how often I was running around fixing my mocks all over my controller tests when my service classes changed. To solve this problem, without consulting people that are smarter than me, I started writing tests like this (don't worry, I haven't gotten that far):
public class PersonController : Controller
{
    private readonly LESRepository _repository;

    public PersonController(LESRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
            .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
public class PersonControllerTests
{
    public void can_get_person()
    {
        var person = _helper.CreatePerson(username: "John");
        var controller = new PersonController(_repository);
        controller.FakeOutContext();
        var result = (ViewResult)controller.Index(person.Id);
        var model = (VMPerson)result.Model;
        Assert.IsTrue(model.Person.Username == "John");
    }
}
I guess this would be integration testing because I am using a real database (I'd prefer an in-memory one). I begin my test by putting data in my database (each test runs in a transaction and is rolled back when the test completes). Then I call my controller, and I really don't care how it retrieves the data from the database (via a repository or service class), just that the model to be sent to the view must have the record I put into the database, aka my assertion. The cool thing about this approach is that a lot of times I can continue to add more layers of complexity without having to change my controller tests:
public class PersonController : Controller
{
    private readonly LESRepository _repository;
    private readonly PersonService _personService;

    public PersonController(LESRepository repository)
    {
        _repository = repository;
        _personService = new PersonService(_repository);
    }

    public ActionResult Index(int id)
    {
        var model = _personService.GetActivePerson(id);
        if (model == null)
            return PersonNotFoundResult();
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Now, I realize I didn't create an interface for my PersonService and pass it into the constructor of my controller. The reasons are: 1) I don't plan to mock my PersonService, and 2) I didn't feel I needed to inject my dependency, since my PersonController for now only needs to depend on one type of PersonService.
I'm new at unit testing and I'm always happy to be shown that I'm wrong. Please point out why the way I'm testing my controllers could be a really bad idea (besides the obvious increase in the time my tests will take to run).
Hmm, a few things here, mate.
First, it looks like you're trying to test a controller method. Great :)
So this means that anything the controller needs should be mocked. This is because:
You don't want to worry about what happens inside that dependency.
You can verify that the dependency was called/executed.
Ok, so let's look at what you did and I'll see if I can refactor it to make it a bit more testable.
-REMEMBER- I'm testing the CONTROLLER METHOD, not the stuff the controller method calls/depends upon.
So this means I don't care about the service instance or the repository instance (whichever architectural way you decide to follow).
NOTE: I've kept things simple, so I've stripped lots of crap out, etc.
Interface
First, we need an interface for the repository. This can be implemented as an in-memory repo, an Entity Framework repo, etc. You'll see why soon.
public interface ILESRepository
{
    IQueryable<T> GetAll<T>() where T : class;
}
Controller
Here, we use the interface. This means it's really easy and awesome to use a mock ILESRepository or a real instance.
public class PersonController : Controller
{
    private readonly ILESRepository _repository;

    public PersonController(ILESRepository repository)
    {
        if (repository == null)
        {
            throw new ArgumentNullException("repository");
        }
        _repository = repository;
    }

    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
            .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Unit Test
Ok - here's the magic money shot stuff.
First, we create some fake people. Just work with me here... I'll show you where we use this in a tick. It's just a boring, simple list of your POCOs.
public static class FakePeople
{
    public static IList<Person> GetSomeFakePeople()
    {
        return new List<Person>
        {
            new Person { Id = 1, Username = "John" },
            new Person { Id = 2, Username = "Fred" },
            new Person { Id = 3, Username = "Sally" }
        };
    }
}
Now we have the test itself. I'm using xUnit for my testing framework and Moq for my mocking. Any framework is fine here.
public class PersonControllerTests
{
    [Fact]
    public void GivenAListOfPeople_Index_Returns1Person()
    {
        // Arrange.
        var mockRepository = new Mock<ILESRepository>();
        mockRepository.Setup(x => x.GetAll<Person>())
            .Returns(FakePeople.GetSomeFakePeople().AsQueryable());
        var controller = new PersonController(mockRepository.Object);
        controller.FakeOutContext();

        // Act.
        var result = controller.Index(1) as ViewResult;

        // Assert.
        Assert.NotNull(result);
        var model = result.Model as VMPerson;
        Assert.NotNull(model);
        Assert.Equal(1, model.Person.Id);
        Assert.Equal("John", model.Person.Username);
        // Make sure we actually called the GetAll<Person>() method on our mock.
        mockRepository.Verify(x => x.GetAll<Person>(), Times.Once());
    }
}
Ok, lets look at what I did.
First, I arrange my crap: I create a mock of the ILESRepository.
Then I say: if anyone ever calls the GetAll<Person>() method, well... don't -really- hit a database or a file or whatever... just return a list of people, which I created in FakePeople.GetSomeFakePeople().
So this is what would happen in the controller ...
var model = _repository.GetAll<Person>()
.FirstOrDefault(x => x.Id == id);
First, we ask our mock to hit the GetAll<Person>() method. I just 'set it up' to return a list of people... so then we have a list of 3 Person objects. Next, we call FirstOrDefault(...) on this list of 3 Person objects, which returns a single object or null, depending on what the value of id is.
Tada! That's the money shot :)
Now back to the rest of the unit test.
We Act and then we Assert. Nothing hard there.
For bonus points, I verify that we've actually called the GetAll<Person>() method, on the mock .. inside the Controller's Index method. This is a safety call to make sure our controller logic (we're testing for) was done right.
Sometimes, you might want to check for bad scenarios, like a person passing in bad data. This means you might never get to the mock methods (which is correct), so you verify that they were never called.
Ok - questions, class?
Even when you do not plan to mock an interface, I strongly suggest that you do not hide the real dependencies of an object by creating the objects inside the constructor; you are breaking the Single Responsibility Principle and you are writing untestable code.
The most important thing to consider when writing tests is: "There is no magic key to writing tests." There are a lot of tools out there to help you write tests, but the real effort should be put into writing testable code, rather than trying to hack our existing code to write a test, which usually ends up being an integration test instead of a unit test.
Creating a new object inside a constructor is one of the first big signals that your code is not testable.
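A minimal before/after sketch of that signal, using the PersonService example from the question (IPersonService is an assumed abstraction over the concrete class):

    // Before: the dependency is created inside the constructor and cannot be replaced in a test.
    public class PersonControllerBefore : Controller
    {
        private readonly PersonService _personService;

        public PersonControllerBefore(LESRepository repository)
        {
            _personService = new PersonService(repository); // hidden dependency
        }
    }

    // After: the dependency is declared in the constructor and can be mocked.
    public class PersonControllerAfter : Controller
    {
        private readonly IPersonService _personService;

        public PersonControllerAfter(IPersonService personService)
        {
            _personService = personService;
        }
    }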
These links helped me a lot when I was making the transition to writing tests, and let me tell you that after you start, it will become a natural part of your daily work and you will love the benefits of writing tests. I cannot picture myself writing code without tests anymore.
Clean code guide (used at Google): http://misko.hevery.com/code-reviewers-guide/
To get more information read the following:
http://misko.hevery.com/2008/09/30/to-new-or-not-to-new/
and watch this video cast from Misko Hevery
http://www.youtube.com/watch?v=wEhu57pih5w&feature=player_embedded
Edited:
This article from Martin Fowler explains the difference between a classical and a mockist TDD approach:
http://martinfowler.com/articles/mocksArentStubs.html
As a summary:
Classic TDD approach: this means testing everything you can without creating substitutes or doubles (mocks, stubs, dummies), with the exception of external services like web services or databases. Classical testers use doubles for the external services only.
Benefits: when you test, you are actually testing the wiring logic of your application and the logic itself (not in isolation).
Cons: if an error occurs, you will see potentially hundreds of tests failing and it will be hard to find the code responsible.
Mockist TDD approach: people following the mockist approach will test all the code in isolation, because they create doubles for every dependency.
Benefits: you are testing each part of your application in isolation. If an error occurs, you know exactly where it occurred, because just a few tests will fail, ideally only one.
Cons: well, you have to double all your dependencies, which makes tests a little bit harder, but you can use tools like AutoFixture to create doubles for the dependencies automatically.
This is another great article about writing testable code
http://www.loosecouplings.com/2011/01/how-to-write-testable-code-overview.html
There are some downsides.
First, when you have a test that depends on an external component (like a live database), that test is no longer really predictable. It can fail for any number of reasons - a network outage, a changed password on the database account, missing some DLLs, etc. So when your test suddenly fails, you cannot be immediately sure where the flaw is. Is it a database problem? Some tricky bug in your class?
When you can immediately answer that question just by knowing which test failed, you have the enviable quality of defect localization.
Secondly, if there is a database problem, all your tests that depend on it will fail at once. This might not be so severe, since you can probably realize what the cause is, but I guarantee it will slow you down to examine each one. Widespread failures can mask real problems, because you don't want to look at the exception on each of 50 tests.
And I know you want to hear about factors besides the execution time, but that really does matter. You want to run your tests as frequently as possible, and a longer runtime discourages that.
I have two projects: one with 600+ tests that run in 10 seconds, one with 40+ tests that runs in 50 seconds (this project does actually talk to a database, on purpose). I run the faster test suite much more frequently while developing. Guess which one I find easier to work with?
All of that said, there is value in testing external components. Just not when you're unit-testing. Integration tests are more brittle, and slower. That makes them more expensive.
Accessing the database in unit tests has the following consequences:
Performance. Populating the database and accessing it is slow. The more tests you have, the longer the wait. If you used mocking, your controller tests might run in a couple of milliseconds each, compared to seconds if they used the database directly.
Complexity. For shared databases, you'll have to deal with concurrency issues where multiple agents are running tests against the same database. The database needs to be provisioned, structure needs to be created, data populated etc. It becomes rather complex.
Coverage. You might find that some conditions are nearly impossible to test without mocking. Examples may include verifying what to do when the database times out, or what to do if sending an email fails.
Maintenance. Changes to your database schema, especially if they are frequent, will impact almost every test that uses the database. In the beginning, when you have 10 tests, it may not seem like much, but consider when you have 2000 tests. You may also find that changing business rules and adapting the tests becomes more complex, as you'll have to modify the data populated in the database to verify the business rule.
You have to ask whether it is worth it for testing business rules. In most cases, the answer may be "no".
The approach I follow is:
Unit test classes (controllers, service layers etc.) by mocking out dependencies and simulating conditions that may occur (like database errors). These tests verify business logic, and the aim is to gain as much coverage of decision paths as possible. Use a tool like PEX to highlight any issues you never thought of. You'll be surprised how much more robust (and resilient) your application will be after fixing some of the issues PEX highlights.
Write database tests to verify that the ORM I'm using works with the underlying database. You'll be surprised at the issues EF and other ORMs have with certain database engines (and versions). These tests are also useful for tuning performance and reducing the amount of queries and data being sent to and from the database.
Write coded UI tests that automate the browser and verify the system actually works. In this case I would populate the database beforehand with some data. These tests simply automate the tests I would have done manually. The aim is to verify that critical pieces still work.

TDD Data Access layer

In TDD, I've been testing business logic by mocking data access functionalities.
But in reality I need the layers below the business layer to be implemented as well for the application to work.
Should I be implementing the data access layer using TDD?
Based on the discussions I've seen on the web, unit tests should not connect to any external resources such as databases, web services etc. If they do, they become integration tests.
Could someone shed some light on this please.
Thank you very much.
You are right: contact with the outside makes it an integration test, but that contact is important to test as well. Using TDD, it should become apparent that the contact surface needs to be as small as possible. That can be achieved by using wrappers for each record, or similar methods.
If you're using something like Hibernate, for example, and you have any kind of logic in your DAO, you can mock out the calls to e.g. Session and Query and unit test without hitting the database.
If you want to test the queries themselves you could use an in-memory database and something like DbUnit. I would count these as integration tests and run them separately as they tend to take longer.
Here's an example of a typical Java Spring/Hibernate DAO method with some logic you might want to test:
public List<Entity> searchEntities(final String searchText, final int maxResults) {
    String sql = "select * from entity where field like :searchText";
    Query query = sessionFactory.getCurrentSession().createSQLQuery(sql);
    if (maxResults != 0) {
        query.setMaxResults(maxResults);
    }
    query.setString("searchText", "%" + searchText + "%");
    return query.list();
}
Using a mocking framework, you could mock sessionFactory, session and query, and create a unit test that expects query.setMaxResults to be called only if maxResults doesn't equal 0, and query.setString to be called with the correct string value. You could also assert that whatever is returned from query.list() is the thing returned by the method.
However, this couples your test code to your implementation of this method. Also, if your DAO method has a lot of logic in it, then you should consider refactoring and maybe moving this logic to a service layer, where you could unit test it separately from any database interaction.
You can use Dev Magic Fake to fake the DAL, so you can work with TDD as you need without writing any code for the fake DAL. For example:
Just add a reference to DevMagicFake.dll and you can write the following:
[HttpPost]
public ActionResult Create(VendorForm vendorForm)
{
    var repository = new FakeRepository<VendorForm>();
    repository.Save(vendorForm);
    return View("Page", repository.GetAll());
}
This will save the VendorForm permanently in memory, and you can retrieve it any time you need. You can also generate data for this object, or any other object in your model, without writing any code for that operation. So now you can do TDD as if you had already finished your DAL. For more information about Dev Magic Fake, see the following link on CodePlex:
http://devmagicfake.codeplex.com
Thanks
M.Radwan

Business Layer structure, how do you build yours?

I am a big fan of n-tiers for my development choices; of course it doesn't fit every scenario.
I am currently working on a new project and I am trying to have a play with the way I normally work, to see if I can clean it up, as I have been a very bad boy and have been putting too much code in the presentation layer.
My normal business layer structure is this (basic view of it):
Business
Services
FooComponent
FooHelpers
FooWorkflows
BahComponent
BahHelpers
BahWorkflows
Utilities
Common
ExceptionHandlers
Importers
etc...
Now with the above I have great access to directly save a Foo object and a Bah object, via their respective helpers.
The XXXHelpers give me access to Save, Edit and Load the respective objects, but where do I put the logic to save objects with child objects?
For example:
We have the below objects (not very good objects I know)
Employee
EmployeeDetails
EmployeeMembership
EmployeeProfile
Currently I would build these all up in the presentation layer and then pass them to their helpers. I feel this is wrong; I think the data should be passed to a single point above presentation, somewhere in the business layer, and sorted out there.
But I'm at a bit of a loss as to where I would put this logic and what to call it. Would it go under Utilities as an EmployeeManager or something like that?
What would you do? I know this is all preference.
A more detailed layout
The workflows contain all the calls directly to the DataRepository. For example:
public ObjectName GetById(Guid id)
{
    return DataRepository.ObjectNameProvider.GetById(id);
}
And then the helpers provide access to the workflows:
public ObjectName GetById(Guid id)
{
    return loadWorkflow.GetById(id);
}
This is to cut down on duplicate code: you can have one call in the workflow to get by some property, and then several calls in the helper which do other operations and return the data in different ways; a bad example would be public GetByIdAsc and GetByIdDesc.
By separating the calls to the data model using the DataRepository, it would be possible to swap out the model for another instance (that was the thinking), but ProviderHelper has not been broken down, so it is not interchangeable; it is hardcoded to EF, unfortunately.
I don't intend to change the access technology, but in the future there might be something better, or just something that all the cool kids are now using, that I might want to implement instead.
projectName.Core
projectName.Business
- Interfaces
- IDeleteWorkflows.cs
- ILoadWorkflows.cs
- ISaveWorkflows.cs
- IServiceHelper.cs
- IServiceViewHelper.cs
- Services
- ObjectNameComponent
- Helpers
- ObjectNameHelper.cs
- Workflows
- DeleteObjectNameWorkflow.cs
- LoadObjectNameWorkflow.cs
- SaveObjectNameWorkflow.cs
- Utilities
- Common
- SettingsManager.cs
- JavascriptManager.cs
- XmlHelper.cs
- others...
- ExceptionHandlers
- ExceptionManager.cs
- ExceptionManagerFactory.cs
- ExceptionNotifier.cs
projectName.Data
- Bases
- ObjectNameProviderBase.cs
- Helpers
- ProviderHelper.cs
- Interfaces
- IProviderBase.cs
- DataRepository.cs
projectName.Data.Model
- Database.edmx
projectName.Entities (entities that represent the DB tables are created by EF in .Data.Model; this project is for other entities I may need that are not related to the database)
- Helpers
- EnumHelper.cs
projectName.Presenation
(depends on what the application calls for)
projectName.web
projectName.mvc
projectName.admin
The test Projects
projectName.Business.Tests
projectName.Data.Test
+1 for an interesting question.
So, the problem you describe is pretty common. I'd take a different approach - first with the logical tiers, and secondly with the utility and helper namespaces, which I'd try to factor out completely - I'll tell you why in a second.
But first, my preferred approach here is a pretty common enterprise architecture, which I'll try to highlight in brief, but there's much more depth out there. It does require some radical changes in thinking: using NHibernate or Entity Framework to allow you to query your object model directly and let the ORM deal with things like mapping to and from the database, lazy loading relationships, etc. Doing this will allow you to implement all of your business logic within a domain model.
First the tiers (or projects in your solution);
YourApplication.Domain
The domain model - the objects representing your problem space. These are plain old CLR objects with all of your key business logic. This is where your example objects would live, and their relationships would be represented as collections. There is nothing in this layer that deals with persistence etc, it's just objects.
YourApplication.Data
Repository classes - these are classes that deal with getting the aggregate root(s) of your domain model.
For instance, it's unlikely in your sample classes that you would want to look at EmployeeDetails without also looking at Employee (an assumption, I know, but you get the gist; invoice lines are a better example, as you generally get to invoice lines via an invoice rather than loading them independently). As such, the repository classes, of which you have one per aggregate root, will be responsible for getting the initial entities out of the database using the ORM in question, implementing any query strategies (like paging or sorting) and returning the aggregate root to the consumer. The repository would consume the current active data context (ISession in NHibernate); how this session is created depends on what type of app you are building.
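A minimal sketch of such a repository contract for the Employee aggregate root (member names are illustrative):

    public interface IEmployeeRepository
    {
        Employee GetById(Guid id);
        IList<Employee> GetPage(int pageIndex, int pageSize); // a query strategy such as paging
        void Save(Employee employee);
    }

Consumers never load EmployeeDetails on its own; they reach it through the Employee root.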
YourApplication.Workflow
Could also be called YourApplication.Services, but this can be confused with web services.
This tier is all about interrelated, complex, atomic operations - rather than having a bunch of things called from your presentation tier, and therefore increasing coupling, you can wrap such operations into workflows or services.
It's possible you could do without this in many applications.
Other tiers then depend on your architecture and the application you're implementing.
YourApplication.YourChosenPresentationTier
If you're using web services to distribute your tiers, then you would create DTO contracts that represent just the data you are exposing between the domain and the consumers. You would define assemblers that would know how to move data in and out of these contracts from the domain (you would never send domain objects over the wire!)
In this situation, if you're also creating the client, you would consume the operation and data contracts defined above in your presentation tier, probably binding to the DTOs directly, as each DTO should be view specific.
If you have no need to distribute your tiers (remembering that the first rule of distributed architectures is don't distribute), then you would consume the workflows/services and repositories directly from within ASP.NET, MVC, WPF, WinForms, etc.
That just leaves where the data contexts are established. In a web application, each request is usually pretty self-contained, so a request-scoped context is best. That means the context and connection are established at the start of the request and disposed at the end. It's trivial to get your chosen IoC/dependency injection framework to configure per-request components for you.
In a desktop app, WPF or WinForms, you would have a context per form. This ensures that edits to domain entities in an edit dialog that update the model but don't make it to the database (e.g. Cancel was selected) don't interfere with other contexts or, worse, end up being accidentally persisted.
Dependency injection
All of the above would be defined as interfaces first, with concrete implementations realised through an IoC/dependency injection framework (my preference is Castle Windsor). This allows you to isolate, mock and unit test individual tiers independently, and in a large application, dependency injection is a life saver!
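As a minimal Castle Windsor sketch of what that registration might look like (type names carried over from the repository example above):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    var container = new WindsorContainer();
    container.Register(
        Component.For<IEmployeeRepository>()
                 .ImplementedBy<EmployeeRepository>()
                 .LifestylePerWebRequest()); // request-scoped, as discussed above

    // Resolve through the interface; tests can register a mock instead.
    var repository = container.Resolve<IEmployeeRepository>();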
Those namespaces
Finally, the reason I'd lose the helpers namespace is that, in the model above, you don't need it; but also, like utility namespaces, it gives lazy developers an excuse not to think about where a piece of code logically sits. MyApp.Helpers.* and MyApp.Utility.* just mean that if I have some code, say an exception handler that logically belongs within MyApp.Data.Repositories.Customers (maybe it's a "customer ref is not unique" exception), a lazy developer can just place it in MyApp.Utility.CustomerRefNotUniqueException without really having to think.
If you have common framework-type code that you need to wrap up, add a MyApp.Framework project and relevant namespaces. If you're adding a new model binder, put it in MyApp.Framework.Mvc; if it's common logging functionality, put it in MyApp.Framework.Logging, and so on. In most cases there shouldn't be any need to introduce a utility or helpers namespace.
Wrap up
So that scratches the surface - hope it's of some help. This is how I'm developing software today, and I've intentionally tried to be brief - if I can elaborate on any specifics, let me know. The final thing to say on this opinionated piece is that the above is for reasonably large-scale development - if you're writing Notepad version 2 or a corporate phone book, the above is probably total overkill!
Cheers
Tony
There is a nice diagram and a description of the application layout on this page, although further down the article the application isn't split into physical layers (separate projects) - Entity Framework POCO Repository

TDD and DI: dependency injections becoming cumbersome

C#, nUnit, and Rhino Mocks, if that turns out to be applicable.
My quest with TDD continues as I attempt to wrap tests around a complicated function. Let's say I'm coding a form that, when saved, has to also save dependent objects within the form...answers to form questions, attachments if available, and "log" entries (such as "blahblah updated the form." or "blahblah attached a file."). This save function also fires off emails to various people depending on how the state of the form changed during the save function.
This means that in order to fully test the form's save function with all of its dependencies, I have to inject five or six data providers to test this one function and make sure everything fired off in the right way and order. This is cumbersome when writing the multiple chained constructors for the form object to insert the mocked providers. I think I'm missing something, either in the way of refactoring or simply a better way to set up the mocked data providers.
Should I study refactoring methods further to see how this function can be simplified? How does the observer pattern sound, so that the dependent objects detect when the parent form is saved and handle themselves? I know that people say to split out the function so it can be tested... meaning I test the individual save functions of each dependent object, but not the save function of the form itself, which dictates how each should save itself in the first place?
First, if you are following TDD, then you don't wrap tests around a complicated function. You wrap the function around your tests. Actually, even that's not right. You interweave your tests and functions, writing both at almost exactly the same time, with the tests just a little ahead of the functions. See The Three Laws of TDD.
When you follow these three laws, and are diligent about refactoring, then you never wind up with "a complicated function". Rather you wind up with many, tested, simple functions.
Now, on to your point. If you already have "a complicated function" and you want to wrap tests around it then you should:
Add your mocks explicitly, instead of through DI. (e.g. something horrible like a 'test' flag and an 'if' statement that selects the mocks instead of the real objects).
Write a few tests in order to cover the basic operation of the component.
Refactor mercilessly, breaking up the complicated function into many little simple functions, while running your cobbled together tests as often as possible.
Push the 'test' flag as high as possible. As you refactor, pass your data sources down to the small simple functions. Don't let the 'test' flag infect any but the topmost function.
Rewrite tests. As you refactor, rewrite as many tests as possible to call the simple little functions instead of the big top-level function. You can pass your mocks into the simple functions from your tests.
Get rid of the 'test' flag and determine how much DI you really need. Since you have tests written at the lower levels that can insert mocks through arguments, you probably don't need to mock out many data sources at the top level anymore.
If, after all this, the DI is still cumbersome, then think about injecting a single object that holds references to all your data sources. It's always easier to inject one thing rather than many.
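A minimal sketch of that last idea, with hypothetical provider interfaces standing in for the form's data sources:

    // One aggregate object bundles the data sources, so the form's
    // constructor takes a single argument instead of five or six.
    public class FormDataSources
    {
        public IAnswerProvider Answers { get; private set; }
        public IAttachmentProvider Attachments { get; private set; }
        public IAuditLogProvider Log { get; private set; }

        public FormDataSources(IAnswerProvider answers, IAttachmentProvider attachments, IAuditLogProvider log)
        {
            Answers = answers;
            Attachments = attachments;
            Log = log;
        }
    }

    public class MyForm
    {
        private readonly FormDataSources _dataSources;

        // The form now depends on one thing, which is easy to build in a test.
        public MyForm(FormDataSources dataSources)
        {
            _dataSources = dataSources;
        }
    }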
Use an AutoMocking container. There is one written for RhinoMocks.
Imagine you have a class with a lot of dependencies injected via constructor injection. Here's what it looks like to set it up with RhinoMocks, no AutoMocking container:
private MockRepository _mocks;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
private IAddNewBroadcastEventBroker _addNewBroadcastEventBroker;
private IBroadcastService _broadcastService;
private IChannelService _channelService;
private IDeviceService _deviceService;
private IDialogFactory _dialogFactory;
private IMessageBoxService _messageBoxService;
private ITouchScreenService _touchScreenService;
private IDeviceBroadcastFactory _deviceBroadcastFactory;
private IFileBroadcastFactory _fileBroadcastFactory;
private IBroadcastServiceCallback _broadcastServiceCallback;
private IChannelServiceCallback _channelServiceCallback;

[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _addNewBroadcastEventBroker = _mocks.DynamicMock<IAddNewBroadcastEventBroker>();
    _broadcastService = _mocks.DynamicMock<IBroadcastService>();
    _channelService = _mocks.DynamicMock<IChannelService>();
    _deviceService = _mocks.DynamicMock<IDeviceService>();
    _dialogFactory = _mocks.DynamicMock<IDialogFactory>();
    _messageBoxService = _mocks.DynamicMock<IMessageBoxService>();
    _touchScreenService = _mocks.DynamicMock<ITouchScreenService>();
    _deviceBroadcastFactory = _mocks.DynamicMock<IDeviceBroadcastFactory>();
    _fileBroadcastFactory = _mocks.DynamicMock<IFileBroadcastFactory>();
    _broadcastServiceCallback = _mocks.DynamicMock<IBroadcastServiceCallback>();
    _channelServiceCallback = _mocks.DynamicMock<IChannelServiceCallback>();
    _presenter = new BroadcastListViewPresenter(
        _addNewBroadcastEventBroker,
        _broadcastService,
        _channelService,
        _deviceService,
        _dialogFactory,
        _messageBoxService,
        _touchScreenService,
        _deviceBroadcastFactory,
        _fileBroadcastFactory,
        _broadcastServiceCallback,
        _channelServiceCallback);
    _presenter.View = _view;
}
Now, here's the same thing with an AutoMocking container:
private MockRepository _mocks;
private AutoMockingContainer _container;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;

[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _container = new AutoMockingContainer(_mocks);
    _container.Initialize();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _presenter = _container.Create<BroadcastListViewPresenter>();
    _presenter.View = _view;
}
Easier, yes?
The AutoMocking container automatically creates mocks for every dependency in the constructor, and you can access them for testing like so:
using (_mocks.Record())
{
    _container.Get<IChannelService>().Expect(cs => cs.ChannelIsBroadcasting(channel)).Return(false);
    _container.Get<IBroadcastService>().Expect(bs => bs.Start(8));
}
Hope that helps. I know my testing life has been made a whole lot easier with the advent of the AutoMocking container.
You're right that it can be cumbersome.
Proponents of the mocking methodology would point out that the code is written improperly to begin with. That is, you shouldn't be constructing dependent objects inside this method. Rather, the injection APIs should have functions that create the appropriate objects.
As for mocking up 6 different objects, that's true. However, if you are also unit-testing those systems, those objects should already have mocking infrastructure you can use.
Finally, use a mocking framework that does some of the work for you.
I don't have your code, but my first reaction is that your test is trying to tell you that your object has too many collaborators. In cases like this, I always find that there's a missing construct in there that should be packaged up into a higher level structure. Using an automocking container is just muzzling the feedback you're getting from your tests. See http://www.mockobjects.com/2007/04/test-smell-bloated-constructor.html for a longer discussion.
In this context, I usually find statements along the lines of "this indicates that your object has too many dependencies" or "your object has too many collaborators" to be a fairly specious claim. Of course a MVC controller or a form is going to be calling lots of different services and objects to fulfill its duties; it is, after all, sitting at the top layer of the application. You can smoosh some of these dependencies together into higher-level objects (say, a ShippingMethodRepository and a TransitTimeCalculator get combined into a ShippingRateFinder), but this only goes so far, especially for these top-level, presentation-oriented objects. That's one less object to mock, but you've just obfuscated the actual dependencies via one layer of indirection, not actually removed them.
One blasphemous piece of advice is to say that if you are dependency injecting an object and creating an interface for it that is quite unlikely to ever change (Are you really going to drop in a new MessageBoxService while changing your code? Really?), then don't bother. That dependency is part of the expected behavior of the object and you should just test them together since the integration test is where the real business value lies.
The other blasphemous piece of advice is that I usually see little utility in unit testing MVC controllers or Windows Forms. Every time I see someone mocking the HttpContext and testing to see if a cookie was set, I want to scream. Who cares if the AccountController set a cookie? I don't. The cookie has nothing to do with treating the controller as a black box; an integration test is what is needed to test its functionality (hmm, a call to PrivilegedArea() failed after Login() in the integration test). This way, you avoid invalidating a million useless unit tests if the format of the login cookie ever changes.
Save the unit tests for the object model, save the integration tests for the presentation layer, and avoid mock objects when possible. If mocking a particular dependency is hard, it's time to be pragmatic: just don't do the unit test and write an integration test instead and stop wasting your time.
The simple answer is that the code you are trying to test is doing too much. I think sticking to the Single Responsibility Principle might help.
The Save button method should only contain top-level calls that delegate things to other objects. These objects can then be abstracted through interfaces. Then, when you test the Save button method, you only test the interaction with the mocked objects.
The next step is to write tests for those lower-level classes, but things should get easier since you only test these in isolation. If you need complex test setup code, this is a good indicator of a bad design (or a bad testing approach).
Recommended reading:
Clean Code: A Handbook of Agile Software Craftsmanship
Google's guide to writing testable code
Constructor DI isn't the only way to do DI. Since you're using C#, if your constructor does no significant work you could use property DI. That simplifies things greatly in terms of your object's constructors, at the expense of complexity in your function. Your function must check for the nullity of any dependent properties and throw an InvalidOperationException if they're null, before it begins work.
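A minimal sketch of that guard, with hypothetical names (IEmailSender stands in for one of the question's notification dependencies):

    public class FormSaver
    {
        // Set by the container or by test code, instead of via the constructor.
        public IEmailSender EmailSender { get; set; }

        public void Save(FormData form)
        {
            if (EmailSender == null)
                throw new InvalidOperationException("EmailSender must be set before Save is called.");

            // ...persist the form and its children, then notify:
            EmailSender.SendStateChangeNotifications(form);
        }
    }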
When it is hard to test something, it is usually a symptom of the code quality: the code is not testable (mentioned in this podcast, IIRC). The recommendation is to refactor the code so that it becomes easy to test. Some heuristics for deciding how to split the code into classes are the SRP and OCP. For more specific instructions, it would be necessary to see the code in question.
