I'm new to RhinoMocks, and trying to get a grasp on the syntax in addition to what is happening under the hood.
I have a user object, we'll call it User, which has a property called IsAdministrator. The value for IsAdministrator is evaluated via another class that checks the User's security permissions, and returns either true or false based on those permissions. I'm trying to mock this User class, and fake the return value for IsAdministrator in order to isolate some Unit Tests.
This is what I'm doing so far:
public void CreateSomethingIfUserHasAdminPermissions()
{
User user = _mocks.StrictMock<User>();
SetupResult.For(user.IsAdministrator).Return(true);
// do something with my User object
}
Now, I'm expecting that Rhino is going to 'fake' the call to the property getter, and just return true to me. Is this incorrect? Currently I'm getting an exception because of dependencies in the IsAdministrator property.
Can someone explain how I can achieve my goal here?
One quick note before I jump into this. Typically you want to avoid the use of a "Strict" mock because it makes for a brittle test. A strict mock will throw an exception if anything occurs that you do not explicitly tell Rhino will happen. Also I think you may be misunderstanding exactly what Rhino is doing when you make a call to create a mock. Think of it as a custom Object that has either been derived from, or implements the System.Type you defined. If you did it yourself it would look like this:
public class FakeUserType: User
{
//overriding code here
}
Since IsAdministrator is probably just a plain (non-virtual) public property on the User type, you can't override it in the inheriting type.
As far as your question is concerned there are multiple ways you could handle this. You could implement IsAdministrator as a virtual property on your user class as aaronjensen mentioned as follows:
public class User
{
public virtual Boolean IsAdministrator { get; set; }
}
This is an OK approach, but only if you plan on inheriting from your User class. Also, if you want to fake other members on this class, they would also have to be virtual, which is probably not the desired behavior.
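To make the "derived fake" idea above concrete: once the property is virtual, a hand-written fake (which is roughly what Rhino generates for you at runtime) could simply override it. This is only an illustrative sketch:

    public class FakeUserType : User
    {
        // Override the virtual property so the test always sees an administrator.
        public override bool IsAdministrator
        {
            get { return true; }
            set { }
        }
    }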
Another way to accomplish this is through the use of interfaces. If it is truly the User class you are wanting to Mock then I would extract an interface from it. Your above example would look something like this:
public interface IUser
{
Boolean IsAdministrator { get; }
}
public class User : IUser
{
private UserSecurity _userSecurity = new UserSecurity();
public Boolean IsAdministrator
{
get { return _userSecurity.HasAccess("AdminPermissions"); }
}
}
public void CreateSomethingIfUserHasAdminPermissions()
{
IUser user = _mocks.StrictMock<IUser>();
SetupResult.For(user.IsAdministrator).Return(true);
// do something with my User object
}
You can get fancier if you want by using dependency injection and IoC, but the basic principle is the same across the board. Typically you want your classes to depend on interfaces rather than concrete implementations anyway.
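To illustrate that last point, here is a hypothetical consumer (ThingCreator is a made-up name, not from the question) that depends on IUser, so it can be handed either the real User or the Rhino mock:

    public class ThingCreator
    {
        private readonly IUser _user;

        public ThingCreator(IUser user)
        {
            _user = user;
        }

        public bool CanCreateSomething()
        {
            // Only administrators may create things.
            return _user.IsAdministrator;
        }
    }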
I hope this helps. I have been using RhinoMocks for a long time on a major project now so don't hesitate to ask me questions about TDD and mocking.
Make sure IsAdministrator is virtual.
Also, be sure you call _mocks.ReplayAll()
_mocks.ReplayAll() on its own won't do anything here, because SetupResult.For() does not count as an expectation. Use Expect.Call() to be sure that your code does everything correctly.
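For reference, here is a minimal sketch of the record/replay flow the comments above are discussing, using the IUser interface extracted earlier; the NUnit attributes are incidental and the test body is illustrative:

    using NUnit.Framework;
    using Rhino.Mocks;

    [TestFixture]
    public class UserPermissionTests
    {
        [Test]
        public void CreateSomethingIfUserHasAdminPermissions()
        {
            MockRepository mocks = new MockRepository();
            IUser user = mocks.StrictMock<IUser>();

            // Expect.Call records a verifiable expectation; SetupResult.For would only
            // define a canned return value without counting as an expectation.
            Expect.Call(user.IsAdministrator).Return(true);

            mocks.ReplayAll();   // switch from record mode to replay mode

            // ...exercise the code under test with 'user' here...
            Assert.IsTrue(user.IsAdministrator);

            mocks.VerifyAll();   // fails the test if an expected call never happened
        }
    }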
How can a class implement two interfaces with the same default method in Java 8?
I was not able to understand how the same default method from different interfaces gets inherited in the subclass. Please explain the issue.
interface House {
default String getAddress() {
return "101 Main Str";
}
}
interface Bungalow extends House {
default String getAddress() {
return "101 Smart Str";
}
}
class MyHouse implements Bungalow, House {
}
public class TestClass {
public static void main(String[] args) {
House ci = new MyHouse(); //1
System.out.println(ci.getAddress()); //2
}
}
In the above code, the default method getAddress() is present in interface House, and another method with the same name is declared as default in the extending interface Bungalow.
How can class MyHouse implement both interfaces without any compilation error? It doesn't know which method takes preference, so the implementation should fail.
If I call new MyHouse().getAddress(); it gives a compile error, but it should give a compilation error even without the method being called from the MyHouse class.
It seems that the answer is here, where there is a different example, but it sort of makes sense and is really close to yours.
Ask me the exact same thing in half a year and I'll say it will fail at compile time, then point me to this answer so that I can read the JLS again. I guess this is how they decided to implement it. Without thinking too much, I personally (I may be wrong) think that this is at least counter-intuitive...
I am using MVC3, Entity Framework v4.3 Code First, and SimpleInjector. I have several simple classes that look like this:
public class SomeThing
{
public int Id { get; set; }
public string Name { get; set; }
}
I have another entity that looks like this:
public class MainClass
{
public int Id { get; set; }
public string Name { get; set; }
public virtual AThing AThingy { get; set; }
public virtual BThing BThingy { get; set; }
public virtual CThing CThingy { get; set; }
public virtual DThing DThingy { get; set; }
public virtual EThing EThingy { get; set; }
}
Each Thingy (currently) has its own Manager class, like so:
public class SomeThingManager
{
private readonly IMyRepository<SomeThing> MyRepository;
public SomeThingManager(IMyRepository<SomeThing> myRepository)
{
MyRepository = myRepository;
}
}
My MainController consequently follows:
public class MainController
{
private readonly IMainManager MainManager;
private readonly IAThingManager AThingManager;
private readonly IBThingManager BThingManager;
private readonly ICThingManager CThingManager;
private readonly IDThingManager DThingManager;
private readonly IEThingManager EThingManager;
public MainController(IMainManager mainManager, IAThingManager aThingManager, IBThingManager bThingManager, ICThingManager cThingManager, IDThingManager dThingManager, IEThingManager eThingManager)
{
MainManager = mainManager;
AThingManager = aThingManager;
BThingManager = bThingManager;
CThingManager = cThingManager;
DThingManager = dThingManager;
EThingManager = eThingManager;
}
...various ActionMethods...
}
In reality, there are twice as many injected dependencies in this controller. It smells. The smell is worse when you also know that there is an OtherController with all or most of the same dependencies. I want to refactor it.
I already know enough about DI to know that property injection and service locator are not good ideas.
I can not split my MainController, because it is a single screen that requires all these things be displayed and editable with the click of a single Save button. In other words, a single post action method saves everything (though I'm open to changing that if it makes sense, as long as it's still a single Save button). This screen is built with Knockoutjs and saves with Ajax posts if that makes a difference.
I've considered using an Ambient Context, but I'm not positive it's the right way to go.
I've also considered injecting a Facade.
I'm also wondering if I should implement a Command architecture at this point.
(Don't all of the above just move the smell somewhere else?)
Lastly, and perhaps independent of the three above approaches: should I instead have a single, say, LookupManager with explicit methods like GetAThings(), GetAThing(id), GetBThings(), GetBThing(id), and so on? (But then that LookupManager would need several repositories injected into it, or a new type of repository.)
My musings aside, my question is, to reiterate: what's a good way to refactor this code to reduce the crazy number of injected dependencies?
Using a command architecture is a good idea, since it moves all business logic out of the controller and allows you to add cross-cutting concerns without changing the code. However, it will not fix your problem of constructor over-injection. The standard solution is to move related dependencies into an aggregate service. However, I do agree with Mark that you should take a look at the unit of work pattern.
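Purely as an illustration of the aggregate-service idea (the grouping and the ILookupServices name are hypothetical), the controller's constructor shrinks to two parameters while the real dependencies stay explicit one level down:

    // Hypothetical aggregate service grouping the per-Thing managers behind one dependency.
    public interface ILookupServices
    {
        IAThingManager AThings { get; }
        IBThingManager BThings { get; }
        ICThingManager CThings { get; }
        // ...and so on for the remaining managers...
    }

    public class LookupServices : ILookupServices
    {
        public LookupServices(IAThingManager aThings, IBThingManager bThings, ICThingManager cThings)
        {
            AThings = aThings;
            BThings = bThings;
            CThings = cThings;
        }

        public IAThingManager AThings { get; private set; }
        public IBThingManager BThings { get; private set; }
        public ICThingManager CThings { get; private set; }
    }

    public class MainController
    {
        private readonly IMainManager MainManager;
        private readonly ILookupServices Lookups;

        public MainController(IMainManager mainManager, ILookupServices lookups)
        {
            MainManager = mainManager;
            Lookups = lookups;
        }
    }

Note that this only tidies the constructor; the underlying coupling is unchanged, which is why the unit of work suggestion below is worth a look as well.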
Have you considered using a unit of work design pattern? There is a great MSDN post on what a unit of work is. An excerpt from that article:
In a way, you can think of the Unit of Work as a place to dump all transaction-handling code. The responsibilities of the Unit of Work are to:
- Manage transactions.
- Order the database inserts, deletes, and updates.
- Prevent duplicate updates. Inside a single usage of a Unit of Work object, different parts of the code may mark the same Invoice object as changed, but the Unit of Work class will only issue a single UPDATE command to the database.
The value of using a Unit of Work pattern is to free the rest of your code from these concerns so that you can otherwise concentrate on business logic.
There are several blog posts about this, but the best one I've found on how to implement it is here. There are some other ones which have been referred to from this site here, and here.
Lastly, and perhaps independent of the three above approaches: should I instead have a single, say, LookupManager with explicit methods like GetAThings(), GetAThing(id), GetBThings(), GetBThing(id), and so on? (But then that LookupManager would need several repositories injected into it, or a new type of repository.)
The unit of work would be able to handle all of these, especially if you're able to implement a generic repository for most of your database handling needs. Your tag mentions you're using Entity Framework 4.3, right?
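To make that concrete, here is a rough, hypothetical sketch of a unit of work sitting over an EF 4.3 DbContext; the interface shape and the names (IUnitOfWork, EfUnitOfWork, Query, Add, Commit) are illustrative rather than prescriptive:

    using System;
    using System.Data.Entity;
    using System.Linq;

    public interface IUnitOfWork : IDisposable
    {
        IQueryable<T> Query<T>() where T : class;
        void Add<T>(T entity) where T : class;
        void Commit();
    }

    public class EfUnitOfWork : IUnitOfWork
    {
        private readonly DbContext _context;

        public EfUnitOfWork(DbContext context)
        {
            _context = context;
        }

        public IQueryable<T> Query<T>() where T : class
        {
            return _context.Set<T>();
        }

        public void Add<T>(T entity) where T : class
        {
            _context.Set<T>().Add(entity);
        }

        public void Commit()
        {
            // One SaveChanges covers everything edited on the screen for the single Save click.
            _context.SaveChanges();
        }

        public void Dispose()
        {
            _context.Dispose();
        }
    }

A controller (or a single lookup service) could then take just the IUnitOfWork, query whichever sets it needs, and call Commit() once when the Save button posts.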
Hope this helps!
I think your main issue is too many layers of abstraction. You are using Entity Framework, so you already have a layer of abstraction around your data; adding two more layers (one per entity) via a Repository and a Manager interface has led to the large number of interfaces your controller depends upon. It doesn't add a whole lot of value, and besides, YAGNI.
I would refactor, getting rid of your repository and manager layers, and use an 'ambient context'.
Then, look at the kinds of queries your controller is asking of the manager layers. Where these are very simple, I see no problems querying your 'ambient context' directly in your controller - this is what I would do. Where they are more complicated, refactor this into a new interface, grouping things logically (not necessarily one per Entity) and use your IOC for this.
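As a hedged sketch of that direction (the Controller base, the injected DbContext, and the IThingQueries name are all illustrative), simple reads go straight through the context while only the genuinely complicated queries get their own interface:

    using System.Data.Entity;
    using System.Web.Mvc;

    // Hypothetical home for the queries that are too involved to inline in the controller.
    public interface IThingQueries
    {
        MainClass LoadMainWithThings(int id);
    }

    public class MainController : Controller
    {
        private readonly DbContext _db;            // the injected 'ambient context'
        private readonly IThingQueries _queries;   // only the complex queries hide behind an interface

        public MainController(DbContext db, IThingQueries queries)
        {
            _db = db;
            _queries = queries;
        }

        public ActionResult Edit(int id)
        {
            // Simple read: query the context directly rather than going through a manager layer.
            var main = _db.Set<MainClass>().Find(id);
            return View(main);
        }

        public ActionResult Summary(int id)
        {
            // More involved query: pushed behind its own focused interface.
            return View(_queries.LoadMainWithThings(id));
        }
    }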
I have a unit test to check my AccountController.LogIn method. A RedirectResult is returned to indicate success; otherwise a ViewResult is returned.
The test always fails because the return type is always ViewResult, even though the test should succeed since the credentials are valid. I can't identify where the problem is.
My TestMethod:
CustomerRepositoryTest.cs
[TestMethod]
public void Can_Login_With_Valid_Credentials()
{
// Arrange
Mock<IAddressRepository> mockAddressRepository = new Mock<IAddressRepository>();
Mock<ICustomerRepository> mockCustomerRepository = new Mock<ICustomerRepository>();
Mock<IOrderRepository> mockOrderRepository = new Mock<IOrderRepository>();
LoginViewModel model = new LoginViewModel
{
Email = "me#5.com",
Password = "password"
};
AccountController target = new AccountController(mockCustomerRepository.Object, mockAddressRepository.Object, mockOrderRepository.Object);
// Act
ActionResult result = target.LogIn(model);
// Assert
Assert.IsInstanceOfType(result, typeof(RedirectResult));
Assert.AreEqual("", ((RedirectResult)result).Url);
}
When I run the test, it fails in my AccountController LogIn method when I call ValidateUser
AccountController.cs
if (Membership.ValidateUser(LoginModel.Email, LoginModel.Password))
{
...
return RedirectToRoute(new
{
controller = "Account",
action = "Details"
});
}
else
{
return View();
}
My custom MembershipProvider ValidateUser looks like this:
AccountMembershipProvider.cs
public class AccountMembershipProvider : MembershipProvider
{
[Inject]
public ICustomerRepository repository { get; set; }
public override bool ValidateUser(string username, string password)
{
var cust = repository.GetAllCustomers().SingleOrDefault..
When I run the application normally i.e. not testing, the login works fine. In the application I inject the CustomerRepository into the custom membership provider in Global.asax:
public class MvcApplication : System.Web.HttpApplication
{
private IKernel _kernel = new StandardKernel(new MyNinjectModules());
internal class MyNinjectModules : NinjectModule
{
public override void Load()
{
Bind<ICustomerRepository>().To<CustomerRepository>();
}
}
protected void Application_Start()
{
_kernel.Inject(Membership.Provider);
...
Is it the case that the Global.asax code isn't run while unit testing? and so my custom provider isn't being injected, hence the fail?
UPDATE
I mocked my Provider class and passed the mocked CustomerRepository object to it.
Mock<AccountMembershipProvider> provider = new Mock<AccountMembershipProvider>();
provider.Object.repository = mockCustomerRepository.Object;
I then created a setup for the method I'm trying to test:
mockCustomerRepository.Setup(m => m.IsValidLogin("me#5.com", "password")).Returns(true);
But unfortunately I'm still getting a fail every time. To answer the question about whether I need a real or mocked object for the test - I'm not fussy, I just want to get it working at the moment!
UPDATE 2
I made those changes, and while it's still failing, it has allowed me to identify the specific problem. While debugging the test, I discovered that when I call the overridden
Membership.ValidateUser(LoginModel.Email, LoginModel.Password)
The Membership.Provider is of type SqlMembershipProvider (which is presumably the default type) and consequently validation fails.
If I cast the provider to my custom provider...
((AccountMembershipProvider)Membership.Provider).ValidateUser(LoginModel.Email, LoginModel.Password)
I get an InvalidCastException when running the test. So it seems that my mocked AccountMembershipProvider isn't being used for the test and instead the default provider is being used.
I think you have identified this already in the comment:
// set your mock provider in your AccountController
However I'm not sure what you mean exactly - I don't have a property on my AccountController to assign the provider to, and I'm not injecting it into the constructor.
Your original question:
"Is it the case that the Global.asax code isn't run while unit testing? and so my custom provider isn't being injected, hence the fail?"
My answer:
Yes.
The global.asax file is used by ASP.NET and IIS at run-time. It is compiled when the server receives its first request.
When you are testing, you aren't in the context of an ASP.NET web application running in IIS, but rather in a program running in a test session. This means your global.asax won't get called.
More importantly, when you call:
Mock<ICustomerRepository> mockCustomerRepository = new Mock<ICustomerRepository>();
Ninject won't get called to fill the import. Moq will create a mock based on the interface. No real object will be created.
Since you did not define any Setup methods, i.e.:
mockCustomerRepository.Setup(mock => mock.MyAuthenticateMethod()).Returns(true);
You are passing around a mock with no defined behaviour. By default its members will return false, which probably explains why you are always getting a ViewResult.
What you need to do is define setup methods for the methods you need to mock.
These are the methods of CustomerRepository that will get called when you call:
target.LogIn(model);
Also note that your AccountMembershipProvider won't get its CustomerRepository injected since Ninject won't be used. If you are testing the AccountController and it's not static (Moq doesn't work with statics), you should consider mocking the AccountMembershipProvider. If you can't, then you would need to supply your mocked instance of CustomerRepository to AccountMembershipProvider.repository in your tests.
Another solution: instead of creating a Moq mock, you could manually create (with new) a real instance of CustomerRepository in your test.
You could have Ninject do it, but at this point why? You can create it yourself and you know what specific type to create.
It all boils down to if you need a mock or a real instance of the object for this test.
Update:
If your provider is mocked, there is no need to set the repository on it. When you are calling a mock, the real object won't get called.
What you need to do is something like this:
Mock<AccountMembershipProvider> provider = new Mock<AccountMembershipProvider>();
// set your mock provider in your AccountController
provider.Setup(m => m.ValidateUser("me#5.com", "password")).Returns(true);
Update 2:
I think you are pretty close to making your test work. Without seeing all of your code (test and class under test) I can't really give you any more help. I also feel I answered your original question, but if you are still stuck you might get more help by asking a new question relating to the problem you are currently tackling.
In your code you only create the controller, but you never run the initialization for membership. I recommend creating your own UserService with a ValidateUser method (and whatever else you need) instead of using the static Membership class.
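A hedged sketch of that suggestion (IUserService is a made-up name; IsValidLogin is the repository method already used in the test setup above):

    public interface IUserService
    {
        bool ValidateUser(string email, string password);
    }

    public class UserService : IUserService
    {
        private readonly ICustomerRepository _customers;

        public UserService(ICustomerRepository customers)
        {
            _customers = customers;
        }

        public bool ValidateUser(string email, string password)
        {
            return _customers.IsValidLogin(email, password);
        }
    }

The AccountController would then take an IUserService through its constructor, and the unit test could hand it a Moq fake with ValidateUser set up to return true, with no Membership.Provider or Global.asax involvement at all.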
What's a good way to validate a model when information external to the model is required in order for the validation to take place? For example, consider the following model:
public class Rating {
public string Comment { get; set; }
public int RatingLevel { get; set; }
}
The system administrator can then set the RatingLevels for which a comment is required. These settings are available through a settings service.
So, in order to fully validate the model I need information external to it, in this case the settings service.
I've considered the following so far:
Inject the service into the model. The DefaultModelBinder uses System.Activator to create the object so it doesn't go through the normal dependency resolver and I can't inject the service into the model without creating a new model binder (besides which, that doesn't feel like the correct way to go about it).
Inject the service into an annotation. I'm not yet sure this is possible but will investigate further soon. It still feels clumsy.
Use a custom model binder. Apparently I can implement OnPropertyValidating to do custom property validation. This seems the most preferable so far though I'm not yet sure how to do it.
Which method, above or not, is best suited to this type of validation problem?
Option 1 doesn't fit. The only way it would work would be to pull in the dependency via the service locator anti-pattern.
Option 2: although I initially couldn't see how this was possible because of the constraints C# places on attributes, it is possible. See the following for references:
Resolving IoC Container Services for Validation Attributes in ASP.NET MVC
NInjectDataAnnotationsModelValidatorProvider
Option 3: I didn't know about this earlier, but what appears to be a very powerful way to write validators is to use the ModelValidator class and a corresponding ModelValidatorProvider.
First, you create your custom ModelValidatorProvider:
public class CustomModelValidatorProvider : ModelValidatorProvider
{
public CustomModelValidatorProvider(/* Your dependencies */) {}
public override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata, ControllerContext context)
{
if (metadata.ModelType == typeof(YourModel))
{
yield return new YourModelValidator(...);
}
}
}
ASP.NET MVC's IDependencyResolver will attempt to resolve the above provider, so as long as it's registered with your IoC container you won't need to do anything else. And then the ModelValidator:
public class EntryRatingViewModelValidatorMvcAdapter : ModelValidator
{
public EntryRatingViewModelValidatorMvcAdapter(
ModelMetadata argMetadata,
ControllerContext argContext)
: base(argMetadata, argContext)
{
// any dependencies the validation needs (e.g. the settings service) would be
// passed in here by the provider's GetValidators call
}
public override IEnumerable<ModelValidationResult> Validate(object container)
{
if (/* error condition */)
{
yield return new ModelValidationResult
{
MemberName = "Model.Member",
Message = "Rating is required."
};
}
}
}
As the provider is retrieved through the IDependencyResolver and the provider has full control over the returned ModelValidators I was easily able to inject the dependencies and perform necessary validation.
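For completeness, one possible Ninject registration for this, assuming a Ninject-backed IDependencyResolver is already set for MVC (the module name is illustrative):

    using Ninject.Modules;
    using System.Web.Mvc;

    public class ValidationModule : NinjectModule
    {
        public override void Load()
        {
            // MVC resolves ModelValidatorProvider instances through the dependency resolver,
            // so binding the custom provider is enough for it to be picked up.
            Bind<ModelValidatorProvider>().To<CustomModelValidatorProvider>();
        }
    }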
You could try FluentValidation. It supports ASP.NET MVC and DI, so you can inject external services into your validators.
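A hedged sketch of what that could look like; ISettingsService and IsCommentRequiredFor are stand-ins for the settings service from the question, not real FluentValidation types:

    using FluentValidation;

    // Stand-in for the external settings service.
    public interface ISettingsService
    {
        bool IsCommentRequiredFor(int ratingLevel);
    }

    public class RatingValidator : AbstractValidator<Rating>
    {
        public RatingValidator(ISettingsService settings)
        {
            // Comment becomes required only for the rating levels the administrator configured.
            RuleFor(r => r.Comment)
                .NotEmpty()
                .WithMessage("A comment is required for this rating level.")
                .When(r => settings.IsCommentRequiredFor(r.RatingLevel));
        }
    }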
Assuming that you want both client- and server-side validation of the model based upon the values returned from the service, I would opt for option 2: inject the service into an annotation.
I give some sample code in my response to this question about adding validators to a model. The only additional step in your case is that you will need to inject your service into your class inheriting from DataAnnotationsModelValidatorProvider.
What about simply using IValidatableObject and, in that method, determining whether validation is appropriate and setting the errors there?
How do I use IValidatableObject?
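For reference, a rough sketch of that approach applied to the Rating model from the question; note that resolving the settings service inside Validate amounts to service location, which is the trade-off already discussed, and ISettingsService is the same hypothetical stand-in as above:

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc;

    public class Rating : IValidatableObject
    {
        public string Comment { get; set; }
        public int RatingLevel { get; set; }

        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            // Service-located settings dependency (hypothetical ISettingsService).
            var settings = DependencyResolver.Current.GetService<ISettingsService>();

            if (settings.IsCommentRequiredFor(RatingLevel) && string.IsNullOrWhiteSpace(Comment))
            {
                yield return new ValidationResult(
                    "A comment is required for this rating level.",
                    new[] { "Comment" });
            }
        }
    }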
Using a DI container (in this case, Ninject), is it possible - or rather, wise - to cache a frequently used object for the entire application lifetime (or at least until it is refreshed)?
To cite an example, say I have a Template. There are many Template objects, but each user will inherit at least the lowest-level one. This is immutable and will never change without updating everything that connects to it (so it will only change on administrative demand, never based on user input). It seems foolish to keep querying the database over and over for information I know has not changed.
Would caching this be best done in my IoC container, or should I outsource it to something else?
I already store ISessionFactory (nHibernate) as a Singleton. But that's a little bit different because it doesn't include a query to the database, just the back-end to open and close ISession objects to it.
So basically I would do something like this:
static class Immutable
{
[Inject]
public static IRepository<Template> TemplateRepository { get; set; }
public static ITemplate Template { get; set; }
public static ITemplate Initialize()
{
if (Immutable.Template == null)
{
Immutable.Template = TemplateRepository.Retrieve(1); // obviously better logic here.
}
return Immutable.Template;
}
}
class TemplateModule : NinjectModule
{
public override void Load()
{
Bind<ITemplate>().ToMethod(ctx => Immutable.Initialize()).InSingletonScope();
}
}
Is this a poor approach? And if so, can anyone recommend a more intelligent one?
I'd generally avoid staticness and null-checking in your code - create normal classes without singleton wiring by default and layer that aspect on top via the container. Ditto, remove the reliance on property injection - ctor injection is always better unless you have no choice.
i.e.:
class TemplateManager
{
readonly IRepository<Template> _templateRepository;
public TemplateManager(IRepository<Template> templateRepository)
{
_templateRepository = templateRepository;
}
public ITemplate LoadRoot()
{
return _templateRepository.Retrieve(1); // obviously better logic here.
}
}
class TemplateModule : NinjectModule
{
public override void Load()
{
Bind<ITemplate>().ToMethod(ctx => ctx.Kernel.Get<TemplateManager>().LoadRoot()).InSingletonScope();
}
}
And then I'd question whether TemplateManager should become a Ninject provider or be inlined.
As for the actual question... The big question is, how and when do you want to control clearing the cache to force reloading if you decided that the caching should be at session level, not app level due to authorization influences on the template tree? In general, I'd say that should be the Concern of an actual class rather than bound into your DI wiring or hardwired into whether a class is a static class or is a Singleton (as in the design pattern, not the ninject Scope).
My tendency would be to have a TemplateManager class with no static methods, and make that a singleton class in the container. However, to get the root template, consumers should get the TemplateManager injected (via ctor injection) and then call _templateManager.FetchRootTemplate() to get the template.
That way, you can:
not have a reliance on fancy ninject providers and/or tie yourself to your container
have no singleton cruft or static methods
have simple caching logic in the TemplateManager
vary the Scoping of the manager without changing all the client code
have it clear that getting the template may or may not be a simple get operation
i.e, I'd manage it like so:
class TemplateManager
{
readonly IRepository<Template> _templateRepository;
public TemplateManager(IRepository<Template> templateRepository)
{
_templateRepository = templateRepository;
}
ITemplate _cachedRootTemplate;
public ITemplate FetchRootTemplate()
{
if (_cachedRootTemplate == null)
_cachedRootTemplate = LoadRoot();
return _cachedRootTemplate;
}
ITemplate LoadRoot()
{
return _templateRepository.Retrieve(1); // obviously better logic here.
}
}
register it like so:
class TemplateModule : NinjectModule
{
public override void Load()
{
Bind<TemplateManager>().ToSelf().InSingletonScope();
}
}
and then consume it like so:
class TemplateConsumer
{
readonly TemplateManager _templateManager;
public TemplateConsumer(TemplateManager templateManager)
{
_templateManager = templateManager;
}
void DoStuff()
{
var rootTemplate = _templateManager.FetchRootTemplate();
// ... use rootTemplate ...
}
}
Wild speculation: I'd also consider not having a separate IRepository resolvable in the container (and presumably tied into all sorts of units of work). Instead, I'd have the TemplateRepository be a longer-lived thing not coupled to an ORM layer and Unit of Work. In other words, having a repository and a Manager, neither of which does anything well defined on its own, isn't a good sign - the repository should not just be a Table Data Gateway; it should be able to be the place where an Aggregate Root such as Templates gets cached and collated together. But I'd have to know lots more about your code base before slinging out stuff like that without context!