So I know that unit testing is a must. I get the idea that TDD is the way to go when adding new modules. Even if, in practice, I don't actually do it. A bit like commenting code, really.
The real thing is, I'm struggling to get my head around how to unit-test the UI and more generally objects that generate events: user controls, asynchronous database operations, etc.
So much of my code relates to UI events that I can't quite see how to even start the unit testing.
There must be some primers and starter docs out there? Some hints and tips?
I'm generally working in C# (2.0 and 3.5) but I'm not sure that this is strictly relevant to the question.
The thing to remember is that unit testing is about testing the units of code you write. Your unit tests shouldn't verify that clicking a button raises an event, but that the code executed by that click does what it's supposed to.
What you really want to do is test that the underlying code does what it should, so that your UI layers can call it with confidence.
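For example (a minimal sketch; DiscountCalculator and the surrounding names are invented for illustration, and the test uses MSTest attributes): keep the click handler trivial and put the real work in a class you can test directly.

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // The logic the click handler delegates to, kept out of the form class.
    public class DiscountCalculator
    {
        public decimal Apply(decimal total, int customerYears)
        {
            if (total < 0) throw new ArgumentOutOfRangeException("total");
            return customerYears >= 5 ? total * 0.9m : total;   // 10% off for long-time customers
        }
    }

    [TestClass]
    public class DiscountCalculatorTests
    {
        [TestMethod]
        public void LongTimeCustomer_GetsTenPercentOff()
        {
            var calculator = new DiscountCalculator();
            Assert.AreEqual(90m, calculator.Apply(100m, 6));
        }
    }

    // In the form, the click handler stays a one-liner that nobody needs to unit-test:
    //     private void applyButton_Click(object sender, EventArgs e)
    //     {
    //         totalLabel.Text = _calculator.Apply(_order.Total, _customer.Years).ToString();
    //     }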
Read this if you're struggling with UI Testing
Manually test the UI where the benefit-to-cost ratio of automating it is minimal. Test everything under the UI skin ruthlessly. Use Humble Dialog, MVC, or variants to keep logic and UI distinct and loosely coupled.
You should separate logic and presentation. Using the MVP (Model-View-Presenter) or MVC (Model-View-Controller) pattern, you can unit test your logic without relying on UI events.
You can also use the White framework to simulate user input.
I would highly recommend visiting Microsoft's Patterns & Practices developer center; in particular, take a look at the Composite Application Block and Prism. You can find a lot of information there on test-driven design.
The parts of your application that talk to the outside world (i.e. the UI, the database, etc.) are always a problem when unit testing. The way around this is not to test those layers directly but to make them as thin as possible. For the UI you can use a humble dialog or a view that doesn't do anything worth testing, and put all the logic in a controller or presenter class. You can then use a mocking framework, or write your own mock objects, to make fake versions of the views and test the logic in the presenters or controllers. On the database side you can do something similar.
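A minimal sketch of that in C# (all names here are invented for illustration): the view interface is dumb, the presenter owns the logic, and the test uses a hand-rolled fake view instead of a real form.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // The "humble" view: no logic, just properties and simple operations.
    public interface ICustomerView
    {
        string CustomerName { get; set; }
        void ShowError(string message);
    }

    // All the interesting behaviour lives in the presenter.
    public class CustomerPresenter
    {
        private readonly ICustomerView _view;

        public CustomerPresenter(ICustomerView view) { _view = view; }

        public void Display(string name)
        {
            if (string.IsNullOrEmpty(name))
            {
                _view.ShowError("No customer selected");
                return;
            }
            _view.CustomerName = name.Trim();
        }
    }

    // A fake view records what the presenter did, so tests never touch WinForms/WPF.
    public class FakeCustomerView : ICustomerView
    {
        public string CustomerName { get; set; }
        public string LastError;
        public void ShowError(string message) { LastError = message; }
    }

    [TestClass]
    public class CustomerPresenterTests
    {
        [TestMethod]
        public void Display_WithEmptyName_ShowsError()
        {
            var view = new FakeCustomerView();
            new CustomerPresenter(view).Display("");
            Assert.AreEqual("No customer selected", view.LastError);
        }
    }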
Testing events is not impossible. You can, for example, subscribe an anonymous method to the event that throws an exception when the event is raised, or that counts the number of times the event is raised.
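For instance, in C# (StockTicker and PriceChanged are made-up names for the sketch), the test can subscribe an anonymous method that just counts:

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public class StockTicker
    {
        public event EventHandler PriceChanged;

        public void Update(decimal newPrice)
        {
            EventHandler handler = PriceChanged;
            if (handler != null) handler(this, EventArgs.Empty);   // raise the event
        }
    }

    [TestClass]
    public class StockTickerTests
    {
        [TestMethod]
        public void Update_RaisesPriceChangedExactlyOnce()
        {
            var ticker = new StockTicker();
            int raised = 0;
            ticker.PriceChanged += delegate { raised++; };   // anonymous handler just counts

            ticker.Update(101.5m);

            Assert.AreEqual(1, raised);
        }
    }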
I have a Ruby program that uses a webdriver (Watir) to walk a page and perform tests alongside a BDD suite called RSpec.
I'm trying to optimize it for a slow server by improving its ability to navigate efficiently. Thus far it has been creating a new browser session for each test package, then closing it afterwards. This is very inefficient because it hits the login page again for every instance.
Of course, I don't want to hard-code navigation instructions into the tests because adding new spec files may change the order they are executed in, and not every page of the webapp has the main navigation bar, so navigation may need to change based on the page the last spec left the browser on.
I need some kind of master library or module that will take what page the program is at and what page it wants to go to, then bring the browser to that page so it can begin testing. What is the best way to do this?
I'm not fantastically experienced, so I'd love input from more seasoned developers. Should I have each page be a class? Should I just stick with closing browsers after each test package? Should I manually code brute-force methods (gotoPage1FromPage2)?
Okay, that last one was a joke. Seriously though, what is the best way to do this?
You are exactly correct about the difficulties of maintaining state in your tests. Shutting down the browser between tests is the best way to make sure you always know the state of the browser at the start of each test. Saucelabs goes so far as to spin up a new virtual machine for each of the tests they run. Ideally you decrease test time by running multiple tests in parallel.
I'm not certain I know what you mean by "test package" or how many times that means you are starting a new browser and logging in, but... another thing to consider investigating is whether you can set a cookie or use OAuth to log in without having to use the navigation. I've worked at places that allowed admin logins for their staging environments by passing a parameter in a URL.
Your tests should be clear in their intention, which typically means your Page Object implementation does not know about what comes before or after the actions you are taking. You should be able to look at the RSpec code and reproduce exactly what it is testing. Abstracting methods for taking you from one place to another magically in the background is not a good idea.
Best practice used to be having methods from one Page Object return new Page Objects. So users could write methods like this in their tests: LoginPage.new.login.view_account.edit_address. Many of us have been bitten by this approach. Plus it isn't as easy to read as doing something like this:
    LoginPage.new.login
    HomePage.new.view_account
    AccountPage.new.edit_address
This doesn't prevent you from using #visit methods as needed to navigate between Page Objects.
I realize this is a duplicate of about 20 different posts, but none of them are specific to MVC4, and none that I've seen really answer all of my questions. So far my first foray into the world of TDD has been frustrating to say the least. Most of what I've tried to do seems incompatible with MVC 4 or next to impossible without using poorly documented third party libraries I don't quite understand yet.
What I want to be able to do is write tests that will cover my controller actions, the model they're passing, and the view the action is sending the model to. I want to test whether the view exists, I want to test whether the model being passed is the right type for the view, and I'd like some way to test that it will be processed properly. I also want to be able to test my routes. And how do I test authentication filters?
I want a way to unit test ASP.Net MVC that will leave very little to chance.
Testing the Model output of an Action seems easy enough, but testing the views has been next to impossible.
So here's my list of questions:
1. Once I test the action and get the action result, how do I test to see if the view it wants exists?
2. How do I test my routes?
3. How can I test to be sure my views are being processed properly?
4. What is really "best practice" for THOROUGH unit testing of ASP.Net MVC 4?
5. How do I unit test forms authentication?
6. How do I unit test Action Filters?
I'd prefer to use the built in Visual Studio test projects, but if I must use NUnit, I must. I just need to make sure it gets done properly.
Thank you in advance for your responses.
EDIT: I also couldn't get NUnit working with my MVC 4 app because of an incompatibility with the version of .NET one of the assemblies was compiled against.
1. Making sure a view exists
2. Testing routes: http://haacked.com/archive/2007/12/17/testing-routes-in-asp.net-mvc.aspx/ (see the sketch below)
3. Unit test your MVC views using Razor: http://blog.davidebbo.com/2011/06/unit-test-your-mvc-views-using-razor.html
4. See below.
5. How can I unit test my ASP.NET MVC controller that uses FormsAuthentication?
6. How-to test action filters in ASP.NET MVC?
No. 4: This is a hard question. How does one test anything thoroughly? Personally, I don't really test the views, other than with the three major browsers and my two eyes, as it's hard to test a website and all its components without actually using it. You have JavaScript firing, CSS styling, and it looks different across different browsers. So, to me, testing the view that thoroughly is a minor part of the overall usability of your site. If you're developing a simple table-based report of financial data, test that data hard. If your view is the base for a fancy Ajax site, maybe don't test the HTML so much as the experience. I know it's not an easy, cut-and-dried answer, but the acceptable level of coverage always involves trade-offs.
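To make question 2 concrete: the Haack article linked above boils down to registering your routes into a fresh RouteCollection and resolving a faked request against it. Here's a rough sketch using Moq and MSTest (RouteConfig.RegisterRoutes is assumed to be the standard registration method from an MVC 4 project template; adjust the names to your app):

    using System.Web;
    using System.Web.Routing;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Moq;

    [TestClass]
    public class RouteTests
    {
        [TestMethod]
        public void HomeIndexUrl_MapsToHomeControllerIndexAction()
        {
            var routes = new RouteCollection();
            RouteConfig.RegisterRoutes(routes);              // the application's own registrations

            var httpContext = new Mock<HttpContextBase>();
            httpContext.Setup(c => c.Request.AppRelativeCurrentExecutionFilePath)
                       .Returns("~/Home/Index");

            RouteData routeData = routes.GetRouteData(httpContext.Object);

            Assert.IsNotNull(routeData);
            Assert.AreEqual("Home", routeData.Values["controller"]);
            Assert.AreEqual("Index", routeData.Values["action"]);
        }
    }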
What are some lesser known tips for implementing a loosely-coupled MVC structure in a non-trivial desktop application (e.g. having at least two levels of views/controllers and more than one model)?
Use interfaces.
A lot. I like using the "IDoThisForYou" style (even in a language where this isn't idiomatic) because an interface represents a role that another class can use.
Make the controllers responsible for controlling interaction
The controllers control interaction between domain objects, services, etc.
Use events to pass information between controllers
Let every controller that needs the information subscribe to the event. Use an interface.
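A small C# illustration (all names invented): the producing controller is exposed only through an interface carrying the event, so subscribers never depend on the concrete class.

    using System;

    public class CustomerSelectedEventArgs : EventArgs
    {
        public readonly int CustomerId;
        public CustomerSelectedEventArgs(int customerId) { CustomerId = customerId; }
    }

    // The role a controller plays for anyone who cares about customer selection.
    public interface ICustomerSelectionSource
    {
        event EventHandler<CustomerSelectedEventArgs> CustomerSelected;
    }

    // Another controller subscribes via the interface only.
    public class OrderListController
    {
        public OrderListController(ICustomerSelectionSource selectionSource)
        {
            selectionSource.CustomerSelected += OnCustomerSelected;
        }

        private void OnCustomerSelected(object sender, CustomerSelectedEventArgs e)
        {
            // load and present the orders for e.CustomerId
        }
    }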
Don't put presentation information on your domain objects
Instead, let the controller create a presenter or view model that has the information you need. That includes not adding display-oriented ToString() overrides. If you're in a language without multiple inheritance you might end up with a bit of duplication between presenters. That's OK: duplication is better than coupling, and the UI changes a lot anyway.
Don't put logic in your GUI
Instead, let the controller create a presenter or view model which has the information you need. That includes avoiding train wrecks like "MyAnimal.Species.Name": make the view model present "SpeciesName" instead.
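A tiny sketch of what that flattening looks like (Animal, Species, and AnimalViewModel are invented names):

    // Domain objects stay free of presentation concerns.
    public class Species { public string Name; }
    public class Animal  { public Species Species; }

    // The controller builds a flat view model, so the GUI never dereferences the chain.
    public class AnimalViewModel
    {
        public string SpeciesName;
    }

    public class AnimalController
    {
        public AnimalViewModel Present(Animal animal)
        {
            return new AnimalViewModel { SpeciesName = animal.Species.Name };
        }
    }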
Test it
Manually. There is no substitute. Unit and acceptance testing go a long way, but there's nothing like bringing the app up and actually using the mess you wrote to find out how messy it is. Don't pass it to the QAs without having a go yourself.
Oh, and don't mock out domain objects in unit tests. It's not worth it. Use a builder.
Declare event handlers in your interfaces (important for the views). This way you can loosely couple event handling, which is managed by the controller. You may need to use InvokeRequired when working with the view if your application is multi-threaded.
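A rough WinForms-flavoured sketch of that idea (all names invented): the event lives on the view interface so the controller can wire it up without knowing the concrete form, and the form guards cross-thread updates with InvokeRequired.

    using System;
    using System.Windows.Forms;

    public interface IStatusView
    {
        event EventHandler RefreshRequested;   // the controller subscribes to this
        void ShowStatus(string text);
    }

    public class StatusForm : Form, IStatusView
    {
        public event EventHandler RefreshRequested;

        private readonly Button _refreshButton = new Button();
        private readonly Label _statusLabel = new Label();

        public StatusForm()
        {
            Controls.Add(_refreshButton);
            Controls.Add(_statusLabel);
            _refreshButton.Click += delegate
            {
                EventHandler handler = RefreshRequested;
                if (handler != null) handler(this, EventArgs.Empty);
            };
        }

        public void ShowStatus(string text)
        {
            if (InvokeRequired)   // e.g. the controller finished an asynchronous database call
            {
                Invoke(new MethodInvoker(delegate { _statusLabel.Text = text; }));
                return;
            }
            _statusLabel.Text = text;
        }
    }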
When you start a new web application, which pattern are you choosing between MVC and MVP and why?
(This answer is specific to web applications. For regular GUIs, see What are MVP and MVC and what is the difference?.)
Traditional MVC for GUI applications
This isn't really relevant to web applications, but here's how MVC traditionally worked in GUI applications:
The model contained the business objects.
The controller responded to UI interactions, and forwarded them to the model.
The view "subscribed" to the model, and updated itself whenever the model changed.
With this approach, you can have (1) multiple ways to update a given piece of data, and (2) multiple ways to view the same data. But you don't have to let every controller know about every view, or vice versa—everybody can just talk to the model.
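A minimal C# sketch of that arrangement (CounterModel and friends are invented names): the controller only pushes changes into the model, and the view keeps itself up to date by listening to the model.

    using System;

    public class CounterModel
    {
        private int _value;
        public event EventHandler Changed;

        public int Value
        {
            get { return _value; }
            set
            {
                _value = value;
                EventHandler handler = Changed;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }
    }

    public class CounterController
    {
        private readonly CounterModel _model;
        public CounterController(CounterModel model) { _model = model; }
        public void Increment() { _model.Value++; }   // UI interaction forwarded to the model
    }

    public class CounterLabelView
    {
        public string Text;
        public CounterLabelView(CounterModel model)
        {
            model.Changed += delegate { Text = model.Value.ToString(); };   // view updates itself
        }
    }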
MVC on the server
Rails, Django and other server-side frameworks all tend to use a particular version of MVC.
The model provides approximately 1 class per database table, and contains most of the business logic.
The view contains the actual HTML for the site, and as little code as possible. Basically, it's just templates.
The controller responds to HTTP requests, processes parameters, looks up model objects, and passes values to the view.
This seems to work very well for server-based web applications, and I've been very happy with it.
MVP on the client
However, if most of your code is written in JavaScript and runs in the web browser, you'll find lots of people using MVP these days. In this case, the roles are a bit different:
The model still contains all the basic entities of your business domain.
The view is a layer of fairly dumb widgets with little logic.
The presenter installs event handlers on the view widgets, responds to events, and updates the model. In the other direction, the presenter listens for changes to the model and, when those changes occur, updates the view widgets. So the presenter is a bidirectional pipeline between the model and the view, which never interact directly.
This pattern is popular because you can easily remove the view layer and write unit tests against the presenter and model. It's also much better suited to interactive applications where everything is updated constantly, as opposed to server applications where you deal with discrete requests and responses.
Here's some background reading:
Martin Fowler's encyclopedic summary of MVC, MVP and related approaches. There's a lot of good history here.
Martin Fowler's description of "Passive View", a variation of MVP.
Google's MVP + event bus
This is a new approach, described in this video from the Google AdWords team. It's designed to work well with caching, offline HTML 5 applications, and sophisticated client-side toolkits like GWT. It's based on the following observations:
Anything might need to happen asynchronously, so design everything to be asynchronous from the very beginning.
Testing browser-based views is much slower than testing models and presenters.
Your real model data lives on the server, but you may have a local cache or an offline HTML 5 database.
In this approach:
The view is very dumb, and you can replace it with mock objects when running unit tests.
The model objects are just simple containers for data, with no real logic. You may have multiple model objects representing the same entity.
The presenter listens to events from the view. Whenever it needs to update or read from the model, it sends an asynchronous message to the server (or to a local caching service). The server responds by sending events to the "event bus". These events contain copies of the model objects. The event bus passes these events back to the various presenters, which update the attached views.
So this architecture is inherently asynchronous, it's easy to test, and it doesn't require major changes if you want to write an HTML 5 offline application. I haven't used it yet, but it's next on my list of things to try. :-)
Both MVP and MVC make sense and allow you to separate logic from display.
I would choose MVC because it's widely used in web development these days (Rails, .NET MVC, which is used for SO), so my application will be more easily maintainable by someone else. It is also, to me, cleaner (less "power" given to the view), but this is subjective.
Another alternative is MTV, Model-Template-View which Django uses.
I just started a new GWT project for a client and I'm interested in hearing people's experience with various GWT MVC architectures. On a recent project, I used both GXT MVC, as well as a custom messaging solution (based on Appcelerator's MQ). GXT MVC worked OK, but it seemed like overkill for GWT and was hard to make work with browser history. I've heard of PureMVC and GWTiger, but never used them. Our custom MQ solution worked pretty well, but made it difficult to test components with JUnit.
In addition, I've heard that Google Wave (a GWT application) is written using a Model-View-Presenter pattern. A sample MVP application was recently published, but looking at the code, it doesn't seem that intuitive.
If you were building a new GWT application, which architecture would you use? What are the pros and cons of your choice?
Thanks,
Matt
It's worth noting that Google has finally written a tutorial on designing with the MVP architecture. It clarifies a lot of the elements from the Google I/O talk listed above. Take a look: https://developers.google.com/web-toolkit/articles/mvp-architecture
I am glad this question has been asked, because GWT desperately needs a Rails-like way of structuring an application: a simple approach based on best practices that works for 90% of all use cases and enables super easy testability.
In the past years I have been using my own implementation of MVP, with a very passive view that enslaves itself to whatever the Presenter tells it to do.
My solution consisted of the following:
an interface per widget defining the methods to control the visual appearance
an implementing class that can be a Composite or use an external widget library
a central Presenter for a screen that hosts N views that are made up of M widgets
a central model per screen that holds the data associated with the current visual appearance
generic listener classes like SourcesAddEvents&lt;CustomerDTO&gt;, because otherwise you will have lots of near-identical interfaces that differ only by type
The views get a reference to the presenter as a constructor parameter, so they can wire up their events with the presenter. The presenter handles those events and notifies other widgets/views and/or makes GWT-RPC calls that, on success, put their results into the model. The model has a typical "Property&lt;List&lt;String&gt;&gt; names = ..." property-change-listener mechanism that is registered with the presenter, so that an update of the model by a GWT-RPC request reaches all views/widgets that are interested.
With this approach I have gotten very easy testability with EasyMock for my async interfaces. I also got the ability to easily exchange the implementation of a view/widget, because all I had to rewrite was the code that notified the presenter of some event, regardless of the underlying widget (button, link, etc.).
Problems with my approach:
My current implementation makes it hard to synchronize data values between the central models of different screens. Say you have a screen that displays a set of categories and another screen that lets you add/edit those items. Currently it is very hard to propagate those change events across the boundaries of the screens, because the values are cached in those models and it is hard to find out whether some things are dirty (this would have been easy in a traditional web-1.0, dumb-terminal HTML scenario with server-side declarative caching).
The constructor parameters of the views enable super easy testing, but without a solid dependency-injection framework you end up with some ugly factory/setup code inside onModuleLoad(). At the time I started this I was not aware of Google GIN, so when I refactor my app I will use that to get rid of this boilerplate. An interesting example here is the "HigherLower" game inside the GIN trunk.
I did not get History right the first time, so it is hard to navigate from one part of my app to another. My approach is not aware of History, which is a serious drawback.
My Solutions to those problems:
Use GIN to remove the setup boilerplate that is hard to maintain
While moving from Gwt-Ext to GXT, use its MVC framework as an EventBus to attach/detach modular screens, to avoid the caching/synchronization issues
Think of some kind of "Place" abstraction like Ray Ryan described in his talk at I/O '09, which bridges the event gap between GXT-MVC and GWT's History approach
Use MVP for widgets to isolate data access
Summary:
I don't think one can use a single "MVP" approach for an entire app. One definitely needs History for app navigation, an event bus like GXT-MVC to attach/detach screens, and MVP to enable easy testing of data access for widgets.
I therefore propose a layered approach that combines these three elements, since I believe the "one-event-MVP-system" solution won't work. Navigation, screen attaching, and data access are three separate concerns, and I will refactor my app (move to GXT) in the following months to use all three event frameworks, one for each concern (best tool for the job). All three elements need not be aware of each other. I do know that my solution only applies to GXT projects.
When writing big GWT apps, I feel like I have to reinvent something like Spring MVC on the client, which really sucks, because it takes a lot of time and brain power to produce something as elegant as Spring MVC. GWT needs an app framework much more than those tiny little JS optimizations the compiler guys work so hard on.
Here is a recent Google IO presentation on architecting your GWT application.
Enjoy.
-JP
If you're interested in using the MVP architecture, you might want to take a look at GWTP: http://code.google.com/p/gwt-platform/ . It's an open source MVP framework I'm working on, that supports many nice features of GWT, including code splitting and history management, with a simple annotation-based API. It is quite recent, but is already being used in a number of projects.
You should have a look at GWT Portlets. We developed the GWT Portlets Framework while working on a large HR Portal application and it is now free and open source. From the GWT Portlets website (hosted on Google code):
The programming model is somewhat similar to writing JSR168 portlets for a portal server (Liferay, JBoss Portal etc.). The "portal" is your application built using the GWT Portlets framework as a library. Application functionality is developed as loosely coupled Portlets each with an optional server side DataProvider.
Every Portlet knows how to externalize its state into a serializable PortletFactory subclass (memento / DTO / factory pattern), making important functionality possible:
CRUD operations are handled by a single GWT RPC for all Portlets
The layout of Portlets on a "page" can be represented as a tree of WidgetFactory objects (an interface implemented by PortletFactory)
Trees of WidgetFactory objects can be serialized and marshalled to/from XML on the server, to store GUI layouts (or "pages") in XML page files
Other important features of the framework are listed below:
Pages can be edited in the browser at runtime (by developers and/or users) using the framework layout editor
Portlets are positioned absolutely, so they can use scrolling regions
Portlets are configurable, indicate when they are busy loading for automatic "loading spinner" display and can be maximized
Themed widgets including a styled dialog box, a CSS-styled button replacement, small tool buttons, and an HTML-template-driven menu
GWT Portlets is implemented in Java code and does not wrap any external Javascript libraries. It does not impose any server side framework (e.g. Spring or J2EE) but is designed to work well in conjunction with such frameworks.