NUnit best practice - TDD

Environment: (C# WinForms application in Visual Studio Professional 2008)
I've been digging around a little for guidance on NUnit best practices. As a solo programmer working in a relatively isolated environment I'm hoping that collective wisdom here can help me.
Scott White has a few good starting points here but I'm not sure I totally agree with everything he's said -- particularly point 2. My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer.
What can you recommend as best practices for NUnit?

If by point 2 you mean the "bin folder per solution" suggestion -- I can see your point. Personally, I would simply add the reference to each test project. If, on the other hand, you really mean (1b) "don't put your tests in the same assembly as your code", I heartily agree with him and disagree with you. Your tests should be distinct from your production code in order to enhance code clarity and organization, and keeping your test classes separate helps the next programmer understand them more easily. If you need access to internals in your tests -- and you might, since internal methods are "public" to the assembly -- you can use the InternalsVisibleTo attribute in AssemblyInfo.cs.
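For example, a minimal sketch of that attribute (assuming a test assembly named MyApp.Tests; substitute your own name, and note that a strong-named test assembly requires the full public key):

    // In the production project's AssemblyInfo.cs (or any file at assembly scope).
    using System.Runtime.CompilerServices;

    // Grants the named test assembly access to this assembly's internal members.
    [assembly: InternalsVisibleTo("MyApp.Tests")]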
I, too, would recommend that, in general, it is sufficient to unit test only the public interface of the code. Done properly (using TDD), the private methods of your code will simply be refactorings of previously tested public code and will have sufficient coverage through the public methods. Of course, this is a guideline, not a law, so there will be times when you want to test a private method. In those instances, you can create an accessor or use reflection to invoke the private method.
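If you do go down that road, the reflection call itself is straightforward. A minimal sketch with NUnit (the PriceCalculator class and its private method are invented purely for illustration):

    using System.Reflection;
    using NUnit.Framework;

    // Hypothetical class under test, included only to make the sketch self-contained.
    public class PriceCalculator
    {
        private decimal ApplyDiscount(decimal price, decimal rate)
        {
            return price * (1 - rate);
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_ReducesPrice()
        {
            var calculator = new PriceCalculator();

            // Locate the private instance method by name and invoke it via reflection.
            MethodInfo method = typeof(PriceCalculator).GetMethod(
                "ApplyDiscount", BindingFlags.NonPublic | BindingFlags.Instance);
            var result = (decimal)method.Invoke(calculator, new object[] { 100m, 0.1m });

            Assert.AreEqual(90m, result);
        }
    }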
Another recommendation I would make is to use unit testing and code coverage in tandem. Lack of coverage is a useful heuristic for spotting where more tests may be needed. That isn't to say you need 100% coverage -- some code may be simple enough not to warrant a unit test (automatic properties, for instance) and so may never be touched by your existing tests.
There were a couple of issues that I had with the article. Probably the biggest is the lack of abstraction away from the database for unit tests. There probably are some integration tests that need to go against the db -- perhaps when testing trigger or constraint functionality if you can't convince yourself of their correctness otherwise. In general, though, I'm of the opinion that you should implement your data access as interfaces, then mock out the actual implementations in your unit tests so that there is no need to actually connect to the database. I find that my tests run faster, and thus I run them more often when I do this. Building up a "fake" database interface might take a little while but can be reused as long as you stick with the same design pattern for your data access.
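As a rough sketch of what I mean (the interface and class names here are invented; your real data-access design will differ):

    using System.Collections.Generic;
    using NUnit.Framework;

    // Production code depends on this interface rather than a concrete database class.
    public interface ICustomerRepository
    {
        IList<string> GetCustomerNames();
    }

    // Hand-rolled fake used only by the tests; no database connection required.
    public class FakeCustomerRepository : ICustomerRepository
    {
        public IList<string> GetCustomerNames()
        {
            return new List<string> { "Alice", "Bob" };
        }
    }

    // Hypothetical class under test that consumes the repository.
    public class CustomerReport
    {
        private readonly ICustomerRepository _repository;

        public CustomerReport(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public int CustomerCount()
        {
            return _repository.GetCustomerNames().Count;
        }
    }

    [TestFixture]
    public class CustomerReportTests
    {
        [Test]
        public void CustomerCount_ReturnsNumberOfCustomers()
        {
            var report = new CustomerReport(new FakeCustomerRepository());
            Assert.AreEqual(2, report.CustomerCount());
        }
    }

The same fake can then be reused across fixtures as long as your data access keeps going through the interface.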
Lastly, I would recommend using NUnit with TestDriven.Net -- a very useful plugin whether you're using NUnit or MSTest. It makes it very handy to run or debug tests from a right-click context menu.

My instincts tell me that the closer a test is to the code being tested the more likely you are to get complete test coverage. In the comments to Scott's blog posting is a remark that just testing the public interface is considered best practice by some, but I would argue the test framework is not a typical class consumer.
If your code cannot be tested using only public entry points, then you have a design problem. You should read more about TDD and the SOLID principles (especially the single responsibility principle and dependency inversion). Then you will understand that following the TDD approach helps you write more testable, flexible and maintainable code, without the need for such "hacks" as testing classes' private parts.
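To make that concrete, here is a small sketch of dependency inversion making behaviour testable through public entry points only (all names are invented for illustration; a hidden call to DateTime.Now would force you into private-member tricks, whereas an injected clock does not):

    using System;
    using NUnit.Framework;

    // The time source is inverted behind an interface instead of being hard-coded.
    public interface IClock
    {
        DateTime Now { get; }
    }

    public class FixedClock : IClock
    {
        private readonly DateTime _now;
        public FixedClock(DateTime now) { _now = now; }
        public DateTime Now { get { return _now; } }
    }

    // Hypothetical class under test; its behaviour is fully observable via Greet().
    public class GreetingService
    {
        private readonly IClock _clock;

        public GreetingService(IClock clock)
        {
            _clock = clock;
        }

        public string Greet()
        {
            return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
        }
    }

    [TestFixture]
    public class GreetingServiceTests
    {
        [Test]
        public void Greet_BeforeNoon_SaysGoodMorning()
        {
            var service = new GreetingService(new FixedClock(new DateTime(2024, 1, 1, 9, 0, 0)));
            Assert.AreEqual("Good morning", service.Greet());
        }
    }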
I also highly recommend reading the Google guide to testability by Miško Hevery; it has plenty of code samples covering these topics.

I'm in a fairly similar situation, and this question describes what I do: keep-your-source-close-and-your-unit-tests-closer. Not many others were enamoured with my approach, but it works perfectly for me.

Related

Sharing SpecFlow Feature Files with Multiple Applications

My goal is to be able to write core testing that I can use within a unit testing framework as well as UI testing with selenium.
For a simple test like:
Scenario: Add two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120
I would create both a unit test to prove that my core API passes and a Selenium test to prove that my UI is doing the correct thing as well.
I briefly tried to find anyone doing something similar through Google, but couldn't find any examples. So I guess my question is, has anyone here done anything similar?
One approach I had thought of was simply adding the feature files to a common project or directory and using "Add Existing Item as Link" as the solution.
Update: Adding feature files to a common directory and adding them as links appears to be working great. The feature code-behind regenerates for each project the feature file is linked into, so I can run unit tests in one project and Selenium UI tests in the other.
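For anyone trying the same thing, here is a rough sketch of step definitions bound to the core API for that calculator scenario (the Calculator class is a stand-in invented for illustration); the Selenium project would implement the same Given/When/Then steps by driving the browser instead:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // Minimal stand-in for the core API so the sketch is self-contained.
    public class Calculator
    {
        private readonly List<int> _entries = new List<int>();
        public void Enter(int number) { _entries.Add(number); }
        public int Add() { return _entries.Sum(); }
    }

    // Step definitions for the unit-test project: they exercise the core API directly.
    [Binding]
    public class CalculatorSteps
    {
        private readonly Calculator _calculator = new Calculator();
        private int _result;

        [Given(@"I have entered (\d+) into the calculator")]
        public void GivenIHaveEnteredANumber(int number)
        {
            _calculator.Enter(number);
        }

        [When(@"I press add")]
        public void WhenIPressAdd()
        {
            _result = _calculator.Add();
        }

        [Then(@"the result should be (\d+)")]
        public void ThenTheResultShouldBe(int expected)
        {
            Assert.AreEqual(expected, _result);
        }
    }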
First, let's start with why you might want to do this. It's laziness of the good kind.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. Also hence, this book. See also impatience and hubris. (p.609)
Larry Wall, Programming Perl
Except it isn't, because we aren't going to reduce our overall energy expenditure.
When you are using SpecFlow, the easy part to keep up to date is the plain text. You will find yourself refactoring the [Binding]s again and again, but the scenarios tend to be quite easy to work with, and need very little revision once they have been agreed.
In addition, the [Binding]s are global. Load them in from any assembly and they are available to the SpecFlow runner. In respect of what you are trying to do, this actually makes things harder, as you need to put effort into keeping the UI bindings from being mixed up with the non-UI bindings.
Also consider the way that SpecFlow actually runs the tests from feature files. It's a two stage process.
When you save the .feature file the SpecFlow VS plugin generates a .feature.cs file.
When you run your test engine (e.g. NUnit) it ignores the plain text and uses compiled code from .feature.cs
So if you start using linked .features I have no idea if the SpecFlow plugin will generate .feature.cs for both instances of the file. (If you try this please let us know)
Second, let's consider the features themselves. I think you will constantly find yourself compromising your tests to make them fit the other place they are used. Already, in the example you have given, the wording assumes a screen ("I have entered 50 into the calculator"). If you are working with just the core API there won't be a screen, so do we change the wording to fit better in a non-UI scenario?
Finally, you have another thing to consider: just how useful will your tests be? If you already have a test that exercises the core API, what will it mean to run the same test via Selenium? All you will really test is the UI layer. In my current employment we have a great number of regression tests that perform this very kind of testing, running up a client that connects to a server and manipulating the UI to enact the desired scenarios. These are the most fragile tests we have due to their scale. They constantly break, and we basically have to check our entire codebase to find the line that broke them. Often something like 10-100 of them break for a single one-line change. If these tests weren't so important to the regression cycle, the effort of maintaining them would just be too much.

In my own personal projects I tend to remove these tests completely; instead, with UIs, I avoid testing the View layer. With WPF MVVM, I execute Commands and test for results in ViewModels. If somebody then decides the TextBox should be a ComboBox or that it will work better in mauve, then my testing is isolated.
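A rough sketch of what testing at the ViewModel level looks like (the ViewModel here is deliberately simplified and its names are invented; a real one would implement INotifyPropertyChanged and expose the save action as an ICommand, but the logic under test is the same):

    using NUnit.Framework;

    // Simplified ViewModel; in the real application Save() would be invoked by an
    // ICommand bound to a button, so no View code is involved in the test.
    public class CustomerViewModel
    {
        public string Name { get; set; }
        public string Error { get; private set; }
        public bool Saved { get; private set; }

        public void Save()
        {
            if (string.IsNullOrEmpty(Name))
            {
                Error = "Name is required";
                return;
            }
            Saved = true;
        }
    }

    [TestFixture]
    public class CustomerViewModelTests
    {
        [Test]
        public void Save_WithEmptyName_SetsValidationError()
        {
            var vm = new CustomerViewModel { Name = "" };
            vm.Save();
            Assert.IsFalse(vm.Saved);
            Assert.AreEqual("Name is required", vm.Error);
        }

        [Test]
        public void Save_WithName_Succeeds()
        {
            var vm = new CustomerViewModel { Name = "Jane" };
            vm.Save();
            Assert.IsTrue(vm.Saved);
        }
    }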
In short, there is a reason you can't find anything about this on Google :-)
In general (see http://martinfowler.com/bliki/TestPyramid.html), one should limit the number of automated tests that test the UI directly, and prefer tests that start at the presentation layer (just below the view layer), or below.
SpecFlow is agnostic; the tests can be implemented using e.g. Selenium at the UI layer or just MSTest or NUnit at any of the layers below.
However, having said that, I appreciate that you will have situations where you are doing ATDD and want to implement SpecFlow scenarios to match each of the acceptance criteria. Some of the criteria will be perfectly fine to test at a lower architectural level, but one or two of them may be specific to the GUI -- for example, testing login and ensuring that the user is redirected to the home page after a successful login. If you are using Angular 2 or React routing (see https://en.wikipedia.org/wiki/Single-page_application), that redirect is likely done in the GUI layer itself.
I don't have a perfect answer yet, but as a certified SpecFlow trainer, I have a vested interest in this! The way I am currently leaning is to use a complementary tool like CucumberJS for the front-end-specific tests (such as testing React router redirects) and SpecFlow for tests at lower architectural layers. Our front-end uses Node.js/Express and our back-end is .NET Core. The idea is that the front-end tests mostly exercise the front-end alone, with mocked-out AJAX calls to the back-end (see sinonjs), and the back-end tests use EF Core with the in-memory option (see docs.efproject.net/en/latest/providers/in-memory/), so the tests all run fast.
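On the back-end side, a minimal sketch of a test using the EF Core in-memory provider (the entity, context and names are invented; it assumes the Microsoft.EntityFrameworkCore.InMemory package is referenced):

    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using NUnit.Framework;

    // Hypothetical entity and context, shown only to illustrate the in-memory provider.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public ShopContext(DbContextOptions<ShopContext> options) : base(options) { }
        public DbSet<Product> Products { get; set; }
    }

    [TestFixture]
    public class ProductTests
    {
        [Test]
        public void CanAddAndQueryProducts_WithoutARealDatabase()
        {
            var options = new DbContextOptionsBuilder<ShopContext>()
                .UseInMemoryDatabase("ProductTests")   // no real database needed
                .Options;

            using (var context = new ShopContext(options))
            {
                context.Products.Add(new Product { Name = "Widget" });
                context.SaveChanges();
            }

            using (var context = new ShopContext(options))
            {
                Assert.AreEqual(1, context.Products.Count());
            }
        }
    }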
Of course, you still need a few tests that actually go all the way through, but those are different-- we should call those integration tests. I do not believe that acceptance tests need to be integration tests. That way, you have a suite of acceptance tests from doing ATDD, plus a relatively small set of integration tests that test all the way front-to-back. The integration tests run more slowly and require more maintenance, so you separate them out into a different part of the CI/CD build chain.
I hope this makes sense. It is not so much solving the problem as avoiding the problem.

Test Driven Development. How do I handle refactoring untested legacy code?

I am beginning to adopt Test-Driven Development (TDD) behaviors and workflow for my iOS development projects. There is at least one impediment, though, in the context of legacy software. I will often have to add features to a pre-existing code base that I am new to. I will typically want to refactor at the beginning of working with the code base, which will often have no tests available to ensure that my refactorings are not altering code functionality or, worse, adding bugs.
My question is how do TDD folks bootstrap the whole process when the code is not written from scratch but rather legacy code that they are brought in to work on?
Thanks,
Doug
UPDATE
For a concrete example, I am using the example from Martin Fowler's Refactoring re-coded in Objective-C as a training device for TDD (and AppCode) >>
I built the code from tests. I found I needed to add instance variables to the Customer class to ensure I didn't screw up the cost calculations in the statement method as I grew the code. This is the fundamental issue I need insight into.
To begin with, if you don't understand the legacy code you're working with, you need to fix that before you do things which you're concerned may change behavior.
In your situation, after understanding the legacy code, I would write tests that will run against that legacy code. Once you're satisfied that these tests function as you expect, you're in a much better position to test your refactored code to ensure it functions as the old code did.
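In practice that usually means writing characterization tests: tests that pin down what the legacy code does today, even where that behaviour looks wrong, so any refactoring that changes it gets flagged. The question is about Objective-C, where OCUnit/XCTest plays the same role; as a language-neutral sketch in C#/NUnit (with an invented legacy class):

    using NUnit.Framework;

    // Hypothetical legacy class; the point is to record its current behaviour,
    // not the behaviour we wish it had.
    public class LegacyPricing
    {
        public decimal Total(decimal price, int quantity)
        {
            // Existing logic, warts and all.
            var total = price * quantity;
            if (quantity > 10) total = total * 0.95m;
            return total;
        }
    }

    [TestFixture]
    public class LegacyPricingCharacterizationTests
    {
        [Test]
        public void Total_ForBulkOrders_AppliesFivePercentDiscount()
        {
            // Captures what the code does today, so a refactoring that changes
            // this result will fail the test and be noticed.
            var pricing = new LegacyPricing();
            Assert.AreEqual(104.5m, pricing.Total(10m, 11));
        }
    }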

What's the best way to describe what a framework is and the benefits of a framework over procedural code?

I am at a company that does not understand the concept of using frameworks and the benefits of them. I have tried to explain that a framework provides structure and organization, but the people I am trying to explain this to are still a little fuzzy about it. In your opinion, what is the best way to describe a framework in the simplest terms, and how could it benefit a company overall to transition their code from procedural spaghetti code to a nicely organized framework?
Thank you for your time.
I guess the best explanation I can think of for using a framework is that it standardizes your design process and saves you a lot of effort as your code base grows. Not to mention that a lot of work can be taken care of for you by the framework (which could save hours of coding). A framework can give you all the parts you need to build your application; you just have to assemble them.
The best reasons I can think of for using a framework are:
Code reuse -- If you try and follow the design of the framework you can save yourself a lot of coding time. However, some frameworks do require a time investment to master.
Encapsulation -- You can change the underlying implementation of different parts of the framework in a way that doesn't require a lot of code rewriting.
Extendability -- You can extend the code of the framework to add features you need and if you are careful about your design, you can reuse these features too.
I'm sure there are many other good reasons, but I'm sleepy.
EDIT: A good example of the benefits of a framework is replacing the database adapter with another, i.e. switching from MySQL to PostgreSQL. This could be painful with plain procedural code, but a framework can make the transition very easy.
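To make that concrete, the framework-style approach is to code against an adapter interface so the concrete database class can be swapped without touching the calling code. A small sketch (all names invented for illustration):

    // Calling code depends only on this interface.
    public interface IDatabaseAdapter
    {
        void Execute(string sql);
    }

    public class MySqlAdapter : IDatabaseAdapter
    {
        public void Execute(string sql) { /* talk to MySQL */ }
    }

    public class PostgreSqlAdapter : IDatabaseAdapter
    {
        public void Execute(string sql) { /* talk to PostgreSQL */ }
    }

    public class ReportService
    {
        private readonly IDatabaseAdapter _db;

        // Switching databases means changing only the adapter passed in here
        // (typically in one configuration spot), not the rest of the codebase.
        public ReportService(IDatabaseAdapter db)
        {
            _db = db;
        }

        public void Archive()
        {
            _db.Execute("INSERT INTO archive SELECT * FROM reports");
        }
    }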
Your coworkers most likely already use libraries, which one could define as code that exists outside of your project and is meant to be used in many projects.
A framework is like a library, but usually has other features, such as:
It might enforce changes to your code. For example, you wouldn't replace one method of your WebForms project with a call to the ASP.NET MVC framework - the entire project would be written differently to conform to the framework.
It might restrict the universe of applications that you can write. For example, you might be using a CRUD generating framework that lets you make data entry applications, but wouldn't let you make a video editing application.
However, a framework will usually give you a lot of value in return.
Let them do as they like first,
then pick up their shortcomings, and
finally generalise your framework to avoid procedural code.
I'm going to concentrate on only a part of the question:
In your opinion, what is the best way to describe a framework in the simplest terms
Framework == Library + Inversion of Control
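A toy sketch of the difference in control flow (all names invented): with a library your code calls in whenever it wants; with a framework, the framework runs the show and calls your code at its extension points, which is the inversion of control.

    using System;

    // A library: your code owns the control flow and calls the library when it wants to.
    public static class TextLibrary
    {
        public static string Shout(string s) { return s.ToUpper() + "!"; }
    }

    // A (toy) framework: it owns the control flow and calls *your* code at
    // well-defined extension points. That is the inversion of control.
    public abstract class JobFramework
    {
        public void Run()
        {
            Console.WriteLine("framework: starting up");
            Execute();                       // the framework calls you
            Console.WriteLine("framework: cleaning up");
        }

        protected abstract void Execute();   // you fill in this hole
    }

    // Your code, written to the framework's rules.
    public class ReportJob : JobFramework
    {
        protected override void Execute()
        {
            Console.WriteLine(TextLibrary.Shout("report generated"));
        }
    }

    public static class Program
    {
        public static void Main()
        {
            new ReportJob().Run();           // with a real framework, even this call is its job
        }
    }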

Simple Question: Cocoa Unit Tests and MVC

I am new to unit testing. I understand the basic concepts, and I am able to get unit testing set up correctly in my Cocoa projects; however, the thing that is giving me a hard time is knowing what exactly I should be writing unit tests for. For example, I know that you should write tests for model objects, but is that all? Should I also be writing tests for controllers and views? What exactly would I be testing then? Could somebody please clarify what I should write unit tests for and what I should be testing?
The rule I generally follow is that the entire public interface needs to be tested. It's always up to you what exactly to test; however, the bigger your test coverage, the less chance there is for some nasty bug to creep out.
BTW, for testing views I suggest Google Toolbox for Mac, which allows you to compare screenshots.

Visual Studio: generate tests

Are there any VS add-ins that can take a class and set up all the wiring to generate the test class and methods, as well as mocking the dependencies, etc.? This seems like something that could be automated.
You can try to create some VS templates.
I've created some ReSharper templates for myself, but they aren't as sophisticated as what you want.
Microsoft tried something like that awhile back, I believe, and was widely criticized for not understanding what Test-Driven Development/Design was all about.
Pex might be part of what you're looking for. It's an aid to unit testing, not a replacement for it.
There are also IoC Container frameworks (and I think mock/isolation frameworks as well) that support auto mocking, which might also help.
As Vadim mentioned, templates and snippets can take care of a lot of the boilerplate code.
I haven't used Pex or auto mocking; I just do what Vadim does.
I suspect not; even if the signatures can be automated, you'll need to provide all the cases; it can't infer what is supposed to equal what, which properties matter, what the edge cases are, etc.
