Determining which classes would benefit most from unit testing?

I am working on a project where we have only 13% code coverage with our unit tests. I would like to come up with a plan to improve that, focusing first on the areas where increasing coverage would bring the greatest value.
The project is in C#; we're using VS 2008 and TFS 2008, and our unit tests are written using MSTest.
What methodology should I use to determine which classes we should tackle first?
Which metrics (code or usage) should I be looking at (and how can I get those metrics if this is not obvious)?

I would recommend adding unit tests to all the classes you touch, not retrofitting existing classes.
Most of the advantages of unit testing are in helping programmers code and in ensuring that "fixes" don't actually break anything; for sections of code that are never modified, the benefits of unit tests start to drop off.
You might also want to add unit tests to classes that you rely on if you have nothing better to do.
You absolutely should add tests to new functionality you add, but you should probably also add tests to existing functionality you may break.
If you are doing a big refactor, consider getting 80-100% coverage on that section first.
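For instance, before a big refactor you can pin down the behaviour you currently observe with a few characterization tests in MSTest. A minimal sketch, assuming a hypothetical InvoiceCalculator class from the existing code base (the class and its pricing rule are made up for illustration):
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class from the existing code base that is about to be refactored.
public class InvoiceCalculator
{
    public decimal Total(decimal subtotal, decimal taxRate)
    {
        return subtotal + subtotal * taxRate;
    }
}

[TestClass]
public class InvoiceCalculatorCharacterizationTests
{
    // Pin down the behaviour we observe today, before changing the code.
    [TestMethod]
    public void Total_AddsTaxToSubtotal()
    {
        Assert.AreEqual(110m, new InvoiceCalculator().Total(100m, 0.10m));
    }

    [TestMethod]
    public void Total_WithZeroTaxRate_ReturnsSubtotal()
    {
        Assert.AreEqual(100m, new InvoiceCalculator().Total(100m, 0m));
    }
}
Once these tests are green against the current code, the refactor can proceed with a safety net.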

For some good statistics and deterministic querying of certain methods you could definitely look at NDepend: http://www.ndepend.com/
NDepend exposes a query language called CQL (Code Query Language) that allows you to write queries against your code relating to certain statistics and static analysis.
There is no single true way to determine which classes might benefit the most; however, by setting your own thresholds in CQL you can establish some rules and conventions.

The biggest value of a unit test is for maintenance, to ensure that the code still works after changes.
So, concentrate on methods/classes that are most likely / most frequently changed.
Next in importance are classes/methods with less-than-obvious logic. The unit tests will make them less fragile while serving as extra "documentation" for their contracted API.

In general, unit tests are a tool to protect against regression, and regression is most likely to occur in classes with the most dependencies. You should not have to choose (ideally you test all classes), but if you do have to, test the classes that have the most dependencies first.

Arrange all your components into levels. Every class in a given level should only depend on components at a lower level. (A "component" is any logical group of one or more classes.)
Write unit tests for all your level 1 components first. You usually don't need mocking frameworks or other such nonsense because these components only rely on the .NET Framework.
Once level 1 is done, start on level 2. If your level 1 tests are good, you won't need to mock those classes when you write your level 2 tests.
Continue in like fashion, working your way up the application stack.
Tip: Break all your components into level-specific DLLs. That way you can ensure that low-level components don't accidentally take a dependency on a higher-level component.
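As a minimal sketch of the idea (all class names are invented for illustration, using MSTest): a level 1 component that depends only on the .NET Framework is tested directly, and the level 2 component that builds on it is then tested without any mocking of level 1.
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Level 1: depends only on the .NET Framework (hypothetical example).
public class PriceCalculator
{
    public decimal Discounted(decimal price, decimal discountPercent)
    {
        return price - price * discountPercent / 100m;
    }
}

// Level 2: depends only on the level 1 component above.
public class OrderService
{
    private readonly PriceCalculator _calculator = new PriceCalculator();

    public decimal OrderTotal(decimal unitPrice, int quantity, decimal discountPercent)
    {
        return _calculator.Discounted(unitPrice, discountPercent) * quantity;
    }
}

[TestClass]
public class Level1Tests
{
    [TestMethod]
    public void Discounted_AppliesPercentage()
    {
        Assert.AreEqual(90m, new PriceCalculator().Discounted(100m, 10m));
    }
}

[TestClass]
public class Level2Tests
{
    // No mock needed: the level 1 component is already covered by Level1Tests.
    [TestMethod]
    public void OrderTotal_UsesDiscountedUnitPrice()
    {
        Assert.AreEqual(180m, new OrderService().OrderTotal(100m, 2, 10m));
    }
}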

Related

While using test-driven development, should I remove previous tests if I know they work?

I am currently doing a project for college that requires me to write a simple program with a few methods. Every time I create a new method I copy the previous test and then add the new method to that test. Currently I have 4 tests, with all 4 methods being tested in the fourth. Should I remove the first and second tests, which only test the first method and then the first and second methods? Sorry if this is confusing. Thanks.
That's not how test-driven development (TDD) is usually done. TDD usually serves two purposes:
By writing the test first, you get feedback on the usability of your API design. A unit test uses the API of the System Under Test (SUT), so if the test is difficult to write, the SUT is difficult to use.
The tests subsequently become artefacts that serve as regression tests.
Test code is also code, and comes with the same maintenance cost as regular code. It should be kept to the same standard as all other code, since you'll have to maintain it for the lifetime of the code base.
For that reason, rules about duplication, cohesion, coupling, etc. also apply for tests. In other words, don't copy and paste.
I've never heard about anyone following the process described in the OP.
Each test should test only one thing.
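To illustrate, here is a sketch with invented names using MSTest: instead of one cumulative test that copies and extends the previous one, keep one small, focused test per behaviour.
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical SUT with a few methods, as in the question.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
    public int Multiply(int a, int b) { return a * b; }
}

[TestClass]
public class CalculatorTests
{
    // One focused test per behaviour; no test repeats the assertions of an earlier one.
    [TestMethod]
    public void Add_ReturnsSumOfOperands()
    {
        Assert.AreEqual(5, new Calculator().Add(2, 3));
    }

    [TestMethod]
    public void Multiply_ReturnsProductOfOperands()
    {
        Assert.AreEqual(6, new Calculator().Multiply(2, 3));
    }
}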

PHPUnit - Creating tests after development

I've watched and read a handful of tutorials on PHPUnit and test-driven development and have recently begun working with Laravel, which extends the PHPUnit framework with its TestCase class. All of these things make sense to me as far as creating tests as you develop, and I find Laravel's extensions particularly intuitive (especially with regard to testing controller routes).
However, I've recently been tasked with creating unit tests for a sizable app that's near completion. The app is built in CodeIgniter, and it was not built with any tests.
I find that I'm not entirely sure where to begin, or what steps to take in order to determine the tests I should create.
Should I be looking to test each controller method? Or do I need to break it down more than that? Admittedly, many of these controller methods are doing more than one task.
It is really difficult to write tests for an existing project. I suggest you first start by writing tests for the classes that do not depend on other classes. Then you can continue writing tests for the classes that are coupled to the classes you have already covered. You will increase your test coverage step by step by repeating this process.
Also, don't forget that sometimes you will need to refactor your code to make it testable. You should improve the design of the code; for example, if a controller method is doing more than one task, you should divide it into sub-methods and test each of those methods independently.
I also suggest you look at this question.
You are in a bit of a tight spot, but here is what I would do in your situation. You need to refactor (i.e. change) the existing code so that you end up with three types of functions.
The first type are those that deal with the outside world. By this I mean anything that talks to I/O, or your framework, or your operating system, or even libraries or code from stable modules. Basically, everything that has a dependency on code that you cannot, or may not, change.
The second group of functions are where you transform or create data structures. The only thing they should know about are the data structures that they receive as parameters and the only way they communicate back is by changing those structures or by creating and populating a new structure.
The third group consists of co-ordinating functions which make the calls to the outside world functions, get their returned data structures and pass those structures to the transforming functions.
Your testing strategy is then as follows: the second group can be tested by creating fake data structures, passing them in and checking that the transforms were done correctly. The third group of coordinating functions can be tested with dependency injection and mocking, to see that they call the outside-world and transform functions correctly. Finally, the remaining group, the functions that touch the outside world, should not be unit tested; you follow the maxim "make it so simple that there is obviously nothing wrong". See if you can keep each of them to a single line of code. If you go over four lines of code for these, you are probably doing it wrong.
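A minimal sketch of this layout (the question is about PHP, but the same structure is shown here in C#/MSTest to match the other examples on this page; all interface and class names are invented): a thin gateway for the outside world, a pure transformation, and a coordinator tested with a hand-written fake.
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Group 1: talks to the outside world. Kept so thin it is not unit tested.
public interface ICustomerGateway
{
    List<string> LoadCustomerNames();
}

// Group 2: pure transformation; knows only about the data it receives.
public static class CustomerFormatter
{
    public static List<string> ToUpperCase(List<string> names)
    {
        return names.Select(n => n.ToUpperInvariant()).ToList();
    }
}

// Group 3: coordinator; calls the gateway and passes the result to the transform.
public class CustomerReport
{
    private readonly ICustomerGateway _gateway;
    public CustomerReport(ICustomerGateway gateway) { _gateway = gateway; }

    public List<string> Build()
    {
        return CustomerFormatter.ToUpperCase(_gateway.LoadCustomerNames());
    }
}

// Group 2 is tested with fake data structures.
[TestClass]
public class CustomerFormatterTests
{
    [TestMethod]
    public void ToUpperCase_UppercasesEveryName()
    {
        var result = CustomerFormatter.ToUpperCase(new List<string> { "ada", "bob" });
        CollectionAssert.AreEqual(new List<string> { "ADA", "BOB" }, result);
    }
}

// Group 3 is tested with a hand-written fake standing in for the outside world.
[TestClass]
public class CustomerReportTests
{
    private class FakeGateway : ICustomerGateway
    {
        public List<string> LoadCustomerNames() { return new List<string> { "ada" }; }
    }

    [TestMethod]
    public void Build_ReturnsTransformedGatewayData()
    {
        var report = new CustomerReport(new FakeGateway());
        CollectionAssert.AreEqual(new List<string> { "ADA" }, report.Build());
    }
}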
If you are completely new to TDD, though, I strongly suggest that you first get used to doing it on green-field projects/modules. I made a couple of false starts on unit testing because I tried to bolt it onto projects afterwards. TDD is really a joy once you finally grok it, so it would be a shame to get discouraged early on by a too-steep learning curve.

TDD FIRST principle

I am not understanding how the TDD FIRST principle isn't being adhered to in the following code.
These are my notes about the FIRST principle:
Fast: run (subset of) tests quickly (since you'll be running them all the time)
Independent: no tests depend on others, so can run any subset in any order
Repeatable: run N times, get same result (to help isolate bugs and enable automation)
Self-checking: test can automatically detect if passed (no human checking of output)
Timely: written about the same time as code under test (with TDD, written first!)
The quiz question:
Sally wants her website to have a special layout on the first Tuesday of every month. She has the following controller and test code:
# HomeController
def index
if Time.now.tuesday?
render 'special_index'
else
render 'index'
end
end
# HomeControllerSpec
it "should render special template on Tuesdays" do
get 'index'
if Time.now.tuesday?
response.should render_template('special_index')
else
response.should render_template('index')
end
end
What FIRST principle is not being followed?
Fast
Independent
Repeatable
Self-checking
Timely
I'm not sure which FIRST principle is not being adhered to:
Fast: The code seems to be fast because there is nothing complex about its tests.
Independent: The test doesn't depend on other tests.
Repeatable: The test will get the same result every time. 'special_index' if it's Tuesday and 'index' if it's not Tuesday.
Self-checking: The test can automatically detect if it's passed.
Timely: Both the code and the test code are presented here at the same time.
I chose Timely on the quiz because the test code was presented after the controller code. But I got the question wrong, and in retrospect, this wasn't a good choice. I'm not sure which FIRST principle isn't being followed here.
It's not Repeatable, as not every day is Tuesday :) If you run this test on Monday you will get one result; if you run it on Tuesday, a different one.
F.I.R.S.T., F.I.I.R.S.T. and FASRCS
Yes, part of the reason for the confusion is that the F.I.R.S.T. principle is not complete or precise enough concerning the "I". In courses I attended, the principle was called F.I.I.R.S.T.
The second "I" stands for "Isolated". The test above is independent of other tests, but it is not isolated in a separate class or project.
[Updated]:
Isolation can mean:
A unit test isolates functionality out of a SUT (system under test). You can isolate functionality even out of one single function. This draws the line between unit tests (and their relatives, component tests) and integration tests, and of course system tests.
"Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name with the text of the assertion should state exactly what is wrong and where." Ref.: History of FIRST principle
A unit test can be isolated from the SUT it tests in a different developer artifact (class, package, development project) and/or delivery artifact (DLL, package, assembly).
Unit tests testing the same SUT, and especially their contained asserts, should be isolated from each other in different test functions, but this is only a recommendation. Ideally each unit test contains only one assert.
Unit tests testing different SUTs should be isolated from each other, and of course even more so from other kinds of tests, in different classes or the other artifacts mentioned.
Independence can mean:
Unit tests should not rely on each other (explicit independence), with the exception of special "setup" and "teardown" functions, though even this is subject to discussion.
In particular, unit tests should be order-independent (implicit independence). The result should not depend on which unit tests were executed before. While this sounds trivial, it isn't. There are tests that cannot avoid doing initialization and/or starting runtimes. Just one shared (e.g. class-level) variable and the SUT could react differently if it has been started before. You make a call to the operating system? Some DLL gets loaded for the first time? You already have a potential dependency, at least at the OS level - sometimes only minor, sometimes essential enough to mask an error. It may be necessary to add cleanup code to reach optimal independence of tests (a sketch of this follows after the references below).
Unit tests should be as independent as possible from the runtime environment and should not depend on a specific test environment or setting. This also belongs partly to "Repeatable".
No need to fill out twenty user dialogs first. No need to start the server. No need to make the database available. No need for another component. No need for a network. To accomplish that, test doubles are often used (stubs, mocks, fakes, dummies, spies, ...).
(Gerard Meszaros' classic work on xUnit patterns, which coined the name 'test double' and defines its different kinds)
(Test double zoo quickly explained)
(Follow Martin Fowler 2007, thinking about stubs, mocks, etc. Classic)
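To make the order-independence point above concrete, here is a minimal MSTest sketch (the RequestCounter class is invented for illustration): shared static state is reset before every test, so the outcome cannot depend on which tests ran earlier.
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical SUT with shared state: without cleanup, one test could
// change the outcome of another depending on execution order.
public static class RequestCounter
{
    public static int Count { get; private set; }
    public static void Increment() { Count++; }
    public static void Reset() { Count = 0; }
}

[TestClass]
public class RequestCounterTests
{
    // Reset the shared state before every test so each one starts from a known
    // baseline, regardless of which tests ran before it.
    [TestInitialize]
    public void ResetSharedState()
    {
        RequestCounter.Reset();
    }

    [TestMethod]
    public void Increment_FromZero_YieldsOne()
    {
        RequestCounter.Increment();
        Assert.AreEqual(1, RequestCounter.Count);
    }

    [TestMethod]
    public void Increment_Twice_YieldsTwo()
    {
        RequestCounter.Increment();
        RequestCounter.Increment();
        Assert.AreEqual(2, RequestCounter.Count);
    }
}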
While a unit test is never totally independent from its SUT, ideally it should be as independent as possible from the current implementation and rely only on the public interface of the function or class under test (the SUT).
Conclusion: in these interpretations, the word 'isolation' stresses more the physical location, which often implies logical independence to some extent (e.g. class-level isolation).
No claim of completeness is made concerning potentially further accentuations and meanings.
See also comments here.
But there are more properties of (good) unit tests: Roy Osherove lists some further attributes in his book "The Art of Unit Testing" (link to his book site) which I don't find exactly in the F.I.I.R.S.T. principle, and which are cited here in my own words (and acronym):
Full control of the SUT: the unit test should have full control of the SUT. I see this as effectively identical to being independent of the test and runtime environment (e.g. using mocks, etc.), but because "independence" is so ambiguous, it makes sense to spend a separate letter on it.
Automated (related to repeatable and self-checking, but not identical). This one requires a test (runner) infrastructure.
Small, simple, or in his words "easy to implement" (again related, but not identical). Often related to "Fast".
Relevant: the test should still be relevant tomorrow. This is one of the most difficult requirements to achieve, and depending on the "school" there may be a need for temporary unit tests during TDD too. When a test only tests the contracts, this is achieved, but that may not be enough for high code-coverage requirements.
Consistent result: effectively a consequence of "enough" independence. Consistency is what some people include in "Repeatable". There is an essential overlap, but they are not identical.
Self-explaining: in terms of naming, the structure of the whole test, and specifically the syntax of the assert as the key line, it should be clear what is tested and what could be wrong if a test fails. Related to "Tests isolate failures", see above.
Given all these specific points, it should be clearer than before that writing (good) unit tests is anything but simple.
Independent and Repeatable
The test is not independent of the date, and therefore it is not truly repeatable; technically you get the same result on every run only because the test chooses its expected template with the same date check as the production code.
The proper way to write a test for HomeController with respect to the FIRST principles is to control the time before the evaluation stage.
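The question's code is Rails, but the idea translates directly; here is a hedged C# sketch in the style of the other examples on this page (IClock, FixedClock and this HomeController are all invented): inject a clock abstraction so the test can pin the date and exercise both branches on any day of the week.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical clock abstraction so the "current" date can be controlled in tests.
public interface IClock
{
    DateTime Now { get; }
}

public class HomeController
{
    private readonly IClock _clock;
    public HomeController(IClock clock) { _clock = clock; }

    public string Index()
    {
        return _clock.Now.DayOfWeek == DayOfWeek.Tuesday ? "special_index" : "index";
    }
}

[TestClass]
public class HomeControllerTests
{
    private class FixedClock : IClock
    {
        private readonly DateTime _now;
        public FixedClock(DateTime now) { _now = now; }
        public DateTime Now { get { return _now; } }
    }

    [TestMethod]
    public void Index_OnATuesday_RendersSpecialTemplate()
    {
        // 6 January 2009 was a Tuesday; the test no longer depends on the real date.
        var controller = new HomeController(new FixedClock(new DateTime(2009, 1, 6)));
        Assert.AreEqual("special_index", controller.Index());
    }

    [TestMethod]
    public void Index_OnAnyOtherDay_RendersDefaultTemplate()
    {
        var controller = new HomeController(new FixedClock(new DateTime(2009, 1, 7)));
        Assert.AreEqual("index", controller.Index());
    }
}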

Unit Testing highly interdependent code

So I have some challenging code I would like to refactor. The challenge is that it depends on database queries, EJB and JavaServer Faces. Not simultaneously, but close to it.
A good example would be a geocoder. Getting meaningful results depends on multiple queries to the DB, which in turn depend on the data entered and stored. The code might also reference other helper classes and look them up via the JSF framework.
What are the best strategies for testing this sort of code? Should I try to separate out my code as much as possible? Should I use mocking instead? What has worked for other people?
Well, the short answer is "yes".
You're going to need, first of all, to factor the code sufficiently to be able to construct unit tests at all. What you're describing is too complicated for the usual unit-test methods to apply directly, and what you would get in any case is more like a higher-level acceptance test.
Now, as far as that factoring goes, you have several possible approaches, and you will probably use them all.
Test the data base queries themselves, using an external script.
Construct an appropriate mock for the components directly accessing the DB, in order to see what happens against known results.
Build unit tests using a JUnit like framework for units of functionality.
Examine the state of the art to see if you can usefully test the output HTML against unit tests.

Test Driven Development - What exactly is the test?

I've been learning what TDD is, and one question that comes to mind is what exactly the "test" is. For example, do you call the web service and then build the code to make it work? Or is it more unit-testing oriented?
In general the test may be...
a unit test, which tests an individual subcomponent of your software without any external dependencies on other classes
an integration test, which tests the connection between two separate systems, i.e. their ability to work together
an acceptance test, for validating the functionality of the system
...and some others I've most likely temporarily forgotten for now.
In TDD, however, you're mostly focusing on the unit tests when creating your software.
It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass the tests.
Then write more tests to cover more of the requirements, and implement a bit more to make it pass.
It's an iterative process, with cycles of test writing, and then code writing.
Here are a couple of good articles by Uncle Bob
Three rules of TDD
TDD with Acceptance and Unit tests
I suggest you not put the emphasis on "test", because TDD is actually a software development methodology, not a testing methodology.
I would say it is about unit testing and code coverage. It is about shipping bugless code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.
The way I use it, it's unit-testing oriented. Suppose I want a method that squares ints; I write this method:
int square(int x) { return 0; } // deliberate stub: compiles, but does not implement squaring yet
and then write some tests like :
[Test]
public void TestSquare()
{
    Assert.AreEqual(0, square(0));
    Assert.AreEqual(1, square(1));
    Assert.AreEqual(100, square(10));
    Assert.AreEqual(1, square(-1));
    Assert.AreEqual(100, square(-10));
    // ...
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and the borderline values, like maxint, zero and null values (remember you can test for errors too), and see to it that the test fails (which isn't hard :-)). Then I keep working on the function until it passes.
So: first a unit test that fails and covers what you want it to cover, then the method.
Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be a lot more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!) and define the behavior of your class either in terms of the return values of its methods, or in terms of calls made to interfaces that have been passed into the object.
I just want to give my view on the topic which may help to understand TDD a bit more in a different way.
TDD is a design method that relies on testing first. Since you asked what the test is like, I'll put it this way:
If you want to build an application, you know the purpose of what you want to build, and you know that when you are done, or along the way, you need to test it - e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click to execute a part of the code and pop up a dialog showing the result of the operation, and so on.
On the other hand, TDD changes your mindset, and I'll point out one of the ways it does so. Commonly, you just rely on the development environment, such as Visual Studio, to detect errors as you code and compile, keep the requirements somewhere in your head, and test via buttons, pop-ups or code inspection. I call this style SDDD (syntax-debugging-driven development).
But when you are doing TDD, it is "semantic-debugging-driven development", because you write down your thoughts and goals for the application first as tests (a more dynamic and repeatable version of a whiteboard). Those tests exercise the logic (the "semantics") of your application and fail whenever you have a semantic error, even if the application compiles without any syntax errors.
By the way, even though I said "you know the purpose of what you want to build", in practice you may not know or have all the information required to build the application. Since TDD forces you to write tests first, you are compelled to ask more questions about how the application should behave at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

Resources