Why should we start with a failing test in TDD?

I have started practicing TDD in one of my play projects. It is really interesting. However, I do not understand why we should start with a failing test. Is it just a rule we have to follow, or is there some philosophy behind it? Please share your ideas.
Regards, Rajib

It is to prove that the test itself is in fact doing its job.
If the test passes before you've written or changed any code, then clearly the test is not very effective. So write the test, ensure that it fails, then write the code to satisfy the test.
Really, with TDD, each piece of code you write should be to fix a failing test. This way you ensure that your code is fully tested.
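As a minimal sketch of that red-then-green rhythm (the add() function and the test name here are hypothetical, not from the question), the test is written against a deliberately failing stub, run to confirm it goes red, and only then made to pass:

import unittest

def add(a, b):
    # Step 1: a stub that deliberately fails, so running the test first
    # proves the test can actually go red.
    raise NotImplementedError
    # Step 2 (after seeing the failure): replace the stub with the minimal
    # code that satisfies the test, e.g. `return a + b`.

class AddTest(unittest.TestCase):
    # Written before any real implementation of add() exists.
    def test_add_sums_two_numbers(self):
        self.assertEqual(5, add(2, 3))

if __name__ == "__main__":
    unittest.main()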

You write a failing test case in order to observe that the test case CAN fail.
This is dramatized in episode 22, "TDD with a Stackoverflow Master Programmer." If you're interested in TDD, you'll enjoy the entire audio drama on Agile Thoughts.

Related

MVC3 and TDD pre or post coding

I am still getting to grips with MVC3, and now I am looking into TDD, and the thing that keeps coming up that makes no sense to me is this:
The first step is to quickly add a test, basically just enough code to fail.
Why create a test just for your code to pass? To me it makes much more sense to write my code, then test it and see if it fails, and fix any and all bugs that may occur then.
If you write the code and then write the tests, then you are not doing Test-Driven Development...
That's what TDD stands for; you write your code to enable pre-written tests to pass. If you don't do it that way, you aren't doing TDD.
The idea is that your tests represent your application's requirements. You write those first, just like you would otherwise write your requirements down on paper before you started coding.
This way, you know when all your tests pass, you are done.
Writing the test first makes you start thinking about how the method will pass and fail - you start thinking about the method more deeply.
Otherwise, it's easy to get straight into the method without much thought, leading to methods that aren't so easy to test. It's all too easy to plan to come back to the unit test later - it often doesn't happen!
Moreover, if you write the method first, at what point do you write the test? When you know it passes, when you're "happy" with it... It's a slippery slope to writing code without thinking about testing.

Planning unit tests with TDD

When you approach a class you want to write, how do you plan its unit tests?
Are there formal templates which you follow or do you use pen and paper/notepad?
I am looking for some way to let other programmers/QAs know what tests should be implemented (so that if something was overlooked, it is easier to spot).
With TDD, tests drive the feature you are writing. If you're needing to write formal templates for it, then chances are you're not entering into the spirit of things!
TDD should be used to generate the test cases as you write the code. Simply put, before you write the next line of code, encode in a test what the code should do.
Check out Bob Martin's bowling game example which should give you more of a feel for TDD.
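For a feel of how that kata starts (sketched here in Python purely for illustration; the kata itself is usually presented in Java), the very first test is a gutter game that should score zero, written while Game still does nothing useful:

import unittest

class Game:
    # Deliberately minimal: it grows only as the tests demand.
    def roll(self, pins):
        pass  # not implemented yet

    def score(self):
        return -1  # wrong on purpose, so the first test starts out red

class GutterGameTest(unittest.TestCase):
    def test_gutter_game_scores_zero(self):
        game = Game()
        for _ in range(20):  # a full game of 20 rolls, all misses
            game.roll(0)
        self.assertEqual(0, game.score())

if __name__ == "__main__":
    unittest.main()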
I do not think that having a template goes well with using TDD. I assume that you have read Kent Beck's book, Test Driven Development: By Example. If not, please do so.
But the general idea is simple. When we start a class, we have a general idea of its responsibility. These are the steps we use:
1. Get a general idea of the class's responsibility and use that information to name the class.
2. Create a test case for this class.
If you start with concrete information on what the units inside the class are, just write those stubs inside the class and write test cases against those stubs. Initially all of them will fail, and the signatures of most of those methods will change. That's the whole idea.
In most cases, the developer may not have that degree of information. In that case, it's OK to start writing the code inside the first test. Once the test passes, migrate the logic to the class.
So what I am driving at is that the whole point of TDD is to make the development process more organic. The class grows with the knowledge of what it should do. Having a formal template, or writing things down, will probably not help.
The only thing I can think of that you could do is to sit with your developers before each iteration and come up with a fairly detailed idea of each of the component classes and their responsibilities (we only use this discussion to finalize public APIs).
If you want to know the quality of the test cases written by your developers, you can conduct an ad hoc code review to see whether the classes are correctly broken down into units and all the units are tested.
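As a brief sketch of the stub-first step mentioned above (the OrderCalculator class and its methods are invented for the example), the intended units exist only as stubs and the tests against them start out failing:

import unittest

class OrderCalculator:
    # Stubs only; these signatures are a first guess and will likely change.
    def subtotal(self, prices):
        raise NotImplementedError

    def apply_discount(self, subtotal, code):
        raise NotImplementedError

class OrderCalculatorTest(unittest.TestCase):
    def test_subtotal_sums_item_prices(self):
        self.assertEqual(30, OrderCalculator().subtotal([10, 20]))

    def test_discount_code_reduces_subtotal(self):
        self.assertEqual(90, OrderCalculator().apply_discount(100, "SAVE10"))

if __name__ == "__main__":
    unittest.main()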
TDD is not the methodology you are looking for ;-)
"a way to let other programmers/QAs know what tests should be implemented"
This statement implies you are after tests, but TDD itself is driven by requirements and produces features - the fact that it also produces a suite of tests is an incidental (but hugely powerful) appendage which happens to result in a regression suite.
Although TDD harnesses 'tests' to drive development of the code, you do not need to specify tests up front. Even if you did (and sometimes it is helpful for thinking to do so), your programmer may not need to write all of those tests in order to produce the desired behaviour in the code. Indeed, in TDD you are encouraged to stop work when all the tests pass - you need not keep on writing tests only to find they already pass; that is something akin to make-work.
Also, the other side-effect of TDD is having (and continuously running) a regression suite. If at a later date a bug is found, just having a test suite makes it easier to write another test which demonstrates the bug with a failing test. Once the bug is fixed, the test should pass - along with all the other tests in the suite.
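For instance (the function and the defect here are hypothetical), a newly reported bug first gets a test that reproduces it; the test fails until the fix lands and then stays in the suite as a regression guard:

import unittest

def parse_price(text):
    # Existing implementation with a (hypothetical) bug: it chokes on a
    # leading currency symbol such as "$9.99".
    return float(text)

class PriceParsingRegressionTest(unittest.TestCase):
    def test_handles_leading_currency_symbol(self):
        # Fails (ValueError) until parse_price strips the symbol,
        # e.g. float(text.lstrip("$")).
        self.assertEqual(9.99, parse_price("$9.99"))

if __name__ == "__main__":
    unittest.main()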
You cannot commit to TDD and let others do the unit testing. This requirement of yours strongly suggests that you haven't understood the TDD paradigm.
In my (admittedly fairly new) experience:
You write tests that, if passed, will confirm your initial understanding of the target functionality. Not a single line of production code is written at this point.
You then implement the production code so the unit tests pass.
If your understanding evolves, you then change your unit tests and/or add new ones.
You then implement the changes in the production code so the tests pass.
Along the way, it is not forbidden to write additional unit tests if you discover that parts of your production code are not covered.
Remove tests that no longer make sense.
You arrive at beautiful, crisp and clean code :o)
TDD is NOT a QA method; it is a way of DEVELOPING. The whole idea is that the unit tests guide the development process. So you really can't let others do the unit tests for you.
I start by designing the class first, usually with a simple UML class diagram. I try to make the diagram just detailed enough that I can write tests against it (e.g. params and return types specified for each method, and I know how the method's behavior affects object state).
Then, I write unit tests. Generally when it comes to automated testing you should have 1 test method for every method defined in your class. As far as convention goes, if I have a method in my class called myMethod, then my unit test method will be called testMyMethod.
I write unit tests using what I know about the method's intended behavior, and then write the method and check to make sure that it passes the unit test.
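To illustrate that convention with a made-up class (a trivial stack), each public method gets a correspondingly named test method:

import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    # One test method per method in the class, named test<MethodName>.
    def testPush(self):
        stack = Stack()
        stack.push("a")
        self.assertEqual("a", stack.pop())

    def testPop(self):
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual("b", stack.pop())  # LIFO: last pushed comes off first

if __name__ == "__main__":
    unittest.main()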

When applying TDD, what heuristics do you use to select which test to write next?

The first part of the TDD cycle is selecting a test to fail. I'd like to start a community wiki about this selection process.
Sometimes selecting the test to start with is obvious: start with the low-hanging fruit. For example, when writing a parser, an easy test to start with is the one that handles no input:
def testEmptyInput():
    result = parser.parse("")
    assertNullResult(result)
Some tests are easy to pass requiring little implementation code, as in the above example.
Other tests require complex slabs of implementation code to pass, and I'm left with the feeling I haven't done the "easiest thing possible to get the test to pass". It's at this point I stop trying to pass this test and select a new test to try to pass, in the hope that it will reveal an easier implementation for the problematic area.
I'd like to explore some of the characteristics of these easy and challenging tests, and how they impact test-case selection and ordering.
How does test selection relate to top-down and bottom-up strategies? Can anyone recommend writings that address these strategies in relation to TDD?
I start by anchoring the most valuable behaviour of the code.
For instance, if it's a validator, I'll start by making sure it says that valid objects are valid. Now we can showcase the code, train users not to do stupid things, etc. - even if the validator never gets implemented any further. After that, I start adding the edge cases, with the most dangerous validation mistakes first.
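A sketch of that ordering for a hypothetical e-mail validator: the first test anchors the valuable "accepts valid input" behaviour, and the riskier rejection cases are added afterwards, most dangerous first:

import unittest

def is_valid_email(address):
    # Hypothetical validator under development.
    return "@" in address and "." in address.split("@")[-1]

class EmailValidatorTest(unittest.TestCase):
    # Written first: the most valuable behaviour, accepting good input.
    def test_accepts_a_well_formed_address(self):
        self.assertTrue(is_valid_email("user@example.com"))

    # Added next: the most dangerous mistakes a validator could wave through.
    def test_rejects_address_without_at_sign(self):
        self.assertFalse(is_valid_email("user.example.com"))

    def test_rejects_address_without_a_domain_dot(self):
        self.assertFalse(is_valid_email("user@example"))

if __name__ == "__main__":
    unittest.main()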
If I start with a parser, rather than start with an empty string, I might start with something typical but simple that I want to parse and something I'd like to get out of that. For me unit tests are more like examples of how I'm going to want to use the code.
I also follow BDD's practice of naming tests with "should" - so for your example I'd have shouldReturnNullIfTheInputIsEmpty(). This helps me identify the next most important thing the code should do.
This is also related to BDD's "outside-in". Here are a couple of blog posts I wrote which might help: Pixie Driven Development and Bug Driven Development. Bug Driven Development helps me to work out what the next bit of system-level functionality I need should be, which then helps me find the next unit test.
Hope this gives you a slightly different perspective, anyway - good luck!
To begin with, I'll pick a simple typical case and write a test for it.
While I'm writing the implementation I'll pay attention to corner cases that my code handles incorrectly. Then I'll write tests for those cases.
Otherwise I'll look for things that the function in question should do, but doesn't.

Test Driven Development - What exactly is the test?

I've been learning what TDD is, and one question that comes to mind is what exactly the "test" is. For example, do you call the web service and then build the code to make it work? Or is it more unit-testing oriented?
In general the test may be...
a unit test, which tests an individual subcomponent of your software without any external dependencies on other classes
an integration test, which tests the connection between two separate systems, i.e. their ability to integrate
an acceptance test, for validating the functionality of the system
...and some others I've most likely temporarily forgotten for now.
In TDD, however, you're mostly focusing on the unit tests when creating your software.
It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass the tests.
Then write more tests to cover more of the requirements, and implement a bit more to make it pass.
It's an iterative process, with cycles of test writing, and then code writing.
Here are a couple of good articles by Unclebob
Three rules of TDD
TDD with Acceptance and Unit tests
I suggest you not put the emphasis on "test", because TDD is actually a software development methodology, not a testing methodology.
I would say it is about unit testing and code coverage. It is about shipping bugless code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.
The way I use it, it's unit-testing oriented. Suppose I want a method that squares ints; I write this method:
int square(int x) { throw new NotImplementedException(); }
and then write some tests like :
[Test]
public void TestSquare()
{
    Assert.AreEqual(square(0), 0);
    Assert.AreEqual(square(1), 1);
    Assert.AreEqual(square(10), 100);
    Assert.AreEqual(square(-1), 1);
    Assert.AreEqual(square(-10), 100);
    // ...
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and all borderline values like maxint, zero and null (remember you can test for errors too), and see to it that the test fails (which isn't hard :-)). Then I keep working on the function until it passes.
So: first a unit test that fails and covers what you want it to cover, then the method.
Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be a ton more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!), and define the behavior of your class either in terms of the return values of methods, or in terms of calls made to interfaces that have been passed into the object.
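A small sketch of that idea (the names are invented for the example): the object talks to a collaborator handed in through its constructor, so the unit test can substitute a fake instead of touching real I/O:

import unittest

class GreetingService:
    # Depends on an injected 'sender' rather than doing I/O itself.
    def __init__(self, sender):
        self._sender = sender

    def greet(self, name):
        self._sender.send("Hello, " + name + "!")

class FakeSender:
    # Test double: records messages instead of, say, hitting the network.
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

class GreetingServiceTest(unittest.TestCase):
    def test_greet_sends_a_personalised_message(self):
        fake = FakeSender()
        GreetingService(fake).greet("Rajib")
        self.assertEqual(["Hello, Rajib!"], fake.sent)

if __name__ == "__main__":
    unittest.main()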
I just want to give my view on the topic, which may help you understand TDD in a slightly different way.
TDD is a design method that relies on testing first. Because you asked what the test is, I'll put it like this:
If you want to build an application, you know the purpose of what you want to build, and you know that, when you are done or along the way, you need to test it - e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click on to execute a part of the code and pop up a dialog showing the result of the operation, etc.
TDD, on the other hand, changes your mindset, and I'll point out one such way. Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirements, and you just code and test via buttons, pop-ups or code inspection. I call this style SDDD (syntax-debugging-driven development).
When you are doing TDD, though, it is "semantic-debugging-driven development", because you write down your thoughts and goals for your application first by using tests (which are a more dynamic and repeatable version of a whiteboard). These tests check the logic (the "semantics") of your application and fail whenever you have a semantic error, even if your application compiles without syntax errors.
By the way, even though I said "you know the purpose of what you want to build", in practice you may not have all the information required to build the application. Since TDD forces you to write tests first, you are compelled to ask more questions about the functioning of the application at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

TDD: when can you move on?

When doing TDD, how do you tell "that's enough tests for this class/feature"?
I.e., how can you tell that you have covered all the edge cases?
With Test Driven Development, you'll write a test before you write the code it tests. Once you've written the code and the test passes, it's time to write another test. If you follow TDD correctly, you've written enough tests once your code does all that is required.
As for edge cases, let's take an example such as validating a parameter in a method. Before you add the parameter to your code, you create tests which verify the code will handle each case correctly. Then you can add the parameter and the associated logic, and ensure the tests pass. If you think up more edge cases, then more tests can be added.
By taking it one step at a time, you won't have to worry about edge cases when you've finished writing your code, because you'll have already written the tests for them all. Of course, there's always human error, and you may miss something... When that situation occurs, it's time to add another test and then fix the code.
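To make the parameter-validation example concrete (the withdraw function and its rules are hypothetical), each edge case gets its own test before the validation logic exists, and new cases can be appended whenever they come to mind:

import unittest

def withdraw(balance, amount):
    # Validation written only after the tests below existed and failed.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawParameterTest(unittest.TestCase):
    def test_rejects_zero_amount(self):
        with self.assertRaises(ValueError):
            withdraw(100, 0)

    def test_rejects_negative_amount(self):
        with self.assertRaises(ValueError):
            withdraw(100, -5)

    def test_rejects_amount_larger_than_balance(self):
        with self.assertRaises(ValueError):
            withdraw(100, 150)

    def test_allows_amount_within_balance(self):
        self.assertEqual(60, withdraw(100, 40))

if __name__ == "__main__":
    unittest.main()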
Kent Beck's advice is to write tests until fear turns into boredom. That is, until you're no longer afraid that anything will break, assuming you start with an appropriate level of fear.
On some level, it's a gut feeling of "Am I confident that the tests will catch all the problems I can think of now?"
On another level, you've already got a set of user or system requirements that must be met, so you could stop there.
While I do use code coverage to tell me if I didn't follow my TDD process and to find code that can be removed, I would not count code coverage as a useful way to know when to stop. Your code coverage could be 100%, but if you forgot to include a requirement, well, then you're not really done, are you.
Perhaps a misconception about TDD is that you have to know everything up front to test. This is misguided, because the tests that result from the TDD process are like a breadcrumb trail. You know what has been tested in the past, and that can guide you to an extent, but it won't tell you what to do next.
I think TDD can be thought of as an evolutionary process. That is, you start with your initial design and its set of tests. As your code gets battered in production, you add more tests, and code that makes those tests pass. Each time you add a test here and a test there, you're also doing TDD, and it doesn't cost all that much. You didn't know those cases existed when you wrote your first set of tests, but you've gained the knowledge now, and can check for those problems at the touch of a button. This is the great power of TDD, and one reason why I advocate for it so much.
Well, when you can't think of any more cases that don't work as intended.
Part of TDD is to keep a list of things you want to implement, and problems with your current implementation... so when that list runs out, you are essentially done....
And remember, you can always go back and add tests when you discover bugs or new issues with the implementation.
That's common sense; there is no perfect answer. TDD's goal is to remove fear: if you feel confident you have tested it well enough, move on...
Just don't forget that if you find a bug later on, you should write a test first to reproduce the bug, then correct it, so that future changes won't break it again!
Some people complain when they don't have X percent coverage... Some tests are useless, and 100% coverage does not mean you've tested everything that can make your code break, only that it won't break for the way you used it!
A test is a way of precisely describing something you want. Adding a test expands the scope of what you want, or adds details of what you want.
If you can't think of anything more that you want, or any refinements to what you want, then move on to something else. You can always come back later.
Tests in TDD are about covering the specification, in fact they can be a substitute for a specification. In TDD, tests are not about covering the code. They ensure the code covers the specification, because the code will fail a test if it doesn't cover the specification. Any extra code you have doesn't matter.
So you have enough tests when the tests look like they describe all the expectations that you or the stakeholders have.
Maybe I missed something somewhere in the Agile/XP world, but my understanding of the process was that the developer and the customer specify the tests as part of the feature. This allows the test cases to substitute for more formal requirements documentation, helps identify the use cases for the feature, etc. So you're done testing and coding when all of these tests pass... plus any more edge cases that you think of along the way.
Alberto Savoia says that "if all your tests pass, chances are that your tests are not good enough". I think that is a good way to think about tests: ask whether you are covering the edge cases, passing some unexpected parameters and so on. A good way to improve the quality of your tests is to work with a pair - especially a tester - and get help with more test cases. Pairing with testers is good because they have a different point of view.
Of course, you could use a tool to do mutation testing and get more confidence in your tests. I have used Jester, and it improved both my tests and the way I wrote them. Consider using something like it.
Kind Regards
Theoretically, you should cover all possible input combinations and test that the output is correct, but sometimes it's just not worth it.
Many of the other comments have hit the nail on the head. Do you feel confident about the code you have written given your test coverage? As your code evolves do your tests still adequately cover it? Do your tests capture the intended behaviour and functionality for the component under test?
There must be a happy medium. As you add more and more test cases, your tests may become brittle, as what is considered an edge case continuously changes. Following many of the earlier suggestions, it can be very helpful to get everything you can think of up front and then add new tests as the software grows. This kind of organic growth can help your tests grow without all the effort up front.
I am not going to lie: I often get lazy when going back to write additional tests. I might miss that property that contains no code, or the default constructor that I do not care about. Sometimes not being completely anal about the process can save you time in areas that are less than critical (the 100% code coverage myth).
You have to remember that the end goal is to get a top-notch product out the door, and not to kill yourself testing. If you have that gut feeling that you are missing something, then chances are you are, and you need to add more tests.
Good luck and happy coding.
You could always use a test coverage tool like EMMA (http://emma.sourceforge.net/) or its Eclipse plugin EclEmma (http://www.eclemma.org/) or the like. Some developers believe that 100% test coverage is a worthy goal; others disagree.
Just try to come up with every way within reason that you could cause something to fail. Null values, values out of range, etc. Once you can't easily come up with anything, just continue on to something else.
If down the road you ever find a new bug or come up with a new way to break it, add the test.
It is not about code coverage. That is a dangerous metric, because code is "covered" long before it is "tested well".
