BDD and TDD: what is the correct workflow?

My understanding is such that:
BDD is the process of evaluating how software needs to behave, and then writing acceptance tests on which to base your code. You would write the code using a TDD approach, writing unit tests for methods and building your classes around those unit tests (code, test, refactor). When the code is written, you test it to see that it satisfies the original acceptance test.
Can anyone with experience of the entire process comment on my interpretation and give a walk-through of a simple application using these Agile principles? I see there is plenty of text on BDD and TDD in separate publications, but I am looking for how the two processes complement one another in real-world development.

Try thinking of them as examples, rather than tests.
For the whole application, we come up with an example of how a user might use that application. The example is a specific instance that illustrates the behaviour we want. So, for instance, we might say that a till application allows for refunds. A till operator who uses that till will be familiar with the scenario in which Fred brings back a microwave for a refund:
Given Fred bought a microwave for $100
When he brings the microwave back for a refund
Then he should get $100 refunded to his credit card.
Now it's easy to think of other scenarios too; for instance, the one where Fred got a discount and only gets $90 back, or the one where Fred had broken the microwave himself and we refuse his refund, etc.
When we actually start coding the till software, we break our code down into small pieces; classes, functions, modules, etc. We can describe the behaviour of a piece of code, and provide an example. So, for instance, we might say that a refund calculator should take discounts into account. This is only a small part of the refund scenario. We have a class, RefundCalculator, and a unit test with a method that says shouldTakeDiscountsIntoAccount.
We might put the steps for our example in comments, for instance:
// Given a microwave was sold at 10% discount for $100
...
// When we calculate the refund due
...
// Then the calculator should tell us it's $90.
...
Then we fill in the code to turn this into a unit test, and write the code that makes it pass.
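To make that concrete, here is a minimal sketch of what the filled-in unit test (and the code that makes it pass) might look like in C# with NUnit. The RefundCalculator class and its CalculateRefund method are hypothetical names invented for this example:

using NUnit.Framework;

// Hypothetical production class, written only after the test below was seen to fail.
public class RefundCalculator
{
    public decimal CalculateRefund(decimal price, decimal discountPercent)
    {
        // Refund what the customer actually paid: the price less the discount.
        return price - (price * discountPercent / 100m);
    }
}

[TestFixture]
public class RefundCalculatorTests
{
    [Test]
    public void ShouldTakeDiscountsIntoAccount()
    {
        // Given a microwave was sold at 10% discount for $100
        var calculator = new RefundCalculator();

        // When we calculate the refund due
        decimal refund = calculator.CalculateRefund(price: 100m, discountPercent: 10m);

        // Then the calculator should tell us it's $90
        Assert.AreEqual(90m, refund);
    }
}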
Normally "BDD" refers to the scenario which describes the whole application, but the ideas actually started at a unit level, and the principles are the same. The only difference is that one is an example of a user using an application, and the other is an example of a class using another class (or function, or what have you). So BDD on the outside of an application is like ATDD (Acceptance-Test-Driven Development) and BDD for classes is like TDD. Hopefully this helps give you an idea of how the concepts hang together.
The only difference is that we got rid of the word "test", because we find it easier to ask people for an example than a test, and it helps keep people thinking about whether they understand the problem, rather than thinking about how to test a solution.
This answer on "top down" (or outside-in) vs. "bottom-up" may also help you.

Your summary is basically correct. The labels can be misleading: people who call what they do 'BDD' will write acceptance tests and unit tests, people who call what they do 'TDD' will write acceptance tests and unit tests. To me, the distinction between the two is much ado about nothing. You will read many people's experiences with different flavors of this basic process. Try the approaches that seem to make sense in your situation and always be ready to make adjustments based on what works and doesn't work for you - that's the essence of agile.

There are two approaches to BDD stories: imperative and declarative. A developer will likely find imperative stories easier to write, especially when they are used to scripting unit tests.
However, when approaching this from an Agile test-first/test-driven development angle, a declarative approach results in BDD narratives that are consistent with the development stories. This is because the BDD narrative continues to reflect the domain language of the business rather than the programming domain.
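To make the distinction concrete, here is a rough, invented illustration of the same refund requirement written both ways (the wording is mine, not from any particular team or tool).
An imperative story spells out the mechanics of the interaction:
Given I am on the refunds screen
When I enter receipt number "12345" and click "Look up"
And I select the microwave line item and click "Refund to card"
Then I should see the message "Refund of $100 issued"
A declarative story stays in the business's domain language:
Given Fred bought a microwave for $100
When he returns the microwave for a refund
Then he should get $100 refunded to his credit card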
How do you capture requirements with declarative acceptance tests?

Does TDD preclude designing first?

I've been reading about TDD lately, and it is advocated because it supposedly results in code that is more testable and less coupled (and a bunch of other reasons).
I haven't been able to find much in the way of practical examples, except for a Roman numeral conversion and a number-to-English converter.
Working through these two examples, I observed the typical red-green-refactor cycles of TDD, as well as the application of the rules of TDD. However, this seemed like a big waste of time when normally I would observe a pattern, implement it in code, and then write tests for it afterwards. Or possibly write a stub for the code, write the unit tests, and then write the implementation - which might arguably be TDD - but not this continuous case-by-case refactoring.
TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture. My opinion so far is that the benefits of TDD can be achieved by a proper architectural design, although admittedly not everyone can do this reasonably well.
So here I have two questions:
Am I right in understanding that using TDD pretty much doesn't allow you to design first (see the rules of TDD)?
Is there anything that TDD gives you that you can't get from doing proper design before you start coding?
Well, I was in your shoes some time ago and had the same questions. Since then I have done quite a lot of reading about TDD and decided to experiment with it a little.
I can summarize my experience about TDD in these points:
TDD is unit testing done right, ATDD/BDD is TDD done right.
Whether you design beforehand or not is totally up to you. Just make sure you don't do BDUF (Big Design Up Front). Believe me, you will end up changing most of it midway, because you can never fully understand the requirements until your hands get dirty.
OTOH, you can do enough design to get you started. Class diagrams, sequence diagrams, domain models, actors and collaborators are perfectly fine as long as you don't get hung up in the design phase trying to figure everything out.
Some people don't do any design at all. They just let the code talk and concentrate on refactoring.
IMHO, balance your approach. Do some design till you get the hang of it then start testing. When you reach a dead end then go back to the white board.
One more thing, some things can't be solved by TDD like figuring out an algorithm. This is a very interesting post that shows that some things just need to be designed first.
Unit testing is hard when you already have the code. TDD forces you to think from your API user's perspective. This way you can decide early on whether the public interface of your API is usable or not. If you decide to do unit testing after implementing everything, you will find it tedious; most probably it will only cover some cases, and I know some people who will write only passing test cases just to get the feature done. I mean, who wants to break his own code after all that work?
TDD breaks this mentality. Tests are first class citizens. You aren't allowed to skip tests. You aren't allowed to postpone some tests till the next release because we don't have enough time.
Finally, to answer your question whether there is anything that TDD gives you that you can't get from doing proper design before you start coding, I would say commitment.
As long as you're doing TDD, you are committed to applying good OO principles, so that your code is testable.
To answer your questions:
"Test Driven Development" (TDD) is often referred to as "Test Driven Design", in that this practice will result in a good design of the code. When you have written a failing unit test, you are forced into a test driven design approach, so that you can implement just what is needed to make the test pass i.e. you have to consider the design of the code you are writing to make the test pass.
When using a TDD approach a developer will implement the minimum amount of code required to pass the test. Doing proper design beforehand usually results in waste if the requirements change once the project has started.
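As a toy illustration of "the minimum amount of code required to pass the test" (all names here are invented for the sketch): the first test below can be satisfied by an almost trivially simple implementation, and only further tests force more logic to appear.

using NUnit.Framework;

public class BasketTests
{
    [Test]
    public void TotalOfEmptyBasketIsZero()
    {
        Assert.AreEqual(0m, new Basket().Total());
    }
}

// The simplest thing that makes the test above pass; a later test such as
// "basket with one item totals that item's price" would drive out the real logic.
public class Basket
{
    public decimal Total()
    {
        return 0m;
    }
}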
You say "TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture" - If you are following an Agile approach to your software development, then you do still need to do some up front architectural investigation (e.g. if you were using the Scrum methodology you would create a development "Spike" at the start of a project) to ascertain what the minimum amount of architecture needed to start the project. At this stage you make decisions based on what you know now e.g. if you had to work with a small dataset you'd choose to use a regular DB, if you have a huge DB you might to choose to use a NoSQL big data DB etc.
However, once you have a general idea of the architecture, you should let the design evolve as the project progresses leaving further architectural decisions as late in the process as possible; Invariably as a project progresses the architecture and requirements will change.
Further more this rather popular post on SO will give you even more reasons why TDD is definetly worth the effort.

Can we come up with a better name for TDD?

I keep hearing how Test Driven Development is an unfortunate name for the technique because TDD is not about Testing, it's about design.
Personally, I've never had a problem with the name simply because I've always thought about it more as a way to program to a specification. I've never had a problem with the fact that the code that is verifying that I've met the spec has the word 'test' in a class name, method name, annotation, or attribute. It's a convention I've followed in order to have test frameworks do the heavy lifting for me.
I'm not the most dogmatic about my approach, although I do try to write my tests first, I often find myself inspired to write some extra code which I then try to wrap in tests. I do tend to find that TDD outsmarts me from an API-design perspective every time I do that though (and I inevitably end up refactoring my non-tested code as soon as I start writing tests around it), but the point is I'm okay with not writing all tests up-front as long as I end up with a test harness around everything of interest when I'm done.
So, back to the question. What would be a better name for TDD? Would a name that didn't involve the word 'test' make a difference?
Test driven development is not a way of testing, it's a way of developing (including design, coding and testing). Otherwise it would be called development driven testing.
The key word in the phrase "test driven development" is development; "test-driven" is just a modifier. You should explain that to whoever is giving you grief over the term.
I have absolutely no problem with the name since it accurately reflects the intent. And you don't have to have all your tests finished before starting development but you should have them specified at least.
I think that's what Behavior-Driven Development tried to achieve (among other things).
I've stopped caring about definitive names -- it's all open to interpretation.
In my (subjective) opinion, I believe that the reason some people started the campaign against the word 'Test' in Test-Driven Development was because of bad experience with explaining the concept to certain types of developers.
There are developers who, as soon as they hear the word 'Test', stop listening - they think it's not their job, so why should they care? That attitude is obviously detrimental if you need people to adopt the practice of TDD. If one solution to that problem is to rename TDD into something that does not include the word 'Test', then fine by me, but for the rest of us, let's just continue to call it TDD.
I've seen at least the following terms suggested:
Test-Driven Development (the original term)
Test-Driven Design (because it's more about design)
Example-Driven Design (because tests could be viewed as examples of how the code is supposed to work)
Design-by-Example (just another phrase for EDD)
Behavior-Driven Design (the declared intent was different, but most attempts so far have looked a lot like TDD with some extra verbosity added)
Even though 'Design' is very important to me, I still think the tests provide value. Sure, they are not QA tests, but the entire mass of tests produced as a by-product of TDD still provides the safety net that allows you to refactor with confidence. In fact, Refactoring has an entire section on unit testing, because Fowler considered it very dangerous to attempt refactoring without a regression test suite.
Anyone really, truly believing that the tests are of no importance in TDD should put their money where their mouth is and delete the unit tests when their code is done.
For me, TDD makes sense. Yes, it is first and foremost about Development and Design, but the Tests are important artifacts too.
I think Test Driven Development is fine. That's what it is. Yes, it affects the design. Yes, it improves the quality. Yes, it improves the overall maintainability and ability to refactor. Yes it's faster. Etc.
I think of BDD as a different, more specific practice. But maybe it's more in line with the original intent of TDD than tools like JUnit promoted.
Is it a bad name because it speaks more towards the process than the benefits? Maybe.
Other ideas:
responsible development
test-first development
holistic development
not hacking
I think "Code-By-Example" is the best name for what we do. I think that's what Dan North calls it. Code By Example is then a part of BDD (so BDD is larger/more specific that TDD).
The concept of "Context Specifications" is also quite similar, promoted by people like Scott Bellware.
I agree that using the word "test" makes it difficult to communicate the technique to new developers.

Test Driven Development is the way to go. But how should it be done?

A number of developers are designing their applications using Test Driven Development (TDD), but I would like to know at which stage I should incorporate TDD into my projects. Should I first design my classes, write some unit tests and determine what methods are needed, or design the methods first and then write some unit tests afterwards?
What is the best way to go about this?
TDD is a coding and design-in-the-small technique. It is not a big-picture envisioning technique. If you are starting to write an application, you want to do some storyboards, or wireframes, or even some simple sketches. You should have an idea of the larger-scale design, i.e. the classes and relationships in your system. Before you get to the point where you'd start to do interaction design (e.g. methods and arguments), you start doing TDD.
The wireframes you have done will give you an idea of how the system should appear. The large scale design will give you an idea of what classes to create. However, none of these models should be considered correct or permanent. As you write tests, you'll find better design ideas that will change your high-level design, and even your wireframe design.
The point of it being test driven is that the tests drive the design as well as the implementation.
In some cases that's not appropriate - it may be blatantly obvious what the design should look like, especially if it's implementing an existing interface (or some similarly restricted choice) - but when you've got a large "design space", TDD encourages you to write tests demonstrating how you want to be able to use the class, rather than starting off with how you think you want to implement it.
Generally if something is easy to test, it will be easy to use.
The mantra for test-driven design:
Write the test.
Compile the test (and verify that it fails to compile, because the interface doesn't exist yet).
Write the interface.
Compile the test (it should compile now).
Run the test (it should fail now).
Write the code.
Compile/run the test (it should do both now).
Repeat for each item.
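For instance, here is a minimal C# sketch of one pass through that cycle, using an invented TemperatureConverter example:

using NUnit.Framework;

public class TemperatureConverterTests
{
    [Test]
    public void ConvertsFreezingPointToFahrenheit()
    {
        // Steps 1-2: this test doesn't even compile until TemperatureConverter exists.
        Assert.AreEqual(32.0, TemperatureConverter.ToFahrenheit(0.0), 0.001);
    }
}

// Steps 3-5: adding the interface as a stub (e.g. throwing NotImplementedException)
// makes the test compile but still fail at run time.
// Steps 6-7: replacing the stub with the real formula makes it compile and pass.
public static class TemperatureConverter
{
    public static double ToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}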
I'm very skeptical about writing unit tests before implementation. Maybe it would be efficient if you had a good idea of the exact design of the classes and methods required, but generally at the start of a new project you don't.
My preferred approach is to acknowledge that the start of a new project can be a bit of organised chaos. One part of the design process is writing code to validate certain ideas or to find the best way. A lot of the time the code hits a dead end and gets thrown away, and any unit tests written for it would have been a waste of time.
Don't get me wrong, I'm all for well-designed software, and at a certain point it's essential that the organised chaos morphs into a well-understood design. At that point the class structure and UML diagrams start to solidify. This is the point where TDD becomes important to me. Every class should be able to be tested in isolation. If the design is good, with the right degree of decoupling, that is easy to achieve, usually with a sprinkling of mock objects around the place. If the design is bad, testing becomes a real nightmare and consequently gets ignored.
At the end of the day it's high-quality, productive code that you're after. Unit tests are one of the best tools to help achieve that, but they shouldn't take on a life of their own and become more important than the code they are testing, which is the feeling you sometimes get from the TDD mantra.
Start small
a) The next time you add a method to a utility class, write the tests for the method first. Methods that do string handling are a good place to start, as they tend to have a lot of bugs but don't need complex data sets to test them.
b) Then, once you get confident with writing unit tests, move on to classes that don't touch the UI and don't talk directly or indirectly to the database.
c) You then have to decide whether you will design your application to make unit testing easier. E.g. it is hard to test UI code, so look at MVP etc.; it is hard to test code that accesses the database, so separate your logic into methods that don't need to access the database to run (a sketch follows below).
a) and b) have a big payback without having to change how you design the overall system. You have to consider if c) is worth the cost.... (I think it is, but a lot of people don't, and often you can't control the overall system design.)
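As a rough illustration of point c), under invented names: the business rule lives in a method that never touches the database, so it can be unit tested directly, while the database access sits behind a thin wrapper that is covered by integration tests instead.

using System;

// Pure logic: easy to unit test in isolation.
public static class DiscountRules
{
    public static decimal ApplyLoyaltyDiscount(decimal orderTotal, int yearsAsCustomer)
    {
        // 1% off per year of loyalty, capped at 10%.
        decimal percent = Math.Min(yearsAsCustomer, 10);
        return orderTotal * (1m - percent / 100m);
    }
}

// Thin data-access wrapper: kept out of the unit tests.
public class OrderRepository
{
    public decimal LoadOrderTotal(int orderId)
    {
        // Talks to the database; exercised by integration tests instead.
        throw new NotImplementedException();
    }
}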
Ideally you will start to apply TDD when you begin coding. Think about your design first; you can draw diagrams on paper, no need for a CASE tool. Just envision the main classes and their interactions, pick one, write a first simple test, write the code to make it pass, write another test, make it pass, refactor...
If your process mandates documenting and reviewing the design before the coding stage, then you cannot follow TDD and let your design emerge. You can, however, make your design first and then do the implementation test-first. You'll get unit tests as if you had applied TDD, while still following your process.
Keep your design as high-level as possible, so that you can let the low-level details of the design emerge.
Please look at http://en.wikipedia.org/wiki/Test-driven_development, which outlines the high-level tasks involved.
TDD ensures that developers give as much importance to test cases as to their business logic. I followed it on my project and could see the advantages: since I started by writing my test cases first, I ensured that they covered all the possible paths through the business logic, thereby catching/reducing the number of bugs. Sometimes it's not practical, since it requires enough time to write these test cases.
The bottom line is that you need to unit test your code really well. So either you start with the implementation and then ensure that there are enough test cases for all the scenarios (most people don't, which is why this practice emerged), or you write the test methods, do a dry run, and then write your implementation. Your implementation is complete only when all your test cases pass.
You should listen to Stack Overflow podcast 41, where they talk about TDD. It's hard to say everybody is using TDD, and it would take a lot of time and resources to reach a high code-coverage percentage. Unit testing could effectively double the development time if you write tests for all of your code.
One of the points from the podcast is that your time and resources could be better used doing other tasks such as usability and new features.

Exercises to enforce good practices such as TDD and Mocking

I'm looking for resources that provide an actual lesson plan or path to encourage and reinforce programming practices such as TDD and mocking. There are plenty of resources that show examples, but I'm looking for something that actually provides a progression that allows the concepts to be learned instead of forcing emulation.
My primary goal is speeding up the process for someone to understand the concepts behind TDD and actually be effective at implementing them. Are there any free resources like this?
It's a difficult thing to encourage because it can be perceived (quite fairly) as a sea-change; not so much a progression to a goal but an entirely different approach to things.
The short-list of advice is:
You need to be the leader; you need to become proficient before you can convince others to follow, and you need to be able to show others the path and settle their uncertainties.
First become proficient in writing unit tests yourself
Practice writing tests for existing methods. You'll probably beat your head on the desk trying to test lots of your code--it's not because testing is hard or you can't understand testing; it's more likely because your existing code and coding style isn't very testable.
If you have a hard time getting started then find the simplest methods you can and use them as a starting point.
Then focus on improving the testability of the code you produce
The single biggest tip: make things smaller and more to the point. This one is the big change--this is the hardest part to get yourself to do, and even harder to convince others of.
Personally I had my "moment of clarity" while reading Bob Martin's "Clean Code" book. An early chapter talks about what a clean method should look like; as an example, he takes a ~40-line method that visually resembled something I'd produce and refactors it into a class that is barely larger line-count-wise but consists of nothing but bite-sized methods of perhaps 3-7 lines each.
Looking at these itty-bitty methods it suddenly clicked that the unit-testing cornerstone "each test only tests one thing" is easiest to achieve when your methods only do one thing (and do that one thing without having 30 internal mechanisms at play).
The good thing is that you can begin to apply your findings immediately; practice writing small methods and small classes and testing along the way. You'll probably start out slow, and hit a few snags fairly quickly, but the first couple months will help get you pointed in the right direction.
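As an invented sketch of what that looks like in practice: one order-processing method delegates to bite-sized methods, each of which can be covered by a test that only tests one thing.

public class OrderProcessor
{
    public decimal ProcessOrder(decimal subtotal, decimal discount, decimal taxRate)
    {
        decimal discounted = ApplyDiscount(subtotal, discount);
        return AddTax(discounted, taxRate);
    }

    // Each method does one thing, so each test can test one thing.
    public decimal ApplyDiscount(decimal subtotal, decimal discount)
    {
        return subtotal - discount;
    }

    public decimal AddTax(decimal amount, decimal taxRate)
    {
        return amount * (1m + taxRate);
    }
}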
You could try attending (or hosting, if there is none near you!) a coding dojo.
I attended one such exercise and it was fun learning TDD.
Books are always a good resource. Even though they aren't free, they may be worth the money they cost compared with the time you'd spend hunting for good free resources.
"Test driven development by example" by Kent Beck.
"Test Driven Development in Microsoft .NET" by James W. Newkirk and Alexei A. Vorontsov
please feel free to add to this list
One thing I worked through that helped me appreciate TDD more was NHibernate and the Unit of Work Pattern. Although it's specific to NHibernate and .NET, I liked the way that it was arranged. Using TDD, you develop something (a UnitofWork) that's actually useful rather than some simple "this is what a mock looks like" example.
How I learn a concept best is by putting it to use towards an actual need. I suggest you take a look at the structure of the article and see if it's along the lines of what you're looking for.
Geeks are excellent at working to metrics, whether they are good for them or not!
You can use this to your advantage. Set up a CI server and fail the build whenever code coverage drops below 50 percent. Let them know that the threshold will rise 10 percent every month until it's 90. You could perhaps use commit hooks to block such check-ins in the first place, but I've never tried this myself.
Let them know that the team's coverage will be taken into account in any performance reviews, etc. By emphasising that it is the coverage of the team, you should get peer pressure helping you ensure good coverage.
This will only ensure they are testing their code, not how well they are testing it, nor whether they are writing the tests first. However, it strongly encourages (or forces) them to incorporate testing into their daily development process.
Generally, once people have something in their process, they'll want to do it as easily/efficiently as possible. TDD is the easiest way to write code with high coverage, as you don't write a line of code without it being covered.
Find someone with experience and talk to them. If there isn't a local developer group, then start one.
You should also try pushing things too far to start with, and then learn when to back off. For example, the whole mock thing started when someone asked, "What if we programmed with no getters?"
Finally, learn to "listen to the tests". When the tests look dreadful, consider whether it's the code that's at fault, not your testing technique.

How do you unit test a unit test? [closed]

I was watching Rob Connery's webcasts on the MVCStoreFront App, and I noticed he was unit testing even the most mundane things, things like:
public Decimal DiscountPrice
{
    get
    {
        return this.Price - this.Discount;
    }
}
Would have a test like:
[TestMethod]
public void Test_DiscountPrice()
{
    Product p = new Product();
    p.Price = 100;
    p.Discount = 20;
    Assert.AreEqual(80, p.DiscountPrice);
}
While I am all for unit testing, I sometimes wonder if this form of test-first development is really beneficial. For example, in a real process you have 3-4 layers above your code (Business Request, Requirements Document, Architecture Document), where the actual defined business rule (Discount Price is Price - Discount) could be misdefined.
If that's the situation, your unit test means nothing to you.
Additionally, your unit test is another point of failure:
[TestMethod]
public void Test_DiscountPrice()
{
    Product p = new Product();
    p.Price = 100;
    p.Discount = 20;
    Assert.AreEqual(90, p.DiscountPrice);
}
Now the test is flawed. Obviously in a simple test, it's no big deal, but say we were testing a complicated business rule. What do we gain here?
Fast forward two years into the application's life, when maintenance developers are maintaining it. Now the business changes its rule and the test breaks again; some rookie developer then fixes the test incorrectly... we now have another point of failure.
All I see is more possible points of failure, with no real beneficial return. If the discount price is wrong, the test team will still find the issue, so how did unit testing save any work?
What am I missing here? Please teach me to love TDD, as I'm having a hard time accepting it as useful so far. I want to, because I want to stay progressive, but it just doesn't make sense to me.
EDIT: A couple of people have mentioned that testing helps enforce the spec. It has been my experience that the spec has been wrong as well, more often than not, but maybe I'm doomed to work in an organization where the specs are written by people who shouldn't be writing specs.
First, testing is like security -- you can never be 100% sure you've got it, but each layer adds more confidence and a framework for more easily fixing the problems that remain.
Second, you can break tests into subroutines which themselves can then be tested. When you have 20 similar tests, making a (tested) subroutine means your main test is 20 simple invocations of the subroutine, which is much more likely to be correct (a sketch follows after the next point).
Third, some would argue that TDD addresses this concern. That is, if you just write 20 tests and they pass, you're not completely confident that they are actually testing anything. But if each test you wrote initially failed, and then you fixed it, then you're much more confident that it's really testing your code. IMHO this back-and-forth takes more time than it's worth, but it is a process that tries to address your concern.
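Here is the kind of tested subroutine the second point refers to, sketched against the Product class from the question (the helper name is invented):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DiscountPriceTests
{
    // A small reusable helper: with 20 similar cases, the tests become 20 one-line calls.
    private static void AssertDiscountPrice(decimal price, decimal discount, decimal expected)
    {
        Product p = new Product();
        p.Price = price;
        p.Discount = discount;
        Assert.AreEqual(expected, p.DiscountPrice);
    }

    [TestMethod]
    public void Test_FullPriceWhenNoDiscount()
    {
        AssertDiscountPrice(100, 0, 100);
    }

    [TestMethod]
    public void Test_DiscountIsSubtractedFromPrice()
    {
        AssertDiscountPrice(100, 20, 80);
    }
}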
A test being wrong is unlikely to break your production code. At least, not any worse than having no test at all. So it's not a "point of failure": the tests don't have to be correct in order for the product to actually work. They might have to be correct before it's signed off as working, but the process of fixing any broken tests does not endanger your implementation code.
You can think of tests, even trivial tests like these, as being a second opinion on what the code is supposed to do. One opinion is the test, the other is the implementation. If they don't agree, then you know you have a problem and you look closer.
It's also useful if someone in future wants to implement the same interface from scratch. They shouldn't have to read the first implementation in order to know what Discount means, and the tests act as an unambiguous back-up to any written description of the interface you may have.
That said, you're trading off time. If there are other tests you could be writing using the time you save skipping these trivial tests, maybe they would be more valuable. It depends on your test setup and the nature of the application, really. If the Discount is important to the app, then you're going to catch any bugs in this method in functional testing anyway. All unit testing does is let you catch them at the point you're testing this unit, when the location of the error will be immediately obvious, instead of waiting until the app is integrated together and the location of the error might be less obvious.
By the way, personally I wouldn't use 100 as the price in the test case (or rather, if I did then I'd add another test with another price). The reason is that someone in future might think that Discount is supposed to be a percentage. One purpose of trivial tests like this is to ensure that mistakes in reading the specification are corrected.
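For example (reusing the Product class from the question, in the same MSTest style), a second case with a non-round price pins down that the discount is an absolute amount rather than a percentage:

[TestMethod]
public void Test_DiscountPrice_WithDifferentPrice()
{
    // With Price = 50 and Discount = 20, an absolute discount gives 30,
    // whereas reading Discount as a percentage would give 40.
    Product p = new Product();
    p.Price = 50;
    p.Discount = 20;
    Assert.AreEqual(30, p.DiscountPrice);
}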
[Concerning the edit: I think it's inevitable that an incorrect specification is a point of failure. If you don't know what the app is supposed to do, then chances are it won't do it. But writing tests to reflect the spec doesn't magnify this problem, it merely fails to solve it. So you aren't adding new points of failure, you're just representing the existing faults in code instead of waffle documentation.]
All I see is more possible points of failure, with no real beneficial return. If the discount price is wrong, the test team will still find the issue, so how did unit testing save any work?
Unit testing isn't really supposed to save work, it's supposed to help you find and prevent bugs. It's more work, but it's the right kind of work. It's thinking about your code at the lowest levels of granularity and writing test cases that prove that it works under expected conditions, for a given set of inputs. It's isolating variables so you can save time by looking in the right place when a bug does present itself. It's saving that suite of tests so that you can use them again and again when you have to make a change down the road.
I personally think that most methodologies are not many steps removed from cargo cult software engineering, TDD included, but you don't have to adhere to strict TDD to reap the benefits of unit testing. Keep the good parts and throw out the parts that yield little benefit.
Finally, the answer to your titular question "How do you unit test a unit test?" is that you shouldn't have to. Each unit test should be brain-dead simple. Call a method with a specific input and compare it to its expected output. If the specification for a method changes then you can expect that some of the unit tests for that method will need to change as well. That's one of the reasons that you do unit testing at such a low level of granularity, so only some of the unit tests have to change. If you find that tests for many different methods are changing for one change in a requirement, then you may not be testing at a fine enough level of granularity.
Unit tests are there so that your units (methods) do what you expect. Writing the test first forces you to think about what you expect before you write the code. Thinking before doing is always a good idea.
Unit tests should reflect the business rules. Granted, there can be errors in the code, but writing the test first allows you to write it from the perspective of the business rule before any code has been written. Writing the test afterwards, I think, is more likely to lead to the error you describe because you know how the code implements it and are tempted just to make sure that the implementation is correct -- not that the intent is correct.
Also, unit tests are only one form -- and the lowest, at that -- of tests that you should be writing. Integration tests and acceptance tests should also be written, the latter by the customer, if possible, to make sure that the system operates the way it is expected. If you find errors during this testing, go back and write unit tests (that fail) to test the change in functionality to make it work correctly, then change your code to make the test pass. Now you have regression tests that capture your bug fixes.
[EDIT]
Another thing that I have found with doing TDD. It almost forces good design by default. This is because highly coupled designs are nearly impossible to unit test in isolation. It doesn't take very long using TDD to figure out that using interfaces, inversion of control, and dependency injection -- all patterns that will improve your design and reduce coupling -- are really important for testable code.
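A minimal invented sketch of what that looks like: the class under test depends on an interface, so a test can inject a fake and exercise the class in isolation.

// The dependency is expressed as an interface rather than a concrete class.
public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

public class CheckoutService
{
    private readonly IPaymentGateway _gateway;

    // Constructor injection: the collaborator is passed in, not created internally.
    public CheckoutService(IPaymentGateway gateway)
    {
        _gateway = gateway;
    }

    public bool Checkout(decimal amount)
    {
        return amount > 0 && _gateway.Charge(amount);
    }
}

// In a unit test, a hand-rolled fake (or a mocking library) stands in for the real gateway.
public class AlwaysApprovesGateway : IPaymentGateway
{
    public bool Charge(decimal amount) { return true; }
}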
How does one test a test? Mutation testing is a valuable technique that I have personally used to surprisingly good effect. Read the linked article for more details, and links to even more academic references, but in general it "tests your tests" by modifying your source code (changing "x += 1" to "x -= 1" for example) and then rerunning your tests, ensuring that at least one test fails. Any mutations that don't cause test failures are flagged for later investigation.
You'd be surprised at how you can have 100% line and branch coverage with a set of tests that look comprehensive, and yet you can fundamentally change or even comment out a line in your source without any of the tests complaining. Often this comes down to not testing with the right inputs to cover all boundary cases; sometimes it's more subtle, but in all cases I was impressed with how much came out of it.
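An invented sketch of the idea: a mutation tool flips an operator in the code under test and reruns the suite; a strong assertion "kills" the mutant, while a weak one lets it survive.

using NUnit.Framework;

public static class Counter
{
    public static int Increment(int x)
    {
        return x + 1;   // a mutation tool might change this to "x - 1"
    }
}

public class CounterTests
{
    [Test]
    public void IncrementAddsOne()
    {
        // This assertion kills the "x - 1" mutant: the mutated code returns 0 and the test fails.
        Assert.AreEqual(2, Counter.Increment(1));

        // A weaker assertion such as Assert.AreNotEqual(5, Counter.Increment(1))
        // would still pass against the mutant, flagging the test as too loose.
    }
}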
When applying Test-Driven Development (TDD), one begins with a failing test. This step, which might seem unnecessary, is actually there to verify that the unit test is testing something. Indeed, if the test never fails, it brings no value and, worse, leads to false confidence, as you'll rely on a positive result that proves nothing.
When this process is followed strictly, all "units" are protected by the safety net the unit tests provide, even the most mundane.
Assert.AreEqual(90, p.DiscountPrice);
There is no reason the test would evolve in that direction - or I'm missing something in your reasoning. When the price is 100 and the discount 20, the discounted price is 80. This is like an invariant.
Now imagine your software needs to support another kind of discount based on percentage, perhaps depending on the volume bought; your Product::DiscountPrice() method may become more complicated. And it is possible that introducing those changes breaks the simple discount rule we had initially. Then you'll see the value of this test, which will detect the regression immediately.
Red - Green - Refactor: this is how to remember the essence of the TDD process.
Red refers to the JUnit red bar when a test fails.
Green is the color of the JUnit progress bar when all tests pass.
Refactor under the green condition: remove any duplication, improve readability.
Now to address your point about the "3-4 layers above the code": this is true in a traditional (waterfall-like) process, not when the development process is agile. And agile is the world TDD comes from; TDD is a cornerstone of eXtreme Programming.
Agile is about direct communication rather than thrown-over-the-wall requirement documents.
While I am all for unit testing, I sometimes wonder if this form of test-first development is really beneficial...
Small, trivial tests like this can be the "canary in the coal mine" for your codebase, alerting you to danger before it's too late. The trivial tests are useful to keep around because they help you get the interactions right.
For example, think about a trivial test put in place to probe how to use an API you're unfamiliar with. If that test has any relevance to what you're doing in the code that uses the API "for real", it's useful to keep that test around. When the API releases a new version and you need to upgrade, you now have your assumptions about how you expect the API to behave recorded in an executable format that you can use to catch regressions.
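A small invented example of such a test, here recording an assumption about string.Split in the .NET base class library purely as an illustration; the same idea applies to any third-party API you depend on.

using NUnit.Framework;

public class StringSplitAssumptions
{
    [Test]
    public void SplitKeepsEmptyEntriesByDefault()
    {
        // We rely on the empty middle entry being preserved; if an upgrade ever
        // changed this behaviour, the failure would point straight at the assumption.
        string[] parts = "a,,b".Split(',');
        Assert.AreEqual(3, parts.Length);
    }
}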
...[I]n a real process, you have 3-4 layers above your code (Business Request, Requirements Document, Architecture Document), where the actual defined business rule (Discount Price is Price - Discount) could be misdefined. If that's the situation, your unit test means nothing to you.
If you've been coding for years without writing tests it may not be immediately obvious to you that there is any value. But if you are of the mindset that the best way to work is "release early, release often" or "agile" in that you want the ability to deploy rapidly/continuously, then your test definitely means something. The only way to do this is by legitimizing every change you make to the code with a test. No matter how small the test, once you have a green test suite you're theoretically OK to deploy. See also "continuous production" and "perpetual beta."
You don't have to be "test first" to be of this mindset, either, but that generally is the most efficient way to get there. When you do TDD, you lock yourself into a small two-to-three-minute Red-Green-Refactor cycle, so at no point do you have a complete mess on your hands that will take an hour to debug and put back together.
Additionally, your unit test is another point of failure...
A successful test is one that demonstrates a failure in the system. A failing test will alert you to an error in the logic of the test or in the logic of your system. The goal of your tests is to break your code or prove one scenario works.
If you're writing tests after the code, you run the risk of writing a test that is "bad" because in order to see that your test truly works, you need to see it both broken and working. When you're writing tests after the code, this means you have to "spring the trap" and introduce a bug into the code to see the test fail. Most developers are not only uneasy about this, but would argue it is a waste of time.
What do we gain here?
There is definitely a benefit to doing things this way. Michael Feathers defines "legacy code" as "untested code." When you take this approach, you legitimize every change you make to your codebase. It's more rigorous than not using tests, but when it comes to maintaining a large codebase, it pays for itself.
Speaking of Feathers, there are two great resources you should check out in regard to this:
Working Effectively with Legacy Code
Brownfield Application Development in .NET
Both of these explain how to work these types of practices and disciplines into projects that aren't "Greenfield." They provide techniques for writing tests around tightly coupled components, hard wired dependencies, and things that you don't necessarily have control over. It's all about finding "seams" and testing around those.
[I]f the discount price is wrong, the test team will still find the issue, so how did unit testing save any work?
Habits like these are like an investment. Returns aren't immediate; they build up over time. The alternative - not testing - essentially means taking on the debt of not being able to catch regressions, introduce code without fear of integration errors, or drive design decisions. The beauty is that you legitimize every change introduced into your codebase.
What am I missing here? Please teach me to love TDD, as I'm having a hard time accepting it as useful so far. I want to, because I want to stay progressive, but it just doesn't make sense to me.
I look at it as a professional responsibility. It's an ideal to strive toward. But it is very hard to follow and tedious. If you care about it, and feel you shouldn't produce code that is not tested, you'll be able to find the will power to learn good testing habits. One thing that I do a lot now (as do others) is timebox myself an hour to write code without any tests at all, then have the discipline to throw it away. This may seem wasteful, but it's not really. It's not like that exercise cost a company physical materials. It helped me to understand the problem and how to write code in such a way that it is both of higher quality and testable.
My advice would ultimately be that if you really don't have a desire to be good at it, then don't do it at all. Poor tests that aren't maintained, don't perform well, etc. can be worse than not having any tests. It's hard to learn on your own, and you probably won't love it, but it is going to be next to impossible to learn if you don't have a desire to do it, or can't see enough value in it to warrant the time investment.
A couple of people have mentioned that testing helps enforce the spec. It has been my experience that the spec has been wrong as well, more often than not...
A developer's keyboard is where the rubber meets the road. If the spec is wrong and you don't raise the flag on it, then it's highly probable you'll get blamed for it. Or at least your code will. The discipline and rigor involved in testing is difficult to adhere to. It's not at all easy. It takes practice, a lot of learning and a lot of mistakes. But eventually it does pay off. On a fast-paced, quickly changing project, it's the only way you can sleep at night, no matter if it slows you down.
Another thing to think about here is that techniques that are fundamentally the same as testing have been proven to work in the past: "clean room" and "design by contract" both tend to produce the same types of "meta"-code constructs that tests do, and enforce those at different points. None of these techniques are silver bullets, and rigor is going to cost you ultimately in the scope of features you can deliver in terms of time to market. But that's not what it's about. It's about being able to maintain what you do deliver. And that's very important for most projects.
Unit testing works very much like double-entry bookkeeping. You state the same thing (the business rule) in two quite different ways: as programmed rules in your production code, and as simple, representative examples in your tests. It's very unlikely that you make the same mistake in both, so if they both agree with each other, it's rather unlikely that you got it wrong.
How is testing going to be worth the effort? In my experience in at least four ways, at least when doing test driven development:
it helps you come up with a well decoupled design. You can only unit test code that is well decoupled;
it helps you determine when you are done. Having to specify the needed behavior in tests helps to not build functionality that you don't actually need, and determine when the functionality is complete;
it gives you a safety net for refactorings, which makes the code much more amenable to changes; and
it saves you a lot of debugging time, which is horribly costly (I've heard estimates that traditionally, developers spend up to 80% of their time debugging).
Most unit tests test assumptions. In this case, the discounted price should be the price minus the discount. If your assumptions are wrong, I bet your code is also wrong. And if you make a silly mistake, the test will fail and you will correct it.
If the rules change, the test will fail, and that is a good thing. So you have to change the test too in this case.
As a general rule, if a test fails right away (and you don't use test-first design), either the test or the code is wrong (or both, if you are having a bad day). You use common sense (and possibly the specs) to correct the offending code and rerun the test.
Like Jason said, testing is security. And yes, sometimes they introduce extra work because of faulty tests. But most of the time they are huge time savers. (And you have the perfect opportunity to punish the guy who breaks the test (we are talking rubber chicken)).
Test everything you can. Even trivial mistakes, like forgetting to convert meters to feet can have very expensive side effects. Write a test, write the code for it to check, get it to pass, move on. Who knows at some point in the future, someone may change the discount code. A test can detect the problem.
I see unit tests and production code as having a symbiotic relationship. Simply put: one tests the other. And both test the developer.
Remember that the cost of fixing defects increases (exponentially) as the defects live through the development cycle. Yes, the testing team might catch the defect, but it will (usually) take more work to isolate and fix the defect from that point than if a unit test had failed, and it will be easier to introduce other defects while fixing it if you don't have unit tests to run.
That's usually easier to see with something more than a trivial example ... and with trivial examples, well, if you somehow mess up the unit test, the person reviewing it will catch the error in the test or the error in the code, or both. (They are being reviewed, right?) As tvanfosson points out, unit testing is just one part of an SQA plan.
In a sense, unit tests are insurance. They're no guarantee that you'll catch every defect, and it may seem at times like you're spending a lot of resources on them, but when they do catch defects that you can fix, you'll be spending a lot less than if you'd had no tests at all and had to fix all defects downstream.
I see your point, but it's clearly overstated.
Your argument is basically: Tests introduce failure. Therefore tests are bad/waste of time.
While that may be true in some cases, it's hardly the majority.
TDD assumes: More Tests = Less Failure.
Tests are more likely to catch points of failure than introduce them.
Even more automation can help here!
Yes, writing unit tests can be a lot of work, so use some tools to help you out.
Have a look at something like Pex, from Microsoft, if you're using .Net
It will automatically create suites of unit tests for you by examining your code. It will come up with tests which give good coverage, trying to cover all paths through your code.
Of course, just by looking at your code it can't know what you were actually trying to do, so it doesn't know whether the behaviour is correct or not. But it will generate interesting test cases for you, and you can then examine them and see if the code is behaving as you expect.
If you then go further and write parameterized unit tests (you can think of these as contracts, really), it will generate specific test cases from these, and this time it can know if something's wrong, because your assertions in your tests will fail.
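For illustration, here is what a hand-written parameterized test might look like in NUnit (this is not Pex itself, and it reuses the Product class from the earlier question): the rule is stated once and each case row is a concrete example the framework runs separately.

using NUnit.Framework;

public class DiscountPriceParameterizedTests
{
    [TestCase(100, 20, 80)]
    [TestCase(50, 0, 50)]
    [TestCase(80, 80, 0)]
    public void DiscountPriceIsPriceMinusDiscount(int price, int discount, int expected)
    {
        Product p = new Product();
        p.Price = price;
        p.Discount = discount;
        Assert.AreEqual((decimal)expected, p.DiscountPrice);
    }
}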
I've thought a bit about a good way to respond to this question, and would like to draw a parallel to the scientific method. IMO, you could rephrase this question, "How do you experiment an experiment?"
Experiments verify empirical assumptions (hypotheses) about the physical universe. Unit tests will test assumptions about the state or behavior of the code they call. We can talk about the validity of an experiment, but that's because we know, through numerous other experiments, that something doesn't fit. It doesn't have both convergent validity and empirical evidence. We don't design a new experiment to test or verify the validity of an experiment, but we may design a completely new experiment.
So like experiments, we don't describe the validity of a unit test based on whether or not it passes a unit test itself. Along with other unit tests, it describes the assumptions we make about the system it is testing. Also, like experiments, we try to remove as much complexity as we can from what we are testing. "As simple as possible, but no simpler."
Unlike experiments, we have a trick up our sleeve to verify our tests are valid other than just convergent validity. We can cleverly introduce a bug we know should be caught by the test, and see if the test does indeed fail. (If only we could do that in the real world, we'd depend much less on this convergent validity thing!) A more efficient way to do this is watch your test fail before implementing it (the red step in Red, Green, Refactor).
You need to use the correct paradigm when writing tests.
Start by first writing your tests.
Make sure they fail to start off with.
Get them to pass.
Code review before you check in your code (make sure the tests are reviewed).
You can't always be sure, but these steps improve the overall quality of your tests.
Even if you do not test your code, it will surely be tested in production by your users. Users are very creative in trying to crash your software and finding even non-critical errors.
Fixing bugs in production is much more costly than resolving issues in the development phase.
As a side effect, you will also lose income because of an exodus of customers. You can count on 11 lost or never-gained customers for every 1 angry customer.

Resources