Test Driven Development - What exactly is the test? - tdd

I've been learning what TDD is, and one question that comes to mind is: what exactly is the "test"? For example, do you call the web service and then build the code to make it work? Or is it more unit-testing oriented?

In general the test may be...
a unit test, which tests an individual component of your software in isolation, without any external dependencies on other classes
an integration test, which tests the connection between two separate systems, i.e. their ability to work together
an acceptance test, which validates the functionality of the system against its requirements
...and some others I've most likely forgotten for now.
In TDD, however, you're mostly focusing on unit tests when creating your software.

It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass them.
Then write more tests to cover more of the requirements, and implement a bit more to make those pass.
It's an iterative process, with cycles of test writing followed by code writing.
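As a rough illustration of that cycle (using NUnit and a made-up FizzBuzz-style function, not anything from the question), the first iteration might look like this:
// Step 1 ("red"): write a test for behaviour that does not exist yet, and watch it fail.
[Test]
public void Convert_ReturnsFizzForThree()
{
    Assert.AreEqual("Fizz", FizzBuzz.Convert(3));
}

// Step 2 ("green"): do the absolute minimum to make that one test pass.
public static class FizzBuzz
{
    public static string Convert(int n)
    {
        return "Fizz";   // deliberately naive; the next test (e.g. Convert(1) == "1") forces real logic
    }
}

// Step 3: write the next test, watch it fail, generalise the code, refactor, repeat.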

Here are a couple of good articles by Uncle Bob:
Three rules of TDD
TDD with Acceptance and Unit tests

I suggest you not put too much emphasis on the word "test", because TDD is actually a software development methodology, not a testing methodology.

I would say it is about unit testing and code coverage. It is about shipping bug-free code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.

The way I use it, it's unit-testing oriented. Suppose I want a method that squares ints. I first write the method stub:
int square(int x) { throw new NotImplementedException(); }   // stub: fails every test until implemented
and then write some tests like:
[Test]
public void TestSquare()
{
    Assert.AreEqual(0, square(0));      // expected value first, then the actual call
    Assert.AreEqual(1, square(1));
    Assert.AreEqual(100, square(10));
    Assert.AreEqual(1, square(-1));
    Assert.AreEqual(100, square(-10));
    // ...
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and all the borderline values like int.MaxValue, zero and null values (remember you can test for errors too), and I make sure the test fails (which isn't hard :-)). Then I keep working on the function until it passes.
So: first a unit test that fails and covers what you want it to cover, then the method.

Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be far more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!), and define the behavior of your class either in terms of the return values of its methods, or of the calls made to interfaces that have been passed into the object.
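For example (a sketch with made-up names, assuming NUnit): instead of actually sending anything anywhere, the class talks to an interface that was passed in, and the test asserts on the return value and on what reached a hand-rolled fake - no real I/O anywhere:
public interface IMessageSender { void Send(string message); }

public class Greeter
{
    private readonly IMessageSender sender;
    public Greeter(IMessageSender sender) { this.sender = sender; }

    public string Greet(string name)
    {
        var greeting = "Hello, " + name;
        sender.Send(greeting);   // the side effect goes through the injected interface
        return greeting;         // the behaviour is also visible as a return value
    }
}

class FakeSender : IMessageSender   // hand-rolled fake used by the test
{
    public string LastMessage;
    public void Send(string message) { LastMessage = message; }
}

[Test]
public void Greet_ReturnsGreetingAndSendsIt()
{
    var fake = new FakeSender();
    var result = new Greeter(fake).Greet("Ada");

    Assert.AreEqual("Hello, Ada", result);
    Assert.AreEqual("Hello, Ada", fake.LastMessage);
}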

I just want to give my view on the topic, which may help you understand TDD in a slightly different way.
TDD is a design method that relies on testing first. Since you asked what the test actually is, I'll put it like this:
If you want to build an application, you know its purpose, and you know that when you are done, or along the way, you need to test it, e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click to execute a piece of code and pop up a dialog showing the result of the operation, etc.
TDD, on the other hand, changes your mindset, and I'll point out one of the ways it does. Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you code and test via buttons, pop-ups or code inspection. I call this style SDDD (syntax-debugging-driven development).
When you are doing TDD, it is "semantic-debugging-driven development", because you write down your thoughts and goals for your application first, using tests (a more dynamic and repeatable version of a whiteboard). These tests exercise the logic (the "semantics") of your application and fail whenever you have a semantic error, even if your application compiles without syntax errors.

By the way, even though I said "you know the purpose of what you want to build", in practice you may not have all the information required to build the application. Since TDD forces you to write tests first, you are compelled to ask more questions about how the application should behave at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

Related

While using test driven development, should I remove previous tests if I know they work?

I am currently doing a project for college and it requires me to write a simple program with a few methods. Every time I create a new method I copy the previous test and then add the new method to it. Currently I have 4 tests, with all 4 methods being tested in the fourth. Should I remove the first and second tests, which only test the first method, and then the first and second methods? Sorry if this is confusing. Thanks
That's not how test-driven development (TDD) is usually done. TDD usually serves two purposes:
By writing the test first, you get feedback on the usability of your API design. A unit test uses the API of the System Under Test (SUT), so if the test is difficult to write, the SUT is difficult to use.
The tests subsequently become artefacts that serve as regression tests.
Test code is also code, and comes with the same maintenance cost as regular code. It should be kept to the same standard as all other code, since you'll have to maintain it for the lifetime of the code base.
For that reason, rules about duplication, cohesion, coupling, etc. also apply for tests. In other words, don't copy and paste.
I've never heard of anyone following the process described in the OP.
Each test should test only one thing.
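In other words, rather than one ever-growing copy-pasted test, write a small, focused test per behaviour - something like this sketch (Calculator is just a made-up example class, and NUnit is assumed):
[Test]
public void Add_ReturnsSumOfTwoNumbers()
{
    Assert.AreEqual(5, new Calculator().Add(2, 3));        // tests only Add
}

[Test]
public void Subtract_ReturnsDifferenceOfTwoNumbers()
{
    Assert.AreEqual(4, new Calculator().Subtract(7, 3));   // tests only Subtract
}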

Testing two way dependent classes in TDD

Recently I wanted to learn TDD by developing a real thing, so I decided to go with a simple data packer/unpacker. After designing it on paper everything looked good, but when I attempted to code it I realised I don't know how to test it - and so, in TDD, how to do anything at all.
I have two classes: ArchiveReader and ArchiveWriter. The problem is, when I save something to a file with ArchiveWriter I can't test it properly without ArchiveReader; without it I am forced to compare the output byte by byte, and I don't think that's a good idea - minor, irrelevant changes can occur later. ArchiveReader tests also need something to read, so I have to use ArchiveWriter to make test packages.
Is TDD failing in this area? Is there any method for testing cases like this?
If you already have the code for the two classes then it's not really TDD, because tests did not drive the design.
You can still test your code, depending on how the dependencies of those classes are handled. For example, if the ArchiveWriter class writes to a stream, you can have it output to a memory stream instead of a file stream; the writer should not care what kind of stream it is, and this allows you to compare the results of the write method.
Same goes for the ArchiveReader class, if it reads from a stream, then it can be a memory stream.
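As a rough sketch (assuming NUnit, and assuming ArchiveWriter can be given a Stream and has some kind of Write method - the real API may well differ), such a test could look like this:
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ArchiveWriterTests
{
    [Test]
    public void Write_ProducesNonEmptyOutput()
    {
        using (var stream = new MemoryStream())           // in-memory, no file on disk
        {
            var writer = new ArchiveWriter(stream);       // assumed constructor
            writer.Write(new byte[] { 1, 2, 3 });         // assumed method

            Assert.IsTrue(stream.ToArray().Length > 0);   // inspect the bytes directly
        }
    }
}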
As for your question about ArchiveWriter and ArchiveReader being mutually dependent for verifying each other, I don't think this is necessarily a problem. While it would be ideal to be able to test each on its own, in isolation, there is no rule that a unit test can only test a single class.
As for these classes, in production they are likely always to be used in tandem with each other; writing an archive is pointless if you cannot read it again later.
I develop this kind of code, which has two classes or methods that must read and write the same binary format, using TDD all the time.
You mentioned checking the output byte by byte. I prefer the analogous tests of the input code: although I must hand-craft the binary inputs, the behaviour of the input code is generally much easier to check. It will delegate to methods or create objects, which you can check it does correctly in the usual way.
I also have symmetry tests. These use the output code to create the binary representation, then have the input code create a new set of objects from that binary representation. The tests check that the original and new objects are equivalent. These tests are easy to write and produce useful diagnostics when they fail.
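A symmetry test along those lines might look roughly like this (the Write/Read calls are assumptions about the ArchiveWriter/ArchiveReader API, not the asker's actual signatures):
[Test]
public void WriteThenRead_ReturnsEquivalentData()
{
    var original = new byte[] { 1, 2, 3, 4 };

    using (var stream = new MemoryStream())
    {
        new ArchiveWriter(stream).Write(original);               // assumed API
        stream.Position = 0;                                     // rewind before reading back
        var roundTripped = new ArchiveReader(stream).Read();     // assumed API

        CollectionAssert.AreEqual(original, roundTripped);       // equivalence, not byte-by-byte file diffs
    }
}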
Now, some people would say, oh, but you are not doing unit testing, because your symmetry tests test both the output code and the input code; you are doing integration tests and therefore doing it wrong. This should not worry you. The idea that there is a neat division between unit and integration tests, and that everything must be tested by unit tests before having integration tests is wrong. In practice, almost no code is tested in perfect isolation; most tested code uses other classes, even if they are such basics as String and HashMap. It is more useful to view tests as being on a continuum between perfect unit tests and perfect integration tests, to prefer all code to have tests near the unit test end, but not to get too worried if some code does not do so.
TDD is mostly about unit testing. Unit testing is about testing a fully isolated, independent unit of work. Put simply, it is about testing the logic of one of your public methods with all dependencies and the environment faked.
In your example this means that if you want to unit test ArchiveReader or ArchiveWriter, you should first isolate it from real I/O and test just the logic (the C#/Java/... "your code" part).
If you are testing your application as a whole, that is mostly integration testing, and then your assertions do need to be made against the files your ArchiveWriter produces.
Before going into TDD, I recommend you read at least one good book just about unit testing. Roy Osherove's book works perfectly for me.

Planning unit tests with TDD

When you approach a class you want to write, how do you plan its unit tests?
Are there formal templates which you follow or do you use pen and paper/notepad?
I am looking for some way to let other programmers/QAs know what tests should be implemented (and to make it easier to spot if something was overlooked).
With TDD, tests drive the feature you are writing. If you need to write formal templates for it, then chances are you're not entering into the spirit of things!
TDD should be used to generate the test cases as you write the code. Simply put, before you write the next line of code, encode in a test what the code should do.
Check out Bob Martin's bowling game example which should give you more of a feel for TDD.
I do not think that having a template goes well with using TDD. I assume that you have read Kent Beck's book, Test Driven Development: By Example. If not, please do so.
But the general idea is simple. When we start a class, we will have a general idea of its responsibility. These are the steps we use:
Have a general idea of the class's responsibility and use that information to name the class.
Create a test case for this class.
If you start with concrete information about what the units inside the class are, just write those stubs in the class and write test cases for them. Initially all of them will fail, and the signatures of most of those methods will change. That's the whole idea.
In most cases, the developer will not have that degree of information. In that case, it's OK to start writing the code inside the first test. Once the test passes, migrate the logic into the class.
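As a small made-up illustration of that last step (a hypothetical Parser class, NUnit assumed): the logic starts life inside the test, and once it works it moves into the class:
// First pass: the logic lives in the test itself.
[Test]
public void SplitsCommaSeparatedInput()
{
    var parts = "1,2,3".Split(',');          // logic written directly in the test
    Assert.AreEqual(3, parts.Length);
}

// After it passes: the logic migrates into the class, and the test calls the class.
[Test]
public void Parser_SplitsCommaSeparatedInput()
{
    Assert.AreEqual(3, new Parser().Parse("1,2,3").Length);   // Parser/Parse are hypothetical
}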
So what I am driving at is, the whole point of TDD is to make the development process more organic. The class grows, with the knowledge on what it should do. Having a formal template, or writing things down, will probably not help.
The only thing I can think of that you could do is to sit with your developers before each iteration and come up with a fairly detailed idea of each component class and its responsibilities (we only use this discussion to finalize public APIs).
If you want to know the quality of test cases written by your developer, then you can conduct an ad hoc code review to see if the classes are correctly broken down to units and all the units are tested.
TDD is not the methodology you are looking for ;-)
way to let other programmers/QAs know what tests should be implemented
This statement implies you are after tests, but TDD itself is driven by requirements and produces features - the fact that it also produces a suite of tests is an incidental (but hugely powerful) by-product, which happens to give you a regression suite.
Although TDD harnesses 'tests' to drive the development of code, you do not need to specify tests up-front. Even if you did (and sometimes it is helpful to your thinking to do so), your programmer may not need to write all the tests in order to produce the desired behaviour in the code. Indeed, in TDD you are encouraged to stop work when all the tests pass - you need not keep on writing tests only to find they already pass; that is something akin to make-work.
Also, the other side effect of TDD is having (and continuously running) a regression suite. If a bug is found at a later date, the existing suite makes it easier to write another test that demonstrates the bug with a failing test. Once the bug is fixed, that test should pass - along with all the other tests in the suite.
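For instance, a bug report might turn into a test like this sketch (Order and AddItem are made-up names): it fails until the fix goes in, then guards against regressions from that point on:
[Test]
public void AddItem_RejectsNegativeQuantity()
{
    var order = new Order();   // hypothetical class with the reported bug

    Assert.Throws<System.ArgumentOutOfRangeException>(
        () => order.AddItem("apple", -1));   // fails while the bug lets -1 through
}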
You cannot commit to TDD and let others do the unit testing. This requirement of yours strongly suggests that you haven't understood the TDD paradigm.
In my (admittedly fairly new) experience:
You write tests that, if passed, will confirm your initial understanding of the target functionality. Not a single line of production code is written at this point.
You then implement the production code so the unit tests pass.
If your understanding evolves, you then change your unit tests and/or add new ones.
You then implement the changes in the production code so the tests will pass.
At that point, it is also fine to write additional unit tests if you discover that parts of your production code are not covered.
Remove tests that no longer make sense.
You arrive at beautiful crisp and clean code :o)
TDD is NOT a QA method; it is a way of DEVELOPING. The whole idea is that the unit tests guide the development process. So you really can't let others do the unit tests for you.
I start by designing the class first, usually with a simple UML class diagram. I try to make the diagram just detailed enough that I can write tests against it (e.g. parameters and return types specified for each method, and I know how the method's behavior affects object state).
Then, I write unit tests. Generally when it comes to automated testing you should have 1 test method for every method defined in your class. As far as convention goes, if I have a method in my class called myMethod, then my unit test method will be called testMyMethod.
I write unit tests using what I know about the method's intended behavior, and then write the method and check to make sure that it passes the unit test.
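For example (a sketch assuming NUnit, using the built-in Stack<int> purely for illustration), the convention looks like this:
public class StackTests
{
    [Test]
    public void testPush()                            // Push -> testPush
    {
        var stack = new System.Collections.Generic.Stack<int>();
        stack.Push(42);
        Assert.AreEqual(42, stack.Peek());
    }

    [Test]
    public void testPop()                             // Pop -> testPop
    {
        var stack = new System.Collections.Generic.Stack<int>();
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
        Assert.AreEqual(0, stack.Count);
    }
}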

In test driven development, do you write every possible test first, then the code?

In doing test driven development I have been in the habit of writing the first unit test for a new piece of functionality first, then writing the code for that functionality. If I have additional tests to write to cover all scenarios, I usually write them after the code is written. Is this considered bad form? Should I try and write every conceivable test for a piece of functionality first, before ever writing that code?
In order to do TDD properly, you always write the test first, and then the functionality second.
To add to that, I would take one scenario at a time, don't write 20 tests and then write the code for those 20 tests. Write one test, red/green flag it, then move on to your next test. This makes sure you're also doing one of the core tenets of TDD, which is to do the simplest implementation possible that meets all of your requirements/scenarios.
Actually, no - I often discover functionality "on the go". Let me explain the "no" a bit further:
I usually start out by writing a test case for a high-level feature, defining its interface. After that, I usually set this test to ignored and continue writing tests for each piece of the interface's functionality. My cycle goes like this:
Integration Test for Story A (high level API)
Write Unit Test for method xyz called in Integration Test
Implement method (red/green/refactor)
Repeat 2+3 till Integration Test passes.
While doing so, I often realize I have forgotten some small piece of functionality in my main test. I then usually take time to look back at my customer's requirements. If it's a fit, I go back and add a test for it, set to ignored, as I first want to finish what I started.
Sometimes I see the chance to do a refactoring. I usually finish an implementation until I reach a commit point and do the refactoring then; however, sometimes I stash my changes, go back and do the refactoring (including new tests if necessary) first. This workflow is powered by Mercurial MQ.
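In NUnit terms (names invented for the sketch), the parked story-level test and the small unit tests driving it might look like this:
[Test, Ignore("Story A not finished yet - enable once the unit tests below pass")]
public void ImportStory_ReadsFileAndStoresRecords()
{
    // high-level test for the story, written first but parked while I work through its pieces
}

[Test]
public void ParseLine_SplitsCsvFields()              // one small unit test at a time, red/green/refactor
{
    Assert.AreEqual(3, Importer.ParseLine("a,b,c").Length);   // Importer is hypothetical
}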
For most people, TDD and incremental/agile development go together. This looks something like:
Write a test for some feature
Write just enough code to make the test pass, refactoring as necessary
Repeat.
If you happen to have a detailed specification ahead of time, you could write all of the tests first, but you'd have to live with having some tests not passing for a while.
The sooner you write the tests, the better. I usually find writing tests a harder task than actually implementing the functionality, because you have to be aware of all the possible outcomes. So I tend to write more tests when I'm "in the zone". And when, during coding, I realize I might have missed a test case, I just note it down on the to-do list.
So in my opinion it's up to you, but I would write the tests in multiple batches.
The way I see it, test driven development isn't necessarily tests first development. Your tests drive your development and you are really writing your tests as you develop your application. You start by writing a simple test that fails because you haven't written the functionality yet. Then you write your code to implement that so that the tests pass.
Then you go back to your test, make modifications that will force you to add more functionality or refactor your code to follow better practices or reduce duplicate code, go fix your code to make the test pass...repeat, repeat, repeat.
A couple of videos that demonstrate this are below, although you can probably find a lot more by googling "TDD video":
http://agilesoftwaredevelopment.com/videos/test-driven-development-basic-tutorial
(oops, only one video, new users can't insert more than one link)
I try to write a test at some level before each bit of functionality. Sometimes, I have to write a little more code to get through the compiler, but I try to minimise that. Writing the test first means that I've thought about what the code is supposed to achieve before writing it.
One technique I find useful is to keep an index card or notepad handy, and make a note of all the cases that I think of along the way. That allows me to focus on the current task without losing track of all the other things I'm supposed to think about. Afterwards, I can work through the list and either complete the extra cases or drop them as not necessary.
You could do that, but you wouldn't be doing TDD. The problem (well, one of them, anyway) with writing all of your tests up front is that in any case where the requirements are non-trivial, your tests will be building in a lot of assumptions about the structure of the code you're test-driving. Big steps lead to missteps.
One of the keys of successful TDD involves taking small steps. Small steps mean fewer changes to back out when something goes wrong. Small steps mean you can more often get your head around the effects of the changes you're making. And because small steps are easier to take with confidence, they have the paradoxical effect of increasing your velocity.
The TDD cycle starts with requirements. Start by choosing a requirement you know how to define through tests immediately, in a few short steps. If you look at a requirement and you're not sure how to test it, or you think, "Yeah, but to do that, I'd need to [insert ill-defined steps] first", then you should either skip to another requirement that you know how to do, or you should break this requirement into smaller requirements that you know how to do.
Once you have that, you work in a short red-green-refactor cycle: Write a test that quantifies some part of the requirement ("red", because it fails, because it has no implementation to test yet), write any code that will pass the test ("green"), then rework the code to remove duplication, magic numbers, and other code smells ("refactor"). During the refactoring phase, you should continue working in small steps, frequently re-running the test to make sure you haven't broken anything. Continue this cycle until you can look your boss/client in the eye and call the requirement met.
Now that you have one simple piece of your system defined, you've opened up the list of requirements available to implement - requirements that are adjacent to or dependent on the one you just implemented can now be tested and implemented in smaller steps building on what you've already done.
So the upshot of all that is: Don't try to do all your tests at once. One (small) thing at a time.
The point of TDD is that you have to observe the test failing while the feature is not yet implemented. So you have to write the test before the code.
When you get into the TDD rhythm, you write one test at a time and make it work. Very short red-green-refactor cycles are where you really feel that rhythm. That being said, there is nothing wrong with other approaches (and they may even make more sense for some types of problems), but typically the only thing you need to do about the other tests you are thinking of is write them down (or have your pair write them down, if you are pair programming) so you don't forget them. You have to do that anyway, because you could forget about a test in the middle of writing a different one.
Write just enough tests to test one unit of code at a time, then write the actual code until it passes the tests; rinse, wash, repeat until done.
If you find yourself needing to write many tests for one unit of code (a method, a function, etc.), it might be a sign that you are trying to do too much in that unit, which in turn makes the unit difficult to test and to refactor at a later time.

How do you handle TDD in the continuous integration?

Imagine you are implementing a user story that contains various new features and adds complexity to the code base. The existing code is quite well covered, and you have just decided upon the interfaces. You are starting to implement the functionality, beginning with the tests.
Now you have fairly complex test cases based on the requirements, but the implementation is nowhere near the point where you are able to commit fully working code to the SCM, and many tests are failing (as they should).
There is an assumption that in continuous integration all builds should be green if possible, and thus you shouldn't commit, as you would break the build. But you also shouldn't "go dark" and keep such a large amount of code to yourself...
What is the suggested procedure in such situation?
Do not decide on all interfaces beforehand. Develop incrementally in a typical TDD rhythm: write a test; make the test pass; refactor. That should keep everything in good shape, the bar will always be green and you can check code in without worrying that you will break the build.
It requires a different style of writing code, but you will get used to the rhythm eventually.
What about skipping those tests that you know won't pass because the functionality is currently missing?
Make it obvious that you are skipping the tests too! Really make it scream "like a stuck pig", as they say in Oz! (-:
As you add functionality, enable the associated tests and keep "your bar green!"
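For example, in NUnit (assumed here) you can make the skip scream in the CI report while the build itself stays green:
[Test]
public void ExportsReportAsPdf()                     // feature not implemented yet
{
    Assert.Ignore("PENDING: PDF export not implemented - enable when the feature lands");
    // real assertions go here once the functionality exists
}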
Here's another great article over at The Pragmatic Programmers that covers making broken windows obvious to others.
HTH
cheers,
Rob
