While using test-driven development, should I remove previous tests if I know they work?

I am currently doing a project for college and it requires me to write a simple program with a few methods. Every time I create a new method I copy the previous test and then add the new method to it. Currently I have four tests, with all four methods being tested in the fourth. Should I remove the first and second tests, which only test the first method, and the first and second methods, respectively? Sorry if this is confusing. Thanks.

That's not how test-driven development (TDD) is usually done. TDD usually serves two purposes:
By writing the test first, you get feedback on the usability of your API design. A unit test uses the API of the System Under Test (SUT), so if the test is difficult to write, the SUT is difficult to use.
The tests subsequently become artefacts that serve as regression tests.
Test code is also code, and comes with the same maintenance cost as regular code. It should be kept to the same standard as all other code, since you'll have to maintain it for the lifetime of the code base.
For that reason, rules about duplication, cohesion, coupling, etc. also apply for tests. In other words, don't copy and paste.
I've never heard of anyone following the process described in the OP.
Each test should test only one thing.
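To make that concrete, here is a minimal sketch (NUnit, with a made-up Calculator class rather than the OP's actual project) of one focused test per method, with the shared setup kept in one place instead of being copied from test to test:

// Sketch: one focused test per method, shared setup extracted (names invented).
using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b) => a + b;
    public int Multiply(int a, int b) => a * b;
}

[TestFixture]
public class CalculatorTests
{
    private Calculator _calculator;

    [SetUp]
    public void SetUp() => _calculator = new Calculator();   // shared setup lives here

    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        Assert.AreEqual(5, _calculator.Add(2, 3));
    }

    [Test]
    public void Multiply_ReturnsProductOfOperands()
    {
        Assert.AreEqual(6, _calculator.Multiply(2, 3));
    }
}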

Testing two way dependent classes in TDD

Recently I wanted to learn TDD by developing a real thing, so I decided to go with a simple data packer/unpacker. On paper the design looked good, but when I attempted to code it I realised I didn't know how to test it, and therefore, in TDD, didn't know how to do anything.
I have two classes: ArchiveReader and ArchiveWriter. The problem is that when I save something to a file with ArchiveWriter, I can't test it properly without ArchiveReader; without it I am forced to compare the output byte by byte, and I don't think that's a good idea, since minor, irrelevant changes can occur later. The ArchiveReader tests also need something to read, so I have to use ArchiveWriter to make test packages.
Is TDD failing in this area? Is there any method to test cases like this?
If you already have the code for the two classes then it's not really TDD, because tests did not drive the design.
You can still test your code, depending on how the dependencies of those classes are handled. For example, if the ArchiveWriter class writes to a stream, you can have it output to a memory stream instead of a file stream; the writer should not care what kind of stream it is, and this allows you to compare the results of the write method.
The same goes for the ArchiveReader class: if it reads from a stream, then that stream can be a memory stream.
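Here is a hedged sketch of that idea; it assumes, purely for illustration, that ArchiveWriter takes a Stream in its constructor and exposes a Write(name, payload) method, so adjust it to the real API:

// Hypothetical sketch: assumes ArchiveWriter takes a Stream and exposes
// Write(string name, byte[] payload); adjust to the real API.
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ArchiveWriterTests
{
    [Test]
    public void Write_ProducesNonEmptyOutput()
    {
        using var output = new MemoryStream();              // no file I/O involved
        var writer = new ArchiveWriter(output);             // assumed constructor
        writer.Write("entry.txt", new byte[] { 1, 2, 3 });  // assumed method

        // The written bytes can now be inspected directly in memory.
        Assert.That(output.ToArray(), Is.Not.Empty);
    }
}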
As for your question about ArchiveWriter and ArchiveReader being mutually dependent when verifying each other, I don't think this is necessarily a problem. While it would be ideal to be able to test both on their own, in isolation, there is no rule that a unit test can only test a single class.
As for these classes, in production they are likely always to be used in tandem with each other; writing an archive is pointless if you cannot read it again later.
I develop this kind of code, which has two classes or methods that must read and write the same binary format, using TDD all the time.
You mentioned checking the output byte by byte. I prefer the analogous tests of the input code: although I must hand-craft the binary inputs, the behaviour of the input code is generally much easier to check. It will delegate to methods or create objects, and you can check that it does so correctly in the usual way.
I also have symmetry tests. These use the output code to create the binary representation, then have the input code create a new set of objects from that binary representation. The tests check that the original and new objects are equivalent. These tests are easy to write and produce useful diagnostics when they fail.
Now, some people would say, oh, but you are not doing unit testing, because your symmetry tests test both the output code and the input code; you are doing integration tests and therefore doing it wrong. This should not worry you. The idea that there is a neat division between unit and integration tests, and that everything must be tested by unit tests before having integration tests is wrong. In practice, almost no code is tested in perfect isolation; most tested code uses other classes, even if they are such basics as String and HashMap. It is more useful to view tests as being on a continuum between perfect unit tests and perfect integration tests, to prefer all code to have tests near the unit test end, but not to get too worried if some code does not do so.
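Here is roughly what such a symmetry test can look like; the ArchiveWriter/ArchiveReader constructors and methods shown are assumptions made for the sketch, not the questioner's real API:

// Sketch of a symmetry (round-trip) test; the ArchiveWriter/ArchiveReader
// APIs used here are assumed, not taken from the question.
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ArchiveRoundTripTests
{
    [Test]
    public void WriteThenRead_ReturnsEquivalentPayload()
    {
        var original = new byte[] { 10, 20, 30 };

        using var stream = new MemoryStream();
        new ArchiveWriter(stream).Write("entry.txt", original);         // assumed API
        stream.Position = 0;                                            // rewind for reading
        var roundTripped = new ArchiveReader(stream).Read("entry.txt"); // assumed API

        // Equivalence of the data is asserted, not the exact byte layout,
        // so harmless format changes later do not break the test.
        CollectionAssert.AreEqual(original, roundTripped);
    }
}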
TDD is mostly about unit testing. Unit testing is about testing a fully isolated, independent unit of work. Simply put, it is about testing some logic of one of your public methods, with all dependencies and the environment faked.
In your example this means that if you want to unit test ArchiveReader or ArchiveWriter, you should first isolate it from real I/O and test just the logic (the C#/Java/... "your code" part).
If you are testing your application in a more end-to-end way, that is mostly integration testing, and then your assertions do need to be made against the files your ArchiveWriter produces.
Before moving on to TDD, I recommend you read at least one good book about unit testing on its own. Roy Osherove's book works perfectly for me.

TDD FIRST principle

I am not understanding how the TDD FIRST principle isn't being adhered to in the following code.
These are my notes about the FIRST principle:
Fast: run (subset of) tests quickly (since you'll be running them all the time)
Independent: no tests depend on others, so can run any subset in any order
Repeatable: run N times, get same result (to help isolate bugs and enable automation)
Self-checking: test can automatically detect if passed (no human checking of output)
Timely: written about the same time as code under test (with TDD, written first!)
The quiz question:
Sally wants her website to have a special layout on the first Tuesday of every month. She has the following controller and test code:
# HomeController
def index
  if Time.now.tuesday?
    render 'special_index'
  else
    render 'index'
  end
end

# HomeControllerSpec
it "should render special template on Tuesdays" do
  get 'index'
  if Time.now.tuesday?
    response.should render_template('special_index')
  else
    response.should render_template('index')
  end
end
What FIRST principle is not being followed?
Fast
Independent
Repeatable
Self-checking
Timely
I'm not sure which FIRST principle is not being adhered to:
Fast: The code seems to be fast because there is nothing complex about its tests.
Independent: The test doesn't depend on other tests.
Repeatable: The test will get the same result every time. 'special_index' if it's Tuesday and 'index' if it's not Tuesday.
Self-checking: The test can automatically detect if it's passed.
Timely: Both the code and the test code are presented here at the same time.
I chose Timely on the quiz because the test code was presented after the controller code. But I got the question wrong, and in retrospect, this wasn't a good choice. I'm not sure which FIRST principle isn't being followed here.
It's not Repeatable, as not every day is Tuesday :) If you run this test on a Monday you will get one result; if you run it on a Tuesday, a different one.
F.I.R.S.T., F.I.I.R.S.T. and FASRCS
Yes, part of the confusion comes from the fact that the F.I.R.S.T. principle is not complete or concise enough concerning the "I". In courses I attended, the principle was called F.I.I.R.S.T.
The second "I" stands for "Isolated". The test above is independent of other tests, but it is not isolated in a separate class or project.
[Updated]:
Isolation can mean:
A unit test isolates functionality out of a SUT (system under test). You can isolate functionality even out of a single function. This draws the line between unit tests (and their relatives, component tests) and integration tests, and of course system tests.
"Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name with the text of the assertion should state exactly what is wrong and where." Ref.: History of FIRST principle
A unit test can be isolated from the SUT it tests in a different developer artifact (class, package, development project) and/or delivery artifact (DLL, package, assembly).
Unit tests that test the same SUT, and especially the asserts they contain, should be isolated from each other in different test functions, but this is only a recommendation. Ideally each unit test contains only one assert.
Unit tests that test different SUTs should of course be isolated even further from each other, and from other kinds of tests, in different classes or the other artifacts mentioned.
Independence can mean:
Unit tests should not rely on each other (explicit independence), with the exception of special "setup" and "teardown" functions, though even this is a subject of discussion.
In particular, unit tests should be order-independent (implicit independence). The result should not depend on which unit tests were executed before. While this sounds trivial, it isn't. There are tests which cannot avoid doing initializations and/or starting runtimes. Just one shared (e.g. class-level) variable and the SUT may react differently if it was started before. You make an outside call to the operating system? Some DLL is loaded for the first time? You already have a potential dependency, at least at the OS level: sometimes only minor, sometimes essential enough to hide an error. It may be necessary to add cleanup code to reach optimal independence of tests (see the sketch after this list).
Unit tests should be as independent as possible from the runtime environment and should not depend on a specific test environment or setting. This also belongs partly to "Repeatable".
No need to fill out twenty user dialogs beforehand. No need to start the server. No need to make the database available. No need for another component. No need for a network. To accomplish that, test doubles are often used (stubs, mocks, fakes, dummies, spies, ...).
(Gerard Meszaros' classic work on xUnit patterns, which coined the name 'test double' and defined its different kinds)
(The test double zoo, quickly explained)
(Follow Martin Fowler, 2007, thinking about stubs, mocks, etc. A classic.)
While a unit test is never totally independent of its SUT, ideally it should be as independent as possible from the current implementation and rely only on the public interface of the function or class tested (the SUT).
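As promised above, here is a contrived sketch (NUnit, invented names) of two tests coupled through shared static state: each passes when run alone but fails when run after the other, and a setup or teardown step that resets Counter.Value would restore their order-independence.

// Contrived sketch of implicit order dependence through shared static state.
using NUnit.Framework;

public static class Counter
{
    public static int Value;                   // shared mutable state
    public static void Increment() => Value++;
}

[TestFixture]
public class CounterTests
{
    [Test]
    public void Increment_Once_GivesOne()
    {
        Counter.Increment();
        // Passes when run alone, but fails if another test already
        // incremented Counter.Value: the tests are not order-independent.
        Assert.AreEqual(1, Counter.Value);
    }

    [Test]
    public void Increment_Twice_GivesTwo()
    {
        Counter.Increment();
        Counter.Increment();
        Assert.AreEqual(2, Counter.Value);     // same hidden dependency
    }
}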
Conclusion: in these interpretations, the word 'isolation' stresses more the physical location, which often implies logical independence to some extent (e.g. class-level isolation).
No claim of completeness is made regarding further possible accentuations and meanings.
See also comments here.
But there are more properties of (good) unit tests: Roy Osherove identified some more attributes in his book "The Art of Unit Testing" which I don't find exactly in the F.I.I.R.S.T. principle (link to his book site), and which are cited here in my own words (and acronym):
Full control of the SUT: the unit test should have full control of the SUT. I see this as effectively identical to being independent of the test and runtime environment (e.g. by using mocks, etc.). But because "independence" is so ambiguous, it makes sense to give it a separate letter.
Automated (related to repeatable and self-checking, but not identical). This one requires a test (runner) infrastructure.
Small, simple, or in his words "easy to implement" (again related, but not identical). Often related to "Fast".
Relevant: the test should still be relevant tomorrow. This is one of the most difficult requirements to achieve, and depending on the "school" there may also be a need for temporary unit tests during TDD. When a test only tests the contracts, this is achieved, but that may not be enough for high code-coverage requirements.
Consistent result: effectively a result of "enough" independence. Consistency is what some people include in "Repeatable". There is an essential overlap, but they are not identical.
Self-explaining: in terms of naming, the structure of the whole test, and specifically the syntax of the assert as the key line, it should be clear what is tested and what could be wrong if a test fails. Related to "Tests isolate failures", see above.
Given all these specific points, it should be clearer than before that writing (good) unit tests is anything but simple.
Independent and Repeatable
The test is not independent of the date, and so it is not truly repeatable: technically you get the same result on every run, but only because the test duplicates the very condition it is supposed to check.
The proper way to write a test for HomeController with respect to the FIRST principles is to control the time before the evaluation stage.
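For example (a C# analogue rather than the original Rails code, with an invented IClock abstraction), injecting a clock lets the test pin the date, so the outcome is the same on any day of the week:

// Sketch of controlling time via an injected clock; IClock, HomeController
// and the template names here are illustrative, not the original Rails code.
using System;
using NUnit.Framework;

public interface IClock
{
    DateTime Now { get; }
}

public class HomeController
{
    private readonly IClock _clock;
    public HomeController(IClock clock) => _clock = clock;

    public string Index() =>
        _clock.Now.DayOfWeek == DayOfWeek.Tuesday ? "special_index" : "index";
}

[TestFixture]
public class HomeControllerTests
{
    private class FixedClock : IClock
    {
        public DateTime Now { get; set; }
    }

    [Test]
    public void Index_RendersSpecialTemplate_OnTuesday()
    {
        var tuesday = new DateTime(2024, 1, 2);          // a known Tuesday
        var controller = new HomeController(new FixedClock { Now = tuesday });

        Assert.AreEqual("special_index", controller.Index());
    }
}

In Rails specs the same effect is usually achieved by stubbing or freezing the current time before the request is made, rather than branching on Time.now inside the test.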

Planning unit tests with TDD

When you approach a class you want to write, how do you plan its unit tests?
Are there formal templates which you follow or do you use pen and paper/notepad?
I am looking for some way to let other programmers/QAs know what tests should be implemented (and if something was overlooked it can be easier to spot it).
With TDD, tests drive the feature you are writing. If you're needing to write formal templates for it, then chances are you're not entering into the spirit of things!
TDD should be used to generate the test cases as you write the code. Simply put, before you write the next line of code, encode in a test what the code should do.
Check out Bob Martin's bowling game example which should give you more of a feel for TDD.
I do not think that having a template goes well with using TDD. I assume that you have read Kent Beck's book Test-Driven Development: By Example. If not, please do so.
But the general idea is simple. When we start a class, we have a general idea of the responsibility of the class. These are the steps we use:
Have a general idea of the class's responsibility and use that information to name the class.
Create a test case for this class.
If you start with concrete information about what the units inside the class are, just write those stubs inside the class and write test cases for those stubs. Initially all of them will fail, and the signatures of most of those methods will change. That's the whole idea.
In most cases, the developer will not have that degree of information. In that case, it's OK to start writing the code in the first test. Once the test passes, migrate the logic into the class.
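As a small illustration of the stubs-first step described above (all names invented): the stub exists only so the test compiles, and the test stays red until the real logic is migrated in.

// Sketch of the "stub first" step; names are invented for illustration.
using System;
using NUnit.Framework;

public class OrderValidator
{
    public bool IsValid(decimal total)
    {
        throw new NotImplementedException();   // stub: the signature may still change
    }
}

[TestFixture]
public class OrderValidatorTests
{
    [Test]
    public void IsValid_RejectsNegativeTotal()
    {
        // Red: this fails (throws) until the logic is actually implemented.
        Assert.IsFalse(new OrderValidator().IsValid(-1m));
    }
}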
So what I am driving at is that the whole point of TDD is to make the development process more organic. The class grows with the knowledge of what it should do. Having a formal template, or writing things down, will probably not help.
The only thing I can think of that you could do is to sit with your developers before each iteration and come up with a fairly detailed idea of each of the component classes and their responsibilities (we only use this discussion to finalize public APIs).
If you want to know the quality of test cases written by your developer, then you can conduct an ad hoc code review to see if the classes are correctly broken down to units and all the units are tested.
TDD is not the methodology you are looking for ;-)
way to let other programmers/QAs know what tests should be implemented
This statement implies you are after tests, but TDD itself is driven by requirements and produces features - the fact that it also produces a suite of tests is an incidental (but hugely powerful) appendage which happens to result in a regression suite.
Although TDD harnesses 'tests' to drive development of code, you do not need to specify tests up front. Even if you did (and sometimes it helps your thinking to do so), your programmer may not need to write all the tests in order to produce the desired behaviour in the code. Indeed, in TDD you are encouraged to stop work when all the tests pass; you need not keep on writing tests only to find they already pass, which is something akin to make-work.
Also, the other side effect of TDD is having (and continuously running) a regression suite. If a bug is found at a later date, just having a test suite makes it easier to write another test which demonstrates the bug with a failing test. Once the bug is fixed, that test should pass, along with all the other tests in the suite.
You cannot commit to TDD and let others do the unit testing. This requirement of yours strongly suggests that you haven't understood the TDD paradigm.
In my (admittedly fairly new) experience
You write tests that, if passed, will confirm your initial understanding of the target functionality. Not a single line of production code is written at this point.
You then implement the production code so the unit tests pass.
If your understanding evolves, you then change your unit tests and/or add new ones.
You then implement the changes in the production code so the tests will pass
By then, it is not forbidden to write additional unit tests, if you discover that parts of your production code are not covered.
Remove tests that no longer make sense.
You arrive at beautiful crisp and clean code :o)
TDD is NOT a QA method; it is a way of DEVELOPING. The whole idea is that the unit tests guide the development process. So you really can't let others do the unit tests for you.
I start by designing the class first, usually with a simple UML class diagram. I try to make the diagram just detailed enough that I can write tests against it (e.g. params and return types specified for each method, and I know how each method's behavior affects object state).
Then I write the unit tests. Generally, when it comes to automated testing, you should have one test method for every method defined in your class. As far as conventions go, if I have a method in my class called myMethod, then my unit-test method will be called testMyMethod.
I write unit tests using what I know about the method's intended behavior, and then write the method and check to make sure that it passes the unit test.
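A tiny sketch of that convention (invented class, NUnit): the test method name mirrors the method under test.

// Sketch of the "one test method per class method" naming convention.
using NUnit.Framework;

public class TemperatureConverter
{
    public double CelsiusToFahrenheit(double celsius) => celsius * 9.0 / 5.0 + 32.0;
}

[TestFixture]
public class TemperatureConverterTests
{
    [Test]
    public void TestCelsiusToFahrenheit()     // mirrors CelsiusToFahrenheit
    {
        var converter = new TemperatureConverter();
        Assert.AreEqual(212.0, converter.CelsiusToFahrenheit(100.0), 0.0001);
    }
}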

Test Driven Development - What exactly is the test?

I've been learning what TDD is, and one question that comes to mind is what exactly the "test" is. For example, do you call the web service and then build the code to make it work? Or is it more unit-testing oriented?
In general the test may be...
a unit test, which tests an individual subcomponent of your software without any external dependencies on other classes
an integration test, which tests the connection between two separate systems, i.e. their ability to integrate
an acceptance test, for validating the functionality of the system
...and some others I've most likely temporarily forgotten for now.
In TDD, however, you're mostly focusing on the unit tests when creating your software.
It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass the tests.
Then write more tests to cover more of the requirements, and implement a bit more to make it pass.
It's an iterative process, with cycles of test writing, and then code writing.
Here are a couple of good articles by Uncle Bob:
Three rules of TDD
TDD with Acceptance and Unit tests
I suggest you not put too much emphasis on the word "test", because TDD is actually a software development methodology, not a testing methodology.
I would say it is about unit testing and code coverage. It is about shipping bug-free code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.
The way I use it is unit-testing oriented. Suppose I want a method that squares ints; I write this method:
int square(int x) { return 0; } // deliberately wrong stub so the tests fail first
and then write some tests like :
[Test]
public void TestSquare()
{
    Assert.AreEqual(0, square(0));
    Assert.AreEqual(1, square(1));
    Assert.AreEqual(100, square(10));
    Assert.AreEqual(1, square(-1));
    Assert.AreEqual(100, square(-10));
    ....
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and all the borderline values like max int, zero and null (remember you can test for errors too), and see to it that the test fails (which isn't hard :-)). Then I keep working on the function until it works.
So: first a unit test that fails and covers what you want it to cover, then the method.
Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be a lot more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!), and define the behavior of your class either in terms of the return values of its methods, or of the calls made to interfaces that have been passed into the object.
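A minimal sketch of that style, with invented names: the class reports through an interface handed to it instead of touching a file or the console, so the test observes the behaviour through a fake.

// Sketch: the side effect (output) is routed through an injected interface,
// which the test replaces with an in-memory fake. All names are invented.
using System.Collections.Generic;
using NUnit.Framework;

public interface IReportSink
{
    void Publish(string line);
}

public class ReportGenerator
{
    private readonly IReportSink _sink;
    public ReportGenerator(IReportSink sink) => _sink = sink;

    public void Generate(int total) => _sink.Publish($"Total: {total}");
}

[TestFixture]
public class ReportGeneratorTests
{
    private class FakeSink : IReportSink
    {
        public List<string> Lines { get; } = new List<string>();
        public void Publish(string line) => Lines.Add(line);
    }

    [Test]
    public void Generate_PublishesTotalLine()
    {
        var sink = new FakeSink();
        new ReportGenerator(sink).Generate(42);

        // Behaviour is observed through calls made to the injected interface,
        // not through any file or console output.
        CollectionAssert.AreEqual(new[] { "Total: 42" }, sink.Lines);
    }
}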
I just want to give my view on the topic, which may help you understand TDD in a slightly different way.
TDD is a design method that relies on testing first. Because you asked what the test is like, I'll put it this way:
If you want to build an application, you know the purpose of what you want to build, and you know that when you are done, or along the way, you need to test it, e.g. check the values of the variables you create by code inspection, or quickly drop in a button that you can click to execute a part of the code and pop up a dialog showing the result of the operation, etc.
TDD, on the other hand, changes your mindset, and I'll point out one of the ways it does so. Commonly you just rely on the development environment, like Visual Studio, to detect errors as you code and compile, and somewhere in your head you keep the requirement while coding and testing via buttons, pop-ups or code inspection. I call this style SDDD (syntax-debugging driven development).
But when you are doing TDD, it is "semantic-debugging driven development", because you first write down your thoughts and the goals of your application by using tests (which act as a more dynamic and repeatable version of a whiteboard). These tests exercise the logic (or "semantics") of your application and fail whenever you have a semantic error, even if your application has no syntax errors (i.e. it compiles).
By the way, even though I said "you know the purpose of what you want to build", in practice you may not know, or may not have, all the information required to build the application. Since TDD more or less forces you to write tests first, you are compelled to ask more questions about how the application should work at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

In test driven development, do you write every possible test first, then the code?

In doing test driven development I have been in the habit of writing the first unit test for a new piece of functionality first, then writing the code for that functionality. If I have additional tests to write to cover all scenarios, I usually write them after the code is written. Is this considered bad form? Should I try and write every conceivable test for a piece of functionality first, before ever writing that code?
In order to do TDD properly, you always write the test first, and then the functionality second.
To add to that, I would take one scenario at a time, don't write 20 tests and then write the code for those 20 tests. Write one test, red/green flag it, then move on to your next test. This makes sure you're also doing one of the core tenets of TDD, which is to do the simplest implementation possible that meets all of your requirements/scenarios.
Actually, no; I often discover functionality "on the go". Let me explain the "no" a bit further:
I usually start out by writing a test case for a high-level feature, defining its interface. After that, I usually set this test to ignored and continue writing tests for each piece of the interface's functionality. My cycle goes like this:
Integration Test for Story A (high level API)
Write Unit Test for method xyz called in Integration Test
Implement method (red/green/refactor)
Repeat 2 and 3 until the integration test passes.
While doing so, I often realize I have forgotten some small piece of functionality in my main test. I then usually take time to look back at my customer's requirements. If it's a fit, I go back and add a test for it, set to ignored, as I first want to finish what I started.
Sometimes I see the chance to do a refactoring. I usually finish an implementation until I reach a commit point and do the refactoring then; however, sometimes I stash my changes, go back and do the refactoring (including new tests if necessary) first. This workflow is powered by Mercurial MQ.
For most people, TDD and incremental/agile development go together. This looks something like:
Write a test for some feature
Write just enough code to make the test pass, refactoring as necessary
Repeat.
If you happen to have a detailed specification ahead of time, you could write all of the tests first, but you'd have to live with having some tests not passing for a while.
The sooner you write the tests, the better. I usually find writing tests a harder task than actually implementing the functionality, because you have to be aware of all the possible outcomes. So I tend to write more tests when I'm "in the zone". And when, during coding, I realize I might have missed a test case, I just note it down on the to-do list.
So in my opinion it's up to you, but I would implement the tests in multiple batches.
The way I see it, test driven development isn't necessarily tests first development. Your tests drive your development and you are really writing your tests as you develop your application. You start by writing a simple test that fails because you haven't written the functionality yet. Then you write your code to implement that so that the tests pass.
Then you go back to your test, make modifications that will force you to add more functionality or refactor your code to follow better practices or reduce duplicate code, go fix your code to make the test pass...repeat, repeat, repeat.
A couple of videos that demonstrate this are below, although you can probably find a lot more by googling "TDD Video":
http://agilesoftwaredevelopment.com/videos/test-driven-development-basic-tutorial
(oops, only one video, new users can't insert more than one link)
I try to write a test at some level before each bit of functionality. Sometimes, I have to write a little more code to get through the compiler, but I try to minimise that. Writing the test first means that I've thought about what the code is supposed to achieve before writing it.
One technique I find useful is to keep an index card or notepad handy, and make a note of all the cases that I think of along the way. That allows me to focus on the current task without losing track of all the other things I'm supposed to think about. Afterwards, I can work through the list and either complete the extra cases or drop them as not necessary.
You could do that, but you wouldn't be doing TDD. The problem (well, one of them, anyway) with writing all of your tests up front is that in any case where the requirements are non-trivial, your tests will be building in a lot of assumptions about the structure of the code you're test-driving. Big steps lead to missteps.
One of the keys of successful TDD involves taking small steps. Small steps mean fewer changes to back out when something goes wrong. Small steps mean you can more often get your head around the effects of the changes you're making. And because small steps are easier to take with confidence, they have the paradoxical effect of increasing your velocity.
The TDD cycle starts with requirements. Start by choosing a requirement you know how to define through tests immediately, in a few short steps. If you look at a requirement and you're not sure how to test it, or you think, "Yeah, but to do that, I'd need to [insert ill-defined steps] first", then you should either skip to another requirement that you know how to do, or you should break this requirement into smaller requirements that you know how to do.
Once you have that, you work in a short red-green-refactor cycle: Write a test that quantifies some part of the requirement ("red", because it fails, because it has no implementation to test yet), write any code that will pass the test ("green"), then rework the code to remove duplication, magic numbers, and other code smells ("refactor"). During the refactoring phase, you should continue working in small steps, frequently re-running the test to make sure you haven't broken anything. Continue this cycle until you can look your boss/client in the eye and call the requirement met.
Now that you have one simple piece of your system defined, you've opened up the list of requirements available to implement - requirements that are adjacent to or dependent on the one you just implemented can now be tested and implemented in smaller steps building on what you've already done.
So the upshot of all that is: Don't try to do all your tests at once. One (small) thing at a time.
The point of TDD is that you have to observe the test failing while the feature is not yet implemented. So you have to write the test before the code.
When you get into the TDD rhythm, you write one test at a time and make it work. Very short red-green-refactor cycles really let you feel the rhythm. That being said, there is nothing wrong with other approaches (and they may even make more sense for some types of problems), but typically the only thing you need to do about other tests you are thinking of is to write them down (or have your pair write them down, if you are pair programming) so you don't forget them. You have to do that anyway, because you could forget about a test in the middle of writing a different test.
Do just enough tests to test one unit of code at a time, then write the actual code until it passes the tests; rinse, wash, repeat until done.
If you find yourself needing to write many tests for one unit of code (a method, a function, etc.), it might be a sign that you are trying to do too much in that unit, which in turn makes the unit difficult to test and to refactor at a later time.
