When you approach a class you want to write, how do you plan its unit tests?
Are there formal templates which you follow or do you use pen and paper/notepad?
I am looking for some way to let other programmers/QAs know what tests should be implemented (and so that, if something was overlooked, it is easier to spot).
With TDD, tests drive the feature you are writing. If you need to write formal templates for it, then chances are you're not entering into the spirit of things!
TDD should be used to generate the test cases as you write the code. Simply put, before you write the next line of code, encode in a test what the code should do.
Check out Bob Martin's bowling game example which should give you more of a feel for TDD.
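To make that concrete, here is what the very first step can look like, in the spirit of the bowling game kata. This is only a sketch, not Bob Martin's actual code: the Game class and its Roll/Score methods are illustrative names, and Score deliberately returns a wrong value so the first run is red.

using NUnit.Framework;

public class Game
{
    // Bare-bones stub added only so the test compiles; the real scoring comes later.
    public void Roll(int pins) { }
    public int Score() { return -1; }   // obviously wrong, so the first run fails
}

[TestFixture]
public class GameTests
{
    [Test]
    public void GutterGameScoresZero()
    {
        var game = new Game();
        for (int i = 0; i < 20; i++)
        {
            game.Roll(0);               // twenty gutter balls
        }
        Assert.AreEqual(0, game.Score());
    }
}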
I do not think that having a template goes well with using TDD. I assume that you have read Kent Beck's book Test Driven Development: By Example. If not, please do so.
But the general idea is simple. When we start a class, we will have a general idea of the responsibility of the class. These are the steps we use:
Have a general idea of the class's responsibility and use that information to name the class.
Create a test case for this class.
If you start with concrete information on what the units inside the class are, write those method stubs inside the class and write test cases for them. Initially all of them will fail, and the signatures of most of those methods will change. That's the whole idea.
In most cases, though, the developer may not have that degree of information. In that case, it's OK to start writing the code inside the first test. Once the test passes, migrate the logic into the class.
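As a rough sketch of the "stubs first" case (the PriceCalculator class and its Total method are hypothetical names, not from the question): the class starts as a stub, the test encodes the expected behaviour, and the first run is red by design.

using System;
using NUnit.Framework;

public class PriceCalculator
{
    public decimal Total(decimal net, decimal taxRate)
    {
        // Stub only: the signature may well change once the tests push back.
        throw new NotImplementedException();
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void AddsTaxToTheNetPrice()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(110m, calculator.Total(100m, 0.10m));   // red until the logic exists
    }
}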
So what I am driving at is that the whole point of TDD is to make the development process more organic. The class grows along with the knowledge of what it should do. Having a formal template, or writing things down up front, will probably not help.
The only thing I could think that you could do is to sit with your developers before each iteration and come up with a pretty detailed idea of each component class and its responsibilities (we only use this discussion to finalize public APIs).
If you want to know the quality of the test cases written by your developers, you can conduct an ad hoc code review to see if the classes are correctly broken down into units and all the units are tested.
TDD is not the methodology you are looking for ;-)
way to let other programmers/QAs know what tests should be implemented
This statement implies you are after tests, but TDD itself is driven by requirements and produces features - the fact that it also produces a suite of tests is an incidental (but hugely powerful) by-product which leaves you with a regression suite.
Although TDD harnesses 'tests' to drive development of code, you do not need to specify tests up-front. Even if you did (and sometimes it is helpful as a thinking exercise to do so), your programmer may not need to write all of those tests in order to produce the desired behaviour in the code. Indeed, in TDD, you are encouraged to stop work when all the tests pass - you need not keep writing tests only to find they already pass; that is akin to makework.
Also, the other side-effect of TDD is in having (and continuously running) a regression suite. If at a later date a bug is found, it makes it easier, just by having a test suite, to write another test which demonstrates the bug with a failing test. Once the bug is fixed, the test should pass - along with all the other tests in the suite.
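For example, a reported bug can be pinned down with one extra test that fails until the fix is in place and then stays in the suite forever. The OrderParser class below is purely hypothetical:

using NUnit.Framework;

public class OrderParser
{
    public object Parse(string line)
    {
        // The fix: an empty line now yields no order instead of throwing.
        if (string.IsNullOrEmpty(line)) return null;
        return new object();   // stand-in for the real parsing logic
    }
}

[TestFixture]
public class OrderParserRegressionTests
{
    [Test]
    public void EmptyLineYieldsNoOrderInsteadOfThrowing()
    {
        // This test demonstrated the bug (it threw) before the fix and passes afterwards.
        Assert.IsNull(new OrderParser().Parse(""));
    }
}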
You cannot commit to TDD and let others do the unit testing. This requirement of yours strongly suggests that you haven't understood the TDD paradigm.
In my (admittedly fairly new) experience
You write tests that, if passed, will confirm your initial understanding of the target functionality. Not a single line of production code is written at this point.
You then implement the production code so the unit tests pass.
If your understanding evolves, you then change your unit tests and/or add new ones.
You then implement the changes in the production code so the tests pass again.
At that point it is not forbidden to write additional unit tests, if you discover that parts of your production code are not covered.
Remove tests that no longer make sense.
You arrive at beautiful crisp and clean code :o)
TDD is NOT a QA method; it is a way of DEVELOPING. The whole idea is that the unit tests guide the development process. So you really can't let others write the unit tests for you.
I start by designing the class first, usually with a simple UML class diagram. I try to make the diagram just detailed enough that I can write tests against it (e.g. parameters and return types specified for each method, and I know how each method's behavior affects object state).
Then, I write unit tests. Generally when it comes to automated testing you should have 1 test method for every method defined in your class. As far as convention goes, if I have a method in my class called myMethod, then my unit test method will be called testMyMethod.
I write unit tests using what I know about the method's intended behavior, and then write the method and check to make sure that it passes the unit test.
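A small NUnit sketch of that convention, with a hypothetical Account class: one test method per class method, named in the "testMyMethod" style described above.

using NUnit.Framework;

public class Account
{
    private decimal balance;
    public void Deposit(decimal amount) { balance += amount; }
    public decimal GetBalance() { return balance; }
}

[TestFixture]
public class AccountTests
{
    [Test]
    public void TestDeposit()       // mirrors Account.Deposit
    {
        var account = new Account();
        account.Deposit(50m);
        Assert.AreEqual(50m, account.GetBalance());
    }

    [Test]
    public void TestGetBalance()    // mirrors Account.GetBalance
    {
        Assert.AreEqual(0m, new Account().GetBalance());
    }
}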
Related
I am currently doing a project for college and it requires me to write a simple program with a few methods. Every time I create a new method, I copy the previous test and then add the new method to it. Currently I have 4 tests, with all 4 methods being tested in the fourth. Should I remove the first and second tests, which only test the first method, and the first and second methods, respectively? Sorry if this is confusing. Thanks
That's not how test-driven development (TDD) is usually done. TDD usually serves two purposes:
By writing the test first, you get feedback on the usability of your API design. A unit test uses the API of the System Under Test (SUT), so if the test is difficult to write, the SUT is difficult to use.
The tests subsequently become artefacts that serve as regression tests.
Test code is also code, and comes with the same maintenance cost as regular code. It should be kept to the same standard as all other code, since you'll have to maintain it for the lifetime of the code base.
For that reason, rules about duplication, cohesion, coupling, etc. also apply for tests. In other words, don't copy and paste.
I've never heard about anyone following the process described in the OP.
Each test should test only one thing.
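To make the last two points concrete, here is a minimal NUnit sketch (ShoppingCart is a hypothetical class): the shared setup lives in one place instead of being copy-pasted into every test, and each test checks exactly one thing.

using NUnit.Framework;

public class ShoppingCart
{
    public int ItemCount { get; private set; }
    public decimal Total { get; private set; }
    public void Add(string name, decimal price) { ItemCount++; Total += price; }
}

[TestFixture]
public class ShoppingCartTests
{
    private ShoppingCart cart;

    [SetUp]
    public void CreateCartWithOneBook()    // shared setup, written once
    {
        cart = new ShoppingCart();
        cart.Add("book", 10m);
    }

    [Test]
    public void AddingAnItemIncreasesTheCount()
    {
        Assert.AreEqual(1, cart.ItemCount);
    }

    [Test]
    public void AddingAnItemIncreasesTheTotal()
    {
        Assert.AreEqual(10m, cart.Total);
    }
}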
I am planning to start using TDD. I have read about how the RED-GREEN-REFACTOR cycle works. I am fine with writing a test before the code and taking it from red to green. Though I have a basic question on refactoring. For example: while refactoring, as I am improving my design, suppose I see a good case for introducing a factory pattern and I add it to the code. My tests may then go RED, and I fix them to use this new improvement.
But where am I going to write tests for this new factory class which I added during refactoring? Or should it now be:
I write tests for the factory class first -> RED
Add the factory class - make the test GREEN
Refactor this factory class
Fix other tests in RED
Am I thinking about this wrong?
If you strictly follow the classical Red-Green-Refactor loop you should never have production code that isn't covered by tests. Your unit tests should only verify the behavior from your system under test through its public API and stay away from implementation details.
The goal of the "get to green" phase is to get to green as fast as possible. Any dirty hacks you make are excusable as long as you get to green in this step.
During the refactor phase you can (and should) clean up your code. If this means introducing a new class to tease out independent behavior that does not really belong in the initial class, by all means go for it. These changes are all "safe" since you covered the original code with unit tests. As a refactoring is not supposed to change the behavior of your code, the bar should stay green.
Should you write new unit tests for this newly extracted class? Not really, as it's currently part of the system under test and is covered by your original unit tests.
Note: there are other styles of unit testing that favor testing each class in heavy isolation, so depending on your TDD style your mileage may vary.
To come back to your example: you are introducing a factory class. Where are you using this factory? Is that code covered by tests (again, if you strictly follow the red-green-refactor loop it should)? If that's the case, you shouldn't have to write new unit tests for the factory, as it is being tested indirectly and can be seen as an "implementation detail".
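A sketch of that situation (ReportService, Report and ReportFactory are hypothetical names): the factory is extracted during the refactor step, but the original test, which only knows the public API of ReportService, still covers it indirectly.

using NUnit.Framework;

public class Report
{
    public string Title { get; private set; }
    public Report(string title) { Title = title; }
}

public class ReportFactory                       // extracted during the refactor step
{
    public Report Create(string title) { return new Report(title); }
}

public class ReportService
{
    private readonly ReportFactory factory = new ReportFactory();

    public Report Generate(string title)
    {
        return factory.Create(title);            // implementation detail behind the public API
    }
}

[TestFixture]
public class ReportServiceTests
{
    [Test]
    public void GeneratedReportCarriesTheRequestedTitle()
    {
        // Written before the factory existed; it keeps the bar green through the refactor.
        Assert.AreEqual("Q1", new ReportService().Generate("Q1").Title);
    }
}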
If a refactoring or a design improvement requires to change the external behavior of the code under test or to add new behavior, then it's not appropriate for the refactor phase of the TDD cycle.
A new cycle can be started by writing a test for the factory. When the factory is finished, the factory can be introduced in the code under test in a different TDD cycle.
Recently I wanted to learn TDD by developing a real thing, so I decided to go with a simple data packer/unpacker. After designing on paper everything looked good, but when I attempted to code it I realised I don't know how to test it, and so, in TDD, how to do anything at all.
I have two classes: ArchiveReader and ArchiveWriter. The problem is, when I save something to a file with ArchiveWriter, I can't test it properly without ArchiveReader; without it I am forced to compare the output byte by byte, and I think that's not a good idea - minor, irrelevant changes can occur later. The ArchiveReader tests also need something to read, so I have to use ArchiveWriter to make test packages.
Is TDD failing in this area? Is there any method to test cases like this?
If you already have the code for the two classes then it's not really TDD, because tests did not drive the design.
You can still test your code, depending on how the dependencies of those classes are handled. For example, if the ArchiveWriter class writes to a stream, you can have it output to a memory stream instead of a file stream; the writer should not care what kind of stream it is, and this allows you to inspect the results of the write method.
The same goes for the ArchiveReader class: if it reads from a stream, then that can be a memory stream too.
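A sketch of what that can look like. The Write signature and the length-prefixed format below are assumptions about the poster's ArchiveWriter, purely for illustration; the point is that the writer only sees a Stream, so the test can hand it a MemoryStream.

using System.IO;
using NUnit.Framework;

public class ArchiveWriter
{
    // Assumed minimal implementation: length-prefixed payload written to any Stream.
    public void Write(Stream output, byte[] content)
    {
        var writer = new BinaryWriter(output);
        writer.Write(content.Length);
        writer.Write(content);
        writer.Flush();
    }
}

[TestFixture]
public class ArchiveWriterTests
{
    [Test]
    public void WritesBytesToAnyStream()
    {
        using (var stream = new MemoryStream())          // no file on disk involved
        {
            new ArchiveWriter().Write(stream, new byte[] { 1, 2, 3 });
            Assert.Greater(stream.Length, 0);            // some bytes were produced
        }
    }
}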
As for your point that ArchiveWriter and ArchiveReader are mutually dependent on each other for verification, I don't think this is necessarily a problem. While it would be ideal to be able to test each on its own, in isolation, there is no rule that a unit test can only test a single class.
As for these classes: in production they are likely always to be used in tandem with each other; writing an archive is pointless if you cannot read it again later.
I develop this kind of code, which has two classes or methods that must read and write the same binary format, using TDD all the time.
You mentioned checking the output byte by byte. I prefer the analogous tests of the input code, as although I must hand craft the binary inputs, the behaviour of the input code is generally much easier to check. It will delegate to methods or create objects, which you can check it does correctly in the usual way.
I also have symmetry tests. These use the output code to create the binary representation, then have the input code create a new set of objects from that binary representation. The tests check that the original and new objects are equivalent. These tests are easy to write and produce useful diagnostics when they fail.
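A minimal symmetry test might look like the following. The ArchiveWriter/ArchiveReader bodies and signatures here are assumptions made up for the sketch; what matters is the round trip through a MemoryStream and the equivalence check at the end.

using System.IO;
using NUnit.Framework;

public class ArchiveWriter
{
    public void Write(Stream output, byte[] content)     // assumed signature
    {
        var writer = new BinaryWriter(output);
        writer.Write(content.Length);
        writer.Write(content);
        writer.Flush();
    }
}

public class ArchiveReader
{
    public byte[] Read(Stream input)                     // assumed signature
    {
        var reader = new BinaryReader(input);
        return reader.ReadBytes(reader.ReadInt32());
    }
}

[TestFixture]
public class ArchiveSymmetryTests
{
    [Test]
    public void WhatTheWriterWritesTheReaderReadsBack()
    {
        var original = new byte[] { 10, 20, 30 };

        using (var stream = new MemoryStream())
        {
            new ArchiveWriter().Write(stream, original);
            stream.Position = 0;                         // rewind and read it back
            byte[] roundTripped = new ArchiveReader().Read(stream);

            CollectionAssert.AreEqual(original, roundTripped);
        }
    }
}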
Now, some people would say, oh, but you are not doing unit testing, because your symmetry tests test both the output code and the input code; you are doing integration tests and therefore doing it wrong. This should not worry you. The idea that there is a neat division between unit and integration tests, and that everything must be tested by unit tests before having integration tests is wrong. In practice, almost no code is tested in perfect isolation; most tested code uses other classes, even if they are such basics as String and HashMap. It is more useful to view tests as being on a continuum between perfect unit tests and perfect integration tests, to prefer all code to have tests near the unit test end, but not to get too worried if some code does not do so.
TDD is mostly about unit testing. Unit testing is about testing a fully isolated, independent unit of work. Simply put, it is about testing the logic of one of your public methods with all dependencies and the environment faked.
In your example this means that if you want to unit test ArchiveReader or ArchiveWriter, you should first isolate it from real I/O and test just the logic (the C#/Java/... "your code").
If you are testing your application as a whole, that is mostly integration testing, and then your assertions do need to be made against the files your ArchiveWriter produces.
Before moving to TDD, I recommend that you read at least one good book about unit testing itself. Roy Osherove's book works perfectly for me.
I am not understanding how the TDD FIRST principle isn't being adhered to in the following code.
These are my notes about the FIRST principle:
Fast: run (subset of) tests quickly (since you'll be running them all the time)
Independent: no tests depend on others, so can run any subset in any order
Repeatable: run N times, get same result (to help isolate bugs and enable automation)
Self-checking: test can automatically detect if passed (no human checking of output)
Timely: written about the same time as code under test (with TDD, written first!)
The quiz question:
Sally wants her website to have a special layout on the first Tuesday of every month. She has the following controller and test code:
# HomeController
def index
  if Time.now.tuesday?
    render 'special_index'
  else
    render 'index'
  end
end

# HomeControllerSpec
it "should render special template on Tuesdays" do
  get 'index'
  if Time.now.tuesday?
    response.should render_template('special_index')
  else
    response.should render_template('index')
  end
end
What FIRST principle is not being followed?
Fast
Independent
Repeatable
Self-checking
Timely
I'm not sure which FIRST principle is not being adhered to:
Fast: The code seems to be fast because there is nothing complex about its tests.
Independent: The test doesn't depend on other tests.
Repeatable: The test will get the same result every time. 'special_index' if it's Tuesday and 'index' if it's not Tuesday.
Self-checking: The test can automatically detect if it's passed.
Timely: Both the code and the test code are presented here at the same time.
I chose Timely on the quiz because the test code was presented after the controller code. But I got the question wrong, and in retrospect, this wasn't a good choice. I'm not sure which FIRST principle isn't being followed here.
It's not Repeatable, as not every day is Tuesday :) If you run this test on Monday you will get one result; if you run it on Tuesday, a different one.
F.I.R.S.T., F.I.I.R.S.T. and FASRCS
Yes, part of the reason for the confusion is that the F.I.R.S.T. principle is not complete or precise enough concerning the "I". In courses I attended, the principle was called F.I.I.R.S.T.
The second "I" stands for "Isolated". The test above is independent from other tests, but is not isolated in a separate class or project.
[Updated]:
Isolation can mean:
A unit test isolates functionality out of a SUT (system under test). You can isolate functionality even out of one single function. This draws the line between unit tests (or their relatives, component tests) and integration tests, and of course system tests.
"Tests isolate failures. A developer should never have to reverse-engineer tests or the code being tested to know what went wrong. Each test class name and test method name with the text of the assertion should state exactly what is wrong and where." Ref.: History of FIRST principle
A unit test could be isolated from the SUT which it tests in a different developer artifact (class, package, development project) and/or delivery artifact (Dll, package, assembly).
Unit tests testing the same SUT, and especially the asserts they contain, should be isolated from each other in different test functions, but this is only a recommendation. Ideally each unit test contains only one assert.
Unit tests testing different SUTs should, of course, be further isolated from each other, and from other kinds of tests, in different classes or the other artifacts mentioned above.
Independence can mean:
Unit tests should not rely on each other (explicit independency), with the exception of special "setup" and "teardown" functions, but even this is subject to discussion.
In particular, unit tests should be order-independent (implicit independency). The result should not depend on which unit tests were executed before. While this sounds trivial, it isn't. There are tests which cannot avoid doing initializations and/or starting runtimes. Just one shared (e.g. class-level) variable and the SUT could react differently if it was started before. You make an outside call to the operating system? Some DLL is loaded for the first time? You already have a potential dependency, at least at OS level - sometimes only minor, sometimes significant enough to keep an error from being discovered. It may be necessary to add cleanup code to reach optimal independence of tests.
Unit tests should be as independent as possible from the runtime environment and not depend on a specific test environment or setting. This also partly belongs to "Repeatable".
No need to fill out twenty user dialogs first. No need to start the server. No need to make the database available. No need for another component. No need for a network. To accomplish that, test doubles are often used (stubs, mocks, fakes, dummies, spies, ...); a minimal hand-rolled example is sketched after the references below.
(Gerard Meszaros' classic work on xUnit patterns, which coined the name 'test double' and defines its different kinds)
(Test double zoo quickly explained)
(Follow Martin Fowler 2007, thinking about stubs, mocks, etc. Classic)
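As promised above, here is a minimal hand-rolled stub, with hypothetical names: the test exercises GreetingService without any database by handing it a canned ICustomerStore.

using NUnit.Framework;

public interface ICustomerStore              // the seam where the real database would sit
{
    string FindName(int id);
}

public class StubCustomerStore : ICustomerStore
{
    public string FindName(int id) { return "Alice"; }   // canned answer, no database needed
}

public class GreetingService
{
    private readonly ICustomerStore store;
    public GreetingService(ICustomerStore store) { this.store = store; }
    public string GreetingFor(int id) { return "Hello, " + store.FindName(id) + "!"; }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void GreetsTheCustomerByName()
    {
        var service = new GreetingService(new StubCustomerStore());
        Assert.AreEqual("Hello, Alice!", service.GreetingFor(42));
    }
}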
While a unit test is never totally independent from its SUT, ideally it should be as independent as possible from the current implementation and only rely on the public interface of the function or class being tested (the SUT).
Conclusion: In these interpretations the word "isolation" stresses more the physical location, which often implies logical independence to some extent (e.g. class-level isolation).
No claim of completeness is made concerning further possible accentuations and meanings.
See also comments here.
But there are more properties of (good) unit tests: Roy Osherove identifies some more attributes in his book "The Art of Unit Testing" which I don't find exactly in the F.I.I.R.S.T. principle (link to his book site), and which are given here in my own words (and acronym):
Full control of the SUT: the unit test should have full control of the SUT. I see this as effectively identical to being independent from the test and runtime environment (e.g. by using mocks, etc.). But because "independency" is so ambiguous, it makes sense to spend a separate letter on it.
Automated (related to Repeatable and Self-checking, but not identical). This one requires a test (runner) infrastructure.
Small, simple, or in his words "easy to implement" (again related, but not identical). Often related to "Fast".
Relevant: The test should still be relevant tomorrow. This is one of the most difficult requirements to achieve, and depending on the "school" there may be a need for temporary unit tests during TDD too. When a test only tests the contracts, this is achieved, but that may not be enough for high code coverage requirements.
Consistent result: Effectively a result of "enough" independence. Consistency is what some people include in "Repeatable". There is an essential overlap, but they are not identical.
Self-explaining: In terms of naming, structure of the whole test, and specifically of the syntax of the assert as the key line, it should be clear what is tested, and what could be wrong if a test fails. Related to "Tests isolate failures", see above.
Given all these specific points, it should be clearer than before that it is anything but simple to write (good) unit tests.
Independent and Repeatable
The test is not independent of the date; it can be run repeatedly, but you technically only get the same result because the test chooses its expectation the same way the code does.
The proper way to write a test for HomeController with regard to the FIRST principles is to control the time before the evaluation stage; a sketch of one way to do that follows.
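The quiz code is Rails, but the idea is language-neutral; in Ruby you would typically stub Time.now. Here is a sketch of the same idea in C# with an injected clock, where the HomeController below is an illustrative translation rather than the quiz's actual class:

using System;
using NUnit.Framework;

public class HomeController
{
    private readonly Func<DateTime> clock;               // injected clock instead of "now"

    public HomeController(Func<DateTime> clock) { this.clock = clock; }

    public string Index()
    {
        return clock().DayOfWeek == DayOfWeek.Tuesday ? "special_index" : "index";
    }
}

[TestFixture]
public class HomeControllerTests
{
    [Test]
    public void RendersSpecialTemplateOnTuesdays()
    {
        var controller = new HomeController(() => new DateTime(2024, 1, 2));   // a Tuesday
        Assert.AreEqual("special_index", controller.Index());
    }

    [Test]
    public void RendersNormalTemplateOnOtherDays()
    {
        var controller = new HomeController(() => new DateTime(2024, 1, 3));   // a Wednesday
        Assert.AreEqual("index", controller.Index());
    }
}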
I've been learning what TDD is, and one question that comes to mind is what exactly the "test" is. For example, do you call the web service and then build the code to make it work? Or is it more unit-testing oriented?
In general the test may be...
a unit test, which tests an individual subcomponent of your software without any external dependencies on other classes
an integration test, which tests the connection between two separate systems, i.e. their ability to integrate
an acceptance test, for validating the functionality of the system
...and some others I've most likely temporarily forgotten for now.
In TDD, however, you're mostly focusing on the unit tests when creating your software.
It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass the tests.
Then write more tests to cover more of the requirements, and implement a bit more to make it pass.
It's an iterative process, with cycles of test writing, and then code writing.
Here are a couple of good articles by Unclebob
Three rules of TDD
TDD with Acceptance and Unit tests
I suggest you not put the emphasis on "test", because TDD is actually a software development methodology, not a testing methodology.
I would say it is about unit testing and code coverage. It is about shipping bugless code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.
The way I use it, it's unit-testing oriented. Suppose I want a method that squares ints; I write this method:
int square(int x) { return 0; }   // stub: deliberately wrong so the tests below fail first
and then write some tests like:

[Test]
public void TestSquare()
{
    Assert.AreEqual(0, square(0));
    Assert.AreEqual(1, square(1));
    Assert.AreEqual(100, square(10));
    Assert.AreEqual(1, square(-1));
    Assert.AreEqual(100, square(-10));
    ....
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and all borderline values like maxint, zero and null (remember you can test for errors too), and see to it that the test fails first (which isn't hard :-)). Then I keep working on the function until it works.
So: first a unit test that fails and covers what you want it to cover, then the method.
Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be a lot more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!) and define the behavior of your class either in terms of the return values of methods, or of calls made to interfaces that have been passed into the object.
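A small sketch of the "return values instead of side effects" idea, with a hypothetical InvoiceFormatter: the method returns the text rather than writing it anywhere, so the test needs no I/O at all.

using NUnit.Framework;

public class InvoiceFormatter
{
    // Returns the text instead of writing it to a file or the console.
    public string Format(string customer, int itemCount)
    {
        return "Invoice for " + customer + " (" + itemCount + " items)";
    }
}

[TestFixture]
public class InvoiceFormatterTests
{
    [Test]
    public void FormatsCustomerAndItemCount()
    {
        Assert.AreEqual("Invoice for Bob (3 items)", new InvoiceFormatter().Format("Bob", 3));
    }
}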
I just want to give my view on the topic, which may help you understand TDD a bit more, in a different way.
TDD is a design method that relies on testing first. Because you asked what the test is, I'll put it like this:
If you want to build an application, you know the purpose of what you want to build, and you know that when you are done, or along the way, you need to test it - e.g. check the values of the variables you create by code inspection, or quickly drop in a button that you can click to execute a part of the code and pop up a dialog showing the result of the operation, etc.
TDD, on the other hand, changes your mindset, and I'll point out one such way. Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you just code and test via buttons and pop-ups or code inspection. I call this style SDDD (syntax-debugging-driven development).
But when you are doing TDD, it is "semantic-debugging-driven development", because you write down your thoughts/goals for your application first by using tests (a more dynamic and repeatable version of a whiteboard), which test the logic (or "semantics") of your application and fail whenever you have a semantic error, even if your application has no syntax errors (it compiles).
By the way, even though I said "you know the purpose of what you want to build...", in practice you may not know or have all the information required to build the application. Since TDD more or less forces you to write tests first, you are compelled to ask more questions about the functioning of the application at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).