TDD for a Device Communicator

I've been reading about TDD, and would like to use it for my next project, but I'm not sure how to structure my classes with this new paradigm. The language I'd like to use is Java, although the problem is not really language-specific.
The Project
I have a few pieces of hardware that come with an ASCII-over-RS232 interface. I can issue simple commands, get simple responses, and control them as if from their front panels. Each one has a slightly different syntax and a very different set of commands. My goal is to create an abstraction/interface so I can control them all through a GUI and/or remote procedure calls.
The Problem
I believe the first step is to create an abstract class (I'm bad at names; how about 'Communicator'?) to implement the common stuff like serial I/O, and then create a subclass for each device. I'm sure it will be a little more complicated than that, but that's the core of the application in my mind.
Now, for unit tests, I don't think I really need the actual hardware or a serial connection. What I'd like to do is hand my Communicators an InputStream and OutputStream (or Reader and Writer) that could come from a serial port, a file, stdin/stdout, a pipe from a test function, whatever. So, would I just have the Communicator constructor take those as inputs? If so, it would be easy to put the responsibility of setting it all up on the testing framework, but for the real thing, who makes the actual connection? A separate constructor? The function calling the constructor again? A separate class whose job it is to 'connect' the Communicator to the correct I/O streams?
Edit
I was about to rewrite the problem section in order to get answers to the question I thought I was asking, but I think I figured it out. I had (correctly?) identified two different functional areas.
1) Dealing with the serial port
2) Communicating with the device (understanding its output & generating commands)
A few months ago, I would have combined it all into one class. My first idea towards breaking away from this was to pass just the IO streams to the class that understands the device, and I couldn't figure out who would then be responsible for creating the streams.
Having done more research on inversion of control, I think I have an answer. Have a separate interface and class that solve problem #1, and pass it to the constructor of the class(es?) that solve problem #2. That way it's easy to test both separately: #1 by connecting to the actual hardware or letting the test framework do different things, and #2 by handing it a mock of #1.
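To make it concrete, here's a rough Java sketch of what I have in mind (all the names here - DeviceConnection, FooDeviceCommunicator, FakeConnection, the "ID?" command - are invented for the example, not part of any library):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Problem #1: owning the physical connection (a real implementation would wrap the serial port).
interface DeviceConnection {
    InputStream in();
    OutputStream out();
}

// Problem #2: device-specific protocol logic, which only ever sees the interface.
class FooDeviceCommunicator {
    private final DeviceConnection conn;

    FooDeviceCommunicator(DeviceConnection conn) {
        this.conn = conn;
    }

    // Sends an (invented) identity query and reads a single line back.
    String queryIdentity() throws IOException {
        conn.out().write("ID?\r\n".getBytes(StandardCharsets.US_ASCII));
        StringBuilder reply = new StringBuilder();
        int b;
        while ((b = conn.in().read()) != -1 && b != '\n') {
            reply.append((char) b);
        }
        return reply.toString().trim();
    }
}

// In a unit test, the "connection" is just a pair of in-memory streams.
class FakeConnection implements DeviceConnection {
    private final InputStream in = new ByteArrayInputStream("FOO-2000\r\n".getBytes(StandardCharsets.US_ASCII));
    private final OutputStream out = new ByteArrayOutputStream();
    public InputStream in() { return in; }
    public OutputStream out() { return out; }
}
The production wiring would live in whatever class implements DeviceConnection on top of the real serial library, so the communicator itself never knows whether it's talking to hardware or to a test.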
Does this sound reasonable? Do I need to share more information?

With TDD, you should let your design emerge: start small, take baby steps, and grow your classes test by test, little by little.
CLARIFIED: Start with a concrete class to send one command, and unit test it with a mock or a stub. When it works well enough (perhaps not with all options yet), test it against your real device, once, to validate your mock/stub/simulator.
Once the class for the first command is operational, start implementing a second command the same way: first against your mock/stub, then once against the device for validation. Now, if you're seeing similarities between your two classes, you can refactor towards your abstract-class-based design - or towards something different.

Sorry for being a little Linux-centric...
My favorite way of simulating gadgets is to write character device drivers that simulate their behavior. This also gives you fun abilities, like providing an ioctl() interface that makes the simulated device behave abnormally.
At that point .. from testing to real world, it only matters which device(s) you actually open, read and write.
It should not be too hard to mimic the behavior of your gadgets; it sounds like they take very basic instructions and return very basic responses. Again, a simple ioctl() could tell the simulated device that it's time to misbehave, so you can ensure that your code handles such events adequately. For instance, fail intentionally on every n'th instruction, where n is randomly selected upon the call to ioctl().

After seeing your edits I think you are heading in exactly the right direction. TDD tends to drive you towards a design composed of small classes with a well-defined responsibility. I would also echo tinkertim's advice - a device simulator which you can control and "provoke" into behaving in different ways is invaluable for testing.

Related

Test Driven Development initial implementation

A common practice of TDD is that you make tiny steps. But one thing that's bugging me is something I've seen a few people do, whereby they just hardcode values/options and then refactor later to make it work properly. For example…
describe Calculator do
  it "should multiply" do
    Calculator.multiply(4, 2).should == 8
  end
end
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible to be configured wrong so that when I hit my magic "run tests" key I'm actually not running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
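To make the rhythm concrete, here's a hedged JUnit sketch (the Calculator from the question, written here in Java rather than Ruby): the second test is the "triangulation" that makes the hardcoded 8 untenable and forces the real implementation.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void multipliesFourByTwo() {
        assertEquals(8, Calculator.multiply(4, 2));   // passes even with the fake 'return 8'
    }

    @Test
    public void multipliesThreeByThree() {
        assertEquals(9, Calculator.multiply(3, 3));   // fails until multiply() really computes a * b
    }
}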
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically state. Case in point: at this time, the only test for multiplication in the system asserts that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
This seems very strict, and in the context of a simple case like this, nonsensical; to verify correct functionality in the general case, you would need an infinite number of unit tests, when you as an intelligent human being "know" how multiplication is supposed to work and could easily set up a test that generated and tested a multiplication table up to some limit that would make you confident it would work in all necessary cases. However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirement states that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B, until a requirement comes in that states this. If you implement the ability to save record B before you know you need to, then if it turns out you never need to then you have wasted time and effort building that into the system; you have code with no business purpose, that regardless can still "break" your system and thus requires maintenance.
Even in the simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or too specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1 and "AB" = 2, and that's the limit of the requirements. But you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting your payment for time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.

Is TDD a top-down or bottom-up design?

As I remember, most people have told me I should design from the top down. If I want to implement a web page, I should imagine or draw the page on paper and then divide it into pieces of functionality. For each piece of functionality, I design the external API, then implement its internals.
But in TDD, they say I should consider a very, very small piece of functionality (a method?) first, write its test, watch it fail, implement it, and make the test pass. Composing them is the last step. I can't imagine how this leads to a good API.
And most strangely, they say TDD is not only unit tests but also functional tests. I think that implies top-down. Say there is a piece of functionality A composed of methods B, C and D. Because of TDD, I write the functional test for A first. But... B, C, and D are all unimplemented. Should I use three stubs? What if B depends on three other methods?
I have used TDD to write some small programs, but when I tackled an application with a GUI, I got stuck.
Since TDD starts with what you can see from the outside (of whatever item you are working on at the moment), I'm not sure how it could qualify as bottom-up.
Taking TDD to the extreme (e.g., as in XP, aka extreme programming), you would certainly start from the end-user perspective, and only ever write as much code as you need to pass the tests created so far. If you find yourself starting with the tests for some internal function before reaching the point where the higher-level tests (plus good design for the code you are writing to make those tests pass) require this routine, you are working on some other paradigm, not strict TDD – because there was no test telling you to write that method in the first place. Not that this is necessarily a bad thing, but any problems you have with that is not really one of the TDD methodology.
For GUI programming, of course, you have the standard problem of automating tests, even before you created code. I only know of good tools for web apps for that; if you have good pointers on this topic in the desktop case, I'd sure love to see them.
I've been heavily writing RSpec tests for my Rails projects, keeping a ratio of about 1:1.6 between code and tests. I never really had an issue of what to write first or what it depends on. If the method A that I want to write consists of B and C, then I would first implement B and C, again with proper tests. To me the sequence is not so important as long as the tests are good and precise.
So, I don't really use stubs the way you describe; I only use them when the functionality is already there and I just want to bypass/circumvent it.
BTW, it is considered a top-down approach. This is an excerpt from The RSpec Book:
If you ask experienced software delivery folks why they run a project like that, front-loading it with all the planning and analysis, then getting into the detailed design and programming, and only really integrating and testing it at the end, they will gaze into the distance, looking older than their years, and patiently explain that this is to mitigate against the exponential cost of change—the principle that introducing a change or discovering a defect becomes exponentially more expensive the later you discover it. The top-down approach seems the only sensible way to hedge against the possibility of discovering a defect late in the day.
I would say it's top-down. Say I had a single PDF that contained 4 distinct documents, and I was writing software to split it into 4 individual documents; the first test I would probably write is:
// Note: Keeping this very abstract
@Test
public void splitsDocumentsAccordingToDocumentType() {
    List docs = new DocumentProcessor().split(new SinglePdfWithFourDocs());
    assertEquals(4, docs.size());
    ...
}
I would consider the DocumentProcessor.split() method to be similar to "A" in your example. I could now implement the entire algorithm within the single split() method and make the tests pass. I don't even need "B" or "C", right? Being a good developer, you cringe at the thought of that 1500-line method, so you start to look for ways to restructure your code into a more suitable design. Maybe you see that two additional objects (responsibilities) could be split out of this code:
1) Parsing the contents of the file to find the individual documents and
2) Reading and writing the documents to and from the file system
Let's tackle #1 first. Use a couple of "Extract Method" refactorings to localize the code related to parsing the contents, then an "Extract Class" refactoring, pulling out those methods into a class named, say, DocumentParser. This could be analogous to "B" in your example. If you'd like, you could move tests related to document parsing from your DocumentProcessorTest to a new DocumentParserTest and mock or stub the DocumentParser in the DocumentProcessorTest.
For #2 it's pretty much lather, rinse, repeat, and you'll end up with something like a DocumentSerializer class, AKA "C". You can mock this as well in your DocumentProcessorTest; you now have no file I/O and have test-driven a component that has two additional collaborators, without having to design your entire class (with individual methods) up front. Notice that we took an "outside-in" approach, which is really what enables the refactoring.
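As a rough sketch of where that leaves the test (using Mockito for the mocks; the constructor and the method names on DocumentParser, DocumentSerializer and DocumentProcessor are assumptions for the example, not a prescribed API):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class DocumentProcessorTest {

    @Test
    public void delegatesParsingAndWritingToItsCollaborators() {
        DocumentParser parser = mock(DocumentParser.class);
        DocumentSerializer serializer = mock(DocumentSerializer.class);
        SinglePdfWithFourDocs input = new SinglePdfWithFourDocs();
        List<Document> parsed = Arrays.asList(new Document(), new Document(), new Document(), new Document());
        when(parser.parse(input)).thenReturn(parsed);

        List<Document> docs = new DocumentProcessor(parser, serializer).split(input);

        assertEquals(4, docs.size());
        verify(serializer).write(parsed);   // and no real file I/O anywhere in the test
    }
}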

Interface Insanity

I'm drinking the Kool-Aid and loving it - interfaces, IoC, DI, TDD, etc., etc. It's working out pretty well. But I'm finding I have to fight a tendency to make everything an interface! I have a factory which is an interface. Its methods return objects which could be interfaces (that might make testing easier). Those objects are DI'ed the interfaces to the services they need. What I'm finding is that keeping the interfaces in sync with the implementations adds to the work - adding a method to a class means adding it to the class plus the interface, mocks, etc.
Am I factoring the interfaces out too early? Are there best practices to know when something should return an interface vs. an object?
Interfaces are useful when you want to mock an interaction between an object and one of its collaborators. However there is less value in an interface for an object which has internal state.
For example, say I have a service which talks to a repository in order to extract some domain object in order to manipulate it in some way.
There is definite design value in extracting an interface from the repository. My concrete implementation of the repository may well be strongly linked to NHibernate or ActiveRecord. By linking my service to the interface I get a clean separation from this implementation detail. It just so happens that I can also write super fast standalone unit tests for my service now that I can hand it a mock IRepository.
Considering the domain object which came back from the repository and which my service acts upon, there is less value. When I write test for my service, I will want to use a real domain object and check its state. E.g. after the call to service.AddSomething() I want to check that something was added to the domain object. I can test this by simple inspection of the state of the domain object. When I test my domain object in isolation, I don't need interfaces as I am only going to perform operations on the object and quiz it on its internal state. e.g. is it valid for my sheep to eat grass if it is sleeping?
In the first case, we are interested in interaction based testing. Interfaces help because we want to intercept the calls passing between the object under test and its collaborators with mocks. In the second case we are interested in state based testing. Interfaces don't help here. Try to be conscious of whether you are testing state or interactions and let that influence your interface or no interface decision.
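A small Java/Mockito sketch of that distinction (the service, repository and Sheep domain object are invented for illustration):
import static org.junit.Assert.assertFalse;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class TestingStylesExample {

    // Interaction-based: mock the collaborator behind an interface and verify the conversation.
    @Test
    public void serviceSavesTheSheepItCreates() {
        SheepRepository repository = mock(SheepRepository.class);
        FlockService service = new FlockService(repository);

        service.addSheep("Dolly");

        verify(repository).save(any(Sheep.class));
    }

    // State-based: use the real domain object and simply inspect its state afterwards.
    @Test
    public void sleepingSheepMayNotEatGrass() {
        Sheep sheep = new Sheep("Dolly");
        sheep.fallAsleep();

        assertFalse(sheep.canEatGrass());
    }
}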
Remember that (providing you have a copy of Resharper installed) it is extremely cheap to extract an interface later. It is also cheap to delete the interface and revert to a simpler class hierarchy if you decide that you didn't need that interface after all. My advice would be to start without interfaces and extract them on demand when you find that you want to mock the interaction.
When you bring IoC into the picture, then I would tend to extract more interfaces - but try to keep a lid on how many classes you shove into your IoC container. In general, you want to keep these restricted to largely stateless service objects.
Sounds like you're suffering a little from BDUF.
Take it easy with the Kool-Aid and let it flow naturally.
Remember that while flexibility is a worthy objective, added flexibility with IoC and DI (which to some extent are requirements for TDD) also increases complexity. The only point of flexibility is to make changes downstream quicker, cheaper or better. Each IoC/DI point increases complexity, and thus contributes to making changes elsewhere more difficult.
This is actually where you need a Big Design Up Front to some extent: identify what areas are most likely to change (and/or need extensive unit testing), and plan for flexibility there. Refactor to eliminate flexibility where changes are unlikely.
Now, I'm not saying that you can guess where flexibility will be needed with any kind of accuracy. You'll be wrong. But it's likely that you'll get something right. Where you later find you don't need flexibility, it can be factored out in maintenance. Where you need it, it can be factored in when adding features.
Now, areas which may or may not change depends on your business problem and IT environment. Here are some recurring areas.
I'd always consider external interfaces where you integrate to other systems to be highly mutable.
Whatever code provides a back end to the user interface will need to support change in the UI. However, plan for changes in functionality primarily: don't go overboard and plan for different UI technologies (such as supporting both a smart client and a web application – usage patterns will differ too much).
On the other hand, coding for portability to different databases and platforms is usually a waste of time, at least in corporate environments. Ask around and check what plans may exist to replace or upgrade technologies within the likely lifespan of your software.
Changes to data content and formats are a tricky business: while data will occasionally change, most designs I've seen handle such changes poorly, and thus you get concrete entity classes used directly.
But only you can make the judgement of what might or should not change.
I usually find that I want interfaces for "services" - whereas types which are primarily about "data" can be concrete classes. For instance, I'd have an Authenticator interface, but a Contact class. Of course, it's not always that clear-cut, but it's an initial rule of thumb.
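A tiny sketch of that rule of thumb (only the names come from the paragraph above; the members are invented):
// "Service": behaviour you may well want to mock, so it gets an interface.
public interface Authenticator {
    boolean authenticate(String username, String password);
}

// "Data": a plain concrete class; tests just construct one and inspect its state.
public class Contact {
    private final String name;
    private final String email;

    public Contact(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public String getName() { return name; }
    public String getEmail() { return email; }
}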
I do feel your pain though - it's a little bit like going back to the dark days of .h and .c files...
I think the most important "agile" principle is YAGNI ("You Ain't Gonna Need It"). In other words, don't write extra code until it's actually needed, because if you write it in advance the requirements and constraints may well have changed by the time (if!) you finally do need it.
Interfaces, dependency injections, etc. - all this stuff adds complexity to your code, making it harder to understand and change. My rule of thumb is to keep things as simple as possible (but no simpler) and to not add complexity unless it gains me more than enough to offset the burden it imposes.
So if you are actually testing and having a mock object would be very useful then by all means define an interface that both your mock and real classes implement. But don't create a bunch of interfaces on the purely hypothetical grounds that it might be useful at some point, or that it is "correct" OO design.
Interfaces have the purpose of establishing a contract and are particularly useful when you want to change the class performing a task on the fly. If there is no need to change the classes an interface just might get in the way.
There are automatic tools to extract an interface, so you may be better off delaying interface extraction until later in the process.
It depends very much on what you are providing... if you are working on internal things then the advice of "don't do them until needed" is reasonable. If, however, you are making an API that is to be consumed by other developers then changing things around to interfaces at a later date can be annoying.
A good rule of thumb is to make interfaces out of anything that needs to be subclassed. This is not an "always make an interface in that case" sort of thing; you still need to think about it.
So the short answer (and this works both for internal things and for providing an API) is: if you anticipate that more than one implementation is going to be needed, make it an interface.
Some things that generally would not be interfaces are classes that only hold data, like, say, a Location class that deals with x and y. The odds of there being another implementation of that are slim.
Do not create the interfaces first - you ain't gonna need them. You cannot guess for which classes you'll need an interface and for which you won't, so don't spend time now burdening the code with useless interfaces.
But extract them when you feel the urge to do so - when you see the need for an interface - at the refactoring step.
Those answers can help too.

TDD ...how?

I'm about to start my first TDD (test-driven development) program, and I (naturally) have a TDD mental block... so I was wondering if someone could guide me a bit on where I should start.
I'm creating a function that will read binary data from a socket and parse it into a class object.
As far as I see, there are 3 parts:
1) Logic to parse data
2) socket class
3) class object
What are the steps that I should take so that I could incrementally TDD? I definitely plan to first write the test before even implementing the function.
The issue in TDD is "design for testability"
First, you must have an interface against which to write tests.
To get there, you must have a rough idea of what your testable units are.
Some class, which is built by a function.
Some function, which reads from a socket and emits a class.
Second, given this rough interface, you formalize it into actual non-working class and function definitions.
Third, you start to write your tests -- knowing they'll compile but fail.
Part-way through this, you may start head-scratching about your function. How do you set up a socket for your function? That's a pain in the neck.
However, the interface you roughed out above isn't the law, it's just a good idea. What if your function took an array of bytes and created a class object? This is much, much easier to test.
So, revisit the steps, change the interface, write the non-working class and function, now write the tests.
Now you can fill in the class and the function until all your tests pass.
When you're done with this bit of testing, all you have to do is hook in a real socket. Do you trust the socket libraries? (Hint: you should) Not much to test here. If you don't trust the socket libraries, now you've got to provide a source for the data that you can run in a controlled fashion. That's a large pain.
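A hedged Java sketch of that reshaped interface (all names invented): the parser is a pure bytes-to-object transformation that's trivial to test, and the socket plumbing shrinks to a thin adapter you barely need to test at all.
import java.io.IOException;
import java.net.Socket;

// Trivial result type, just for the sketch.
class ParsedMessage {
    final int payloadLength;
    ParsedMessage(int payloadLength) { this.payloadLength = payloadLength; }
}

// Easy to unit test: pure transformation from bytes to an object, no I/O involved.
class MessageParser {
    ParsedMessage parse(byte[] data) {
        // Real decoding of the binary layout would go here.
        return new ParsedMessage(data.length);
    }
}

// Thin adapter around the socket: trust the socket library and just delegate.
class SocketMessageReader {
    private final MessageParser parser = new MessageParser();

    ParsedMessage readFrom(Socket socket) throws IOException {
        byte[] data = socket.getInputStream().readAllBytes();
        return parser.parse(data);
    }
}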
Your split sounds reasonable. I would consider the two dependencies to be the input and output. Can you make them less dependent on concrete production code? For instance, can you make it read from a general stream of data instead of a socket? That would make it easier to pass in test data.
The creation of the return value could be harder to mock out, and may not be a problem anyway - is the logic used for the actual population of the resulting object reasonably straightforward (after the parsing)? For instance, is it basically just setting trivial properties? If so, I wouldn't bother trying to introduce a factory etc there - just feed in some test data and check the results.
First, start thinking "the testS", plural, rather than "the test", singular. You should expect to write more than one.
Second, if you have mental block, consider starting with a simpler challenge. Lower the difficulty until it's real easy to do, then move on to more substantial work.
For instance, assume you already have a byte array with the binary data, so you don't even need to think about sockets. All you need to write is something that takes in a byte[] and return an instance of your object. Can you write a test for that ?
If you still have mental block, lower it yet another notch. Assume your byte array is only going to contain default values anyway. So you don't even have to worry about the parsing, just about being able to return an instance of your object that has all values set to the defaults. Can you write a test for that ?
I imagine something like:
public void testFooReaderCanParseDefaultFoo() {
    FooReader fr = new FooReader();
    Foo myFoo = fr.buildFoo();
    assertEquals(0, myFoo.bar());
}
That's rock bottom, right ? You're only testing the Foo constructor. But you can then move up to the next level:
public void testFooReaderGivenBytesBuildsFoo() {
    FooReader fr = new FooReader();
    byte[] fooData = {1};
    fr.consumeBytes(fooData);
    Foo myFoo = fr.buildFoo();
    assertEquals(1, myFoo.bar());
}
And so on...
'The best testing framework is the application itself'
I believe a common misconception amongst developers is that they mistakenly make a strong association between testing frameworks and TDD principles. I would advise re-reading the official docs on TDD, bearing in mind that there is no inherent relationship between testing frameworks and TDD. After all, TDD is a paradigm, not a framework.
Upon reading the wiki on TDD (https://en.wikipedia.org/wiki/Test-driven_development), I've come to realise that to an extent things are a little bit open to interpretation.
There are various personal styles of TDD mainly due to the fact that TDD principles are open to interpretation.
I'm not here to say anyone is wrong, but I would like to share my techniques with you and explain how they have served me well. Bear in mind that I have been programming for 36 years, so my programming habits are very well evolved.
Code reuse is overrated. Reuse code too much and you'll end up with bad abstractions, and it will become very difficult to fix or change something without affecting something else. The obvious advantage is less code to manage.
Repeating too much code leads to code management problems and oversized code bases. However it does have the advantage of good separation of concerns (the ability to tweak, change and fix things without affecting other parts of the app).
Don't repeat/refactor too much, don't reuse too much. Code needs to be maintainable. It’s important to understand and respect the balance between code reuse and abstraction/separation of concerns.
When deciding whether to reuse code, I base the decision on one question: will the nature of this code change in context throughout the app codebase? If the answer is no, I reuse it. If the answer is yes, or I'm not sure, I repeat/refactor it. I will, however, revise my codebases from time to time and see if any of my repeated code can be merged without compromising separation of concerns/abstraction.
As far as my basic programming habits are concerned, I like to write the conditions (if/then/else, switch/case, etc.) first, test them, then fill the conditions with code and test again. Keep in mind there's no rule that you have to do this in a unit test. I refer to this as the low-level stuff.
Once my low-level stuff is done, I'll either reuse the code or refactor it into another part of the app, but not before testing it very thoroughly. The problem with repeating/refactoring badly tested code is that, if it's broken, you have to fix it in multiple places.
BDD, to me, is a natural follow-on from TDD. Once my code base is well tested I can easily tweak behaviours by moving entire blocks of code around. The cool thing about my programming habits is that sometimes I move code around and discover useful behaviours that I didn't even intend. It can sometimes even be useful for rebranding stuff to seem like a completely different code base.
To this end my code bases tend to start out a bit slow and pick up momentum because as I advance toward the end of development I have more and more code to refactor from or reuse.
The advantage for me in the way that I code is that I am able to take on very high levels of complexity, as this is promoted by good separation of concerns. It's also great for writing highly optimised code. Well-optimised code does tend to be a bit bloated, but to my knowledge there is no way to write optimised code without some bloating. If the app doesn't need high processor efficiency, there's nothing stopping me from de-bloating my code. I'm of the opinion that server-side code should be optimised and that most client-side code normally doesn't require it.
Going back to the topic of testing frameworks, I use them to just save a bit of compiler time.
As far as following story boards is concerned, that comes naturally to me without actually considering it. I've noticed most devs develop in the natural order of story boards even when they are not available.
As a general separation of concerns strategy, in most apps I separate concerns based on UI forms. For example, I'll reuse code within a form and repeat/refactor across forms. This is only a general rule; there are times when I have to think outside the box. Sometimes repeating code can serve well for making code processor-efficient.
As a little addendum to my TDD habits: I do optimisation and fault tolerance last. I try to avoid try/catch blocks as much as possible and write my code in such a way as not to need them. For example, rather than catch a null, I will check for null; rather than catch an index out of bounds, I will scrutinise my code so that it never happens. I find that error trapping too early in app development leads to semantic errors (behavioural errors that don't crash the app), and semantic errors can be very hard to trace or even notice.
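For what it's worth, a small Java illustration of that preference (Widget is a made-up class): check for the bad state up front instead of trapping the error after the fact.
class NullHandlingExample {

    // Guard against the bad state up front...
    String describe(Widget widget) {
        if (widget == null) {
            return "unknown";
        }
        return widget.getName();
    }

    // ...rather than letting it blow up and catching the exception afterwards.
    String describeByCatching(Widget widget) {
        try {
            return widget.getName();
        } catch (NullPointerException e) {
            return "unknown";
        }
    }
}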
Well that’s my 10 cents. Hope it helps.
Test-Driven Development?
So, this means you should start with writing a test first.
Write a test which contains code showing how you want to use your class. The class or method that you are going to test with this test isn't even there yet.
For instance, you could write a test first like this:
[Test]
public void CanReadDataFromSocket()
{
    SocketReader r = new SocketReader( ... ); // create a SocketReader instance which takes a socket or a mock socket in its constructor
    byte[] data = r.Read();
    Assert.IsTrue(data.Length > 0);
}
For instance; I'm just making up an example here.
Next, once you're able to read data from a socket, you can start thinking about how you'll parse it, and write a test in which you use the 'Parser' class, which takes the data that you've read and outputs an instance of your data class.
etc...
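For example, that next test might look something like this (sketched in Java/JUnit rather than the C#-style snippet above; Parser and MyData are placeholders):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ParserTest {

    @Test
    public void parsesRawBytesIntoADataObject() {
        byte[] raw = {0x01, 0x02};                 // whatever a valid payload looks like
        MyData data = new Parser().parse(raw);

        assertEquals(2, data.getFieldCount());     // assert on whatever the payload should decode to
    }
}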
Knowing where to start writing tests and when to stop writing tests while using TDD, is a common problem when starting out.
I have found that it can sometimes help to write an integration test first. Doing so will help create some of the common objects you will be using. It will also allow you to focus your thoughts and tests, since you will need to start writing tests to make the integration test pass.
When I was starting with TDD, I read these 3 rules by Uncle Bob that really helped me out:
You are not allowed to write any production code unless it is to make a failing unit test pass.
You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
In a shorter version it would be:
Write only enough of a unit test to fail.
Write only enough production code to make the failing unit test pass.
As you can see, this is very simple.

How do you do TDD in a non-trivial application? [closed]

I've read a number of books and websites on the subject of TDD, and they all make a lot of sense, especially Kent Beck's book. However, when I try to do TDD myself, I find myself staring at the keyboard wondering how to begin. Is there a process you use? What is your thought process? How do you identify your first tests?
The majority of the books on the subject do a great job of describing what TDD is, but not how to practice TDD in real world non-trivial applications. How do you do TDD?
It's easier than you think, actually. You just use TDD on each individual class. Every public method that you have in the class should be tested for all possible outcomes. So the "proof of concept" TDD examples you see can also be used in a relatively large application which has many hundreds of classes.
Another TDD strategy you could use is simulating application test runs themselves, by encapsulating the main app behavior. For example, I have written a framework (in C++, but this should apply to any OO language) which represents an application. There are abstract classes for initialization, the main runloop, and shutting down. So my main() method looks something like this:
int main(int argc, char *argv[]) {
    int result = 0;
    myApp &mw = getApp(); // Singleton method to return main app instance
    if (mw.initialize(argc, argv) == kErrorNone) {
        result = mw.run();
    }
    mw.shutdown();
    return result;
}
The advantage of doing this is twofold. First of all, all of the main application functionality can be compiled into a static library, which is then linked against both the test suite and this main.cpp stub file. Second, it means that I can simulate entire "runs" of the main application by creating arrays for argc & argv[], and then simulating what would happen in main(). We use this process to test lots of real-world functionality to make sure that the application generates exactly what it's supposed to do given a certain real-world corpus of input data and command-line arguments.
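The same trick translates to other languages; here's a hedged Java sketch of driving a whole run from a test (App, its run() method, and the arguments are all made up for the example):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class WholeRunTest {

    @Test
    public void runsEndToEndAgainstAKnownCorpus() {
        // Simulate exactly what main() would do, with fabricated command-line arguments.
        String[] args = {"--input", "testdata/corpus1.bin", "--mode", "split"};

        int exitCode = new App().run(args);

        assertEquals(0, exitCode);
        // ...plus assertions on whatever output the run is supposed to produce...
    }
}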
Now, you're probably wondering how this would change for an application which has a real GUI, web-based interface, or whatever. To that, I would simply say to use mock-ups to test these aspects of the program.
But in short, my advice boils down to this: break down your test cases to the smallest level, then start looking upwards. Eventually the test suite will throw them all together, and you'll end up with a reasonable level of automated test coverage.
I used to have the same problem. I used to start most development by opening a window designer to create the UI for the first feature I wanted to implement. As the UI is one of the hardest things to test, this way of working doesn't translate very well to TDD.
I found the Atomic Object papers on Presenter First very helpful. I still start by envisioning the user actions that I want to implement (if you've got use cases, that's a great way to start), and using an MVP or MVC-ish model I start by writing a test for the presenter of the first screen. By mocking the view until the presenter works, I can get started really fast this way. There's more information on working this way at http://www.atomicobject.com/pages/Presenter+First.
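A rough Java sketch of that Presenter First style, mocking the view with Mockito (the view, presenter and service names are invented):
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class LoginPresenterTest {

    @Test
    public void showsAnErrorWhenLoginFails() {
        LoginView view = mock(LoginView.class);      // the view is only an interface at this point
        AuthService auth = mock(AuthService.class);  // authenticate() returns false by default on a mock
        LoginPresenter presenter = new LoginPresenter(view, auth);

        presenter.onLoginClicked("bob", "wrong-password");

        verify(view).showError("Invalid username or password");
    }
}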
If you're starting a project in a language or framework that's unknown to you, or that has many unknowns, you can start out by doing a spike first. I often write unit tests for my spikes too, but only to run the code I'm spiking. Doing the spike can give you some input on how to start your real project. Don't forget to throw away your spike when you start on the real project.
I start by thinking about requirements.
foreach UseCase:
    analyze the UseCase
    think of future classes
    write down test cases
    write tests
    test and implement the classes (sometimes adding new tests if I missed something at point 4).
That's it. It's pretty simple, but I think it's time consuming. I like it though and I stick to it. :)
If I have more time I try to model some sequential diagrams in Enterprise Architect.
I agree that it is especially hard to bootstrap the process.
I usually try to think of the first set of tests like a movie script, and maybe only the first scene to the movie.
Actor1 tells Actor2 that the world is in trouble, Actor2 hands back a package, Actor1 unpacks the package, etc.
That is obviously a strange example, but I often find visualizing the interactions a nice way to get over that initial hump. There are other analogous techniques (user stories, CRC cards, etc.) that work well for larger groups, but it sounds like you are by yourself and may not need the extra overhead.
Also, I am sure the last thing that you want to do is read another book, but the guys at MockObjects.com have a book in early draft stages, currently titled Growing Object-Oriented Software, Guided by Tests. The chapters that are currently for review may give you some further insight in how to start TDD and continue it throughout.
The problem is that you are looking at your keyboard wondering what tests you need to write.
Instead think of the code that you want to write, then find the first small part of that code, then try and think of the test that would force you to write that small bit of code.
In the beginning it helps to work in very small pieces. Even over the course of a single day you'll be working in larger chunks. But any time you get stuck just think of the smallest piece of code that you want to write next, then write the test for it.
I don't think you should really begin with TDD. Seriously, where are your specs? Have you agreed on a general/rough overall design for your system yet, that may be appropriate for your application? I know TDD and agile discourages Big Design Up-Front, but that doesn't mean that you shouldn't be doing Design Up-Front first before TDDing your way through implementing that design.
Sometimes you don't know how to do TDD because your code isn't "test friendly" (easily testable).
Thanks to some good practices your classes can become easier to test in isolation, to achieve true unit testing.
I recently came across a blog by a Google employee, which describes how you can design your classes and methods so that they are easier to test.
Here is one of his recent talks, which I recommend.
He insists on the fact that you have to separate business logic from object creation code (i.e. to avoid mixing logic with 'new' operator), by using the Dependency Injection pattern. He also explains how the Law of Demeter is important to testable code. He's mainly focused on Java code (and Guice) but his principles should apply to any language really.
The easiest way is to start with a class that has no dependencies: a class that is used by other classes, but does not use another class. Then pick a test, asking yourself, "how would I know if this class (this method) is implemented correctly?"
Then you could write a first test that interrogates your object when it's not initialized; it could return NULL or throw an exception. Then you can initialize your object (perhaps only partially) and test that it returns something valuable. Then you can add a test with another initialization value - it should behave the same way. At that point, I usually test an error condition - such as trying to initialize the object with an invalid value.
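Sketched as JUnit tests (the class under test is hypothetical), that progression might look like:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TemperatureConverterTest {

    @Test(expected = IllegalStateException.class)
    public void refusesToConvertBeforeBeingInitialized() {
        new TemperatureConverter().toFahrenheit();
    }

    @Test
    public void convertsAnInitializedValue() {
        TemperatureConverter c = new TemperatureConverter(100.0);
        assertEquals(212.0, c.toFahrenheit(), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsValuesBelowAbsoluteZero() {
        new TemperatureConverter(-300.0);
    }
}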
When you're done with the method, move on to another method of the same class, until you're done with the whole class.
Then you could pick another class - either another independent class, or a class that uses the first class you've implemented.
If you go with a class that relies on your first class, I think it is acceptable to have your test environment - or your second class - instantiate the first class, as it has been fully tested. When a test of the class fails, you should be able to determine in which class the problem lies.
Should you discover a problem in the first class, or ask whether it will behave correctly under some particular conditions, then write a new test.
If, climbing up the dependencies, you think the tests you're writing are spanning too many classes to qualify as unit tests, then you can use a mock object to isolate a class from the rest of the system.
If you already have your design - as you indicated in a comment on the answer from Jon LimJap - then you're not doing pure TDD, since TDD is about using unit tests to let your design emerge.
That being said, not all shops allow strict TDD, and you have a design at hand, so let's use it and do TDD - although it would be better to call it test-first programming; but that's not the point, as that's also how I started with TDD.
