How do you do TDD in a non-trivial application? [closed] - tdd

I've read a number of books and websites on the subject of TDD, and they all make a lot of sense, especially Kent Beck's book. However, when I try to do TDD myself, I find myself staring at the keyboard wondering how to begin. Is there a process you use? What is your thought process? How do you identify your first tests?
The majority of the books on the subject do a great job of describing what TDD is, but not how to practice TDD in real world non-trivial applications. How do you do TDD?

It's easier than you think, actually. You just use TDD on each individual class. Every public method that you have in the class should be tested for all possible outcomes. So the "proof of concept" TDD examples you see can also be used in a relatively large application which has many hundreds of classes.
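For instance, a minimal sketch of "all possible outcomes" for one public method might look like this (JUnit 4; the PriceCalculator class and its rules are made up purely for illustration):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical class under test: applies a percentage discount to a price.
    private final PriceCalculator calc = new PriceCalculator();

    @Test
    public void appliesDiscountToPositivePrice() {
        assertEquals(90.0, calc.discounted(100.0, 10), 0.001);
    }

    @Test
    public void zeroDiscountLeavesPriceUnchanged() {
        assertEquals(100.0, calc.discounted(100.0, 0), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeDiscount() {
        calc.discounted(100.0, -5);
    }
}

One test per outcome (normal case, boundary case, error case) keeps each test small and makes failures easy to diagnose.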
Another TDD strategy you could use is simulating application test runs themselves, by encapsulating the main app behavior. For example, I have written a framework (in C++, but this should apply to any OO language) which represents an application. There are abstract classes for initialization, the main runloop, and shutting down. So my main() method looks something like this:
int main(int argc, char *argv[]) {
    int result = 0;
    myApp &mw = getApp(); // Singleton method to return main app instance
    if(mw.initialize(argc, argv) == kErrorNone) {
        result = mw.run();
    }
    mw.shutdown();
    return(result);
}
The advantage of doing this is twofold. First of all, all of the main application functionality can be compiled into a static library, which is then linked against both the test suite and this main.cpp stub file. Second, it means that I can simulate entire "runs" of the main application by creating arrays for argc & argv[], and then simulating what would happen in main(). We use this process to test lots of real-world functionality to make sure that the application generates exactly what it's supposed to do given a certain real-world corpus of input data and command-line arguments.
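In Java terms, a whole-run test along those lines might look like the sketch below (all names are assumptions, standing in for whatever facade your application exposes for initialize/run/shutdown):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class WholeAppRunTest {

    @Test
    public void runWithSampleCorpusSucceeds() {
        // Simulate the command line that main() would normally receive.
        String[] args = { "--input", "testdata/sample-corpus.txt", "--mode", "batch" };

        App app = App.getInstance();                     // hypothetical singleton accessor
        assertEquals(App.ERROR_NONE, app.initialize(args));

        int result = app.run();                          // the same call main() makes
        app.shutdown();

        assertEquals(0, result);
        // Further assertions would compare the generated output against a
        // known-good reference for this corpus of input data.
    }
}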
Now, you're probably wondering how this would change for an application which has a real GUI, web-based interface, or whatever. To that, I would simply say to use mock-ups to test these aspects of the program.
But in short, my advice boils down to this: break down your test cases to the smallest level, then start looking upwards. Eventually the test suite will throw them all together, and you'll end up with a reasonable level of automated test coverage.

I used to have the same problem. I used to start most development by opening a window designer to create the UI for the first feature I wanted to implement. As the UI is one of the hardest things to test, this way of working doesn't translate very well to TDD.
I found the Atomic Object papers on Presenter First very helpful. I still start by envisioning the user actions I want to implement (if you've got use cases, that's a great way to start), and using an MVP- or MVC-ish model I begin by writing a test for the presenter of the first screen. By mocking out the view until the presenter works, I can get started really fast this way. There's more information on working this way at http://www.atomicobject.com/pages/Presenter+First.
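As a rough sketch of that Presenter First flow (invented names, with Mockito standing in for whatever mocking library you use), the first test might mock the view and drive the presenter through it:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class LoginPresenterTest {

    // Hypothetical view interface; the real widgets come much later.
    interface LoginView {
        String enteredUserName();
        void showWelcomeMessage(String text);
    }

    @Test
    public void greetsTheUserAfterLogin() {
        LoginView view = mock(LoginView.class);
        when(view.enteredUserName()).thenReturn("alice");

        LoginPresenter presenter = new LoginPresenter(view);  // class under test, not written yet
        presenter.onLoginClicked();

        verify(view).showWelcomeMessage("Welcome, alice");
    }
}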
If you're starting a project in a language or framework that's unknown to you, or that has many unknowns, you can start by doing a spike first. I often write unit tests for my spikes too, but only to exercise the code I'm spiking. Doing the spike can give you some input on how to start your real project. Don't forget to throw away your spike when you start on the real project.

I start by thinking of the requirements. For each use case:
1. Analyze the use case.
2. Think of the future classes.
3. Write down test cases.
4. Write the tests.
5. Test and implement the classes (sometimes adding new tests if I missed something at step 4).
That's it. It's pretty simple, but I think it's time consuming. I like it though and I stick to it. :)
If I have more time I try to model some sequence diagrams in Enterprise Architect.

I agree that it is especially hard to bootstrap the process.
I usually try to think of the first set of tests like a movie script, and maybe only the first scene to the movie.
Actor1 tells Actor2 that the world is in trouble, Actor2 hands back a package, Actor1 unpacks the package, etc.
That is obviously a strange example, but I often find visualizing the interactions a nice way to get over that initial hump. There are other analogous techniques (user stories, CRC cards, etc.) that work well for larger groups, but it sounds like you are by yourself and may not need the extra overhead.
Also, I am sure the last thing that you want to do is read another book, but the guys at MockObjects.com have a book in early draft stages, currently titled Growing Object-Oriented Software, Guided by Tests. The chapters that are currently for review may give you some further insight in how to start TDD and continue it throughout.

The problem is that you are looking at your keyboard wondering what tests you need to write.
Instead think of the code that you want to write, then find the first small part of that code, then try and think of the test that would force you to write that small bit of code.
In the beginning it helps to work in very small pieces. Even over the course of a single day you'll be working in larger chunks. But any time you get stuck just think of the smallest piece of code that you want to write next, then write the test for it.

I don't think you should really begin with TDD. Seriously, where are your specs? Have you agreed on a general, rough overall design for your system yet, one that may be appropriate for your application? I know TDD and agile discourage Big Design Up-Front, but that doesn't mean you shouldn't do some design up front before TDDing your way through implementing that design.

Sometimes you don't know how to do TDD because your code isn't "test friendly" (easily testable).
Thanks to some good practices your classes can become easier to test in isolation, to achieve true unit testing.
I recently came across a blog by a Google employee, which describes how you can design your classes and methods so that they are easier to test.
Here is one of his recent talks, which I recommend.
He insists on the fact that you have to separate business logic from object creation code (i.e. avoid mixing logic with the 'new' operator) by using the Dependency Injection pattern. He also explains how the Law of Demeter is important to testable code. He's mainly focused on Java code (and Guice), but his principles should apply to any language really.
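A tiny sketch of the idea with made-up classes: keep the 'new' operator out of the business logic and pass the collaborator in, so a test can pass in a fake instead:

// Hard to test: the collaborator is constructed inside the business logic.
class InvoiceService {
    double total(long invoiceId) {
        InvoiceRepository repo = new InvoiceRepository();   // hidden dependency
        return repo.find(invoiceId).amount();
    }
}

// Testable: the collaborator is injected through the constructor.
class InjectedInvoiceService {
    private final InvoiceRepository repo;

    InjectedInvoiceService(InvoiceRepository repo) {        // dependency injection
        this.repo = repo;
    }

    double total(long invoiceId) {
        return repo.find(invoiceId).amount();               // logic only, no object creation
    }
}

In a test, InjectedInvoiceService can be handed a stub repository, while the hard-wired version drags the real repository (and everything it needs) into every test.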

The easiest way is to start with a class that has no dependencies: a class that is used by other classes, but does not use another class. Then you pick a test, asking yourself "how would I know if this class (this method) is implemented correctly?"
Then you could write a first test to interrogate your object when it's not initialized; it could return NULL or throw an exception. Next you can initialize (perhaps only partially) your object and test that it returns something valuable. Then you can add a test with another initialization value: it should behave the same way. At that point, I usually test an error condition, such as trying to initialize the object with an invalid value.
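As a made-up example of that progression (JUnit 4, a hypothetical Account class with no dependencies):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountTest {

    @Test(expected = IllegalStateException.class)
    public void balanceIsUnavailableBeforeInitialization() {
        new Account().balance();                    // interrogate the uninitialized object
    }

    @Test
    public void returnsTheInitialBalanceAfterInitialization() {
        Account account = new Account();
        account.initialize(100);
        assertEquals(100, account.balance());
    }

    @Test
    public void behavesTheSameWayForAnotherInitialValue() {
        Account account = new Account();
        account.initialize(250);
        assertEquals(250, account.balance());
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsAnInvalidInitialValue() {
        new Account().initialize(-1);               // error condition
    }
}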
When you're done with the method, move on to another method of the same class until you're done with the whole class.
Then you can pick another class: either another independent class, or a class that uses the first class you've implemented.
If you go with a class that relies on your first class, I think it is acceptable to have your test environment, or your second class, instantiate the first class, as it has been fully tested. When a test about the second class fails, you should be able to determine in which class the problem lies.
Should you discover a problem in the first class, or wonder whether it will behave correctly under some particular condition, then write a new test for it.
If, climbing up the dependencies, you think the tests you're writing span too many classes to qualify as unit tests, then you can use a mock object to isolate a class from the rest of the system.
If you already have your design - as you indicated in a comment in the answer from Jon LimJap, then you're not doing pure TDD since TDD is about using unit tests to let your design emerge.
That being said, not all shops allow strict TDD, and you have a design at hand, so let's use it and do TDD, although it would be more accurate to call it Test-First Programming. But that's not the point; that's also how I started TDD.

Related

Is TDD a top-down or bottom-up design approach?

From what I remember, most people told me I should design from top to bottom. If I want to implement a web page, I should imagine or draw this page on paper and then divide it into pieces of functionality. For each piece of functionality, I try to design the external API and then implement its internals.
But in TDD, they say I should consider a very, very small piece of functionality (a method?) first, write its test, watch it fail, implement it, and make the test pass. Composing them is the last step. I can't imagine how this leads to a good API.
And most strangely, they say TDD is not only about unit tests but also about functional tests. I think that implies top-down. Say functionality A is composed of methods B, C and D. Because of TDD I write the functional test for A first. But... B, C and D are all unimplemented. Should I use three stubs? And what if B depends on three other methods?
I used TDD to write some small programs. But when I tackled an application with a GUI, I got stuck.
Since TDD starts with what you can see from the outside (of whatever item you are working on at the moment), I'm not sure how it could qualify as bottom-up.
Taking TDD to the extreme (e.g., as in XP, aka extreme programming), you would certainly start from the end-user perspective, and only ever write as much code as you need to pass the tests created so far. If you find yourself starting with the tests for some internal function before reaching the point where the higher-level tests (plus good design for the code you are writing to make those tests pass) require this routine, you are working on some other paradigm, not strict TDD – because there was no test telling you to write that method in the first place. Not that this is necessarily a bad thing, but any problems you have with that is not really one of the TDD methodology.
For GUI programming, of course, you have the standard problem of automating tests, even before you created code. I only know of good tools for web apps for that; if you have good pointers on this topic in the desktop case, I'd sure love to see them.
I've been heavily writing RSpec tests for my Rails projects, keeping a ratio of about 1:1.6 between code and tests. I never really had an issue of what to write first or what it depends on. If the method A that I want to write consists of B and C, then I would first implement B and C, again with proper tests. To me the sequence is not so important as long as the tests are good and precise.
So I don't really use stubs the way you describe it, but only if the functionality is already there and I just want to bypass/circumvent it.
BTW, it is considered a top-down approach. This is an excerpt from The RSpec Book:
If you ask experienced software delivery folks why they run a project like that, front-loading it with all the planning and analysis, then getting into the detailed design and programming, and only really integrating and testing it at the end, they will gaze into the distance, looking older than their years, and patiently explain that this is to mitigate against the exponential cost of change: the principle that introducing a change or discovering a defect becomes exponentially more expensive the later you discover it. The top-down approach seems the only sensible way to hedge against the possibility of discovering a defect late in the day.
I would say it's top-down. Say I had a single PDF that contained 4 distinct documents and I was writing software to split it into 4 individual documents instead of one; the first test I would probably write is:
// Note: Keeping this very abstract
@Test
public void splitsDocumentsAccordingToDocumentType() {
    List docs = new DocumentProcessor().split(new SinglePdfWithFourDocs());
    assertEquals(4, docs.size());
    ...
}
I would consider the DocumentProcessor.split() method to be similar to "A" in your example. I could now implement the entire algorithm within the single split method and make the test pass. I don't even need "B" or "C", right? Knowing that you're a good developer and cringe at the thought of that 1500-line method, you would start to look for ways to restructure your code into a more suitable design. Maybe you see that two additional objects (responsibilities) could be split out of this code:
1) Parsing the contents of the file to find the individual documents and
2) Reading and writing of the document from the file system
Let's tackle #1 first. Use a couple of "Extract Method" refactorings to localize the code related to parsing the contents, then an "Extract Class" refactoring, pulling those methods out into a class named, say, DocumentParser. This could be analogous to "B" in your example. If you'd like, you could move tests related to document parsing from your DocumentProcessorTest to a new DocumentParserTest and mock or stub the DocumentParser in the DocumentProcessorTest.
For #2 it's pretty much lather, rinse, repeat and you'll end up with something like a DocumentSerializer class, AKA "C". You can mock this as well in your DocumentProcessorTest and you now have no file I/O and have test driven a component that has two additional collaborators without having to design your entire class (with individual methods). Notice that we took an "outside in" approach, which really enables the refactoring.
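Once DocumentParser and DocumentSerializer exist, the top-level test might stub them out roughly like this (a Mockito sketch; the constructor and method signatures are assumptions, not the original poster's code):

import java.util.Arrays;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class DocumentProcessorTest {

    @Test
    public void splitsDocumentsAccordingToDocumentType() {
        DocumentParser parser = mock(DocumentParser.class);
        DocumentSerializer serializer = mock(DocumentSerializer.class);

        // The parser is stubbed to report four documents in the source PDF.
        when(parser.parse(any(SourcePdf.class)))
            .thenReturn(Arrays.asList(new Document(), new Document(),
                                      new Document(), new Document()));

        DocumentProcessor processor = new DocumentProcessor(parser, serializer);
        List<Document> docs = processor.split(new SourcePdf("four-docs.pdf"));

        assertEquals(4, docs.size());
        // No file I/O happens in the test: writing is verified against the mock.
        verify(serializer, times(4)).write(any(Document.class));
    }
}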

How did/would you "force" yourself to do TDD rather than TAD? [closed]

I've been trying to jump on the TDD bandwagon for some time now, and it's been going well except for one crucial thing: what I normally end up doing is Test After Development.
I need a mental shift and am wondering how did you force yourself to write tests first?
The mental shift for me was realizing that TDD is about design, not testing. TDD allows you to reason critically about the API of the thing you're constructing. Write the tests first and it's often very clear what API is most convenient and appropriate. Then write your implementation.
Of course you should be writing tests after too (regression tests, integration tests, etc). TDD often produces good design, but not necessarily good testing code.
A big moment came for me with TDD when I read a certain quote (I can't recall where) that the moment of triumph for a test is the moment when the test changes from red to green.
This is probably all stuff you've read before, but unless I start with a failing test, and it becomes a passed test, that's when I reap the huge psychological benefits. It feels good to change from red to green. And if you are consistent with delivering that moment to yourself, it becomes addictive, and then much easier to make yourself do.
But the trick for me was to isolate that moment and revel in it.
Once I started leveraging dependency injection, my classes became smaller and more specialized, which allowed me to write simple unit tests to confirm they worked. Given the limited number of tests I knew my class had to pass to work, the goal of my TDD effort became clearer. It was also easier to identify which classes required integration tests due to dependencies on external resources, and which classes required unit tests that injected mock/stub/fake objects into the SUT to narrow the focus of my test.
Pair Programming
I realize that this may not be an option for everyone, and that many devs don't like this idea. But I've found that if I pair program with someone who's also committed to TDD, we tend to "keep each other honest" and stay with TDD much more than I could programming alone by sheer will.
It helps if you have a generic test framework.
Have a library of generic functions applicable to various sorts of tests you run. Then re-use those as building blocks to build tests for the project you're on.
To get there, note the common things you do in the tests you write after the fact. Abstract them away into the generalized library one by one.
Doing so will let you write many simple tests quickly and easily, because you won't have to redo the boring and time-consuming test driver code and can instead concentrate on the actual test cases.
Do "test as documentation" approach. Don't add/change any wording in documentation not backed up by appropriate tests.
This saves time - you don't have to re-parse the documentation/requirements a second time just to build the tests later - and it helps with the mental shift you asked about.
Do a gradual phase-in - add tests for new features and changes as you start working on them.
Nobody likes to change their ways cold turkey - human nature. Let the good habit slip in and eventually it becomes second nature.
Right away budget the time for writing tests at the beginning of your development schedule on a project
This will both force you into proper habits (assuming you tend to follow your project plan) and protect you from running over due dates due to "extra" time spent building tests.
Of course, the "extra" time for TDD ends up being a net time saver, but that is not always apparent at the very beginning of the project, which puts negative pressure on the TDD practice ("Where are the prototype screenshots??? What do you mean you're still writing tests???").
Also, try to follow the usual recommended practices of small, single-purpose classes and functions. This - among all the other benefits - makes unit tests much easier to write. Couple that with #2 (writing unit tests as part of the API documentation when designing the API), and your API designs "magically" improve, since you start noticing weak points immediately because you write the tests against them. As other people noted, using some sort of Dependency Injection pattern/framework helps simplify building the tests.
As a solo developer, one thing that helped me make the shift to TDD was setting a code coverage threshold for myself.
In my build script I used a code coverage tool (NCover) to determine the percentage of code that was covered by tests, and initially set the threshold at 80%. If I stopped writing my tests first the coverage percentage would fall below the 80% threshold, and my build script would cause a failure. I would then immediately slap myself on the wrist and write the missing tests.
I gradually increased the code coverage threshold and eventually became a full TDD convert.
For me it was all about realizing the benefits. Fixing bugs after the fact is so much harder than never writing them in the first place.
The easiest way to get started, in my opinion, is to start with it on a new component. TDD, and effective unit testing in general, requires that you architect your code in a way that allows for testing without dependencies (meaning you need interfaces for mock object implementations, etc.). In any complex piece of software this has real implications for the structure of your code.
What helped me instill habitual discipline, was before making any change, say to myself, "What test do I write to demonstrate that the change worked?". While not strictly TDD (since the focus is quite small) it brought testing to the forefront of my thinking whenever the system was changed.
Start small, with a low barrier to entry, practice daily and the habit becomes second nature. After a while your scope for thinking about tests naturally widens to include design, and well as integration and system testing.
I found that "start small" worked well on legacy projects that had little unit testing in place and where the inertia to bring it up to scratch was too large, so that no one bothered. Small changes, bug fixes, etc. could often be easily unit tested even when the test landscape for the whole project was pretty barren.
TDD drives the development; it's in the name. It is best learned from someone who is already disciplined about it. If applying TDD immediately on your work project would hurt your velocity, what is stopping you from growing your TDD muscles outside of work on a side project?
Here's a repost of how I became a BDD / TDD convert:
A year ago, I had little idea how to do TDD (but really wanted to (how frustrating)) and had never heard of BDD... now I do both compulsively. I have been in a .Net development environment, not Java, but I even replaced the "F5 - Run" button with a macro to either run Cucumber (BDD) or MBUnit (TDD) depending if it is a Feature/Scenario or Specification. No debugger if at all possible. $1 in the jar if you use the debugger (JOKING (sort of)).
The process is very awesome. The framework we additionally use, MavenThought, is by someone I think of as The Oracle, whom I've been blessed to come across and absorb information from.
Everything starts with BDD. Our BDD is straight-up Cucumber on top of IronRuby.
Feature:
  Scenario: ....
    Given I do blah...
    When I do something else...
    Then wonderful things happen...
  Scenario: ...
And that's not unit testing itself, but it drives the feature, scenario by scenario, and in turn the unit (test) specifications. So you start on a scenario, and each step you need to complete in the scenario drives your TDD.
And the TDD we have been using is kind of BDD in a way, because we look at the behaviours the SUT (System Under Test) requires and one behaviour is specified per specification (class "test" file).
Example:
Here is the Specification for one behaviour: When the System Under Test is created.
There is one more specification (a C# When_blah_happens class file) for another behaviour, when a property changes, but that is separated out into another file.
using MavenThought.Commons.Testing;
using SharpTestsEx;

namespace Price.Displacement.Module.Designer.Tests.Model.Observers
{
    /// <summary>
    /// Specification when diffuser observer is created
    /// </summary>
    [ConstructorSpecification]
    public class When_diffuser_observer_is_created
        : DiffuserObserverSpecification
    {
        /// <summary>
        /// Checks the diffuser injection
        /// </summary>
        [It]
        public void Should_return_the_injected_diffuser()
        {
            Sut.Diffuser.Should().Be.SameInstanceAs(this.ConcreteDiffuser);
        }
    }
}
This is probably the simplest behaviour for a SUT, because in this case when it is created, the Diffuser property should be the same as the injected diffuser. I had to use a Concrete Diffuser instead of a Mock because in this case the Diffuser is a Core/Domain object and has no property notification for the interface. 95% of the time we refer to all our dependencies like Dep(), instead of injecting the real thing.
Often we have more than one [It] Should_do_xyz(), and sometimes a fair bit of setup, perhaps up to 10 lines of stubbing. This is just a very simple example with no GivenThat() or AndGivenThatAfterCreated() in that specification.
For setup of each specification we generally only ever need to override a couple methods of the specification:
GivenThat() ==> this happens before the SUT is created.
CreateSut() ==> we auto-mock creation of the SUT with StructureMap and 90% of the time never need to override this, but if you are constructor-injecting a concrete class, you have to override it.
AndGivenThatAfterCreated() ==> this happens after the SUT is created.
WhenIRun() ==> unless it is a [ConstructorSpecification], we use this to run ONE line of code that is the behaviour we are specifying for the SUT.
Also, if there is common behaviour between two or more specifications of the same SUT, we move that into the base specification.
All I have to do to run a specification is highlight its name, for example "When_diffuser_observer_is_created", and press F5, because, remember, for me F5 runs a Rake task: either test:feature[tag] for Cucumber, or test:class[SUT]. Makes sense to me, because every time you run the debugger it's a throwaway - no code is created (oh, and it costs $1 (joking)).
This is a very, very clean way of specifying behaviour with TDD and having really, really simple SUTs and simple specifications. If you try to be a cowboy coder and write the SUT badly, with hard dependencies and the like, you will feel the pain of trying to do TDD and either get fed up and give up, or bite the bullet and do it right.
And here's the actual SUT. We got a little fancy and use PostSharp to add property-changed notification on the Diffuser, hence the Post.Cast<>. And again, that's why I injected a concrete rather than a mock. Anyway, as you can see, the missing behaviour defined in another specification is when anything changes on the Diffuser.
using System.ComponentModel;
using MavenThought.Commons.Events;
using PostSharp;
using Price.Displacement.Core.Products;
using Price.Displacement.Domain;

namespace Price.Displacement.Desktop.Module.Designer.Model.Observers
{
    /// <summary>
    /// Implementation of current observer for the selected product
    /// </summary>
    public class DiffuserObserver : AbstractNotifyPropertyChanged, IDiffuserObserver
    {
        /// <summary>
        /// Gets the diffuser
        /// </summary>
        public IDiffuser Diffuser { get; private set; }

        /// <summary>
        /// Initialize with a diffuser
        /// </summary>
        /// <param name="diffuser">The diffuser to observe</param>
        public void Initialize(IDiffuser diffuser)
        {
            this.Diffuser = diffuser;
            this.NotifyInterface().PropertyChanged += (x, e) => this.OnPropertyChanged(e.PropertyName);
        }

        /// <summary>
        /// Gets the notify interface to use
        /// </summary>
        /// <returns>The instance of notify property changed interface</returns>
        protected INotifyPropertyChanged NotifyInterface()
        {
            return Post.Cast<Diffuser, INotifyPropertyChanged>((Diffuser)Diffuser);
        }
    }
}
In conclusion, this BDD / TDD style of development rocks. It took one year but I am a total convert as a way of life. I would not have learned this on my own. I picked up everything from The Oracle http://orthocoders.com/.
Red or Blue pill, the choice is yours.
Read test code!
What blocked me from testing first was the lack of insight when trying to recreate the environment in which a module needs to run inside a test harness.
To overcome this difficulty, you have to read test code from other programmers and adapt that code to your needs, the same way you do when you are learning how to program in a new language or with a new library.
Although we were never able to implement full TDD, the concept of reporting a defect and creating a PHPUnit test case (we're a PHP shop) that failed, but would pass once the defect was addressed, proved to be desirable to all parties (dev and QA) due to the clarity of the defect specification and the verification of the changed code. We also incorporated these unit tests into the regression suite, in case we missed these 'defect fix' branches in the released code.
TDD works better and better the more experience you have with it. But it is difficult to reach the break-even level of experience that makes it easier to do TDD instead of TAD.
So the opposite question, "What prevents me from doing TDD?", may help find a good starting point:
Time pressure when I am not experienced with TDD.
Bug-fixing or improving applications that were not developed test-driven, or where there are no unit tests yet (brownfield apps).
Rapid application development environments where I change some code and can (nearly) immediately see whether it works using the end-user GUI.
Motivation comes for me when the benefits outweigh the pain.
Currently I am developing parts of a Java/Tomcat web shop where compiling everything and starting the servers/shop takes about 10 minutes, which is the opposite of rapid app development.
Compiling and running the business logic and the unit tests takes less than 10 seconds.
So doing TDD is much faster than TAD, as long as it is easy to write the unit test.
The same applies to my current Android project, where I develop the Android-independent Java library part that can easily be unit tested. There is no need to deploy code to the device to see whether the code works.
In my opinion, a good starting point for getting more experienced with TDD is to wait for a new greenfield application where you do not have time pressure at the beginning.

Test Driven Development - What exactly is the test?

I've been learning what TDD is, and one question that comes to mind is what exactly the "test" is. For example, do you call the web service and then build the code to make it work? Or is it more unit-testing oriented?
In general the test may be...
a unit test, which tests an individual subcomponent of your software without any external dependencies on other classes
an integration test, which tests the connection between two separate systems, i.e. their integration capability
an acceptance test, for validating the functionality of the system
...and some others I've most likely temporarily forgotten for now.
In TDD, however, you're mostly focusing on the unit tests when creating your software.
It's entirely Unit Test driven.
The basic idea is to write the unit tests first, and then do the absolute minimum amount of work necessary to pass the tests.
Then write more tests to cover more of the requirements, and implement a bit more to make it pass.
It's an iterative process, with cycles of test writing, and then code writing.
Here are a couple of good articles by Uncle Bob:
Three rules of TDD
TDD with Acceptance and Unit tests
I suggest you not put too much emphasis on the "test" part, because TDD is actually a software development methodology, not a testing methodology.
I would say it is about unit testing and code coverage. It is about shipping bugless code and being able to make changes easily in the future.
See Uncle Bob's words of wisdom.
The way I use it, it's unit-testing oriented. Suppose I want a method that squares ints. I write this method:
int square(int x) { return 0; }
and then write some tests like:
[Test]
public void TestSquare()
{
    Assert.AreEqual(square(0), 0);
    Assert.AreEqual(square(1), 1);
    Assert.AreEqual(square(10), 100);
    Assert.AreEqual(square(-1), 1);
    Assert.AreEqual(square(-10), 100);
    ....
}
Ok, maybe square is a bad example :-)
In each case I test the expected behaviour and all the borderline values, like maxint, zero and null (remember you can test for errors too), see to it that the test fails (which isn't hard :-)), and then keep working on the function until it passes.
So: first a unit test that fails and covers what you want it to cover, then the method.
Generally, unit tests in "TDD" shouldn't involve any IO at all.
In fact, you'll be a lot more effective if you write objects that do not create side effects (I/O is almost always, if not always, a side effect!), and define the behavior of your class either in terms of return values of methods, or in terms of calls made to interfaces that have been passed into the object.
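For example (a hedged sketch with invented names), instead of writing to disk inside the method, the object can talk to an interface that was passed in, and the test asserts on that conversation rather than on any side effect:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class AuditedTransferTest {

    // Hypothetical output port; the production implementation would do the real I/O.
    interface AuditLog {
        void record(String entry);
    }

    // Hypothetical class under test: all behavior is expressed as calls to the port.
    static class AuditedTransfer {
        private final AuditLog log;

        AuditedTransfer(AuditLog log) {
            this.log = log;
        }

        void transfer(String from, String to, int amount) {
            log.record("transfer " + amount + " from " + from + " to " + to);
        }
    }

    @Test
    public void reportsEveryTransferToTheAuditLog() {
        AuditLog log = mock(AuditLog.class);             // no real I/O in the unit test
        new AuditedTransfer(log).transfer("A", "B", 50);
        verify(log).record("transfer 50 from A to B");
    }
}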
I just want to give my view on the topic which may help to understand TDD a bit more in a different way.
TDD is a design method that relies on testing first. Because you asked what the test is, I'll put it like this:
If you want to build an application, you know the purpose of what you want to build, and you know that when you are done, or along the way, you need to test it, e.g. check the values of variables you create by code inspection, or quickly drop a button onto a form that you can click to execute a piece of code and pop up a dialog showing the result of the operation, etc.
On the other hand, TDD changes your mindset, and I'll point out one of the ways it does so. Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you just code and test via buttons, pop-ups or code inspection. I call this style SDDD (syntax-debugging-driven development).
But when you are doing TDD, it is "semantic-debugging-driven development", because you write down your thoughts and goals for your application first by using tests (which are a more dynamic and repeatable version of a whiteboard). Those tests check the logic (or "semantics") of your application and fail whenever you have a semantic error, even if your application compiles without syntax errors.
By the way, even though I said "you know the purpose of what you want to build", in practice you may not know or have all the information required to build the application. Since TDD kind of forces you to write tests first, you are compelled to ask more questions about the functioning of the application at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

TDD: Where to start the first test [closed]

So I've done unit testing some, and have experience writing tests, but I have not fully embraced TDD as a design tool.
My current project is to re-work an existing system that generates serial numbers as part of the company's assembly process. I have an understanding of the current process and workflow from looking at the existing system. I also have a list of new requirements and how they are going to modify the workflow.
I feel like I'm ready to start writing the program and I've decided to force myself to finally do TDD from the start to end.
But now I have no idea where to start. (I also wonder if I'm cheating the TDD process by already having an idea of the program flow for the user.)
The user flow is really serial and is just a series of steps. As an example, the first step would be:
the user submits a manufacturing order number and receives a list of serializable part numbers from that order's bill of materials
The next step is started when the user selects one of the part numbers.
So I was thinking I can use this first step as a starting point. I know I want a piece of code that takes a manufacturing order number and returns a list of part numbers.
// This isn't what I'd want my code to end up looking like
// but it is the simplest statement of what I want
IList<string> partNumbers = GetPartNumbersForMfgOrder(mfgOrder);
Reading Kent Beck's example book, he talks about picking small tests. This seems like a pretty big black box. It's going to require a mfg order repository, I have to crawl a product structure tree to find all applicable part numbers for this mfg order, and I haven't even defined my domain model in code at all.
So on one hand that seems like a crappy start - a very general high level function. On the other hand, I feel like if I start at a lower level I'm really just guessing what I might need and that seems anti-TDD.
As a side note... is this how you'd use stories?
As an assembler
I want to get a list of part numbers on a mfg order
So that I can pick which one to serialize
To be truthful, an assembler would never say that. All an assembler wants is to finish the operation on the mfg order:
As an assembler
I want to mark parts with a serial number
So that I can finish the operation on the mfg order
Here's how I would start. Let's suppose you have absolutely no code for this application.
1. Define the user story and the business value that it brings: "As a User I want to submit a manufacturing order number and get a list of part numbers for that order so that I can send the list to the inventory system."
2. Start with the UI. Create a very simple page (let's suppose it's a web app) with three elements: a label, a list and a button. That's good enough, isn't it? The user could copy the list and send it to the inventory system.
3. Use a pattern to base your design on, like MVC.
4. Define a test for your controller method that gets called from the UI. You're testing here that the controller works, not that the data is correct: Assert.AreEqual(3, controller.RetrieveParts(mfgOrder).Count)
5. Write a simple implementation of the controller to make sure that something gets returned: return new List<MfgOrder>{new MfgOrder(), new MfgOrder(), new MfgOrder()}; You'll also need to implement classes such as MfgOrder.
6. Now your UI is working! Working incorrectly, but working. So let's expect the controller to get the data from a service or DAO. Create a mock DAO object in the test case, and add an expectation that the method "partsDao.GetPartsInMfgOrder()" is called (see the sketch at the end of this answer).
7. Create the DAO class with that method. Call the method from the controller. Your controller is now done.
8. Create a separate test to test the DAO, finally making sure it returns the proper data from the DB.
9. Keep iterating until you get it all done. After a little while, you'll get used to it.
The main point here is separating the application in very small parts, and testing those small parts individually.
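A sketch of what steps 4 to 6 might look like (written here in Java with JUnit and Mockito rather than any particular web stack; every name is made up):

import java.util.Arrays;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

public class PartsControllerTest {

    @Test
    public void retrievePartsDelegatesToTheDao() {
        PartsDao partsDao = mock(PartsDao.class);
        when(partsDao.getPartsInMfgOrder("MO-1234"))
            .thenReturn(Arrays.asList(new MfgOrderPart(), new MfgOrderPart(), new MfgOrderPart()));

        PartsController controller = new PartsController(partsDao);
        List<MfgOrderPart> parts = controller.retrieveParts("MO-1234");

        assertEquals(3, parts.size());                    // the controller works...
        verify(partsDao).getPartsInMfgOrder("MO-1234");   // ...and got its data from the DAO
    }
}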
This is perfectly okay as a starting test. With this you define expected behavior: how it should work. Now, if you feel you've bitten off much more than you'd like, you can temporarily ignore this test and write a more granular test that takes you part of the way. Then write other tests that take you towards the goal of making the first big test pass. Red-Green-Refactor at each step.
Small tests, I think, mean that you should not be testing a whole lot of stuff in one test, e.g. "are components A, B and C of D in state1, state2 and state3 after I've called Method1(), Method2() and Method3() with these parameters on D?"
Each test should test just one thing. You can search SO for the qualities of good tests. But I'd consider your test to be a small test, because it is short and focused on one task: 'Getting PartNumbers From Manufacturing Order'.
Update: As a thing to try (as far as I recall, from Beck's book), you may want to sit down and come up with a list of one-line tests for the SUT on a piece of paper. Now you can choose the easiest ones (tests that you're confident you'll be able to get done) in order to build some confidence. Or you could attempt one that you're 80% confident about but that has some gray areas (my choice too), because it'll help you learn something about the SUT along the way. Keep the ones that you've no idea how to approach for the end... hopefully it'll be clearer by the time the easier ones are done. Strike them off one by one as and when they turn green.
I think you have a good start, though you don't quite see it that way. A test that is supposed to spawn more tests makes total sense to me: if you think about it, do you know yet what a Manufacturing Order number or a Part Number is? You may have to build those, which leads to other tests, but eventually you'll get down to the itty-bitty tests, I believe.
Here's a story that may require a bit of breaking down:
As a User I want to submit a manufacturing order number and receive a list of serializable part numbers of that orders bill of materials
I think the key is to break things down over and over again into tiny pieces that make it easy to build the whole thing. That "divide and conquer" technique is handy at times. ;)
Well well, you've hit the exact same wall I did when I tried TDD for the first time :)
Since then, I gave up on it, simply because it makes refactoring too expensive - and I tend to refactor a lot during the initial stage of development.
With those grumpy words out of the way, I find that one of the most overlooked and most important aspects of TDD is that it forces you to define your class interfaces before actually implementing them. That's a very good thing when you need to assemble all your parts into one big product (well, into sub-products ;) ). What you need to do before writing your first tests is to have your domain model, deployment model and preferably a good chunk of your class diagrams ready before coding - simply because you need to identify your invariants, min and max values, etc., before you can test for them. You should be able to identify these on a unit-testing level from your design.
So, in my experience (not in the experience of some author who enjoys mapping real-world analogies to OO :P ), TDD should go like this:
Create your deployment diagram, from the requirement specification (ofc, nothing is set in stone - ever)
Pick a user story to implement
Create or modify your domain model to include this story
Create or modify your class-diagram to include this story (including various design classes)
Identify test-vectors.
Create the tests based on the interface you made in step 4
Test the tests(!). This is a very important step..
Implement the classes
Test the classes
Go have a beer with your co-workers :)

TDD ...how?

I'm about to start my first TDD (test-driven development) program, and I (naturally) have a TDD mental block... so I was wondering if someone could help guide me a bit on where I should start.
I'm creating a function that will read binary data from a socket and parse it into a class object.
As far as I see, there are 3 parts:
1) Logic to parse data
2) socket class
3) class object
What are the steps that I should take so that I could incrementally TDD? I definitely plan to first write the test before even implementing the function.
The issue in TDD is "design for testability"
First, you must have an interface against which to write tests.
To get there, you must have a rough idea of what your testable units are.
Some class, which is built by a function.
Some function, which reads from a socket and emits a class.
Second, given this rough interface, you formalize it into actual non-working class and function definitions.
Third, you start to write your tests -- knowing they'll compile but fail.
Part-way through this, you may start head-scratching about your function. How do you set up a socket for your function? That's a pain in the neck.
However, the interface you roughed out above isn't the law, it's just a good idea. What if your function took an array of bytes and created a class object? This is much, much easier to test.
So, revisit the steps, change the interface, write the non-working class and function, now write the tests.
Now you can fill in the class and the function until all your tests pass.
When you're done with this bit of testing, all you have to do is hook in a real socket. Do you trust the socket libraries? (Hint: you should) Not much to test here. If you don't trust the socket libraries, now you've got to provide a source for the data that you can run in a controlled fashion. That's a large pain.
Your split sounds reasonable. I would consider the two dependencies to be the input and output. Can you make them less dependent on concrete production code? For instance, can you make it read from a general stream of data instead of a socket? That would make it easier to pass in test data.
The creation of the return value could be harder to mock out, and may not be a problem anyway - is the logic used for the actual population of the resulting object reasonably straightforward (after the parsing)? For instance, is it basically just setting trivial properties? If so, I wouldn't bother trying to introduce a factory etc there - just feed in some test data and check the results.
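Concretely (a sketch with invented names), letting the parser read from a plain InputStream instead of a socket means the test can feed it an in-memory byte array:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MessageParserTest {

    @Test
    public void parsesHeaderFromInMemoryBytes() throws IOException {
        // These bytes stand in for whatever the socket would have delivered.
        byte[] wireBytes = { 0x01, 0x00, 0x2A };
        InputStream in = new ByteArrayInputStream(wireBytes);

        Message message = new MessageParser().parse(in);   // hypothetical parsing API

        assertEquals(1, message.version());
        assertEquals(42, message.payloadLength());
    }
}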
First, start thinking "the testS", plural, rather than "the test", singular. You should expect to write more than one.
Second, if you have mental block, consider starting with a simpler challenge. Lower the difficulty until it's real easy to do, then move on to more substantial work.
For instance, assume you already have a byte array with the binary data, so you don't even need to think about sockets. All you need to write is something that takes in a byte[] and return an instance of your object. Can you write a test for that ?
If you still have mental block, lower it yet another notch. Assume your byte array is only going to contain default values anyway. So you don't even have to worry about the parsing, just about being able to return an instance of your object that has all values set to the defaults. Can you write a test for that ?
I imagine something like:
public void testFooReaderCanParseDefaultFoo() {
    FooReader fr = new FooReader();
    Foo myFoo = fr.buildFoo();
    assertEquals(0, myFoo.bar());
}
That's rock bottom, right? You're only testing the Foo constructor. But you can then move up to the next level:
public void testFooReaderGivenBytesBuildsFoo() {
    FooReader fr = new FooReader();
    byte[] fooData = {1};
    fr.consumeBytes(fooData);
    Foo myFoo = fr.buildFoo();
    assertEquals(1, myFoo.bar());
}
And so on...
'The best testing framework is the application itself'
I believe a common misconception amongst developers is that they mistakenly make a strong association between testing frameworks and TDD principles. I would advise re-reading the official docs on TDD, bearing in mind that there is no real relationship between testing frameworks and TDD. After all, TDD is a paradigm, not a framework.
Upon reading the wiki on TDD (https://en.wikipedia.org/wiki/Test-driven_development), I've come to realise that to an extent things are a little bit open to interpretation.
There are various personal styles of TDD mainly due to the fact that TDD principles are open to interpretation.
I'm not here to say anyone is wrong, but I would like to share my techniques with you and explain how they have served me well. Bear in mind that I have been programming for 36 years; making my programming habits very well evolved.
Code reuse is overrated. Reuse code too much and you'll end up with bad abstractions, and it will become very difficult to fix or change something without it affecting something else. The obvious advantage is less code to manage.
Repeating too much code leads to code management problems and oversized code bases. However it does have the advantage of good separation of concerns (the ability to tweak, change and fix things without affecting other parts of the app).
Don't repeat/refactor too much, don't reuse too much. Code needs to be maintainable. It’s important to understand and respect the balance between code reuse and abstraction/separation of concerns.
When deciding whether to reuse code I base the decision on: .... Will the nature of this code change in context throughout the app codebase? If the answer is no, then I reuse it. If the answer is yes or I'm not sure, I repeat/refactor it. I will however revise my codebases from time to time and see if any of my repeated code can be merged without compromising separation of concerns/abstraction.
As far as my basic programming habits are concerned, I like to write the conditions (if then else switch case etc) first; test them, then fill the conditions with the code and test again. Keep in mind there's no rule that you have to do this in a unit test. I refer to this as the low level stuff.
Once my low level stuff is done, I'll either reuse the code or refactor it into another part of the app, but not after testing it very thoroughly. Problem with repeating/refactoring badly tested code is that, if it’s broken, you have to fix it in multiple places.
BDD To me is a natural follow on from TDD. Once my code base is well tested I can easily tweak behaviours by moving entire blocks of code around. Cool thing is about my programming habits is that sometimes I move code around and discover useful behaviours that I didn’t even intend. It can sometimes even be useful for rebranding stuff to seem like a completely different code base.
To this end my code bases tend to start out a bit slow and pick up momentum because as I advance toward the end of development I have more and more code to refactor from or reuse.
The advantages for me in the way that I code is that, I am able to take on very high levels of complexity as this is promoted by good separation of concerns. It’s also awesome for writing highly optimised code. However the well optimised code tends to be a bit bloated, but to my knowledge there is no way to write optimized code without a bit of bloating. If the app doesn't need high processor efficiency, there's nothing stopping me from de-bloating my code. I'm of the opinion that server side code should be optimised and most client side code normally doesn't require it.
Going back to the topic of testing frameworks, I use them to just save a bit of compiler time.
As far as following story boards is concerned, that comes naturally to me without actually considering it. I've noticed most devs develop in the natural order of story boards even when they are not available.
As a general separation-of-concerns strategy, in most apps I separate concerns based on UI forms. For example, I'll reuse code within a form and repeat/refactor across forms. This is only a general rule, though. There are times when I have to think outside the box; sometimes repeating code can serve well for making code processor-efficient.
As a little addendum to my TDD habits; I do optimizations and fault tolerance last. I will try to avoid using try catch blocks as much as possible and write my code in such a way as to not need them. For example rather than catch a null, I will check for null, or rather than catch an index out of bounds, I will scrutinise my code so that it never happens. I find that error trapping too early in app development, leads to semantic errors (behavioural errors that don't crash the app). Semantic errors can be very hard to trace or even notice.
Well that’s my 10 cents. Hope it helps.
Test-Driven Development?
So, this means you should start with writing a test first.
Write a test which contains code showing how you want to use your class. The class or method that you are going to test with this test is not even there yet.
For instance, you could write a test first like this:
[Test]
public void CanReadDataFromSocket()
{
    SocketReader r = new SocketReader( ... ); // create a SocketReader instance which takes a socket or a mock socket in its constructor
    byte[] data = r.Read();
    Assert.IsTrue(data.Length > 0);
}
For instance; I'm just making up an example here.
Next, once you're able to read data from a socket, you can start thinking about how you'll parse it, and write a test where you use the 'Parser' class, which takes the data that you've read and outputs an instance of your data class.
etc...
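That second test could be as small as this (a JUnit-flavoured sketch rather than the NUnit style above; Parser and Message are placeholders for whatever your data class turns out to be):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ParserTest {

    @Test
    public void buildsAMessageFromPreviouslyReadBytes() {
        byte[] data = { 0x01, 0x02 };                 // stands in for bytes read from the socket

        Message message = new Parser().parse(data);   // hypothetical parsing API

        assertEquals(1, message.version());
        assertEquals(2, message.type());
    }
}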
Knowing where to start writing tests and when to stop writing tests while using TDD, is a common problem when starting out.
I have found that it can sometimes help to write an integration test first. Doing so will help create some of the common objects you will be using. It will also allow you to focus your thoughts and tests, since you will need to start writing tests to make the integration test pass.
When I was starting with TDD, I read these 3 rules by Uncle Bob that really helped me out:
1. You are not allowed to write any production code unless it is to make a failing unit test pass.
2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
In a shorter version it would be:
Write only enough of a unit test to fail.
Write only enough production code to make the failing unit test pass.
As you can see, this is very simple.
