Can TDD handle complex projects without an upfront design?

The idea of TDD is great, but I'm trying to wrap my head around how to implement a complex system when a design is not proposed upfront.
For example, let's say I have multiple services for a payment processing application. I'm not sure I understand how development can proceed across multiple developers if there is not a reasonably solid design upfront.
It would be great if someone could provide an example and high-level steps for putting together a system in this manner. I can see how TDD can lead to simpler and more robust code; I'm just not sure how it can 1) bring different developers to a common architectural vision and 2) result in a system that abstracts out behavior so that large chunks of code don't have to be refactored (e.g. to accept different payment methods or pricing models on a long-term development roadmap).
I see refactoring as a huge overhead in a production system, where data model changes increase risk for customers and the company.
Clearly I'm missing something that the TDD gurus have discovered...

IMHO, it depends on the team's composition and appetite for risk.
If the team consists of several experienced and good designers, you need a less formal 'architecture' phase. It could be just a back-of-the-napkin doodle or a couple of hours at the whiteboard followed by furious coding to prove the idea. If the team is distributed and/or contains many less skilled designers, you'd need to put more time and effort (thinking and documenting) into the design phase before everyone goes off on their own path.
The next item that I can think of is to be risk-first. Continually assess the risks to your project, calculate your exposure/impact, and have mitigation plans. Tackle risky and difficult-to-reverse decisions first; if a decision is easily reversible, spend less time on it.
Skilled designers are able to evolve the architecture in tiny steps; if you have them, you can tone down the rigor of an explicit design phase.

TDD can necessitate some upfront design, but definitely not big design upfront, because no matter how perfect you think your design is before you start writing code, most of the time it won't pass the reality check TDD forces on it and will blow to pieces halfway through your TDD session (or your code will blow up if you absolutely insist on bending it to your original plan).
The great force of TDD is precisely that it lets your design emerge and refine as you write tests and refactor. Therefore you should start small and simple, making as few assumptions as possible about the details beforehand.
Practically, what you can do is sketch out a couple of UML diagrams with your pair (or the whole team if you really need a consensus on the big picture of what you're going to write) and use these diagrams as a starting point for your tests. But get rid of these models as soon as you've written your first few tests, because they would do more harm than good, misleading you to stick to a vision that is no longer true.

First of all, I don't claim to be a TDD guru, but here are some thoughts based on the information in your question.
My thoughts on #1 above: As you have mentioned, you need to have an architectural design up-front - I can't think of a methodology that can be successful without this. The architecture provides your team with the cohesion and vision. You may want to do just-enough-design up front, but that depends on how agile you want to be. The team of developers needs to know how they are going to put together the various components of the system before they start coding, otherwise it will just be one big hackfest.
> It would be great if someone could provide an example and high-level steps for putting together a system in this manner
If you are putting together a system that is composed of services, then I would start by defining the service interfaces and any messages that they will exchange. This defines how the various components of your system will interact (this would be an example of your up-front design). Once you have this, you can allocate various development resources to build the services in parallel.
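To make that concrete, here is a minimal Python sketch of such an up-front contract (all names are invented for illustration, not taken from the question): the message shapes and the service interface are agreed first, and each team can then code and test against them in parallel.

```python
# Hypothetical contract for a payment service, agreed before implementation.
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass(frozen=True)
class ChargeRequest:
    """Message exchanged between the checkout and payment services."""
    order_id: str
    amount_cents: int
    currency: str


@dataclass(frozen=True)
class ChargeResult:
    succeeded: bool
    transaction_id: Optional[str] = None
    error: Optional[str] = None


class PaymentService(Protocol):
    """The agreed interface; each team builds and tests against this."""
    def charge(self, request: ChargeRequest) -> ChargeResult:
        ...
```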
As for #2: one of the advantages of TDD is that it gives you a "safety net" during refactoring. Since your code is covered by unit tests, when you come to change some code you will know pretty quickly if you have broken something, especially if you are running continuous integration (which most people do with a TDD approach). In that case you either need to adapt your unit tests to cover the new behavior or fix your code so that your unit tests pass.
> result in a system that abstracts out behavior so that large chunks of code don't have to be refactored
This just comes down to your design - using, e.g., a strategy pattern to allow you to abstract and replace behavior. TDD does not require that your design suffer. It just asks that you only do what is required to satisfy some functional requirement. If the requirement is that the system must be able to adapt to new payment methods or pricing models, then that becomes a point of your design. TDD, if done correctly, will make sure that you are satisfying your requirements and that your design is on the right lines.
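For example, a rough Python sketch of that strategy-pattern idea (the pricing classes are hypothetical): adding a new pricing model becomes a new class plus its tests, not a refactor of large chunks of code.

```python
# Pricing models abstracted behind one interface (illustrative names only).
from abc import ABC, abstractmethod


class PricingStrategy(ABC):
    @abstractmethod
    def price_cents(self, base_cents: int) -> int: ...


class FlatPricing(PricingStrategy):
    def price_cents(self, base_cents: int) -> int:
        return base_cents


class DiscountPricing(PricingStrategy):
    def __init__(self, percent_off: int):
        self.percent_off = percent_off

    def price_cents(self, base_cents: int) -> int:
        return base_cents * (100 - self.percent_off) // 100


def checkout_total(base_cents: int, strategy: PricingStrategy) -> int:
    # The caller never knows which pricing model is in play.
    return strategy.price_cents(base_cents)


assert checkout_total(1000, DiscountPricing(10)) == 900
```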
> I see refactoring as a huge overhead in a production system, where data model changes increase risk for customers and the company.
One of the problems of software design is that it is a wicked problem, which means that refactoring is pretty much inevitable. Yes, refactoring is risky in production systems, but you can mitigate that risk, and TDD will help you. You also need a supple design and a system with low coupling. TDD helps reduce your coupling since you are designing your code to be testable. One of the by-products of writing testable code is that you reduce your dependencies on other parts of the system; you tend to code to interfaces, which allows you to replace an implementation with a mock or stub. A good example of this is replacing a call to a database with a mock/stub that returns some known data - you don't want to hit a database in your unit tests. I should mention here that a good mocking framework is invaluable with a TDD approach (Rhino Mocks and Moq are both open source).
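As a small illustration of that last point, here is a sketch using Python's standard unittest.mock in place of Moq/Rhino Mocks (the repository interface and names are assumptions of mine):

```python
# Stubbing out the database behind a repository interface in a unit test.
from unittest.mock import Mock


def outstanding_balance_cents(customer_id: str, repo) -> int:
    # Coding to an interface (repo) lets the test swap in a stub.
    invoices = repo.unpaid_invoices(customer_id)
    return sum(inv["amount_cents"] for inv in invoices)


def test_outstanding_balance_uses_known_data():
    repo = Mock()
    repo.unpaid_invoices.return_value = [
        {"amount_cents": 500},
        {"amount_cents": 250},
    ]
    assert outstanding_balance_cents("c-42", repo) == 750
    repo.unpaid_invoices.assert_called_once_with("c-42")
```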
I am sure there are some real TDD gurus out there who can give you some pearls of wisdom... Personally, I wouldn't consider starting a new project without a TDD approach.

Related

How does TDD drive the design?

I understand that TDD has many advantages (some of them are below), but I am not sure how it drives the design.
Serves as documentation
Writing tests before the actual code helps maximize test coverage
Helps determine input value boundaries
Usually when we start implementing a new piece of functionality, we have a rough idea of the design, and we start with a TDD implementation of a class which is used by other classes as per that design. This understanding of mine seems to conflict with the statement "TDD drives the design".
Please help me understand this by an example.
Most people think that Test-Driven Development is a tool for writing code with fewer bugs. But in reality, that is a by-product of TDD. TDD is more of a tool for designing code.
Overall, TDD helps in developing quality code in the following ways:
It makes you think about your code design and requirements at every stage, thereby ensuring that you are actually implementing what is required.
You are forced to write testable code, thereby ensuring that your code has loose coupling and high cohesion.
If your code is getting difficult to test, it usually signifies that there is some issue with your design (your code is too coupled or not isolated enough).
With that said, I tend to disagree with people who think that if you follow TDD blindly you'd always end up with a good code design (because that depends more on you and your knowledge of software design), but I believe there is a good chance you will.
TDD doesn't drive design, but it's an excellent feedback mechanism, which enables you to get rapid feedback on your design ideas.
It also has the beneficial side-effect that it leaves behind a nice regression test suite.
The most basic idea behind this is that when you want to test something, you want to keep the tests simple and focused. This in turn forces you to start with the smallest bits of your problem, driving you towards the Single Responsibility Principle.
Another thing, for larger problems, is that you are forced (because you want your tests to be focused) to use mocks and stubs. This drives your design towards the Dependency Inversion Principle, which in turn makes your code loosely coupled.
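To make that concrete, a tiny Python sketch of my own (the Greeter example is invented): the test wants to control the time, so the clock is injected rather than hard-wired, which is the inversion in miniature.

```python
# A focused test drives the time dependency out of the class.
from datetime import datetime


class Greeter:
    def __init__(self, clock):
        # The clock is injected, not hard-wired to datetime.now,
        # so a focused test can stub it out.
        self._clock = clock

    def greeting(self) -> str:
        return "Good morning" if self._clock().hour < 12 else "Good afternoon"


def test_greeting_with_stubbed_clock():
    morning = Greeter(clock=lambda: datetime(2024, 1, 1, 9, 0))
    assert morning.greeting() == "Good morning"
```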
TDD in general
The goal of Test-Driven Development (TDD) is to create software components that conform precisely to a set of functional specifications. The task of the tests is to validate conformity and to identify deviations from these specifications.
Design Guidance
By writing tests first, you're forced to make up your mind about what your software component should do. Thinking about meaningful test cases first will guide your design, as you're forced to find answers to a set of questions: How should it process inputs? To which components does it interface? What are the outputs I'm expecting? ...
TDD does not replace SW design. It sharpens your design!
In essence, TDD will help you to sharpen your design - but one thing is crucial: you need to find reasonable tests. And trying to replace functional specifications - call it a requirements document - with tests only is rather dangerous. Why?
Unit tests as used by TDD can only guarantee that your components exhibit the expected behavior for the given test data, not for any data. As Dijkstra said in the 1960s: "Testing shows the presence, not the absence of bugs".
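A toy Python illustration of Dijkstra's point (mine, deliberately silly): this implementation is green for the tested data and wrong for almost everything else.

```python
# Passing tests guarantee behavior only for the data they exercise.
def is_leap_year(year: int) -> bool:
    return year in (2000, 2004, 2008)  # "green" for the tested years only


def test_is_leap_year():
    assert is_leap_year(2000)
    assert is_leap_year(2004)
    # Both pass, yet the function is wrong for 2012, 1996, 1900, ...
```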
> Usually when we start implementing a new piece of functionality, we have a rough idea of the design.
That's the core of your question. If you have just a rough idea of the design, you'd better spend more time at the drawing board, asking yourself questions like: What are the individual tasks my software should carry out? How can I split the general task into subtasks? How can I map these subtasks to components? Which data needs to be passed among them?
At that time, you might consider doing TDD. But without thinking about a design or software architecture first, you'll end up with a "spaghetti system" that is hard to understand and hard to maintain later on.
Great observation by Mark Seemann. The principle that pushes me to value TDD is fast feedback (and so fast learning). You can't substitute TDD for good design techniques - learn good design principles and use TDD to provide fast feedback on your work.
I find that when using TDD, most of the deeper thinking about my design happens during the refactor step; the tests then provide a space for that step and a safety net for change, but they don't automatically create good design.
TDD allows me to hack some working code and use the learning I get while doing that to iterate towards a better design. Of course this can be done without TDD too, but the test helps by providing that safety net.

Does TDD preclude designing first?

I've been reading about TDD lately, and it is advocated because it supposedly results in code that is more testable and less coupled (and a bunch of other reasons).
I haven't been able to find much in the way of practical examples, except for a Roman numeral conversion and a number-to-English converter.
In these two examples I observed the red-green-refactor cycle typical of TDD, as well as the application of the rules of TDD. However, this seemed like a big waste of time compared with my usual approach: observe a pattern and implement it in code, then write tests for it afterwards. Or possibly write a stub for the code, write the unit tests, and then write the implementation - which might arguably be TDD - but without this continuous case-by-case refactoring.
TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture. My opinion so far is that the benefits of TDD can be achieved by a proper architectural design, although admittedly not everyone can do this reasonably well.
So here I have two questions:
Am I right in understanding that using TDD pretty much doesn't allow you to design first (see the rules of TDD)?
Is there anything that TDD gives you that you can't get from doing proper design before you start coding?
Well, I was in your shoes some time ago and had the same questions. Since then I have done quite some reading about TDD and decided to mess with it a little.
I can summarize my experience about TDD in these points:
TDD is unit testing done right, ATDD/BDD is TDD done right.
Whether you design beforehand or not is totally up to you. Just make sure you don't do BDUF (big design up front). Believe me, you will end up changing most of it midway, because you can never fully understand the requirements until your hands get dirty.
OTOH, you can do enough design to get you started. Class diagrams, sequence diagrams, domain models, actors and collaborators are perfectly fine as long as you don't get hung up in the design phase trying to figure everything out.
Some people don't do any design at all. They just let the code talk and concentrate on refactoring.
IMHO, balance your approach. Do some design till you get the hang of it, then start testing. When you reach a dead end, go back to the whiteboard.
One more thing, some things can't be solved by TDD like figuring out an algorithm. This is a very interesting post that shows that some things just need to be designed first.
Unit testing is hard when you already have the code. TDD forces you to think from your API user's perspective. This way you can decide early on whether the public interface of your API is usable. If you decide to do unit testing after implementing everything, you will find it tedious, and most probably it will cover only some cases. I know some people who will write only passing test cases just to get the feature done - I mean, who wants to break his own code after all that work?
TDD breaks this mentality. Tests are first class citizens. You aren't allowed to skip tests. You aren't allowed to postpone some tests till the next release because we don't have enough time.
Finally, to answer your question whether there is anything that TDD gives you that you can't get from doing proper design before you start coding: I would say commitment.
As long as you're doing TDD, you are committed to applying good OO principles, so that your code is testable.
To answer your questions:
"Test Driven Development" (TDD) is often referred to as "Test Driven Design", in that this practice will result in a good design of the code. When you have written a failing unit test, you are forced into a test driven design approach, so that you can implement just what is needed to make the test pass i.e. you have to consider the design of the code you are writing to make the test pass.
When using a TDD approach a developer will implement the minimum amount of code required to pass the test. Doing proper design beforehand usually results in waste if the requirements change once the project has started.
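As a rough illustration (my sketch, using the Roman-numeral kata mentioned in the question): each failing test forces only the minimum code to pass, and the general lookup loop emerges during refactoring rather than being designed up front.

```python
# The refactored result after a few red-green cycles of the kata.
def to_roman(n: int) -> str:
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while n >= value:
            result += symbol
            n -= value
    return result


def test_to_roman():
    assert to_roman(1) == "I"    # first red test: returning "I" was enough
    assert to_roman(2) == "II"   # forced repetition
    assert to_roman(4) == "IV"   # forced the subtractive pair
    assert to_roman(9) == "IX"
```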
You say "TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture" - If you are following an Agile approach to your software development, then you do still need to do some up front architectural investigation (e.g. if you were using the Scrum methodology you would create a development "Spike" at the start of a project) to ascertain what the minimum amount of architecture needed to start the project. At this stage you make decisions based on what you know now e.g. if you had to work with a small dataset you'd choose to use a regular DB, if you have a huge DB you might to choose to use a NoSQL big data DB etc.
However, once you have a general idea of the architecture, you should let the design evolve as the project progresses leaving further architectural decisions as late in the process as possible; Invariably as a project progresses the architecture and requirements will change.
Furthermore, this rather popular post on SO will give you even more reasons why TDD is definitely worth the effort.

TDD and UML together

I'm new to the TDD approach, so I'm wondering if anyone experienced with it could enlighten me a little. I would like to get some leads on how to use UML and the TDD methodology together.
I've been used to: design with UML -> generate skeleton classes (and then keep them synchronized) -> implement, and finally test. I must admit that the testing part was the worst one, so I started to look for something else - TDD. I have some general knowledge of what it is about, but before I proceed further, I am interested in knowing how it goes together with software design, especially UML.
So when I first design/create a test, how does UML fit in? Would it be possible to design classes first, from them create skeleton classes, and from them generate unit tests which would be "filled in" before the actual implementation of the UML-pregenerated classes? Would this approach break the whole TDD idea? Or is there another way to keep UML and TDD together?
The TDD cycle is test, code, refactor, (repeat) and then ship. As the TDD name implies, the development process is driven by testing, specifically that means writing tests first before developing or writing code.
The first paragraph is purely definitional ... from how Kent Beck defines TDD ... or how Wikipedians generally understand TDD ... I thought it was necessary to belabor the point with this definition, because I am not certain whether everyone is really discussing the same TDD, or whether others really understand the implications of the [most important part of the] definition of TDD: the implications of writing tests first. In other words, I think the answers to this question should delve a bit deeper into TDD, rather than explaining a bias for or against UML. The lengthy part of my answer relates to my opinion on using UML to support TDD ... UML is just a modelling language, certainly not required to do TDD; it could get in the way if applied inappropriately ... but UML can help with understanding the requirements needed to write tests, with modeling to help refactor where needed, and with collecting artifacts of the process that speed the documentation of shipped code. I would welcome any comments, criticisms or suggestions, but please don't vote my answer up or down because you agree or don't agree with the first paragraph ... the TDD acronym is Test-Driven Development, which means test-first development.
Writing a test first implies that the developer understands the specifications and requirements FIRST ... obviously, any test that is written should fail until the code gets written, but in TDD the test must be written first -- you can't do TDD without being focused on understanding a requirements specification before you write tests, before you write code. There are situations where the requirements do not exist at all; requirements elicitation involves a bit of hacking on a pre-pre-alpha version to "throw mud at the wall and see what sticks" ... that sort of thing should not be confused with development, certainly not test-driven development; it's basically just one form of requirements elicitation for a poorly understood application domain.
UML diagrams are one form of requirements input to TDD. Not the only one, but probably better than written specifications if people who are knowledgeable in creating UML diagrams are available. It is often better to work with visual diagrams for better communication when exploring the problem domain [with users/clients/other systems providers] during pre-implementation requirements modeling sessions ... where simulating performance is necessary for really understanding requirements (e.g. CANbus network interactions), it is often ideal if we can work with a specification language or a CASE tool like Rhapsody or Simulink RT Workshop that can be executable, modular and complete ... UML is not necessarily part of TDD, but it is part of an approach to design synthesis that involves expending more effort understanding what is required first, before writing any code [and then wasting time trying to market that code to someone who cares]; generally, more focus on understanding the system's requirements lays the groundwork for more efficient, productive agile TDD iterations.
Refactoring is about improving design -- cleaning up working, tested code to make it simpler and easier to maintain. You want to tighten it up as much as possible to remove obfuscated tangles where bugs might be hiding or could spawn in future releases -- you don't refactor because a customer requires it; you refactor because it's cheaper to ship clean code than to continue paying the cost of supporting/maintaining a complicated mess. Generally, most of TDD is code-focused; but you could employ UML for looking at a larger scope or taking a step back to look at the problem, e.g. creating a class diagram to help identify [missing] tests or possible refactorings. This is not something you'd need to mandate or want to do on a general basis, but it is useful where appropriate.
The last step, 'ship', is a serious step ... 'ship' is not shorthand for "toss it over the wall and forget it, because good code doesn't need support" or "abandon and hope that there are no more iterations." From a financial or business perspective, shipping is the most important step of TDD, because it is where you get paid. Shipping does involve "shifting gears": it includes systems integration, preparation for support and maintenance, getting ready for the next round of development, etc. The primary use of UML diagrams here will be to communicate [in abstract terms] how the code does what it does ... UML is useful because, hopefully, the diagrams are an artifact of the requirements and development processes; it's not necessary to start from scratch when the code ships ... as a communication tool, UML is appropriate for reducing integration errors in multi-module systems: larger projects that might involve modules written in different languages, or networks of embedded systems where different companies must collaborate on safety-critical systems but need the abstraction to be stingy with or protective of their "proprietary knowledge."
Just as you should avoid using a big hammer in situations where a tiny screwdriver is appropriate, you aren't going to get anywhere by asking all developers to standardize on Emacs as their editor. Sometimes the view is not worth the climb - you don't want to always haul out the UML banner or become known as the guy who is always pushing UML ... particularly not in situations where there is no substitute for writing tests or reading code. If you stick to appropriate situations, you should not be afraid to use UML as a communication language wherever the language helps you.
> So when I first design/create a test, how does UML fit in? Would it be possible to design classes first, from them create skeleton classes, and from them generate unit tests which would be "filled in" before the actual implementation of the UML-pregenerated classes? Would this approach break the whole TDD idea? Or is there another way to keep UML and TDD together?
If you create an entire skeleton class - by which I assume you mean a class with all methods defined but empty - before writing your first test, then I would say you are not doing TDD, and lose the benefits of TDD. As we do TDD - Test-Driven Design - our tests incrementally lead us to the next elements - methods and classes - that our program needs. If you have predetermined, pre-specified in UML, what your classes and methods are, a great deal of your design is already done, and either your subsequent development is constrained to follow it, or you have wasted effort that subsequent work will undo.
There may be ways to use both UML and TDD together for design, but as you have described it, you are doing the design within UML, before TDD ever gets a chance. That won't give you the full benefit of TDD.
> Or is there another way to keep UML and TDD together?
UML and TDD fit wonderfully together:

1. Make the initial design in UML - it doesn't have to be complete, just consistent and self-contained.
2. Create the empty tests. This step could also be automated.
3. All tests will fail at first, as required by TDD (because the code generated from the UML does not contain any implementation).
4. Start writing tests for each class:
   - Start with classes which do not have a great deal of associations if you are confident in your software architecture and UML skills (no, you're not doing waterfall, but sometimes you just know what you're doing - you already know the application domain, or you used expert knowledge at step 1).
   - Start with classes which have a lot of associations ("central classes") if you are NOT confident in your understanding of the application domain - this will make it easier to eliminate bad design decisions as soon as possible, because you will notice them as early as possible.
5. ... The tests are still failing.
6. In parallel with each unit being tested (step 4), write the implementation inside the empty method bodies. Do NOT modify any class, interface or method names or parameter signatures. You are only allowed to add private helper methods, nothing more.
7. If at step 6 (which runs in tandem with step 4) you realize you need to make changes in the design:
   - Go to step 1 and refine the UML, then generate the code again (good UML tools will not overwrite your implementation). Attention: avoid introducing new classes; you want to finish step 13 within a few weeks.
   - Re-run the tests and fix the ones that are now failing but were previously OK.
   - Continue with what you left off at step 6.
8. Go to step 6 if not all class tests pass.
9. Proceed to component, package and subsystem tests.
10. Add deployment to the UML and deploy to the integration environment (http://en.wikipedia.org/wiki/Development_environment_%28software_development_process%29).
11. Proceed to integration tests.
12. Go through the test/QA stage.
13. Go through User Acceptance Testing.
14. Reiterate the previous steps as required (according to your iterative development process) ... months pass ...
15. Deploy version 1.0.0 to production.

Do not try to arrive at too many design decisions at step 1 or in later reiterations of step 1 (refining the design). You want to finish step 13 in the first iteration, after a few weeks.
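A minimal Python sketch of steps 2-3 above, assuming a UML tool that generates empty-bodied skeletons (the Account class is a hypothetical example): the test stays red until step 6 fills in the bodies.

```python
# Skeleton as a code generator might emit it from the class diagram.
class Account:
    def deposit(self, amount_cents: int) -> None:
        raise NotImplementedError  # empty body from code generation

    def balance_cents(self) -> int:
        raise NotImplementedError


def test_deposit_increases_balance():  # written before the bodies exist
    account = Account()
    account.deposit(500)
    assert account.balance_cents() == 500  # red until step 6 implements it
```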
While some people think UML is a design methodology, it is first and foremost a communication tool. Hence the name, Unified Modeling Language. The idea is to have common vocabulary (of shapes) that you can insert in a book and everybody will understand.
TDD on the other hand is a design methodology, the way to construct the system starting from its interface and its consumer, and only then adding the implementation.
Once your design has emerged as a result of applying TDD, you can communicate that design using UML. If you don't need to communicate (like if you're the only developer), you arguably don't need UML.
Some people think of domain analysis (identifying key Nouns, Verbs and Adjectives and building a basic ontological model) as being a part of UML methodology, reflected in the use case & ER diagrams... This may be useful to do before you jump into TDD red/green/refactor cycle. Then again, strictly speaking this is DDD (Domain Driven Design), not UML per se.
If you're designing classes using UML, you're designing what you think the code will look like. But what you create using TDD is the reality.
Leave UML until the end, to document the results of the code you created using TDD.
UML is the design part, of course.
I'd only use it on the system I was ending up with. Rendering test classes in UML seems ludicrous to me.
I'd use it to get a general sense of what I was creating. Then I'd start with TDD to implement it.
As far as keeping it in synch, I'd get a UML tool that was capable of import and export. I'd worry more about the code and less about UML. It's useful for capturing your thoughts and documenting later, but not much else. I'd always prefer working, tested code over diagrams any day.

Test-Driven Development "Barriers to Entry"?

I'm in the process of doing a study on Test-Driven Development and one of the discussion points is the "Barrier to Entry" associated with TDD. Does anyone have any experience around this area, on any projects you've worked on that decided not to use TDD because the barrier to entry was too high?
From what I can tell the only barrier to entry is knowledge (and as such experience) of individual developers, with most not being entirely accustomed to the process and it being slightly alien. Financially it seems to be very appealing given most of the market leading tools are open source, freely available, well documented and well supported.
Thoughts/feelings appreciated.
Thanks,
EDIT - does anyone know of any high profile quotes of people advocating TDD? Would love to see how high it goes up the chain. Cheers.
Some barriers include:
An existing code base which doesn't lend itself to unit testing.
A problem domain that is hard to unit test meaningfully, such as GUI work or integrations with third party systems.
A perception of integration problems over unit problems (in other words, if it doesn't work end to end it doesn't do anything, so what is the point of testing the unit).
A mindset that wants to design ahead of time and have a clear system design rather than have tests drive the design.
A political culture where design is done by a different person/group than development, and that design is not unit-test friendly.
An inability to get over the fact that TDD is not about testing for conformance (arguments like "the one who writes the tests shouldn't be the one who codes it, they will be too lenient on themselves" and such variants).
It isn't the way they have coded until now, so the shift is harder.
Sometimes a certain test can be hard to set up, so the method will get abandoned because it "feels" slower.
Design requirements that don't lend themselves to evolving design well or at all (think nuclear plant control software, or other systems where actual lives depend on their functioning correctly).
If not everyone runs the tests before checking in code, tests start to break often for the wrong reasons (that is, the intended behavior of the code changed but the test didn't keep up, so the test is wrong, not the code), and so they can be perceived as a drag.
In terms of barriers to entry: because you are explicitly writing tests that must pass before code is considered complete, the lead time in the dev cycle to get functional code is longer. When using TDD you're effectively guaranteeing a certain level of quality in the code (whatever level of quality you choose to test against), and that is generally more than enough compensation for the longer lead time - but strictly speaking, there IS a greater lead time to getting functional code using TDD.
Effectively, if you have coders that write bug-free code, TDD will be a drag on your development cycle. The value of TDD, of course, is that there aren't any coders who can always write bug-free code, and so the cost of fixing bugs has to be factored in somewhere; in TDD, the cost of the test infrastructure is front-loaded.
Note that this is not in any way a negative thing about TDD; I'm just saying, that front-loading COULD be considered to be a "barrier to entry". Personally, as a coder, I would say that the Return on Investment is more than worth the effort, and I think most experienced dev managers would as well.
Team and/or management buy-in is the biggest obstacle in some companies. If you're the lone developer trying to use TDD and you can't get others on the project interested, it can be very frustrating.
Of course that's not a financial barrier at all. The biggest perceived financial barrier is probably time. If you have a large code base that you need to write unit tests for, it can seem quite daunting. Your manager (or someone above them) will question why you want to spend time writing code that will not add features/functionality to the code. Many people don't realize that writing the tests up front (as you do in TDD) can actually save you time, both immediately and in the long run when you're maintaining that code.
I think one major barrier is how it requires you to change the way you think.
Before I tried TDD, I would create a class, say Employee, then stub in things like FirstName, LastName, Email, etc. Then I would write some logic and forget that I had missed a few fields, or something else - and before I knew it I had a pretty complex class without knowing whether those fields were ever necessary.
Also, it's a complete change from how we are used to writing software. We are used to writing software as we receive features from the guys who sign our checks. We are not used to writing code which doesn't compile, making it compile, then making it work to make our tests pass.
The first time you do this, you feel a bit.. well silly and stupid. Why am I making my code intentionally fail? It seems illogical to the "make it work" philosophy we've all been taught for so long.
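To illustrate that mindset shift, a small sketch of mine (not a prescription): each field of the Employee class is pulled into existence by a test that needs it, so nothing speculative accumulates.

```python
# Fields exist only because a test demanded them.
class Employee:
    def __init__(self, first_name: str, last_name: str):
        # Only what tests have required so far; Email doesn't exist
        # yet because nothing needs it.
        self.first_name = first_name
        self.last_name = last_name

    def full_name(self) -> str:
        return f"{self.first_name} {self.last_name}"


def test_full_name():
    assert Employee("Ada", "Lovelace").full_name() == "Ada Lovelace"
```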
A few reasons why it has failed so far where I work:
Most of the projects we work on are older apps. Not pre-.NET, but .NET 2.0 and in some cases .NET 1.0.
Some of these projects are not well factored, either because the technology wasn't there in 1.0, or because they were built quickly because someone needed something NOW.
As Jon pointed out, some things are still a PIA (pain-in-the-***) to unit test: UI, database, etc.
Expensive tools. If you are only allowed to use Microsoft tools, it's a high price tag to do this the "right way". We use ReSharper, so it really isn't a problem for us.
Time. I'm in a team of three guys supporting a department of 30 people. We are considered overhead, and much of our development consists of interfacing systems together.
Yes, the main barrier to entry is in your head or in the heads of other programmers.
In the beginning you don't know what to write in your tests.
The trick is to think about how your code will be used instead of focusing on how you're going to write it. Easier said than done...
When you start to "get it", it's a bit hard to know where to stop writing tests.
You have to remember that tests prove nothing, so you just can't write tests to cover all cases - you have to select the most useful ones... and that's already a lot!
I've certainly seen plenty of resistance. The barriers I've encountered are:
Unit testing user interfaces (web or thick client) is tricky. I know there are lots of attempts to solve the problem, but I don't think any of them have made it really simple - because it's a naturally hard problem.
At the other end, although there are various ways of making it easier to test the code involved with the database, it's still tricky and time-consuming.
While good tests definitely speed up development overall, testing is a skill - and while you suck at it, unit testing may well be more trouble than it's worth, which means you never build up the skill...
Managers often see it as an optional extra to development - a nice to have rather than critical. This means it's the first thing to go when the project inevitably has a resource squeeze.
I wrote a long-ish article about this a few weeks back, "Why I write tests first".
I think the biggest barrier is building the discipline to start with tests first, but I don't believe that TDD (or any practice, for that matter) should be approached as an always, absolutely, 100%-of-the-time solution.
TDD is a tool in each developer's arsenal. I tend to think it works well for me most of the time. A developer who isn't accustomed to writing tests (first or otherwise) may find it difficult to get anything done if TDD is forced on them, because they can't think in terms of writing the test first.
I consider myself an experienced test-writer, but I can't always think in terms of tests. Some problems don't lend themselves well to it, or at least my head doesn't get wrapped around them some days. And some types of code (such as UI and client-side code) don't lend themselves well to always writing tests first.
If you have a team of developers that do not write tests as a matter of habit, I'd push that first. I have no problem requiring that all new code have accompanying unit tests where possible/practical. Once testing is a discipline, converting developers to TDD individually or as a team is much easier.
One non-obvious barrier (non-obvious to me, at least) is the build infrastructure. If developers don't have control over the build process, or if the infrastructure is too baroque to be manageable, then integrating tests into the build process is going to be shunted to the side in the name of "efficiency". (Of course, in these situations it's the build infrastructure that should be shunted aside in the name of efficiency.)

Is test-driven development a normal approach in game development?

I am just curious, since all the TDD examples I have seen are web programming related. And if it's not a normal approach, why not?
TDD has become a favored approach by software developers who are serious about their profession. [IEEE:TDD] The benefits of the approach are significant, and the costs are low by comparison. [The Three Laws of TDD]
There are no software domains for which TDD is inappropriate, or ineffective. However, there are domains in which it is challenging. Gaming happens to be one of these.
Actually, the challenge is not so much gaming as it is UI. The reason UI is a challenge is that you often don't know what you want the UI to look like until you've seen it. UI is one of those things that you have to fiddle with. Getting it right is a deeply iterative process that is full of twists and turns and dead ends and back alleys. Writing tests first for UI is likely to be both difficult and wasteful.
Now before everybody roars off and says: "Uncle Bob says: 'Don't do TDD for UI'" let me say this. The fact that it's hard to do pure TDD for UI does not mean you can't do pure TDD for almost everything else. Much of gaming is about algorithm, and you can use TDD with those algorithms to your heart's delight. It's true, especially in gaming, that some of those algorithms are the kind of code you have to fiddle with, just like UI, and so are probably not amenable to being tested first. But there is a lot of other algorithmic code that can and should be written test first.
The trick here is to follow the single responsibility principle (SRP) and separate those kinds of code that you have to fiddle with, from those kinds that are deterministic. Don't put easy-to-test algorithms in with your UI. Don't mix your speculative code with your non-speculative code. Keep the things that change for reason A separate from the things that change for reason B.
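A small Python sketch of that separation (the damage formula is invented for illustration): the deterministic algorithm lives apart from the rendering/fiddling code and is trivially test-first.

```python
# Deterministic game logic, kept away from UI/rendering code.
def damage(attack: int, defense: int, critical: bool) -> int:
    base = max(attack - defense, 1)   # always chip at least 1 point
    return base * 2 if critical else base


def test_damage():
    assert damage(10, 4, critical=False) == 6
    assert damage(10, 4, critical=True) == 12
    assert damage(3, 9, critical=False) == 1  # floored at 1
```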
Also, keep this in mind: the fact that some code is hard to test first does not mean that it is hard to test second. Once you have fiddled and tweaked and gotten the code to work just the way you like, then you can write the tests that demonstrate that the code works the way you think. (You'll be surprised at how many times you find bugs while doing this.)
The problem with writing tests "after the fact" is that often the code is so coupled that it is hard to write the kinds of surgical tests that are most helpful. So if you are writing the kind of code that is hard to test first, you should take care to follow the dependency inversion principle (DIP), and the open/closed principle (OCP) in order to keep the code decoupled enough to test after the fact.
The simple answer is "no", TDD is not a normal approach in game development. Some people will point at Highmoon and Neil Llopis as counter-examples, but it's a big industry and they are the only people I know of who have fully embraced TDD. I'm sure there are others, but they are the only ones I know of (and I've been in the industry for 5 years).
I think a lot of us have dabbled in unit testing at some point, but for one reason or another it hasn't taken hold. Speaking from personal experience it is hard for a games studio to switch to TDD. Usually a codebase is kept from project to project, and applying TDD to a large existing codebase is both tedious and largely thankless. I'm sure that eventually it would prove fruitful, but getting games coders to buy into it is difficult.
I have had some success in writing unit tests for low-level game engine code, because this code tends to have very few dependencies and is easily encapsulated. This has always been testing after the fact though, not TDD. The higher-level game code is usually harder to write tests for because it has far more dependencies and is often associated with complex data and state. Taking AI as an example: testing AI requires some kind of context, meaning a navigation mesh and other objects in the world. Setting up that kind of test in isolation can be non-trivial, especially if the systems involved weren't designed for it.
What is more common in game development, and I've had more personal success with, is smoke testing. You'll often see smoke testing used in conjunction with continuous integration to provide various kinds of feedback on the behaviour of the code. Smoke testing is easier because it can be done by just feeding data into the game and reading back information, without having to compartmentalize your code into tiny testable pieces. Taking AI as the example again, you can tell the game to load up a level and provide a script that loads an AI agent and gives it commands. Then you simply determine if the agent performs those commands. This is a smoke test rather than a unit test because you are running the game as a whole and not testing the AI system in isolation.
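A sketch of what such a smoke test might look like, against an imagined scripting API (every name here is hypothetical; real engines differ): the game runs as a whole and the script reads back information, rather than testing the AI system in isolation.

```python
# Smoke test via a hypothetical game-scripting API.
def smoke_test_ai_move(game):
    game.load_level("test_arena")
    agent = game.spawn_agent("grunt", at=(0, 0))
    agent.command("move_to", (10, 0))
    game.run(seconds=5)  # run the whole game, not an isolated unit
    # Read back information instead of asserting on internal state.
    assert agent.position() == (10, 0), "agent failed to reach goal"
```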
In my opinion it is possible to get decent test coverage by unit testing the low-level code while smoke testing the high level behaviours. I think (hope) that other studios are also taking a similar approach.
If my opinion of TDD sounds somewhat ambiguous that's because it is. I'm still somewhat on the fence about it. While I see some benefits (regression testing, emphasis on design before code), applying it and enforcing it while working with a pre-existing codebase seems like a recipe for headaches.
Games from Within has an article discussing their use of unit testing, the limitations of unit testing with regards to games in particular, and an automated functional testing server that they set up to help with this.
If you are referring to the practice of writing and maintaining unit tests for every bit of code, I'd venture a guess and state that this is not in widespread use in the gaming industry. There are many reasons for this, but I can think of 3 obvious ones:
Cultural. Programmers are conservative, game programmers doubly so.
Practical. TDD does not fit very well to the problem domain (too many moving parts).
Crunchological. There's never enough time.
The TDD paradigm works best in application domains which are not very stateful, or at least where the moving parts are not all moving at the same time, to put it colloquially.
TDD is applicable to parts of the game development process (foundation libraries and such) but "testing" in this line of work usually means running automated fly-through, random key testing, timing io loads, tracking fps spikes, making sure the player can't wriggle his way into causing visual instabilities, stuff like that. The automaton is also very often a humanoid.
TDD can be a useful tool, but its status as a silver bullet that must-be-ubiquitous-when-making-a-system is rather questionable. Development should not be driven by tests, but by reason. RDD is a crappy acronym though - it won't catch on. ;)
Probably the main reason is that TDD is preferred by those with languages more conducive to it. But apart from that, games themselves are a poor match for the paradigm anyway.
Generally speaking (and yes, I do mean generally speaking, so please don't swamp me with counterexamples), test-driven design works best for event-driven systems and not so well for simulation-style systems. You can still use tests on your low-level components in games, whether test-driven or just plain unit testing, but for more higher level tasks there is rarely any sort of discrete event that you can simulate with deterministic results.
For example, a web application typically has very distinct inputs (an HTTP request), changes a very small amount of state (for example, records in the database), and generates a largely deterministic output (for example, HTML page). These can be easily checked for validity, and since generating the input is simple it's trivial to create tests.
However, with games the input may be hard to simulate (especially if it needs to occur at a certain point... think of getting past loading screens, menu screens, etc.), the amount of state you change may be large (for example, if you have a physics system or complex reactive AI), and the output is rarely deterministic (random number use is the main culprit here, though floating-point precision loss is another, as are hardware specifications, available CPU time, and the performance of background threads).
To do TDD you need to know exactly what you expect to see in a certain event and to have an accurate way of measuring it, and both of these are difficult problems with simulations that avoid discrete events, deliberately include random factors, act differently on different machines, and have analogue outputs such as graphics and audio.
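One common mitigation for the randomness culprit (my sketch, not from the answer): inject a seeded random source, so a piece of simulation logic becomes reproducible enough to assert on.

```python
# Injecting a seeded RNG makes otherwise-random logic deterministic.
import random


def loot_drop(rng: random.Random) -> str:
    return "rare" if rng.random() < 0.1 else "common"


def test_loot_drop_is_reproducible():
    a = random.Random(42)  # same seed, same sequence, same outcomes
    b = random.Random(42)
    assert [loot_drop(a) for _ in range(100)] == [loot_drop(b) for _ in range(100)]
```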
Additionally, there's one massive practical issue which is process startup time. Many of the things you will want to test require the loading of large quantities of data, and if you mock up the data you're not truly testing the algorithm. With this in mind it quickly becomes impractical to have any sort of test scaffolding that just performs individual tasks. You can run tests against a web server without having to take the webserver down each time - that's rarely possible with games unless you do the testing from an embedded scripting language (which is reasonable, and does indeed take place in the industry).
For example, you want to add volumetric shadow rendering to your in-game buildings. So you'd need to write a test that starts up all the necessary subsystems (for example, renderer, game, resource loaders), load in buildings (incl. mesh, materials/textures), load in a camera, position that camera to point at the building, enable the shadows, render a scene, and then somehow decide whether the shadows actually appear in the frame buffer. It's less than practical. In reality you'd have all this scaffolding already there in the form of your game, and you'd just fire it up to conduct a visual test in addition to any assertions within the code itself.
Most game developers aren't exactly with it in terms of modern development practices. Thankfully.
But a test-driven development model emphasizes concentrating on how something would be used first, then fleshing out what it does. That in general is good to do since it forces you to concentrate on how a particular feature will actually fit into whatever you're doing (say, a game).
So good game developers do this naturally. Just not explicitly.
@Rune: Once again, please emphasise the 'D' rather than the 'T'. At a unit level, the tests are a thinking tool to help you understand what you want and to drive the design of the code. Certainly at the unit level, I find I end up with cleaner, more robust code. The better the quality of the pieces I put into the system, the better they fit together with fewer (but not no) bugs.
That's not the same thing at all as the sort of serious testing that games need.
TDD isn't really a 'normal' approach anywhere yet as it's still relatively new and not universally understood or accepted yet. That isn't to say that some shops don't work that way now but I'm still surprised to hear anyone using it at all at this point.
