Have any of you applied TDD in VHDL?

I have applied TDD in software development, and it worked really well. Now I'm getting into FPGA design with VHDL, and I'm wondering how to apply the TDD methodology to it.
Have any of you used TDD on FPGA designs? If so, how did you go about it? Do you know of any articles or materials to learn from?
Thanks!

I have never specifically applied TDD, but I always have testbenches for the design components in VHDL projects.
Although it is not entirely within the philosophy of TDD, a lot of hardware design ends up resembling parts of its description. In fact, I would argue that hardware is often much more suitable for fine-grained testing, because components usually have clearly defined inputs and outputs, whereas software units often depend on many other classes before their functionality makes sense.
The resemblance is strongest where all sub-blocks are properly verified, especially if you consider the blocks to be 'features'. Even more so when using stimulus files for testbenches on small arithmetic blocks: it is normal to start with the common and corner cases you can think of, and then add more as you find problems in your higher-level tests. You end up working iteratively at a fairly low level.
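As a hedged illustration of that iterative, stimulus-file-driven workflow, here is a minimal Python sketch that generates stimuli for a hypothetical 16-bit adder testbench; the file name, format and adder width are my assumptions, not a standard:

```python
# Sketch: generate a stimulus file for a 16-bit adder testbench.
# Start with common and corner cases, append more as bugs turn up.
cases = [
    (0x0000, 0x0000), (0x0001, 0x0001), (0x0002, 0x0003),  # common cases
    (0x7FFF, 0x0001),                                      # sign-bit boundary
    (0xFFFF, 0x0001), (0xFFFF, 0xFFFF),                    # wrap-around cases
]
with open("adder_stimuli.txt", "w") as f:
    for a, b in cases:
        expected = (a + b) & 0xFFFF  # reference model: 16-bit wrap-around
        f.write(f"{a:04x} {b:04x} {expected:04x}\n")  # a, b, expected sum
```

The testbench then reads each line, drives a and b, and asserts that the DUT output matches the expected column.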
Some concepts of TDD don't map well to hardware design, such as the idea of 'user stories'. Unless you have specifically designed your project to be extensible, you are unlikely to see any top-level tests passing until all of your components are in place.
I don't have any specific links or materials for you. Usually there are many custom scripts involved in running all the tests. Apart from testbenches, this often involves checks with assertions, equivalence checks and various forms of formal verification, though doing that in great detail may be more specific to ASIC than to FPGA design. The VUnit mentioned in the comment by Brian Drummond looks very interesting.
Note that hardware engineers tend to call checks on VHDL code 'verification', whereas 'testing' is what you do on a physical product. I had one professor who was very passionate about that distinction.

Related

How does TDD drive the design?

I understand that TDD has many advantages (some of them are listed below). However, I am not sure how it drives the design.
Serves as documentation
Writing tests before the actual code helps achieve maximum test coverage
Helps determine input value boundaries
Usually when we start implementing a new piece of functionality, we have a rough idea of the design, and we start with a TDD implementation of a class that is used by other classes as per that design. This understanding of mine seems to be in conflict with the statement "TDD drives the design".
Please help me understand this with an example.
Most people think that Test-Driven Development is a tool for writing code with fewer bugs. In reality, that is a by-product of TDD. TDD is more of a tool for designing code.
Overall, TDD helps you develop quality code in the following ways:
It makes you think about your code design and requirements at every stage, thereby ensuring that you are actually implementing what is required.
You are forced to write testable code, thereby ensuring that your code has loose coupling and high cohesion.
If your code is getting difficult to test, that usually signifies an issue with your design (your code is too coupled or not isolated enough).
With that said, I tend to disagree with people who think that blindly following TDD always leads to a good code design (that depends more on you and your knowledge of software design), but I believe there is a good chance it will.
TDD doesn't drive design, but it's an excellent feedback mechanism, which enables you to get rapid feedback on your design ideas.
It also has the beneficial side-effect that it leaves behind a nice regression test suite.
The most basic idea behind this is that when you want to test something, you want to keep the tests simple and focused. This in turn forces you to start with the smallest pieces of your problem, driving you towards the single responsibility principle.
Another thing, for larger problems, is that you are forced (because you want your tests to be focused) to use mocks and stubs. This drives your design towards the dependency inversion principle, which in turn makes your code loosely coupled.
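A minimal sketch of that mechanism, with hypothetical names: wanting a focused test for a ReportService pushes the database dependency behind an injected repository, which is dependency inversion in miniature:

```python
# Hypothetical example: the service depends on an abstract repository,
# so the test can inject a stub instead of a real database.
import unittest
from unittest.mock import Mock

class ReportService:
    def __init__(self, repository):
        # The dependency is passed in (dependency inversion); a focused
        # unit test is what pushes the design this way.
        self.repository = repository

    def total_sales(self, year):
        return sum(sale.amount for sale in self.repository.sales_for(year))

class TotalSalesTest(unittest.TestCase):
    def test_sums_all_sales_for_the_year(self):
        repo = Mock()
        repo.sales_for.return_value = [Mock(amount=10), Mock(amount=32)]
        self.assertEqual(42, ReportService(repo).total_sales(2024))

if __name__ == "__main__":
    unittest.main()
```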
TDD in general
The goal of Test-Driven Development (TDD) is to create software components that conform precisely to a set of functional specifications. The task of the tests is to validate conformity and to identify deviations from those specifications.
Design Guidance
By writing tests first, you're forced to make up your mind about what your software component should do. Thinking about meaningful test cases first will guide your design, as you're forced to find answers to a set of questions: How should it process inputs? To which components does it interface? What are the outputs I'm expecting? ...
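As a small illustration (the function here is hypothetical), a test written first pins down the component's contract before any implementation exists:

```python
# The test below was "written first": it answers the questions about
# inputs and expected outputs before the implementation is fleshed out.
import unittest

def is_leap_year(year):
    # Implementation written only after the test defined the behavior.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_century_years_are_leap_only_when_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))
        self.assertFalse(is_leap_year(1900))
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

if __name__ == "__main__":
    unittest.main()
```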
TDD does not replace software design. It sharpens your design!
In essence, TDD will help you sharpen your design - but one thing is crucial: you need to find reasonable tests. And trying to replace functional specifications - call it a requirements document - with tests alone is rather dangerous. Why?
Unit tests as used by TDD can only guarantee that your components exhibit the expected behavior for the given test data, not for all possible data. As Dijkstra said in the 1960s: "Testing shows the presence, not the absence of bugs".
> Usually when we start implementing a new piece of functionality, we have a rough idea of the design.
That's the core of your question. If you only have a rough idea of the design, you had better spend more time at the drawing board, asking yourself questions like: What are the individual tasks my software should carry out? How can I split the general task into subtasks? How can I map these subtasks to components? Which data needs to be passed among them?
At that time, you might consider doing TDD. But without thinking about a design or software architecture first, you'll end up with a "spaghetti system" that is hard to understand and hard to maintain later on.
Great observation by Mark Seemann. The principle that pushes me to value TDD is fast feedback (and therefore fast learning). You can't replace good design techniques with just doing TDD - learn good design principles and use TDD to provide fast feedback on your work.
I find that when using TDD, most of the deeper thinking about my design happens during the refactoring step; the tests then provide a space for that step and a safety net for change, but they don't automatically create a good design.
TDD allows me to hack together some working code and use what I learn in doing so to iterate towards a better design. Of course this can be done without TDD too, but the tests help by providing that safety net.

Can TDD Handle Complex Projects without an upfront design?

The idea of TDD is great, but I'm trying to wrap my head around how to implement a complex system if a design is not proposed upfront.
For example, let's say I have multiple services for a payment processing application. I'm not sure I understand how development would or could proceed across multiple developers if there is not a somewhat solid design upfront.
It would be great if someone can provide an example and high level steps to putting together a system in this manner. I can see how TDD can lead to simpler and more robust code; I'm just not sure how it can bring together 1) different developers to a common architectural vision and 2) result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code (e.g. accept different payment methods or pricing models based on a long-term development roadmap).
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
Clearly I'm missing something that TDD gurus have discovered...
IMHO, it depends on the team's composition and appetite for risk.
If the team consists of several experienced and good designers, you need a less formal 'architecture' phase. It could be just a back-of-the-napkin doodle, or a couple of hours at the whiteboard followed by furious coding to prove the idea. If the team is distributed and/or contains many less skilled designers, you will need to put more time and effort (thinking and documenting) into the design phase before everyone goes off on their own path.
The next thing I can think of is to be risk-first. Continually assess the risks to your project, calculate your exposure and impact, and have mitigation plans. Focus on risky and difficult-to-reverse decisions first. If a decision is easily reversible, spend less time on it.
Skilled designers are able to evolve the architecture in tiny steps; if you have them, you can tone down the rigor of an explicit design phase.
TDD may necessitate some upfront design, but definitely not a big design upfront, because no matter how perfect you think your design is before you start writing code, most of the time it won't survive the reality check TDD forces on it and will blow to pieces halfway through your TDD session (or your code will blow up if you absolutely insist on bending it to your original plan).
The great strength of TDD is precisely that it lets your design emerge and be refined as you write tests and refactor. You should therefore start small and simple, making as few assumptions as possible about the details beforehand.
Practically, what you can do is sketch out a couple of UML diagrams with your pair (or with the whole team, if you really need a consensus on the big picture of what you're going to write) and use those diagrams as a starting point for your tests. But get rid of those models as soon as you've written your first few tests, because from then on they do more harm than good, misleading you into sticking to a vision that is no longer true.
First of all, I don't claim to be a TDD guru, but here are some thoughts based on the information in your question.
My thoughts on #1 above: as you have mentioned, you need an architectural design up front - I can't think of a methodology that can succeed without one. The architecture provides your team with cohesion and vision. You may want to do just enough design up front, but how much depends on how agile you want to be. The team of developers needs to know how they are going to put the various components of the system together before they start coding; otherwise it will just be one big hackfest.
> It would be great if someone can provide an example and high level steps to putting together a system in this manner
If you are putting together a system composed of services, then I would start by defining the service interfaces and any messages they will exchange. This defines how the various components of your system will interact (this would be an example of your up-front design). Once you have this, you can allocate development resources to build the services in parallel; a sketch of such a contract follows.
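As a hedged sketch (all names here are hypothetical, invented for the payment example in the question), the up-front agreement for one service might be no more than an interface and its messages:

```python
# Hypothetical up-front contract for one service: agree on the interface
# and messages first, so teams can build implementations in parallel.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChargeRequest:
    account_id: str
    amount_cents: int
    currency: str

@dataclass
class ChargeResult:
    succeeded: bool
    transaction_id: Optional[str] = None
    error: Optional[str] = None

class PaymentService(ABC):
    @abstractmethod
    def charge(self, request: ChargeRequest) -> ChargeResult:
        """Attempt to charge the account; a decline is a result, not an error."""
```

Each team can then write tests against PaymentService and develop their own implementation behind it.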
As for #2: one of the advantages of TDD is that it gives you a "safety net" during refactoring. Since your code is covered by unit tests, when you come to change some code you will know pretty quickly if you have broken something, especially if you are running continuous integration (which most people do with a TDD approach). In that case you either need to adapt your unit tests to cover the new behavior, or fix your code so that your unit tests pass.
> result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code
This comes down to your design, e.g. using a strategy pattern to allow you to abstract and replace behavior (sketched below). TDD does not demand that your design suffer. It just asks that you only do what is required to satisfy some functional requirement. If the requirement is that the system must be able to adapt to new payment methods or pricing models, then that becomes a point of your design. TDD, done correctly, will make sure that you are satisfying your requirements and that your design is on the right lines.
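A minimal sketch of that strategy pattern, with hypothetical names: new pricing models become new strategy classes, so callers never need large-scale refactoring:

```python
# Sketch: pricing behavior is abstracted behind a strategy interface,
# so new pricing models are added without touching Checkout.
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    @abstractmethod
    def price(self, base_amount: float) -> float: ...

class FlatPricing(PricingStrategy):
    def price(self, base_amount: float) -> float:
        return base_amount

class DiscountPricing(PricingStrategy):
    def __init__(self, rate: float):
        self.rate = rate

    def price(self, base_amount: float) -> float:
        return base_amount * (1.0 - self.rate)

class Checkout:
    def __init__(self, pricing: PricingStrategy):
        self.pricing = pricing  # strategies swap in without changing Checkout

    def total(self, base_amount: float) -> float:
        return self.pricing.price(base_amount)

assert Checkout(FlatPricing()).total(100.0) == 100.0
assert Checkout(DiscountPricing(0.5)).total(100.0) == 50.0
```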
> I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company
One of the problems of software design is that it is a wicked problem, which means that refactoring is pretty much inevitable. Yes, refactoring is risky in production systems, but you can mitigate that risk, and TDD will help you. You also need a supple design and a system with low coupling. TDD helps reduce your coupling, since you are designing your code to be testable. One of the by-products of writing testable code is that you reduce your dependencies on other parts of the system; you tend to code to interfaces, which allows you to replace an implementation with a mock or a stub. A good example is replacing a call to a database with a mock/stub that returns known data - you don't want to hit a database in your unit tests. I should mention here that a good mocking framework is invaluable with a TDD approach (Rhino Mocks and Moq are both open source).
I am sure there are some real TDD gurus out there who can give you some pearls of wisdom... Personally, I wouldn't consider starting a new project without a TDD approach.

TDD and Responsibility Driven Design - how to reconcile test-first with the design process?

The 'London style' of TDD suggests focusing on object roles, responsibilities and collaborations, using mock objects to drive out layers of large-scale component design in a top-down process (as opposed to the 'classic style', which focuses on algorithmic refinement of methods).
I understand the basic approach, but I am wondering how you reconcile this form of TDD (which still emphasizes writing a test first) with the more natural way in which we design components, i.e. sketching logical designs of related classes and defining their roles and responsibilities on paper or in our heads well before we start writing any code.
Appreciate some real-world advice.
I don't see a need to reconcile TDD (in any form) with "natural" component design. Testing implies that you have an idea of what you are testing; at the very least, you have an interface at some level of precision. Starting out with a coarse-grained component definition seems very "natural" to me. :)
The 'London style' is basically good OOP combined with outside-in (acceptance-test-driven) TDD (I am assuming you mean an approach similar to the GOOS book).
That is the way it "should" be done, although the "classical" school could have been more explicit about it. I'm not sure there is such a sharp classification among practitioners (although there are people who are faster with TDD and people who struggle with it).
State-based and interaction-based testing are styles, not one-size-fits-all approaches. You need to choose the style that suits the task at hand.
The problem with doing "TDD in a corner" is that you may end up with well-tested code that works but still does the wrong thing from the customer's perspective.
Evolution has now landed us at an ATDD cycle: TDD done at the customer/acceptance level, which drives an inner TDD cycle in which developers make the acceptance test pass.
On the "reconcilation":
I've found 'listening to the tests' quite enlightening once you have tuned your ears.. let the tests drive the design.
This is also aligned to the BDD folks. I recommend picking up the RSpec book which has a walkthrough in the first section of the book.

Experiences with Test Driven Development (TDD) for logic (chip) design in Verilog or VHDL

I have looked on the web, and the discussions/examples appear to be for traditional software development. Since Verilog and VHDL (used for chip design, e.g. FPGAs and ASICs) are similar to the software development languages C and C++, it would appear to make sense. However, they have some differences: they are fundamentally parallel, and fully testing them requires hardware.
What experiences, good and bad, have you had? Any links you can suggest on this specific application?
Edits/clarifications:
10/28/09: I'm particularly asking about TDD. I'm familiar with doing test benches, including self-checking ones. I'm also aware that SystemVerilog has some particular features for test benches.
10/28/09: The questions implied include 1) writing a test for every piece of functionality, never using waveforms for simulation, and 2) writing tests/testbenches first.
11/29/09: In Empirical Studies Show Test Driven Development Improves Quality they report for (software) TDD "The pre-release defect density of the four products, measured as defects per thousand lines of code, decreased between 40% and 90% relative to the projects that did not use TDD. The teams' management reported subjectively a 15–35% increase in initial development time for the teams using TDD, though the teams agreed that this was offset by reduced maintenance costs." The reduced bugs reduces risk for tape-out, at the expense of moderate schedule impact. This also has some data.
11/29/09: I'm mainly doing control and datapath code, not DSP code. For DSP, the typical solution involves a Matlab bit-accurate simulation.
03/02/10: The advantage of TDD is that you make sure the test fails first. I suppose this could be done with assertions too.
I write code for FPGAs, not ASICs... but TDD is still my preferred approach. I like to have a full suite of tests for all the functional code I write, and I try (not always successfully) to write the test code first. Staring at waveforms always happens at some point when you're debugging, but it's not a good way of validating your code (IMHO).
Given the difficulty of performing proper tests on the real hardware (stimulating corner cases is particularly hard) and the fact that a VHDL compile takes seconds (versus a "to hardware" compile that takes many minutes or even hours), I don't see how anyone can operate any other way!
I also build assertions into the RTL as I write it, to catch things I know should never happen. Apparently this is seen as a bit "weird", as there's a perception that verification engineers write assertions and RTL designers don't. But mostly I'm my own verification engineer, so maybe that's why!
I use VUnit for test driven development with VHDL.
VUnit is a Python library that invokes the VHDL compiler and simulator and reads the results of the simulation. It also provides several nice VHDL libraries that make it a lot easier to write good testbenches, such as a communication library, a logging library and a checking library.
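For a flavour of how it drives the tools, here is a minimal VUnit run script following the API from the VUnit documentation; the library name and file paths are placeholders for your own project layout:

```python
# run.py - minimal VUnit run script (library name and paths are placeholders)
from vunit import VUnit

vu = VUnit.from_argv()                 # parse command-line args (test filters etc.)
lib = vu.add_library("lib")            # create a VHDL library
lib.add_source_files("src/*.vhd")      # design files
lib.add_source_files("test/tb_*.vhd")  # testbenches using vunit_lib
vu.main()                              # compile, simulate and report results
```

Running `python run.py` then compiles everything, runs each test case in the simulator, and prints a pass/fail summary.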
There are many possibilities, since it is invoked from Python. It is possible both to generate test data and to check the test's output data in Python. I saw an example the other day where they used Octave (an open-source Matlab clone) for plotting test results.
VUnit seems very active and I have several times been able to actually ask questions directly to the developers and gotten help quite quickly.
A downside is that it is harder to debug compilation errors, since there are so many function/procedure variants with the same name in the libraries. Also, some things are done behind the scenes by preprocessing the code, which means that some errors may show up in unexpected places.
I don't know a lot about hardware/chip design, but I am deeply into TDD, so I can at least discuss suitability of the process with you.
The question I'd call most pertinent is: How quickly can your tests give you feedback on a design? Related to that: How quickly can you add new tests? And how well do your tools support refactoring (changing structure without changing behavior) of your design?
The TDD process depends a great deal on the "softness" of software - good automated unit tests run in seconds (minutes at the outside), and guide short bursts of focused construction and refactoring. Do your tools support this kind of workflow - rapidly cycling between writing and running tests and building the system under test in short iterations?
The SystemVerilog extensions to the IEEE Verilog standard include a variety of constructs that facilitate creating thorough test suites for verifying complex digital logic designs. SystemVerilog is one of the Hardware Verification Languages (HVLs) used to verify ASIC designs via simulation (as opposed to emulation or using FPGAs).
Significant benefits over a traditional Hardware Design Language (Verilog) are:
constrained randomization
assertions
automatic collection of functional and assertion coverage data
The key is to have access to simulation software which supports this recent (2005) standard. Not all simulators fully support the more advanced features.
In addition to the IEEE standard, there is an open-source SystemVerilog library of verification components available from VMM Central (http://www.vmmcentral.com). It provides a reasonable framework for creating a test environment.
SystemVerilog is not the only HVL, and VMM is not the only library. But I would recommend both, if you have access to the appropriate tools. I have found this to be an effective methodology for finding design bugs before they become silicon.
With regard to refactoring tools for hardware languages, I'd like to point you to our tool Sigasi HDT. Sigasi provides an IDE with a built-in VHDL analyzer and VHDL refactorings.
Philippe Faes, Sigasi
ANVIL - ANother Verilog Interaction Layer - talks about this a bit. I haven't tried it.
I have never actively tried TDD on an RTL design, but I have given it some thought.
What I think would be interesting is to try this approach in combination with assertions. You would basically first write down, in the form of assertions, what you assume and expect from your module, then write your RTL, and later verify those assertions using formal tools and/or simulation. In contrast to "normal" test cases (where you would probably need to write directed ones), you should get much better coverage, and the assertions/assumptions may be of use later (e.g. at the system level) as well.
However, I wouldn't rely on assertions alone; that can become very hairy.
Maybe you can share your own thoughts on this as well - since you are asking, I guess you have some ideas in mind?
What is TDD for you? Do you mean having all your code exercised by automatic tests at all times, or do you go further and mean that tests are written before the code, and no new code is written unless a test fails?
Whichever approach you prefer, HDL code testing isn't very different from software testing. It has its pluses (much better coverage and depth of testing) and minuses (it is difficult to set up and relatively cumbersome compared with software).
I've had very good experience employing Python and generic HDL transactors to implement comprehensive, automatic tests for synthesizable HDL modules. The idea is somewhat similar to what Janick Bergeron presents in his books, but instead of SystemVerilog, Python is used to (1) generate VHDL code from test scenarios written in Python and (2) verify the results written by the monitoring transactors, which observe waveforms from the design during simulation.
There's much more to be written about this technique, but I'm not sure what you want to focus on.
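A heavily hedged sketch of what such a flow can look like: everything here (the file formats, the ghdl command, the testbench name) is my assumption, and the real setup generates VHDL rather than a plain text file:

```python
# Sketch: Python drives a simulated design through file-based transactors.
import subprocess

scenario = [(0x10, 0xDEAD), (0x14, 0xBEEF)]   # (address, data) bus writes
expected = [0xDEAD, 0xBEEF]                   # what the monitor should record

# 1) Generate stimulus for a VHDL bus-master transactor to replay.
with open("stimulus.txt", "w") as f:
    for addr, data in scenario:
        f.write(f"WRITE {addr:08x} {data:08x}\n")

# 2) Run the simulation in batch mode (ghdl is just one example simulator).
subprocess.run(["ghdl", "-r", "tb_top", "--stop-time=1ms"], check=True)

# 3) Check what the monitoring transactor recorded during simulation.
with open("monitor_out.txt") as f:
    observed = [int(line.split()[-1], 16) for line in f]
assert observed == expected, f"mismatch: {observed} != {expected}"
```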

Is test-driven development a normal approach in game development?

I am just curious, since all the TDD examples I have seen are web programming related. And if it's not a normal approach, why not?
TDD has become a favored approach by software developers who are serious about their profession. [IEEE:TDD] The benefits of the approach are significant, and the costs are low by comparison. [The Three Laws of TDD]
There are no software domains for which TDD is inappropriate, or ineffective. However, there are domains in which it is challenging. Gaming happens to be one of these.
Actually, the challenge is not so much gaming as it is UI. The reason UI is a challenge is that you often don't know what you want the UI to look like until you've seen it. UI is one of those things that you have to fiddle with. Getting it right is a deeply iterative process that is full of twists and turns and dead ends and back alleys. Writing tests first for UI is likely to be both difficult and wasteful.
Now before everybody roars off and says: "Uncle Bob says: 'Don't do TDD for UI'" let me say this. The fact that it's hard to do pure TDD for UI does not mean you can't do pure TDD for almost everything else. Much of gaming is about algorithm, and you can use TDD with those algorithms to your heart's delight. It's true, especially in gaming, that some of those algorithms are the kind of code you have to fiddle with, just like UI, and so are probably not amenable to being tested first. But there is a lot of other algorithmic code that can and should be written test first.
The trick here is to follow the single responsibility principle (SRP) and separate those kinds of code that you have to fiddle with, from those kinds that are deterministic. Don't put easy-to-test algorithms in with your UI. Don't mix your speculative code with your non-speculative code. Keep the things that change for reason A separate from the things that change for reason B.
Also, keep this in mind: the fact that some code is hard to test first does not mean that it is hard to test second. Once you have fiddled and tweaked and gotten the code to work just the way you like, you can then write the tests that demonstrate the code works the way you think it does. (You'll be surprised how many times you find bugs while doing this.)
The problem with writing tests "after the fact" is that often the code is so coupled that it is hard to write the kinds of surgical tests that are most helpful. So if you are writing the kind of code that is hard to test first, you should take care to follow the dependency inversion principle (DIP), and the open/closed principle (OCP) in order to keep the code decoupled enough to test after the fact.
The simple answer is "no", TDD is not a normal approach in game development. Some people will point at Highmoon and Neil Llopis as counter-examples, but it's a big industry and they are the only people I know of who have fully embraced TDD. I'm sure there are others, but they are the only ones I know of (and I've been in the industry for 5 years).
I think a lot of us have dabbled in unit testing at some point, but for one reason or another it hasn't taken hold. Speaking from personal experience it is hard for a games studio to switch to TDD. Usually a codebase is kept from project to project, and applying TDD to a large existing codebase is both tedious and largely thankless. I'm sure that eventually it would prove fruitful, but getting games coders to buy into it is difficult.
I have had some success writing unit tests for low-level game engine code, because this code tends to have very few dependencies and is easily encapsulated. This has always been testing after the fact, though, not TDD. Higher-level game code is usually harder to write tests for because it has far more dependencies and is often associated with complex data and state. Taking AI as an example: testing AI requires some kind of context, meaning a navigation mesh and other objects in the world. Setting up that kind of test in isolation can be non-trivial, especially if the systems involved weren't designed for it.
What is more common in game development, and what I've had more personal success with, is smoke testing. You'll often see smoke testing used in conjunction with continuous integration to provide various kinds of feedback on the behaviour of the code. Smoke testing is easier because it can be done by just feeding data into the game and reading back information, without having to compartmentalize your code into tiny testable pieces. Taking AI as the example again: you can tell the game to load a level and provide a script that loads an AI agent and gives it commands, then simply determine whether the agent performs those commands. This is a smoke test rather than a unit test, because you are running the game as a whole rather than testing the AI system in isolation.
In my opinion it is possible to get decent test coverage by unit testing the low-level code while smoke testing the high level behaviours. I think (hope) that other studios are also taking a similar approach.
If my opinion of TDD sounds somewhat ambiguous, that's because it is: I'm still somewhat on the fence about it. While I see some benefits (regression testing, emphasis on design before code), applying and enforcing it while working with a pre-existing codebase seems like a recipe for headaches.
Games from Within has an article discussing their use of unit testing, the limitations of unit testing with regards to games in particular, and an automated functional testing server that they set up to help with this.
If you are referring to the practice of writing and maintaining unit tests for every bit of code, I'd venture a guess that this is not in widespread use in the games industry. There are many reasons for this, but I can think of three obvious ones:
Cultural. Programmers are conservative, game programmers doubly so.
Practical. TDD does not fit the problem domain very well (too many moving parts).
Crunchological. There's never enough time.
The TDD paradigm works best in application domains which are not very stateful, or at least where the moving parts are not all moving at the same time, to put it colloquially.
TDD is applicable to parts of the game development process (foundation libraries and such), but "testing" in this line of work usually means running automated fly-throughs, random key testing, timing I/O loads, tracking FPS spikes, making sure the player can't wriggle his way into causing visual instabilities, and things like that. The automaton is also very often a humanoid.
TDD can be a useful tool, but its status as a silver bullet that must be ubiquitous when building a system is rather questionable. Development should not be driven by tests, but by reason. RDD is a crappy acronym though - it won't catch on. ;)
Probably the main reason is that TDD is preferred by those using languages more conducive to it. But apart from that, games themselves are a poor match for the paradigm anyway.
Generally speaking (and yes, I do mean generally speaking, so please don't swamp me with counterexamples), test-driven design works best for event-driven systems and not so well for simulation-style systems. You can still use tests on your low-level components in games, whether test-driven or plain unit testing, but for higher-level tasks there is rarely any discrete event that you can simulate with deterministic results.
For example, a web application typically has very distinct inputs (an HTTP request), changes a small amount of state (for example, records in the database), and generates largely deterministic output (for example, an HTML page). These can easily be checked for validity, and since generating the input is simple, it's trivial to create tests.
However, with games the input may be hard to simulate (especially if it needs to occur at a certain point... think of getting past loading screens, menu screens, etc.), the amount of state you change may be large (for example, if you have a physics system or complex reactive AI), and the output is rarely deterministic (random number use is the main culprit here, though floating-point precision loss is another, as might be hardware specifications, available CPU time, or the performance of a background thread).
To do TDD you need to know exactly what you expect to see for a given event and have an accurate way of measuring it, and both of these are difficult problems in simulations that avoid discrete events, deliberately include random factors, act differently on different machines, and have analogue outputs such as graphics and audio.
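One partial remedy for the randomness point, sketched here with hypothetical names: inject the random source into the system, so a test can seed it and get reproducible results:

```python
# Sketch: the random source is injected, so tests control the seed.
import random

class DamageModel:
    def __init__(self, rng: random.Random):
        self.rng = rng  # injected rather than using the global RNG

    def roll_damage(self, base: int) -> int:
        return base + self.rng.randint(0, 5)

def test_damage_is_reproducible_with_a_fixed_seed():
    first = DamageModel(random.Random(42)).roll_damage(10)
    second = DamageModel(random.Random(42)).roll_damage(10)
    assert first == second  # same seed, same outcome

def test_damage_stays_within_bounds():
    model = DamageModel(random.Random(7))
    for _ in range(1000):
        assert 10 <= model.roll_damage(10) <= 15

test_damage_is_reproducible_with_a_fixed_seed()
test_damage_stays_within_bounds()
```

This makes a small slice of the simulation deterministic, but it doesn't help with analogue outputs like graphics and audio.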
Additionally, there's one massive practical issue: process startup time. Many of the things you will want to test require loading large quantities of data, and if you mock up the data you're not truly testing the algorithm. With this in mind, it quickly becomes impractical to have any sort of test scaffolding that just performs individual tasks. You can run tests against a web server without having to take the web server down each time; that's rarely possible with games, unless you do the testing from an embedded scripting language (which is reasonable, and does indeed happen in the industry).
For example, say you want to add volumetric shadow rendering to your in-game buildings. You'd need to write a test that starts up all the necessary subsystems (renderer, game, resource loaders), loads in buildings (including meshes, materials and textures), loads in a camera, positions that camera to point at the building, enables the shadows, renders a scene, and then somehow decides whether the shadows actually appear in the frame buffer. It's less than practical. In reality you'd already have all this scaffolding in the form of your game, and you'd just fire it up to conduct a visual test in addition to any assertions within the code itself.
Most game developers aren't exactly with it in terms of modern development practices. Thankfully.
But a test-driven development model emphasizes concentrating on how something would be used first, then fleshing out what it does. That in general is good to do since it forces you to concentrate on how a particular feature will actually fit into whatever you're doing (say, a game).
So good game developers do this naturally. Just not explicitly.
@Rune: once again, please emphasise the 'D' rather than the 'T'. At the unit level, the tests are a thinking tool to help you understand what you want and to drive the design of the code. Certainly at the unit level, I find I end up with cleaner, more robust code. The better the quality of the pieces I put into the system, the better they fit together, with fewer (but not zero) bugs.
That's not the same thing at all as the sort of serious testing that games need.
TDD isn't really a 'normal' approach anywhere yet, as it's still relatively new and not universally understood or accepted. That isn't to say that some shops don't work that way now, but I'm still surprised to hear of anyone using it at all at this point.
