Any math approaches to state management of complex objects? - user-interface

I usually use ASP.NET Web Forms for GUIs - maybe one of the most "stateful" technologies - but the question applies to any technology that has state. Sometimes forms are tricky and complex, with more than 30 elements and more than 3 states per element. The intuitive way of designing such a form usually works for 90% of the cases; the other 10% are usually found by testers or end users :).
The problem, as I see it, is that we have to imagine a lot of scenarios on the same object, which is much harder than reasoning about a sequence of independent operations.
From functional programming courses I know that the best way is to avoid state management altogether and use pure functions, pass values around, and so on - all of which is well formalized. But sometimes we cannot avoid state.
Do you use any mathematical formalisms or approaches for state management of complex objects? Not monads as in Haskell, but something that can be used in more traditional business applications and languages - Java, C#, C++.
It need not be a Turing-complete formalism; something that covers 99% of cases would be great too :).
Sorry if this is just another tumbleweed question :)

Use message-passing as an abstraction. Advantages:
The difficulty with complex state is complex interactions, which are especially hairy in concurrent systems like typical GUIs. Message-passing, by eliminating shared state, stops the complexity of state in one process from being infectious.
Message-passing concurrency has nice foundational models: e.g., the Actor model, CSP, both of which influenced Erlang.
It integrates well with functional programming: look at Erlang again. Peter van Roy's book *Concepts, Techniques, and Models of Computer Programming* is an excellent text that shows the fundamental ingredients of programming languages, such as pure functions and message passing, and how they can be combined. The text is available as a free PDF.
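As a minimal sketch of the idea (in C#, with entirely hypothetical names): the actor below owns its state outright, and the only way other code can affect that state is by sending a message, so the complexity of the state stays contained in one place.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A toy actor: its state (_count) is touched only by the single task
// draining the mailbox, so no locks or shared-state reasoning are needed.
class CounterActor
{
    private readonly BlockingCollection<string> _mailbox = new BlockingCollection<string>();
    private int _count; // never shared directly with other threads

    public CounterActor()
    {
        Task.Run(() =>
        {
            foreach (var message in _mailbox.GetConsumingEnumerable())
            {
                switch (message)
                {
                    case "increment": _count++; break;
                    case "print": Console.WriteLine(_count); break;
                }
            }
        });
    }

    // Other threads communicate only by posting messages.
    public void Send(string message) => _mailbox.Add(message);
}
```

Callers just do `actor.Send("increment")`; they never see the counter itself.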

> It need not be a Turing-complete formalism; something that covers 99% of cases would be great too :).

Sorry, but I'd rather provide an NP-complete solution :)
My quick answer would be: a test-driven approach. But read on for more.
> The problem, as I see it, is that we have to imagine a lot of scenarios on the same object, which is much harder than reasoning about a sequence of independent operations.
In such cases decomposition (not only in the computer-science sense but in the mathematical sense too) is very useful.
You decompose a complex scenario into many simpler ones, which in turn may still be complex by themselves and can be decomposed further.
As a result of this process you should end up with a number of simple functions (tasks), mostly independent of each other.
This is very important, because then you can UNIT TEST those simple scenarios.
Additionally, it is much easier and better to follow a test-first approach, which lets you see the decomposition at the very beginning of the development process.
> Do you use any mathematical formalisms or approaches for state management of complex objects?
To continue what I said: for me the most important thing is to make a good decomposition, so that I can ensure quality and easily reproduce errors in an automated manner.
To give you an abstract example:
You have a complex scenario A. You always need to write at least 3 tests for each scenario: correct input, incorrect input, and corner case(s).
Starting to write the first test (correct input), I realize that the test is becoming too complex.
As a result, I decompose scenario A into the less complex A1, A2, A3. Then I start writing tests for each of them again (I should end up with at least 3*3=9 tests).
I realize that A1 is still too complex to test, so I decompose it again into A1-1 and A1-2. Now I have 4 different scenarios (A1-1, A1-2, A2, A3) and 3*4=12 potential tests. I continue writing the tests.
After I am done, I start the implementation, making all my tests pass. After that you have 12 proofs that scenario A (more precisely, its parts) works correctly. Additionally, you might write another 3 tests for scenario A itself that combine all of its decomposed parts - this kind of testing can often (but not always!) be seen as integration testing.
Then let's assume a bug is found in scenario A. You are not sure which part it belongs to, but you suspect it is related to A1-2 or A3. So you write 2 more tests, one for each of those scenarios, to reproduce the bug (write a test that fails by not meeting your expectations). Once you have reproduced the bug, you fix it and make ALL the tests pass.
Now you have 2 more proofs that the system works correctly, ensuring that all the previous functionality still works the same way.
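To make the three-tests-per-scenario rule concrete, here is a minimal NUnit sketch; the scenario and the `AmountParser` class are invented purely for illustration.

```csharp
using System;
using System.Globalization;
using NUnit.Framework;

// Hypothetical unit under test: a decomposed scenario such as A1-1,
// "parse a monetary amount from user input".
public static class AmountParser
{
    public static decimal Parse(string input) =>
        decimal.Parse(input, CultureInfo.InvariantCulture);
}

[TestFixture]
public class ParseAmountTests
{
    [Test] // 1. correct input
    public void CorrectInput_ReturnsParsedValue() =>
        Assert.AreEqual(12.50m, AmountParser.Parse("12.50"));

    [Test] // 2. incorrect input
    public void IncorrectInput_Throws() =>
        Assert.Throws<FormatException>(() => AmountParser.Parse("abc"));

    [Test] // 3. corner case
    public void CornerCase_ZeroIsAccepted() =>
        Assert.AreEqual(0m, AmountParser.Parse("0"));
}
```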
There are two major problems with this approach, IMO.
You need to write a lot of tests and maintain them. Many developers just do not want to do that.
Additionally, the process of decomposition is more art than science. A good decomposition will result in good structure, tests, and maintainability, while a bad one will result in a lot of pain and wasted time. And it is hard to tell at first whether a decomposition is good or bad.
This process is called Test-Driven Development. I find it to be the closest thing to a "formalization" of the development process that sits nicely between science and the real world.
So I am not really talking about state here, but rather about behavior and proving that it works correctly.
From personal experience, I should mention that ASP.NET WebForms is technically VERY hard to test.
To overcome that, I would suggest applying the MVP pattern to ASP.NET WebForms.
As opposed to WebForms, ASP.NET MVC is much easier to test.
But still, you should have a set of so-called "services" (our scenarios) and unit test them separately, then test the UI integration in an environment closer to integration tests.
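To illustrate the MVP suggestion, here is a minimal sketch with hypothetical names: the ASPX page implements a dumb view interface, and the logic moves into a presenter that can be unit tested with a fake view, without spinning up the ASP.NET pipeline.

```csharp
// The passive view interface the ASPX code-behind implements.
public interface IOrderView
{
    string CustomerName { get; }
    void ShowError(string message);
    void ShowConfirmation(string text);
}

// All the decision logic lives here, where it is plain, testable C#.
public class OrderPresenter
{
    private readonly IOrderView _view;

    public OrderPresenter(IOrderView view) { _view = view; }

    public void Submit()
    {
        if (string.IsNullOrEmpty(_view.CustomerName))
            _view.ShowError("Customer name is required.");
        else
            _view.ShowConfirmation("Order placed for " + _view.CustomerName);
    }
}
```

In tests, `IOrderView` is replaced by a trivial fake class that records what was shown; the code-behind merely forwards button clicks to `Submit()`.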

Related

How does TDD drive the design?

I understand that TDD has many advantages (some of them are listed below). However, I am not sure how it drives the design.
Serves as documentation
Writing tests before the actual code helps achieve maximum test coverage
Helps determine input value boundaries
Usually when we start implementing a new piece of functionality, we have a rough idea of the design, and we start with a TDD implementation of a class which is used by other classes as per that design. This understanding of mine seems to be in conflict with the statement "TDD drives the design".
Please help me understand this by an example.
Most people think that Test-Driven Development is a tool for writing code with fewer bugs. But in reality, that is a by-product of TDD. TDD is more a tool for designing code.
Overall, TDD helps in developing quality code in the following ways:
It makes you think about your code design and requirements at every stage, thereby ensuring that you are actually implementing what is required.
You are forced to write testable code, thereby ensuring that your code has loose coupling and high cohesion.
If your code is getting difficult to test, it mostly signifies that there is some issue with your design (your code is too coupled or not isolated enough).
With that said, I tend to disagree with people who think that following TDD blindly will always produce a good code design (that depends more on you and your knowledge of software design), but I believe there is a good chance it will.
TDD doesn't drive design, but it's an excellent feedback mechanism, which enables you to get rapid feedback on your design ideas.
It also has the beneficial side-effect that it leaves behind a nice regression test suite.
The most basic idea behind this is that when you want to test something, you want to keep the tests simple and focused. This in turn forces you to start with the smallest bits of your problem, driving you towards the Single Responsibility Principle.
Another thing, for larger problems, is that you are forced (because you want your tests to be focused) to use mocks and stubs. This drives your design towards the Dependency Inversion Principle, which in turn makes your code loosely coupled.
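A tiny illustration of that effect (hypothetical names): because the class below depends on an abstraction rather than on the system clock, a focused test can substitute a deterministic stub.

```csharp
using System;

// Depending on this abstraction, rather than on DateTime.Now directly,
// is what makes the class testable in isolation.
public interface IClock
{
    DateTime Now { get; }
}

public class ReportHeader
{
    private readonly IClock _clock;

    public ReportHeader(IClock clock) { _clock = clock; }

    public string Render() =>
        "Report generated " + _clock.Now.ToString("yyyy-MM-dd");
}

// Hand-rolled stub used only by tests: fixed, deterministic state.
public class FixedClock : IClock
{
    public DateTime Now => new DateTime(2012, 1, 1);
}
```

A test constructs `new ReportHeader(new FixedClock())` and asserts on the exact string "Report generated 2012-01-01" - no hidden state, no flakiness.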
TDD in general
The goal of Test-driven Development (TDD) is to create software components that are precisely conforming to a set of functional specifications. The task of the tests is to validate conformity and to identify disparities from these specifications.
Design Guidance
By writing tests first, you're forced to make up your mind about what your software component should do. Thinking about meaningful test cases first will guide your design, as you're forced to find answers to a set of questions: How should it process inputs? To which components does it interface? What are the outputs I'm expecting? ...
TDD does not replace software design. It sharpens your design!
In essence, TDD will help you to sharpen your design - but one thing is crucial: you need to find reasonable tests. And trying to replace functional specifications - call it a requirements document - with tests alone is rather dangerous. Why?
Unit tests as used in TDD can only guarantee that your components exhibit the expected behavior for the given test data, not for arbitrary data! As Dijkstra said in the 1960s: "Testing shows the presence, not the absence of bugs."
> Usually when we start implementing a new piece of functionality, we have a rough idea of the design.
That's the core of your question. If you have only a rough idea of the design, you had better spend more time at the drawing board, asking yourself questions like: What are the individual tasks my software should carry out? How can I split the overall task into subtasks? How can I map these subtasks to components? Which data needs to be passed among them?
At that point, you might consider doing TDD. But without thinking about a design or software architecture first, you'll end up with a "spaghetti system" that is hard to understand and hard to maintain later on.
Great observation by Mark Seemann. The principle that pushes me to value TDD is fast feedback (and thus fast learning). Doing TDD is no substitute for good design techniques: learn good design principles and use TDD to get fast feedback on your work.
I find that when using TDD, most of the deeper thinking about my design happens during the refactor step; the tests then provide a space for that step and a safety net for change, but they don't automatically create a good design.
TDD allows me to hack together some working code and use what I learn while doing that to iterate towards a better design. Of course this can be done without TDD too, but the tests help by providing that safety net.

Does TDD preclude designing first?

I've been reading about TDD lately, and it is advocated because it supposedly results in code that is more testable and less coupled (and a bunch of other reasons).
I haven't been able to find much in the way of practical examples, except for a Roman numeral conversion and a number-to-English converter.
In these two examples I observed the typical red-green-refactor cycle of TDD, as well as the application of the rules of TDD. However, this seemed like a big waste of time compared with what I would normally do: observe a pattern, implement it in code, and then write tests for it afterwards. Or possibly write a stub for the code, write the unit tests, and then write the implementation - which might arguably be TDD - but without this continuous case-by-case refactoring.
TDD appears to encourage developers to jump right into the code and build their implementation inductively rather than designing a proper architecture. My opinion so far is that the benefits of TDD can be achieved by proper architectural design, although admittedly not everyone can do this reasonably well.
So here I have two questions:
Am I right in understanding that using TDD pretty much doesn't allow you to design first (see the rules of TDD)?
Is there anything that TDD gives you that you can't get from doing proper design before you start coding?
Well, I was in your shoes some time ago and had the same questions. Since then I have done quite a lot of reading about TDD and decided to mess with it a little.
I can summarize my experience about TDD in these points:
TDD is unit testing done right, ATDD/BDD is TDD done right.
Whether you design beforehand or not is totally up to you. Just make sure you don't do BDUF (big design up front). Believe me, you will end up changing most of it midway, because you can never fully understand the requirements until your hands get dirty.
On the other hand, you can do enough design to get you started. Class diagrams, sequence diagrams, domain models, actors and collaborators are perfectly fine, as long as you don't get hung up in the design phase trying to figure everything out.
Some people don't do any design at all. They just let the code talk and concentrate on refactoring.
IMHO, balance your approach. Do some design until you get the hang of it, then start testing. When you reach a dead end, go back to the whiteboard.
One more thing: some things can't be solved by TDD, like figuring out an algorithm. There is a very interesting post showing that some things just need to be designed first.
Unit testing is hard when you already have the code. TDD forces you to think from your API user's perspective. This way you can decide early on whether the public interface of your API is usable. If you decide to do unit testing after implementing everything, you will find it tedious, and most probably it will cover only some cases; I know people who will write only passing test cases, just to get the feature done. I mean, who wants to break his own code after all that work?
TDD breaks this mentality. Tests are first-class citizens. You aren't allowed to skip tests. You aren't allowed to postpone some tests till the next release because there isn't enough time.
Finally, to answer your question whether there is anything TDD gives you that you can't get from doing proper design before you start coding: I would say commitment.
As long as you're doing TDD, you are committed to applying good OO principles so that your code is testable.
To answer your questions:
"Test Driven Development" (TDD) is often referred to as "Test Driven Design", in that this practice will result in a good design of the code. When you have written a failing unit test, you are forced into a test driven design approach, so that you can implement just what is needed to make the test pass i.e. you have to consider the design of the code you are writing to make the test pass.
When using a TDD approach a developer will implement the minimum amount of code required to pass the test. Doing proper design beforehand usually results in waste if the requirements change once the project has started.
You say "TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture" - If you are following an Agile approach to your software development, then you do still need to do some up front architectural investigation (e.g. if you were using the Scrum methodology you would create a development "Spike" at the start of a project) to ascertain what the minimum amount of architecture needed to start the project. At this stage you make decisions based on what you know now e.g. if you had to work with a small dataset you'd choose to use a regular DB, if you have a huge DB you might to choose to use a NoSQL big data DB etc.
However, once you have a general idea of the architecture, you should let the design evolve as the project progresses leaving further architectural decisions as late in the process as possible; Invariably as a project progresses the architecture and requirements will change.
Further more this rather popular post on SO will give you even more reasons why TDD is definetly worth the effort.

Can TDD Handle Complex Projects without an upfront design?

The idea of TDD is great, but I'm trying to wrap my head around how to implement a complex system if a design is not proposed up front.
For example, let's say I have multiple services for a payment processing application. I'm not sure I understand how development can proceed across multiple developers if there is not a somewhat solid design up front.
It would be great if someone could provide an example and the high-level steps for putting together a system in this manner. I can see how TDD can lead to simpler and more robust code; I'm just not sure how it can 1) bring different developers to a common architectural vision and 2) result in a system that abstracts out behavior so as to prevent having to refactor large chunks of code (e.g., to accept different payment methods or pricing models based on a long-term development roadmap).
I see refactoring as a huge overhead in a production system, where data model changes increase risks for customers and the company.
Clearly I'm probably missing something that the TDD gurus have discovered...
IMHO, it depends on the team's composition and appetite for risk.
If the team consists of several experienced and good designers, you need a less formal 'architecture' phase. It could be just a back-of-the-napkin doodle, or a couple of hours at the whiteboard followed by furious coding to prove the idea. If the team is distributed and/or contains a lot of less-skilled designers, you'd need to put more time and effort (thinking and documenting) into the design phase before everyone goes off on their own path.
The next item I can think of is to be risk-first. Continually assess the risks to your project, calculate your exposure/impact, and have mitigation plans. Focus on risky and difficult-to-reverse decisions first. If a decision is easily reversible, spend less time on it.
Skilled designers are able to evolve the architecture in tiny steps; if you have them, you can tone down the rigor of an explicit design phase.
TDD may call for some upfront design, but definitely not big design up front, because no matter how perfect you think your design is before you start writing code, most of the time it won't pass the reality check TDD forces on it, and it will blow to pieces halfway through your TDD session (or your code will blow up if you absolutely insist on bending it to your original plan).
The great strength of TDD is precisely that it lets your design emerge and be refined as you write tests and refactor. Therefore you should start small and simple, making as few assumptions about the details as possible beforehand.
Practically, what you can do is sketch out a couple of UML diagrams with your pair (or with the whole team, if you really need a consensus on the big picture of what you're going to write) and use these diagrams as a starting point for your tests. But get rid of these models as soon as you've written your first few tests, because from then on they would do more harm than good, misleading you into sticking to a vision that is no longer true.
First of all, I don't claim to be a TDD guru, but here are some thoughts based on the information in your question.
My thoughts on #1 above: as you have mentioned, you need to have an architectural design up front - I can't think of a methodology that can be successful without this. The architecture provides your team with cohesion and vision. You may want to do just enough design up front, but that depends on how agile you want to be. The team of developers needs to know how they are going to put together the various components of the system before they start coding; otherwise it will just be one big hackfest.
> It would be great if someone could provide an example and the high-level steps for putting together a system in this manner
If you are putting together a system that is composed of services, then I would start by defining the service interfaces and any messages that they will exchange. This defines how the various components of your system will interact (this would be an example of your up-front design). Once you have this, you can allocate various development resources to build the services in parallel.
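For instance (purely illustrative types, not a prescribed design), the up-front agreement might be no more than a few contracts like these, after which each service can be built and unit tested in parallel:

```csharp
// An up-front design artifact: the message exchanged between services.
public class PaymentRequest
{
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
    public string Method { get; set; } // e.g. "card", "invoice"
}

public class PaymentResult
{
    public bool Succeeded { get; set; }
    public string FailureReason { get; set; }
}

// The agreed service boundary; each developer can implement and
// test their service against this contract independently.
public interface IPaymentService
{
    PaymentResult Process(PaymentRequest request);
}
```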
As for #2: one of the advantages of TDD is that it provides you with a "safety net" during refactoring. Since your code is covered by unit tests, when you come to change some code you will know pretty soon whether you have broken something, especially if you are running continuous integration (which most people do with a TDD approach). In that case you either need to adapt your unit tests to cover the new behavior, or fix your code so that your unit tests pass.
> result in a system that abstracts out behavior so as to prevent having to refactor large chunks of code
This comes down to your design - using, e.g., a strategy pattern to let you abstract and replace behavior. TDD does not prescribe that your design has to suffer. It just asks that you do only what is required to satisfy some functional requirement. If the requirement is that the system must be able to adapt to new payment methods or pricing models, then that becomes part of your design. TDD, done correctly, will make sure that you satisfy your requirements and that your design stays on the right lines.
> I see refactoring as a huge overhead in a production system, where data model changes increase risks for customers and the company.
One of the problems of software design is that it is a wicked problem, which means that refactoring is pretty much inevitable. Yes, refactoring is risky in production systems, but you can mitigate that risk, and TDD will help you. You also need a supple design and a system with low coupling. TDD helps reduce coupling, since you are designing your code to be testable. One of the by-products of writing testable code is that you reduce your dependencies on other parts of the system: you tend to code to interfaces, which allows you to replace an implementation with a mock or stub. A good example is replacing a call to a database with a mock/stub that returns known data - you don't want to hit a database in your unit tests. I should mention here that a good mocking framework is invaluable with a TDD approach (Rhino Mocks and Moq are both open source).
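As a sketch of that point, assuming a hypothetical `ICustomerRepository`, replacing the database with a stub looks roughly like this with Moq:

```csharp
using Moq;
using NUnit.Framework;

public interface ICustomerRepository
{
    string GetName(int customerId);
}

// The unit under test codes to the interface, not to a database.
public class GreetingService
{
    private readonly ICustomerRepository _repository;

    public GreetingService(ICustomerRepository repository) { _repository = repository; }

    public string Greet(int customerId) => "Hello, " + _repository.GetName(customerId);
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void Greets_Customer_By_Name()
    {
        // The stub returns known data - no database is hit in this test.
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetName(42)).Returns("Ada");

        var service = new GreetingService(repository.Object);

        Assert.AreEqual("Hello, Ada", service.Greet(42));
    }
}
```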
I am sure there are some real TDD gurus out there who can give you some pearls of wisdom... Personally, I wouldn't consider starting a new project without a TDD approach.

What's the most effective workflow between people who develop algorithms and developers?

We are developing software for pattern recognition in video. We have 7 mathematicians who create the algorithms, plus 2 developers who maintain and develop the application around those algorithms. The problem is that the mathematicians use different development tools to create the algorithms, such as Matlab, C, and C++. Also, because they are not developers, they don't give much thought to memory management or multithreading. This is one of the reasons why the application has a lot of bugs.
If your company has a similar situation, how do you deal with it? What are the best tools you can recommend for creating algorithms? What communication should there be between the mathematicians and the developers? In your opinion, what is the most effective way of working with high-level tools?
I am not sure whether your devs are rewriting the mathematicians' stuff or just have to interface with it, so I am not sure if my answer is of any use.
However: I work together with a bunch of PhD candidates and postdocs on a machine learning library, and am a student myself. In that process, I came to translate a lot of algorithms from Python/NumPy to C++/BLAS.
This process can be quite tedious - especially with numerical and stochastic algorithms, it is hard to find bugs.
So here is what I did: get some sample inputs and calculate the results with the Python code. Generate unit tests out of these for the C++ version, and then start coding it in C++.
Checking concrete sample inputs against the expected outputs is essential in this setting.
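Sketched in C# for illustration (the names and numbers are invented), the idea is to compute the expected values once with the reference implementation and freeze them into the port's tests:

```csharp
using NUnit.Framework;

// The ported routine (here a trivial stand-in for a real algorithm).
public static class Stats
{
    public static double Mean(double[] xs)
    {
        double sum = 0;
        foreach (var x in xs) sum += x;
        return sum / xs.Length;
    }
}

[TestFixture]
public class PortedAlgorithmTests
{
    // Expected values were computed once with the original reference code
    // (e.g. Python/NumPy) and pasted here, pinning the port to it.
    [TestCase(new double[] { 1.0, 2.0, 3.0 }, 2.0)]
    [TestCase(new double[] { -1.0, 1.0 }, 0.0)]
    public void Mean_MatchesReferenceImplementation(double[] input, double expected)
    {
        Assert.AreEqual(expected, Stats.Mean(input), 1e-9);
    }
}
```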
I agree with Makach.
Let the guys who create the algorithms use the tools they are most familiar with, because there are two separate (and equally important) tasks to work on in this project. First there is the creation of an efficient, elegant, and mathematically sound algorithm; then there is the twistedly difficult task of translating it into CPU-speak. The mathematicians should focus on the first task, and to make it easier for them, let them use the tools they are comfortable with. In terms of man-hours, it is a much more efficient use of their time to write Matlab code than it would be to have them learn a new programming language.
Your task is to unearth the (presumably) brilliant mathematics that are buried within the gibberish code.
That part is just a perspective on the problem at hand. Here's the actual answer.
Communication, mutual respect, and teaching/learning.
Communication & Mutual Respect
You must communicate with them often. Work closely with them and ask questions whenever you come across something you're not sure of. This is much easier when there is mutual respect: if you spend all your time criticizing their coding abilities, they will be forced to spend all their time criticizing your math abilities. Instead, try quick learning sessions ("Lunch & Learn" is a fairly common tactic).
Teaching/Learning
The first and most important piece of wisdom to impart to them is commenting. Have them comment the crap out of their code. Tell them that the comments are much more important than the code quality, and that as long as their comments are right, they can leave the rest up to you guys. Because they can. Their code doesn't need to look beautiful or be the fastest; it just needs to make sense to you guys.
To continue this mutual learning scenario: if you notice some very simple, very common mistakes they are making (nothing NEARLY as complicated as multithreading), just give them a quick heads-up: "That way works (or doesn't), but here's a slightly different way to do it that will make your lives much, much easier." Encourage them to reciprocate by noticing which nuances or parts of their algorithms you and your team have difficulty with, and teaching a little tutorial about them.
Once you get the communication flowing, you'll find it easier and easier to shape their coding style into what is best for your team, and they will also find it easier to understand why you don't see things the same way they do.
Also, as mentioned by Kekoav, make sure they provide a few fully loaded test cases.
That means for something like
A -> B -> C -> D -> Solution
They would provide you all the values for A, then what it looks like at B, then what it looks like at C, and so on, so that you can be certain it is correct not only at the end but at every step along the way. Have them provide examples that are regular, and also a few that are unusual, so that you can be certain your code covers the edge cases.
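A sketch of what such stage-by-stage checks could look like (a toy pipeline with made-up values): each intermediate result the mathematicians supply becomes its own assertion, so a failure points directly at the stage that broke.

```csharp
using System;
using NUnit.Framework;

// A toy A -> B -> C pipeline standing in for a real algorithm.
public static class Pipeline
{
    public static double[] StageB(double[] xs) => Array.ConvertAll(xs, Math.Sqrt);

    public static double StageC(double[] xs)
    {
        double sum = 0;
        foreach (var x in xs) sum += x;
        return sum;
    }
}

[TestFixture]
public class PipelineStageTests
{
    // Reference input A and the intermediate values supplied by its author.
    private readonly double[] _a = { 1.0, 4.0, 9.0 };

    [Test]
    public void StageB_MatchesSuppliedIntermediateValues() =>
        CollectionAssert.AreEqual(new[] { 1.0, 2.0, 3.0 }, Pipeline.StageB(_a));

    [Test]
    public void StageC_MatchesSuppliedFinalValue() =>
        Assert.AreEqual(6.0, Pipeline.StageC(Pipeline.StageB(_a)), 1e-9);
}
```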
I'd recommend the devs spend a few hours getting used to Matlab, especially the Matlab debugger. If their background is CS, they'll already be familiar with vectors and matrices, theoretically if not practically. Other than the matrix being the default data structure, Matlab is C-like and easy enough to read for translation into another language.
I have been working with a physics professor lately, and have a little experience with this (although admittedly I'm no expert).
I have had to translate a lot of Matlab code into another language. It has been difficult because a lot of (most of) the operations are absent, including when it comes to precision and to working with matrices and vectors. A good math library needs to be found, or created, to fit your needs.
The best way that I have found is to do the following:
Get the algorithm to work correctly in the new language.
Create some tests to verify that the algorithm produces the desired output. Have your mathematicians verify that your converted solution in fact works, and that your tests cover all the bases.
Then, after it is working and you can trust your tests, optimize the algorithm for good coding style, good design, and good performance characteristics. Use your regression tests to make sure you aren't breaking anything.
I normally start with a verbatim copy of their algorithm in the other language, and then work from there, regardless of how many tests I write.
It is important to get a working copy first, in case performance turns out not to be an issue; you can move on to other things and come back later to make it faster.
This is your job. How you deal with this is what identifies you as a system developer.
Communicate with your colleagues. Draw and explain, hold meetings, agree upon and set standards and requirements, follow your plans, and talk to your project manager. Make sure the relevant colleagues join the meetings. Have 1-on-1 talks, etc.
You cannot blame the mathematicians for bugs that the developers create. It is the developers' job to worry about implementation, not the mathematicians'.

Is test-driven development a normal approach in game development?

I am just curious, since all the TDD examples I have seen are related to web programming. And if it's not a normal approach, why not?
TDD has become a favored approach by software developers who are serious about their profession. [IEEE:TDD] The benefits of the approach are significant, and the costs are low by comparison. [The Three Laws of TDD]
There are no software domains for which TDD is inappropriate, or ineffective. However, there are domains in which it is challenging. Gaming happens to be one of these.
Actually, the challenge is not so much gaming as it is UI. The reason UI is a challenge is that you often don't know what you want the UI to look like until you've seen it. UI is one of those things that you have to fiddle with. Getting it right is a deeply iterative process that is full of twists and turns and dead ends and back alleys. Writing tests first for UI is likely to be both difficult and wasteful.
Now, before everybody roars off and says "Uncle Bob says: 'Don't do TDD for UI'", let me say this: the fact that it's hard to do pure TDD for UI does not mean you can't do pure TDD for almost everything else. Much of gaming is about algorithms, and you can use TDD with those algorithms to your heart's delight. It's true, especially in gaming, that some of those algorithms are the kind of code you have to fiddle with, just like UI, and so are probably not amenable to being tested first. But there is a lot of other algorithmic code that can and should be written test-first.
The trick here is to follow the single responsibility principle (SRP) and separate those kinds of code that you have to fiddle with, from those kinds that are deterministic. Don't put easy-to-test algorithms in with your UI. Don't mix your speculative code with your non-speculative code. Keep the things that change for reason A separate from the things that change for reason B.
Also, keep this in mind: the fact that some code is hard to test first does not mean that it is hard to test second. Once you have fiddled and tweaked and gotten the code to work just the way you like, you can then write the tests that demonstrate the code works the way you think it does. (You'll be surprised at how many times you find bugs while doing this.)
The problem with writing tests "after the fact" is that often the code is so coupled that it is hard to write the kinds of surgical tests that are most helpful. So if you are writing the kind of code that is hard to test first, you should take care to follow the dependency inversion principle (DIP), and the open/closed principle (OCP) in order to keep the code decoupled enough to test after the fact.
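A toy illustration of that separation (the rule here is made up): the deterministic scoring logic lives in a plain class with no rendering or input dependencies, so it can be written test-first even when the surrounding UI cannot.

```csharp
using NUnit.Framework;

// A deterministic game rule, kept free of rendering and input code,
// so it can be test-driven in isolation.
public static class ComboScorer
{
    public static int Score(int baseScore, int comboLength) =>
        comboLength <= 1 ? baseScore : baseScore * comboLength;
}

[TestFixture]
public class ComboScorerTests
{
    [Test]
    public void Combo_Multiplies_Base_Score() =>
        Assert.AreEqual(300, ComboScorer.Score(100, 3));

    [Test]
    public void Single_Hit_Is_Unmultiplied() =>
        Assert.AreEqual(100, ComboScorer.Score(100, 1));
}
```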
The simple answer is "no", TDD is not a normal approach in game development. Some people will point at Highmoon and Neil Llopis as counter-examples, but it's a big industry and they are the only people I know of who have fully embraced TDD. I'm sure there are others, but they are the only ones I know of (and I've been in the industry for 5 years).
I think a lot of us have dabbled in unit testing at some point, but for one reason or another it hasn't taken hold. Speaking from personal experience it is hard for a games studio to switch to TDD. Usually a codebase is kept from project to project, and applying TDD to a large existing codebase is both tedious and largely thankless. I'm sure that eventually it would prove fruitful, but getting games coders to buy into it is difficult.
I have had some success writing unit tests for low-level game engine code, because this code tends to have very few dependencies and is easily encapsulated. This has always been testing after the fact, though, and not TDD. The higher-level game code is usually harder to write tests for, because it has far more dependencies and is often associated with complex data and state. Taking AI as an example, testing AI requires some kind of context, meaning a navigation mesh and other objects in the world. Setting up that kind of test in isolation can be non-trivial, especially if the systems involved weren't designed for it.
What is more common in game development, and I've had more personal success with, is smoke testing. You'll often see smoke testing used in conjunction with continuous integration to provide various kinds of feedback on the behaviour of the code. Smoke testing is easier because it can be done by just feeding data into the game and reading back information, without having to compartmentalize your code into tiny testable pieces. Taking AI as the example again, you can tell the game to load up a level and provide a script that loads an AI agent and gives it commands. Then you simply determine if the agent performs those commands. This is a smoke test rather than a unit test because you are running the game as a whole and not testing the AI system in isolation.
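A sketch of such a scripted smoke test, against a purely hypothetical engine facade (real engines would expose something similar through an embedded scripting layer):

```csharp
// Hypothetical automation facade over the running game.
public interface IGameHarness
{
    void LoadLevel(string name);
    IAgent SpawnAgent(string archetype, double x, double y);
    void RunFrames(int count);
}

public interface IAgent
{
    void Command(string name, double x, double y);
    double DistanceTo(double x, double y);
}

public static class AiSmokeTest
{
    // A smoke test: the whole game runs, and we check coarse behaviour,
    // rather than isolating the AI system the way a unit test would.
    public static bool Run(IGameHarness game)
    {
        game.LoadLevel("test_arena");
        var agent = game.SpawnAgent("grunt", 0, 0);

        agent.Command("move_to", 10, 0);
        game.RunFrames(600); // roughly 10 seconds at 60 fps

        return agent.DistanceTo(10, 0) < 1.0;
    }
}
```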
In my opinion it is possible to get decent test coverage by unit testing the low-level code while smoke testing the high level behaviours. I think (hope) that other studios are also taking a similar approach.
If my opinion of TDD sounds somewhat ambiguous that's because it is. I'm still somewhat on the fence about it. While I see some benefits (regression testing, emphasis on design before code), applying it and enforcing it while working with a pre-existing codebase seems like a recipe for headaches.
Games from Within has an article discussing their use of unit testing, the limitations of unit testing with regards to games in particular, and an automated functional testing server that they set up to help with this.
If you are referring to the practice of writing and maintaining unit tests for every bit of code, I'd venture a guess and state that this is not in widespread use in the gaming industry. There are many reasons for this, but I can think of 3 obvious ones:
Cultural. Programmers are conservative, game programmers doubly so.
Practical. TDD does not fit very well to the problem domain (too many moving parts).
Crunchological. There's never enough time.
The TDD paradigm works best in application domains which are not very stateful, or at least where the moving parts are not all moving at the same time, to put it colloquially.
TDD is applicable to parts of the game development process (foundation libraries and such), but "testing" in this line of work usually means running automated fly-throughs, random key testing, timing I/O loads, tracking FPS spikes, and making sure the player can't wriggle his way into causing visual instabilities - stuff like that. The automaton is also very often a humanoid.
TDD can be a useful tool, but its status as a silver bullet that must-be-ubiquitous-when-making-a-system is rather questionable. Development should not be driven by tests, but by reason. RDD is a crappy acronym though - it won't catch on. ;)
Probably the main reason is that TDD is preferred by those with languages more conducive to it. But apart from that, games themselves are a poor match for the paradigm anyway.
Generally speaking (and yes, I do mean generally speaking, so please don't swamp me with counterexamples), test-driven design works best for event-driven systems and not so well for simulation-style systems. You can still use tests on your low-level components in games, whether test-driven or just plain unit testing, but for more higher level tasks there is rarely any sort of discrete event that you can simulate with deterministic results.
For example, a web application typically has very distinct inputs (an HTTP request), changes a very small amount of state (for example, records in the database), and generates a largely deterministic output (for example, HTML page). These can be easily checked for validity, and since generating the input is simple it's trivial to create tests.
However, with games the input may be hard to simulate (especially if it needs to occur at a certain point... think of getting past loading screens, menu screens, etc.), the amount of state you change may be large (for example, if you have a physics system, or complex reactive AI), and the output is rarely deterministic (random number use is the main culprit here, though floating-point precision loss is another, as are hardware specifications, available CPU time, and the performance of background threads).
To do TDD you need to know exactly what you expect to see in a certain event and to have an accurate way of measuring it, and both of these are difficult problems with simulations that avoid discrete events, deliberately include random factors, act differently on different machines, and have analogue outputs such as graphics and audio.
Additionally, there's one massive practical issue which is process startup time. Many of the things you will want to test require the loading of large quantities of data, and if you mock up the data you're not truly testing the algorithm. With this in mind it quickly becomes impractical to have any sort of test scaffolding that just performs individual tasks. You can run tests against a web server without having to take the webserver down each time - that's rarely possible with games unless you do the testing from an embedded scripting language (which is reasonable, and does indeed take place in the industry).
For example, you want to add volumetric shadow rendering to your in-game buildings. So you'd need to write a test that starts up all the necessary subsystems (for example, renderer, game, resource loaders), load in buildings (incl. mesh, materials/textures), load in a camera, position that camera to point at the building, enable the shadows, render a scene, and then somehow decide whether the shadows actually appear in the frame buffer. It's less than practical. In reality you'd have all this scaffolding already there in the form of your game, and you'd just fire it up to conduct a visual test in addition to any assertions within the code itself.
Most game developers aren't exactly with it in terms of modern development practices. Thankfully.
But a test-driven development model emphasizes concentrating on how something would be used first, then fleshing out what it does. That in general is good to do since it forces you to concentrate on how a particular feature will actually fit into whatever you're doing (say, a game).
So good game developers do this naturally. Just not explicitly.
@Rune: once again, please emphasise the 'D' rather than the 'T'. At the unit level, the tests are a thinking tool to help you understand what you want and to drive the design of the code. Certainly at the unit level, I find I end up with cleaner, more robust code. The better the quality of the pieces I put into the system, the better they fit together, with fewer (but not no) bugs.
That's not the same thing at all as the sort of serious testing that games need.
TDD isn't really a 'normal' approach anywhere yet, as it's still relatively new and not universally understood or accepted. That isn't to say that some shops don't work that way now, but I'm still surprised to hear of anyone using it at all at this point.
