Test-Driven Development "Barriers to Entry"? - tdd

I'm in the process of doing a study on Test-Driven Development, and one of the discussion points is the "barrier to entry" associated with TDD. Does anyone have any experience in this area, e.g. any projects you've worked on that decided not to use TDD because the barrier to entry was too high?
From what I can tell, the only barrier to entry is the knowledge (and therefore experience) of individual developers, with most not being entirely accustomed to the process and it being slightly alien. Financially it seems to be very appealing given that most of the market-leading tools are open source, freely available, well documented and well supported.
Thoughts/feelings appreciated.
Thanks,
EDIT - does anyone know of any high profile quotes of people advocating TDD? Would love to see how high it goes up the chain. Cheers.

Some barriers include:
An existing code base which doesn't lend itself to unit testing.
A problem domain that is hard to unit test meaningfully, such as GUI work or integrations with third party systems.
A perception of integration problems over unit problems (in other words, if it doesn't work end to end it doesn't do anything, so what is the point of testing the unit).
A mindset that wants to design ahead of time and have a clear system design rather than have tests drive the design.
A political culture where design is done by a different person/group than development, and that design is not unit-test friendly.
An inability to get over the fact that TDD is not about testing for conformance (arguments like "the one who writes the tests shouldn't be the one who codes it, they will be too lenient on themselves" and such variants).
It isn't the way they have coded until now, so the shift is harder.
Sometimes a certain test can be hard to set up, so the method will get abandoned because it "feels" slower.
Design requirements that don't lend themselves to evolving design well or at all (think Nuclear Plant control software or other systems where actual lives depend on their functioning correctly).
If everyone isn't running the tests before checking in code, tests start to break often for the wrong reasons (that is, the intended behavior of the code changed but the test didn't keep up, so the test is wrong, not the code), and so they can be perceived as a drag.

In terms of barriers to entry, effectively, because you are explicitly writing tests that must pass before code is considered to be complete, the lead time in the dev cycle involved in getting functional code is longer. Now, when using TDD, you're effectively guaranteeing a certain level of quality on the code (whatever level of quality you choose to test against) and so that is generally more than enough compensation for the lag in lead time, but strictly speaking, there IS a greater lead time to getting functional code using TDD.
Effectively, if you have coders that write bug-free code, TDD will be a drag on your development cycle. The value of TDD, of course, is that there aren't any coders who can always write bug-free code, and so the cost of fixing bugs has to be factored in somewhere; in TDD, the cost of the test infrastructure is front-loaded.
Note that this is not in any way a negative thing about TDD; I'm just saying, that front-loading COULD be considered to be a "barrier to entry". Personally, as a coder, I would say that the Return on Investment is more than worth the effort, and I think most experienced dev managers would as well.

Team and/or management buy-in is the biggest obstacle in some companies. If you're the lone developer trying to use TDD and you can't get others on the project interested, it can be very frustrating.
Of course that's not a financial barrier at all. The biggest perceived financial barrier is probably time. If you have a large code base that you need to write unit tests for, it can seem quite daunting. Your manager (or someone above them) will question why you want to spend time writing code that adds no features or functionality to the product. Many people don't realize that writing the tests up front (as you do in TDD) can actually save you time, both immediately and in the long run when you're maintaining that code.

I think one major barrier is how it requires you to change the way you think.
Before I tried TDD, I would create a class, say Employee, then I would stub in things like FirstName, LastName, Email, etc. Then I would write some logic and later realize I had missed a few fields or something else. And before I knew it I had a pretty complex class without knowing if those fields were ever necessary.
Also, it's a complete change from how we are used to writing software. We are used to writing software as we receive features from the guys who sign our checks. We are not used to writing code which doesn't compile, making it compile, then making it work to make our tests pass.
The first time you do this, you feel a bit.. well silly and stupid. Why am I making my code intentionally fail? It seems illogical to the "make it work" philosophy we've all been taught for so long.
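To make that mindset shift concrete, here is a minimal, hypothetical sketch in Python (using the standard unittest module) of what test-first looks like for an Employee class like the one mentioned above; the field names and the full_name method are invented purely for illustration. You write the test, watch it fail (red), add just enough of the class to make it pass (green), and only then refactor.

    import unittest

    class Employee:
        # Only the fields a test has demanded so far -- nothing speculative.
        def __init__(self, first_name, last_name, email):
            self.first_name = first_name
            self.last_name = last_name
            self.email = email

        def full_name(self):
            return f"{self.first_name} {self.last_name}"

    class EmployeeTest(unittest.TestCase):
        # Written before Employee existed, so the first run failed (red).
        def test_full_name_joins_first_and_last(self):
            e = Employee("Ada", "Lovelace", "ada@example.com")
            self.assertEqual("Ada Lovelace", e.full_name())

    if __name__ == "__main__":
        unittest.main()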
A few reasons why it has failed so far where I work:
Most of the projects I work on are older apps. Not pre-.NET, but .NET 2.0 and in some cases .NET 1.0.
Some of these projects are not well factored, either because the technology wasn't there in 1.0, or because it was built quickly because they needed something NOW.
As Jon pointed out, some things are still a PIA (pain-in-the-***) to unit test, UI, database, etc.
Expensive tools. If you are only allowed to use Microsoft tools, it's a high price tag to do this the "right way". We use ReSharper, so it really isn't a problem.
Time. I'm in a team of three guys supporting a department of 30 people. We are considered overhead, and much of our development consists of interfacing systems together.

Yes, the main barrier to entry is in your head, or in the heads of other programmers.
In the beginning you don't know what to write in your tests.
The trick is to think about how your code will be used instead of focusing on how you're going to write it. Easier said than done ...
When you start to "get it", it's a bit hard to know where to stop writing tests.
You have to remember that tests prove nothing, so you just can't write tests to cover all cases; you have to select the most useful ones ... and that's already a lot!

I've certainly seen plenty of resistance. The barriers I've encountered are:
Unit testing user interfaces (web or thick client) is tricky. I know there are lots of attempts to solve the problem, but I don't think any of them have made it really simple - because it's a naturally hard problem.
At the other end, although there are various ways of making it easier to test the code involved with the database, it's still tricky and time-consuming.
While good tests definitely speed up development overall, testing is a skill - and while you suck at it, unit testing may well be more trouble than it's worth, which means you never build up the skill...
Managers often see it as an optional extra to development - a nice to have rather than critical. This means it's the first thing to go when the project inevitably has a resource squeeze.

I wrote a long-ish article about this a few weeks back, "Why I write tests first".
I think the biggest barrier is building the discipline to start with tests first, but I don't believe the TDD (or any practice for that matter) should be approached as an always, absolutely, 100% of the time solution.
TDD is a tool in each developer's arsenal. I tend to think it works well for me most of the time. A developer that isn't as accustomed to writing tests (first or otherwise) may find it difficult to get anything done if TDD is forced on them, because they can't think in terms of writing the test first.
I consider myself an experienced test-writer, but I can't always think in terms of tests. Some problems don't lend themselves well to it, or at least my head doesn't get wrapped around it some days. And some types of code (such as UI and client-side code) don't lend themselves well to always writing tests.
If you have a team of developers that do not write tests as a matter of habit, I'd push that first. I have no problem requiring that all new code have accompanying unit tests where possible/practical. Once testing is a discipline, converting developers to TDD individually or as a team is much easier.

One non-obvious barrier (non-obvious to me, at least) is the build infrastructure. If developers don't have control over the build process, or if the infrastructure is too baroque to be manageable, then integrating tests into the build process is going to be shunted to the side in the name of "efficiency". (Of course, in these situations it's the build infrastructure that should be shunted aside in the name of efficiency.)

Related

Does TDD preclude designing first?

I've been reading about TDD lately, and it is advocated because it supposedly results in code that is more testable and less coupled (and a bunch of other reasons).
I haven't been able to find much in the way of practical examples, except for a Roman numeral conversion and a number-to-English converter.
In these two examples I observed the red-green-refactor cycles typical of TDD, as well as the application of the rules of TDD. However, this seemed like a big waste of time when normally I would observe a pattern and implement it in code, and then write tests for it after. Or possibly write a stub for the code, write the unit tests, and then write the implementation - which might arguably be TDD - but not this continuous case-by-case refactoring.
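For reference, a stripped-down, hypothetical Python version of the Roman-numeral example mentioned above might look like the sketch below: each test was written before the code that satisfies it, and the lookup table only grew as new failing tests demanded it, which is exactly the case-by-case aspect being described.

    import unittest

    # Each (value, numeral) pair was added only after a failing test demanded it.
    _NUMERALS = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def to_roman(n):
        result = []
        for value, numeral in _NUMERALS:
            while n >= value:
                result.append(numeral)
                n -= value
        return "".join(result)

    class ToRomanTest(unittest.TestCase):
        def test_one(self):
            self.assertEqual("I", to_roman(1))

        def test_four(self):
            self.assertEqual("IV", to_roman(4))

        def test_nine(self):
            self.assertEqual("IX", to_roman(9))

    if __name__ == "__main__":
        unittest.main()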
TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture. My opinion so far is that the benefits of TDD can be achieved by a proper architectural design, although admittedly not everyone can do this reasonably well.
So here I have two questions:
Am I right in understanding that using TDD pretty much doesn't allow you to design first (see the rules of TDD)?
Is there anything that TDD gives you that you can't get from doing proper design before you start coding?
Well, I was in your shoes some time ago and had the same questions. Since then I have done quite a lot of reading about TDD and decided to mess with it a little.
I can summarize my experience about TDD in these points:
TDD is unit testing done right, ATDD/BDD is TDD done right.
Whether you design beforehand or not is totally up to you. Just make sure you don't do BDUF. Believe me, you will end up changing most of it midway because you can never fully understand the requirements until your hands get dirty.
OTOH, you can do enough design to get you started. Class diagrams, sequence diagrams, domain models, actors and collaborators are perfectly fine as long as you don't get hung up in the design phase trying to figure everything out.
Some people don't do any design at all. They just let the code talk and concentrate on refactoring.
IMHO, balance your approach. Do some design till you get the hang of it then start testing. When you reach a dead end then go back to the white board.
One more thing, some things can't be solved by TDD like figuring out an algorithm. This is a very interesting post that shows that some things just need to be designed first.
Unit testing is hard when you have the code already. TDD forces you to think from your API users' perspective. This way you can decide early on whether the public interface of your API is usable or not. If you decide to do unit testing after implementing everything you will find it tedious, and most probably it will only cover some cases; I know some people who will write only passing test cases just to get the feature done. I mean, who wants to break his own code after all that work?
TDD breaks this mentality. Tests are first class citizens. You aren't allowed to skip tests. You aren't allowed to postpone some tests till the next release because we don't have enough time.
Finally, to answer your question whether there is anything that TDD gives you that you can't get from doing proper design before you start coding, I would say commitment.
As long as you're doing TDD you are committed to applying good OO principles, so that your code is testable.
To answer your questions:
"Test Driven Development" (TDD) is often referred to as "Test Driven Design", in that this practice will result in a good design of the code. When you have written a failing unit test, you are forced into a test driven design approach, so that you can implement just what is needed to make the test pass i.e. you have to consider the design of the code you are writing to make the test pass.
When using a TDD approach a developer will implement the minimum amount of code required to pass the test. Doing proper design beforehand usually results in waste if the requirements change once the project has started.
You say "TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture" - If you are following an Agile approach to your software development, then you do still need to do some up front architectural investigation (e.g. if you were using the Scrum methodology you would create a development "Spike" at the start of a project) to ascertain what the minimum amount of architecture needed to start the project. At this stage you make decisions based on what you know now e.g. if you had to work with a small dataset you'd choose to use a regular DB, if you have a huge DB you might to choose to use a NoSQL big data DB etc.
However, once you have a general idea of the architecture, you should let the design evolve as the project progresses leaving further architectural decisions as late in the process as possible; Invariably as a project progresses the architecture and requirements will change.
Further more this rather popular post on SO will give you even more reasons why TDD is definetly worth the effort.

When should you not refactor?

We all know that refactoring is good, and I love it as much as the next guy, but do you have real cases where it is better not to refactor?
Something like time-critical stuff or synchronization? Technical or human reasons are equally welcome. Real-case scenarios and experiences are a plus.
Edit: from the answers thus far, it looks like the only reason not to refactor is money. My question is mostly relative to something like this: suppose you would like to perform "extract method", but if you add the additional function call, you will make the code slightly slower and hinder a very strict synchronization. Just to give you an idea of what I mean.
Another reason I sometimes heard is that "others used to the current code layout will get annoyed by your changes". Of course, I doubt this is a good reason.
I'm a big fan of refactoring to keep code clean and maintainable. But you generally want to shy away from refactoring production modules that work fine and don't require change. However, when you do need to work on a module to fix bugs or introduce a new feature, some refactoring is usually worth it and won't cost much since you're already committed to doing a full set of tests and going through the release process. (Unit tests are very helpful, but are only part of the full test suite, as other posters noted.)
More significant refactorings may make it harder for others to find their way around the new code, and they may then react unfavorably to refactoring. To minimize this, bring other team members in on the process using an approach like pair programming.
Update (8/10): Another reason to not refactor is when you aren't approaching the existing code base with proper humility and respect. With these qualities you'll tend to be conservative and do only refactorings that really do make a difference. If you approach the code with too much arrogance, you may wind up just making changes instead of refactoring. Is that new method name really clearer, or did the old one have a name with a very specific meaning in your application domain? Did you really need to mechanically reformat that source file to your personal style, when the existing style met project guidelines? Again pair programming can help.
To reinforce the other answer (and touch on issues you mention): do not refactor a part of the code until it's well covered by all relevant kinds of testing. This doesn't mean "don't refactor it" -- the emphasis is on "add the necessary tests" (to do unit-tests properly may well require some refactoring, particularly the introduction of factory DPs and/or dependency injection DPs in code that's now solidly bolted to concrete dependencies).
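As a rough illustration of that kind of refactoring (all names below are invented, not from the original answer): code bolted to a concrete dependency can have the dependency passed in instead, which is a small refactoring in itself and is what makes a unit test with a fake collaborator possible.

    # Before: ReportBuilder constructed its own database connection internally,
    # so a unit test could not run without a real database.
    class ReportBuilder:
        def __init__(self, repository):
            # Any object with a fetch_totals() method will do -- real or fake.
            self._repository = repository

        def build(self):
            return sum(self._repository.fetch_totals())

    class FakeRepository:
        def fetch_totals(self):
            return [10, 20, 12]

    def test_build_sums_the_totals():
        assert ReportBuilder(FakeRepository()).build() == 42

    if __name__ == "__main__":
        test_build_sums_the_totals()
        print("ok")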
Note that this does cover your second paragraph's issues: if a section of the code is time-critical it should be well covered by "load tests", which, like the more usual correctness tests, should cover both specific units (performance-wise; correctness-checking is other tests' business) AND end-to-end operations -- the equivalent of unit tests and integration tests if one were talking about correctness rather than performance.
Multi-tasking code with subtle sync issues can be a nightmare as no test can really make you entirely confident about it -- no other refactoring (that might in any way affect any fragile sync that just appears to be working now) should be considered BEFORE one intended to make the synchronization much, MUCH more robust and sound (message-passing through guaranteed-threadsafe queues being BY FAR my favorite design pattern in this regard;-).
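A minimal sketch of that message-passing style, assuming Python for illustration: the worker communicates with its caller only through thread-safe queues, so there is no shared mutable state to guard with locks.

    import queue
    import threading

    def worker(jobs, results):
        # All communication goes through the two queues; no shared mutable state.
        while True:
            item = jobs.get()
            if item is None:      # sentinel value tells the worker to shut down
                break
            results.put(item * item)

    jobs, results = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(jobs, results))
    t.start()

    for n in range(5):
        jobs.put(n)
    jobs.put(None)                # ask the worker to stop
    t.join()

    print(sorted(results.get() for _ in range(results.qsize())))  # [0, 1, 4, 9, 16]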
Hmmm - I disagree with the above (1st response). Given code with no tests, you may refactor it to make it more testable.
You do not refactor code when you cannot test the resulting code in time to deliver it such that it is still valuable to the recipient.
You do not refactor code when your refactoring will not improve the quality of the code. Quality is not subjective, although at times, design may be.
You do not refactor code when there is no business justification for making an alteration.
There are probably more, but hopefully you get the idea...
As Martin Fowler writes, you shouldn't refactor if a deadline is near. That time in the project is better spent flushing out bugs than improving design (refactoring). Do the refactoring you omitted directly after the deadline has passed.
Refactoring is not good in and of itself. Rather, its purpose is to improve code quality so that it can be maintained more cheaply and with less risk of adding defects. For actively developed code, the benefits of refactoring are worth the cost. For frozen code that there is no intention to do any further work on, refactoring yields no benefit.
Even for live code, refactoring has its own risks, which unit tests can minimize. It also has its own place in the development cycle, which is towards the front, where it's less disruptive. The best time and place for refactoring is just before you start to make major changes to some otherwise brittle code.
When it is not cost-effective. There's a guy at the place I work who loves refactoring. Making code perfect makes him very happy. He can check out a current project tree and go to town on it, moving functions and classes around and tightening things up so they look great, have better flow, and are more extensible in the future.
Unfortunately, it's not worth the money. If he spends a week refactoring some classes into more functional units that may be easier to work with in the future, that's a week's worth of salary lost to the company with no noticeable bottom-line improvements.
Code will never, ever be absolutely perfect. You learn to live with it, and keep your hands off something that could be done better, but perhaps isn't worth the time.
If the code seems very difficult to refactor without breaking, that's the most important code to refactor!
If there aren't any tests, write some as you refactor.
Honestly, the one case is where you are forbidden to touch some code by management/customer/SomeoneImportant, and when that happens I consider the project broken.
Here is my experience:
Don't refactor:
When you don't have a test suite accompanying the code you want to refactor. You might want to develop the test suite first instead.
When your manager doesn't really care about the maintainability and extensibility of the current code base, and instead cares mainly about whether the product can be delivered on schedule, especially on a project with a short and tight schedule.
If you stick to the principle that everything that you do should add value for the client/ business, then the times you should not refactor are the following:
Code that works and no new development is planned.
Code that is good enough / works and refactoring simply represents gold plating.
The cost of refactoring is higher than living with the existing code.
The cost of refactoring is higher than rewriting the code from scratch.
Some of the other answers say that you should not refactor code that does not have unit tests. If code needs refactoring, you should refactor it; you must, however, write tests first. If the code is written in a way that makes it difficult to test, it should be rewritten (in a perfect world).
When you've got other stuff to build. I always feel like refactoring an existing system when I'm supposed to be doing something else.
There's always a balance to be had between fixing or adding to code and refactoring. However, this balance is so far in favor of refactoring that I don't think I've ever been on a team that refactored too much. Chances are, if you think you're erring on the side of refactoring too much, you're right on the money.
Of course, the biggest determining factor is how close the deadline is. If a deadline is imminent, requirements come first.
Isn't the need to refactor code largely based on the propensity of people to cut and paste code rather than thinking the solution through, and doing the factoring in advance? In other words, whenever you feel the need to cut & paste some code, merely make that chunk of code a function, and document it.
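A tiny, made-up example of that habit: rather than pasting the chunk a second time and editing one value in place, the chunk becomes a documented function and the value that would have changed between copies becomes the parameter.

    def net_price(gross, tax_rate):
        """The pasted chunk, now in one documented place; tax_rate is the value
        that used to be the 'one or two trivial changes' between copies."""
        return round(gross * (1 + tax_rate), 2)

    domestic_total = net_price(100.00, 0.07)   # 107.0
    export_total = net_price(100.00, 0.00)     # 100.0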
I have had to maintain way too much code where people found it easier to cut and paste a whole function, only to make one or two trivial changes, which could easily have been parametrized. But like many others' experience, trying to refactor some of this code would have taken a LOT of time and been very risky.
I have 4 projects wherein a 10K line collection of functions was merely copied and modified as needed. This is a horrid maintenance nightmare. Especially when the code has LOTS of problems, e.g. hard-wired endianness assumptions, tons of global variables, etc. I feel bile in my throat just thinking about it.
Don't refactor if you don't have the time to test the refactored code before release. Refactoring can introduce bugs. If you have well-tested and relatively bug-free code, why take the risk? Wait until the next development cycle.
If you're stuck maintaining an old, flaky code base with no future beyond keeping it running until management can bite the bullet and do a rewrite, then refactoring is a lose-lose situation. First, the developer loses, because refactoring bad, flaky code is a nightmare; second, the business loses, because as the developer attempts to refactor, the software breaks in unexpected and unforeseen ways.
When you don't really know what the code is doing in the first place. And yes, I have seen people ignore that rule.
It's just a cost-benefit tradeoff. Estimate the cost to refactor, estimate the benefits, determine if you actually have the time to refactor given other tasks, determine if refactoring is the best time-benefit tradeoff. There may be other tasks more worth doing.

What do you think about the omnipresent "Test, Test, Test!" principle?

In the old days programming used to involve less guesswork. I would write some lines of code and be 100% certain at a glance about what the code did and did not do. Errors were mostly typos, not functional mistakes.
In recent years I believe there has been a trend toward this "trial-and-error" programming: write the code (as if in draft), and then debug iteratively until the program's behavior appears to comply with the requirements. Test, and test again, and then again.
Funny thing is, in my Visual Studio the "Run" button has been replaced by a button labelled "Debug" (= I know you have some bugs!). I have to admit that in several apps that I write I cannot guarantee a bug-free code.
What do you think ? Or maybe our systems are now overly complicated (browser/OS/Service Pack compatibilities, etc etc) and this justifies testing on all types of environments.
I've experienced the opposite, actually. Whereas it used to be a case of running until it worked, I now unit test until the tests pass... and this seems to be at least a reasonably common transition, as far as I can see.
I have to say that code which worked first time with only typos has never been the norm in my experience. The difference is that now I can find the problems much more quickly, and also spot if old problems come back. I can sometimes manage pretty short and simple bits of code with no errors (and posting on Stack Overflow has improved that ability) but large, complex systems? Heck no.
To answer the title of your post - the "test, test, test" principle is a good one, in my view... but I don't associate that with running the whole program repeatedly. I associate it with running unit tests frequently. I rarely need to use the debugger for unit tests - usually a failure makes the cause suitably obvious by inspection, because only a small amount of code is being tested.
The one word answer is "Complexity". The real answer is "Unnecessary Complexity"!
Accounting principles have not changed in the past 30 years. Why, then, is writing an accounting system so much more difficult today? It is good to have a graphical user interface, but do we have to go overboard?
Software development has been caught in a vicious circle for many years. The complexity is feeding itself and instead of reducing it we simply hide it under layers and layers of wrappers. Eventually something is going to give.
When we favor form over function, we have to pay the price.
Could it be that in later years developers have come to the realization that the "100% certainty" might not actually be correct? Developing software is very complex, and even though the tools have evolved over the years, so has our realization that writing good code is hard. True, debugging and automated unit tests have made us more productive, but we still produce bugs, just as we did back then, only now we have different tools to catch them with.
You may write code that you think you know 100% what it does and does not do, but there is always that edge case that you haven't thought of, or the odd exception thrown that you don't expect. Sometimes trial-and-error programming can be a helpful tool to narrow down a problem, with the debugger's help.
It's important to know what tools are available to you to help produce code with minimal bugs.
I have found that the Test-Test approach helps me design the code. Sometimes the work that has to be done is too complex to do it all in one. Testing forces me to split it into smaller parts and as I solve these I am able to put them together into a larger whole.
I think the advantage comes in an indirect way: When you embrace tests and unit tests, you have to write your application in such a way that you can actually write tests:
Classes need to be written in such a way that you can instantiate a single object without the whole application and OS around it, but just a few helper objects. This means you need to minimize the dependencies, and make all communication to the surrounding system explicit.
Implementing the test cases means that you have to find a minimum sequence of commands and calls that makes your class do something meaningful. This often points to awkward design decisions, or shows you that classes are very difficult to use for certain purposes.
All in all, when you embrace tests, you end up with a system that has a minimum of interdependencies between its components, and the test cases serve as documentation of how to use your components.
Testing (executing your system) tells you something about "the presence of bugs but NOT about the absence of them" (AFAIK this phrase was coined by Dijkstra). It points to the idea that the strength of your test suite is the key to testing: "You have so many test cases that you can say that many bugs do not exist. This implies that big parts of your software work as expected."
Some examples for having a strong/mighty test-suite:
A lot of code is executed by your unit tests (the traditional coverage term)
You have no false-negative tests (tests which show green but in fact should be red). False-negative tests are evil, because they give you a wrong sense of test-case quality. For details on good test asserts and false negatives see also blog-entry#1 and blog-entry#2; a small illustration also follows this list.
The requirements are well understood (I have seen a lot of cases where an automated test was testing the wrong thing and the developer had misunderstood the requirement from the business). For the developer it was green, but for the business the system was not working as expected (another kind of false-negative example, but on a higher level).
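As a small, invented illustration of a false-negative test: the test below exercises a deliberately broken function yet shows green, because the assert never really checks the result.

    import unittest

    def apply_discount(price, percent):
        # Deliberately buggy for illustration: subtracts the raw percent
        # instead of percent/100 of the price.
        return price - percent

    class DiscountTest(unittest.TestCase):
        def test_false_negative(self):
            # Green even though apply_discount is broken; a meaningful assert
            # such as assertEqual(45, apply_discount(50, 10)) would show red.
            result = apply_discount(50, 10)
            self.assertIsNotNone(result)

    if __name__ == "__main__":
        unittest.main()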
In a sense the correctness of a program is only proven when it is done with mathematical proofs (which only pays off for life-critical and money-intensive systems). Still, you can achieve a lot with automated testing (apart from unit testing, automated integration testing has always helped a lot).
Regarding debugging: I don't use debugging as often as I used to, but sometimes when adding new functionality to code (my new test case shows green) I break other test cases. From the assert I instantly see that something went wrong, but I still haven't located the bug. For locating the bug, debugging is still helpful (with the red test case I execute the problematic code paths, and with the debugger I locate the bug).
If you're interested in test-automation have a look at masterpiece xUnit Test patterns.
I've read one book ("TDD by Example" by Kent Beck) which indeed seems to take that "trial and error" approach to an extreme: but it's more like "make the unit tests work". Still, I couldn't get myself to finish this book - a rare occurrence, especially since I really hoped to get a better understanding. Even so, committing obviously imbecile code to be improved later makes me shiver.
Science: Automated tests have their advantages. However, they are not the silver bullet they are claimed to be. No single test method is sufficient to find enough defects, and other methods have a better detection rate.
Gut feel: Our problems are facets of ever-increasing complexity. Complexity highly correlates with the amount of code we have to manage. In this light, TDD attempts to solve the problem of too much code by writing even more code.
Advantages: We now have an established formalism to make testing repeatable, accountable and immediately documented. It is definitely a way out of the "works on my machine" and "strange, it worked yesterday, I'll give you the latest DLL" trap.
I currently practice Test Driven Development (TDD), or at least write many unit tests to verify that most/all of my code behaves the way I expect it to behave. Taking this approach forces me to look at my program from the perspective of the consumer. Also, as I write tests, I often think of boundary limits, additional scenarios that I didn't originally envision, etc.
I've now come to the point where I'm reluctant to make changes to older programs, because I'm afraid that I'll break something. Regression testing is onerous, compared with running a suite of unit tests.

TDD as a defect-reduction strategy

Can TDD be successful as a defect-reduction strategy without incorporating guidance on test case construction and evaluation?
IMO, my answer would be no. For TDD to be effective, there have to be guidelines around what constitutes a test and what it means for something to be reasonably tested. Without a guideline, some developers may end up with tons of defects because their initial tests cover a very small set of inputs, e.g. only the valid ones, which can cause the idea of using TDD to become worthless.
Test driven development can reduce defects in a QA cycle simply because testing allows developers to find defects prior to releasing their code to the QA team.
But without guidance on how to test you really won't be able to create any kind of long-term benefit since haphazard testing will only prevent defects by blind luck. Good tests based on good guidance can go a long way towards reducing defects.
If you don't have tests to reproduce defects, how do you know that "defect reduction" has taken place?
Of course you do have tests - they're just manual, and thus tedious and time-consuming to reproduce...
Here's a study (warning: link to PDF file) done by microsoft on some of their internal teams.
A quote from it:
The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD
That's the only actual empirical study done on TDD/Unit testing that I'm aware of, but there are plenty of people (including myself) that will anecdotally tell you that TDD (and unit testing in general) will definitely provide an increase in the quality of your code.
From my own experience, there is definitely a reduction in the number of defects, but the numbers feel like they would be far less than even the 40% from the Microsoft study; This is (again, based solely on what I've seen) primarily because most corporate developers have little to no experience with Unit Testing (let alone TDD), and will invariably do a bad job of it while they are learning. Actually learning how to do TDD well requires at least a solid year of experience, and I've never worked in (or even seen) a team which actually had a full complement of developers with that experience.
You may want to pick up a copy of Gerard Meszaros' xUnit Test Patterns. Specifically, Chapter 5 might apply most directly to your question, where it covers the Principles of Test Automation. Some of those principles, I think, apply to your scenario, where there needs to be some sort of guidance, common interest, or some sort of implied "do this or fear the wrath of ...":
Principle: Communicate Intent
Tests need to be easy to maintain, readily apparent what the test is doing.
Principle: Keep Tests Independent
Small tests that cover one small piece. Tests should not interfere with each other.
Principle: Minimize Test Overlap
Need to design tests that cover a specific piece, and do not create tests that exercise the same paths repeatedly.
Principle: Verify One Condition Per Test
This one seemed simple enough for me, but is one of the most challenging in my experiences for people to grasp. I may write tests that have some multiple asserts, but I try to keep all those together around the specific area. When it comes to hunting down failures and other test issues, it is MUCH easier to fiddle with a single test that is testing a specific path instead of several different paths all clumped into a single test method.
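A small, invented illustration of that principle: when the clumped version fails you only learn that something is wrong somewhere, while the focused tests point straight at the broken path.

    import unittest

    def parse_version(text):
        # Parse "major.minor" into a (major, minor) tuple of ints.
        major, minor = text.split(".")
        return int(major), int(minor)

    class ParseVersionClumpedTest(unittest.TestCase):
        def test_everything_at_once(self):
            # One failure here could mean any of three different problems.
            self.assertEqual((1, 2), parse_version("1.2"))
            self.assertEqual((0, 0), parse_version("0.0"))
            with self.assertRaises(ValueError):
                parse_version("not-a-version")

    class ParseVersionFocusedTest(unittest.TestCase):
        def test_splits_major_and_minor(self):
            self.assertEqual((1, 2), parse_version("1.2"))

        def test_accepts_zero_versions(self):
            self.assertEqual((0, 0), parse_version("0.0"))

        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_version("not-a-version")

    if __name__ == "__main__":
        unittest.main()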
Further relating to my experiences, if we want to talk about the corporate developer, I have seen some folks that are interested and take the initiative to learn something new, but more often than not, you have folks that like to go with the flow, and like to have things laid out for them. Without some sort of direction, be it a mandate from a senior engineer-type, or some sort of joint-team discussions (see Practices of an Agile Developer for some ideas such as lunch time meetings once a week), I think your chance of success would be limited.
In a team situation, where your code is likely to be used by someone else, the tests have a fringe benefit that can reduce defects without necessarily even improving anyone's code.
Where documentation is poor (which during development is "often"), the tests act as a crib for how you expect your code to be called. So, even in cases where the code is really very fragile, TDD can still reduce the number of defects raised against the end product -- by ensuring your colleagues can see well-written tests before they use your code, you've ensured they know how you intend your code to be used before they call it. They are thus less likely to call your code in an unexpected sequence / without having configured something you expected (but forgot to write a check for) as a prerequisite. Thus they are less likely to trigger the failure condition, and you are less likely to see them or the (human) test team raising a defect because something crashed.
Of course, whether you see that "there's a hidden bug in there, it's just not being called yet" as a problem itself is another good question.

What are some reasons why a sole developer should use TDD? [closed]

I'm a contract programmer with lots of experience. I'm used to being hired by a client to go in and do a software project of one form or another on my own, usually from nothing. That means a clean slate, almost every time. I can bring in libraries I've developed to get a quick start, but they're always optional. (and depend on getting the right IP clauses in the contract) Many times I can specify or even design the hardware platform... so we're talking serious freedom here.
I can see uses for constructing automated tests for certain code: Libraries with more than trivial functionality, core functionality with a high number of references, etc. Basically, as the value of a piece of code goes up through heavy use, I can see it would be more and more valuable to automatically test that code so that I know I don't break it.
However, in my situation, I find it hard to rationalize anything more than that. I'll adopt things as they prove useful, but I'm not about to blindly follow anything.
I find many of the things I do in 'maintenance' are actually small design changes. In this case, the tests would not have saved me anything and now they'd have to change too. A highly iterative, stub-first design approach works very well for me. I can't see actually saving myself that much time with more extensive tests.
Hobby projects are even harder to justify... they're usually anything from weekenders up to a say month long. Edge-case bugs rarely matter, it's all about playing with something.
Reading questions such as this one, the most-voted response seems to say that in that poster's experience/opinion TDD actually wastes time if you've got fewer than 5 people (even assuming a certain level of competence/experience with TDD). However, that appears to cover initial development time, not maintenance. It's not clear how TDD stacks up over the entire life cycle of a project.
I think TDD could be a good step in the worthwhile goal of improving the quality of the products of our industry as a whole. Idealism on its own is no longer all that effective at motivating me, though.
I do think TDD would be a good approach in large teams, or any size team containing at least one unreliable programmer. That's not my question.
Why would a sole developer with a good track record adopt TDD?
I'd love to hear of any kind of metrics done (formally or not) on TDD... focusing on solo developers or very small teams.
Failing that, anecdotes of your personal experiences would be nice, too. :)
Please avoid stating opinion without experience to back it. Let's not make this an ideology war. Also, skip the "greater employment options" argument. This is simply an efficiency question.
I'm not about to blindly follow anything.
That's the right attitude. I use TDD all the time, but I don't adhere to it as strictly as some.
The best argument (in my mind) in favor of TDD is that you get a set of tests you can run when you finally get to the refactoring and maintenance phases of your project. If this is your only reason for using TDD, then you can write the tests any time you want, instead of blindly following the methodology.
The other reason I use TDD is that writing tests gets me thinking about my API up front. I'm forced to think about how I'm going to use a class before I write it. Getting my head into the project at this high level works for me. There are other ways to do this, and if you've found other methods (there are plenty) to do the same thing, then I'd say keep doing what works for you.
I find it even more useful when flying solo. With nobody around to bounce ideas off of and nobody around to perform peer reviews, you will need some assurance that your code is solid. TDD/BDD will provide that assurance for you. TDD is a bit controversial, though. Others may completely disagree with what I'm saying.
EDIT: Might I add that if done right, you can actually generate specifications for your software at the same time you write tests. This is a great side effect of BDD. You can make yourself look like super developer if you're cranking out solid code along with specs, all on your own.
Ok my turn... I'd do TDD even on my own (for non-spike/experimental/prototype code) because
Think before you leap: forces me to think about what I want to get done before I start cranking out code. What am I trying to accomplish here? 'If I assume I already had this piece.. how would I expect it to work?' Encourages interface-in design of objects.
Easier to change: I can make modifications with confidence.. 'I didn't break anything in steps 1-10 when I changed step 5.' Regression testing is instantaneous.
Better designs emerge: I've found better designs emerging without me investing effort in a design activity. test-first + Refactoring lead to loosely coupled, minimal classes with minimal methods.. no overengineering.. no YAGNI code. The classes have better public interfaces, small methods and are more readable. This is kind of a zen thing.. you only notice you got it when you 'get it'.
The debugger is not my crutch anymore: I know what my program does.. without having to spend hours stepping thru my own code. Nowadays, if I spend more than 10 minutes with the debugger, mental alarms start ringing.
Helps me go home on time: I have noticed a marked decrease in the number of bugs in my code since TDD.. even if the assert is like a Console trace and not an xUnit-type AT.
Productivity / Flow: it helps me to identify the next discrete baby-step that will take me towards done... keeps the snowball rolling. TDD helps me get into a rhythm (or what XPers call flow) quicker. I get a bigger chunk of quality work done per unit time than before. The red-green-refactor cycle turns into... a kind of perpetual motion machine.
I can prove that my code works at the touch of a button
Practice makes perfect: I find myself learning & spotting dragons faster.. with more TDD time under my belt. Maybe dissonance.. but I feel that TDD has made me a better programmer even when I don't go test first. Spotting refactoring opportunities has become second nature...
I'll update if I think of any more.. this is what I came up with in the last 2 minutes of reflection.
I'm also a contract programmer. Here are my 12 Reasons Why I Love Unit Tests.
My best experience with TDD is centered around the pyftpdlib project. Most of the development is done by the original author, and I've made a few small contributions, but it's essentially a solo project. The test suite for the project is very thorough, and tests all the major features of the FTPd library. Before checking in changes or releasing a version, all tests are checked, and when a new feature is added, the test suite is always updated as well.
As a result of this approach, this is the only project I've ever worked on that didn't have showstopper bugs appear after a new release, have changes checked in that broke a major feature, etc. The code is very solid and I've been consistently impressed with how few bug reports have been opened during the life of the project. I (and the original author) attribute much of this success to the comprehensive test suite and the ability to test every major code path at will.
From a logical perspective, any code you write has to be tested, and without TDD then you'll be testing it yourself manually. On the flip side to pyftpdlib, the worst code by number of bugs and frequency of major issues, is code that is/was solely being tested by the developers and QA trying out new features manually. Things don't get tested because of time crunch or falling through the cracks. Old code paths are forgotten and even the oldest stable features end up breaking, major releases end up with important features non-functional. etc. Manual testing is critically important for verification and some randomization of testing, but based on my experiences I'd say that it's essential to have both manual testing and a carefully constructed unit test framework. Between the two approaches the gaps in coverage are smaller, and your likelihood of problems can only be reduced.
It does not matter whether you are the sole developer or not. You have to think of it from the application's point of view. All applications need to work properly, all applications need to be maintained, and all applications need to be less buggy. There are of course certain scenarios where a TDD approach might not suit you. This is when the deadline is approaching very fast and there is no time to perform unit testing.
Anyways, TDD does not depend on a solo or a team environment. It depends on the application as a whole.
I don't have an enormous amount of experience, but I have had the experience of seeing sharply-contrasted approaches to testing.
In one job, there was no automated testing. "Testing" consisted of poking around in the application, trying whatever popped in your head, to see if it broke. Needless to say, it was easy for flat-out-broken code to reach our production server.
In my current job, there is lots of automated testing, and a full CI-system. Now when code gets broken, it is immediately obvious. Not only that, but as I work, the tests really document what features are working in my code, and what haven't yet. It gives me great confidence to be able to add new features, knowing that if I break existing ones, it won't go unnoticed.
So, to me, it depends not so much on the size of the team, but the size of the application. Can you keep track of every part of the application? Every requirement? Every test you need to run to make sure the application is working? What does it even mean to say that the application is "working", if you don't have tests to prove it?
Just my $0.02.
Tests allow you to refactor with confidence that you are not breaking the system. Writing the tests first allows the tests to define what is working behavior for the system. Any behavior that isn't defined by the test is by definition a by-product and allowed to change when refactoring. Writing tests first also drive the design in good directions. To support testability you find that you need to decouple classes, use interfaces, and follow good pattern (Inversion of Control, for instance) to make your code easily testable. If you write tests afterwards, you can't be sure that you've covered all the behavior expected of your system in the tests. You also find that some things are hard to test because of the design -- since it was likely developed without testing in mind -- and are tempted to skimp on or omit tests.
I generally work solo and mostly do TDD -- the cases where I don't are simply where I fail to live up to my practices or haven't yet found a good way that works for me to do TDD, for example with web interfaces.
TDD is not about testing; it's about writing code. As such, it provides a lot of benefits to even a single developer. For many developers it is a mindshift to write more robust code. For example, how often do you think "Now how can this code fail?" after writing code without TDD? For many developers, the answer to that question is never. For TDD practitioners it shifts the mindset to doing things like checking whether objects or strings are null before doing something with them, because you are writing tests specifically to do that (break the code).
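A hypothetical Python sketch of that shift (None standing in for null): the "how can this fail?" tests are written first, and they are what force the guard clause into the code.

    import unittest

    def initials(full_name):
        # The guard exists because the tests below were written first,
        # specifically to try to break the function.
        if not full_name:
            return ""
        return "".join(part[0].upper() for part in full_name.split())

    class InitialsTest(unittest.TestCase):
        def test_normal_name(self):
            self.assertEqual("AL", initials("ada lovelace"))

        def test_none_input(self):
            self.assertEqual("", initials(None))

        def test_empty_string(self):
            self.assertEqual("", initials(""))

    if __name__ == "__main__":
        unittest.main()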
Another major reason is change. Anytime you deal with a customer, they can never seem to make up their minds. The only constant is change. TDD helps as a "safety net" to find all the other areas that could break. Even on small projects this can keep you from burning up precious time in the debugger.
I could go on and on, but I think saying that TDD is more about writing code than anything else should be enough to justify its use as a sole developer.
I tend to agree with the validity of your point about the overhead of TDD for 'one developer' or 'hobby' projects not justifying the expenses.
You have to consider however that most best practices are relevant and useful if they are consistently applied for a long period of time.
For example TDD is saving you testing/bugfixing time in a long run, not within 5 minutes after you've created the first unit test.
You're a contract programmer, which means that you will leave your current project when it is finished and will switch to something else, most likely in another company. Your current client will have to maintain and support your application. If you do not leave the support team a good framework to work with, they will be stuck. TDD will help the project to be sustainable. It will increase the stability of the code base, so other people with less experience will not be able to do too much damage trying to change it.
The same applies to hobby projects. You may get tired of a project and want to pass it to someone else. You might become commercially successful (think Craigslist) and have 5 more people working besides you.
Investment in proper process always pays off, even if the payoff is just gained experience. But most of the time you will be grateful that, when you started a new project, you decided to do it properly.
You have to consider OTHER people when doing something. You have to think ahead, plan for growth, plan for sustainability.
If you don't want to do that - stick to the cowboy coding, it's much simpler this way.
P.S. The same thing applies to other practices:
If you don't comment your code and you have ideal memory you'll be fine but someone else reading your code will not.
If you don't document your discussions with the customer somebody else will not know anything about a crucial decision you made
etc ad infinitum
I no longer refactor anything without a reasonable set of unit tests.
I don't do full-on TDD with unit tests first and code second. I do CALTAL -- Code A Little, Test A Little -- development. Generally, code goes first, but not always.
When I find that I've got to refactor, I make sure I've got enough tests and then I hack away at the structure with complete confidence that I don't have to keep the entire old-architecture-becomes-new-architecture plan in my head. I just have to get the tests to pass again.
I refactor the important bits. Get the existing suite of tests to pass.
Then I realize I forgot something, and I'm back to CALTAL development on the new stuff.
Then I see things I forgot to delete -- but are they really unused everywhere? Delete 'em and see what fails in the testing.
Just yesterday -- part way through a big refactoring -- I realized that I still didn't have the exact right design. But the tests still had to pass, so I was free to refactor my refactoring before I was even done with the first refactoring. (whew!) And it all worked nicely because I had a set of tests to validate the changes against.
For flying solo TDD is my copilot.
TDD lets me more clearly define the problem in my head. That helps me focus on implementing just the functionality that is required, and nothing more. It also helps me create a better API, because I'm writing a "client" before I write the code itself. I can also refactor without having to worry about breaking anything.
I'm going to answer this question quite quickly, and hopefully you will start to see some of the reasoning, even if you still disagree. :)
If you are lucky enough to be on a long-running project, then there will be times when you want to, for example, write your data tier first, then maybe the business tier, before moving on up the stack. If your client then makes a requirement change that requires re-work on your data layer, a set of unit tests on the data layer will ensure that your methods don't fail in undesirable ways (assuming you update the tests to reflect the new requirements). However, you are likely to be calling the data layer method from the business layer as well, and possibly in several places.
Let's assume you have 3 calls to a method in the business layer, but you only modify 2. In the third method, you may still be getting data back from your data layer that appears to be valid, but may break some of the assumptions you coded months before. Unit tests at this level (and above) should have been designed to spot broken assumptions, and in failing they should highlight to you that there is a section of code that needs to be revisited.
I'm hoping that this very simplistic example will be enough to get you thinking about TDD a little more, and that it might create a spark that makes you consider using it. Of course, if you still don't see the point, and you are confident in your own abilities to keep track of many thousands of lines of code, then I have no place to tell you you should start TDD.
The point about writing the tests first is that it enforces the requirements and design decisions you are making. When I mod the code, I want to make sure those are still enforced and it is easy enough to "break" something without getting a compiler or run-time error.
I have a test-first approach because I want to have a high degree of confidence in my code. Granted, the tests need to be good tests or they don't enforce anything.
I've got some pretty large code bases that I work on and there is a lot of non-trivial stuff going on. It is easy enough to make changes that ripple and suddenly X happens when X should never happen. My tests have saved me on several occasions from making a critical (but subtle) error that might have gone unnoticed by human testers.
When the tests do fail, they are opportunities to look at them and the production code and make sure that it is correct. Sometimes the design changes and the tests will need to be modified. Sometimes I'll write something that passes 99 out of 100 tests. That 1 test that didn't pass is like a co-worker reviewing my code (in a sense) to make sure I'm still building what I'm supposed to be building.
I feel that as a solo developer on a project, especially a larger one, you tend to be spread pretty thin.
You are in the middle of a large refactoring when all of a sudden a couple of critical bugs are detected that for some reason did not show up during pre-release testing. In this case you have to drop everything and fix them and after having spent two weeks tearing your hair out you can finally get back to whatever you were doing before.
A week later one of your largest customers realizes that they absolutely must have this cool new shiny feature or otherwise they won't place the order for those 1M units they should have already ordered a month ago.
Now, three months later you don't even remember why you started refactoring in the first place let alone what the code you are refactoring was supposed to do. Thank god you did a good job writing those unit tests because at least they tell you that your refactored code is still doing what it was supposed to do.
Lather, rinse, repeat.
..story of my life for the past 6 months. :-/
A sole developer should use TDD on his project (track record does not matter), since eventually the project could be passed to some other developer, or more developers could be brought in.
New people will have an extremely hard time working with the code without the tests. They will break things.
Does your client own the source code when you deliver the product? If you can convince them that delivering the product with unit tests adds value, then you are up-selling your services and delivering a better product. From the client's perspective, test coverage not only ensures quality, it allows future maintainers to understand the code much more readily since the tests isolate functionality from the UI.
I think TDD as a methodology is not just about "having tests when making changes", so it does not depend on team size or project size. It's about noting one's expectations about what a piece of code/an application does BEFORE one starts to really think about HOW the noted behaviour is implemented. The main focus of TDD is not only having tests in place for written code but writing less code, because you just do what makes the test green (and refactor later).
If you're like me and find it quite hard to think about what a part/the whole application does WITHOUT thinking about how to implement it, I think it's fine to write your tests after your code and thus let the code "drive" the tests.
If your question isn't so much about test-first (TDD) versus test-after (good coding?), I think testing should be standard practice for any developer, whether alone or in a big team, who creates code which stays in production longer than three months. In my experience that's the time span after which even the original author has to think hard about what those twenty lines of complex, super-optimized, but sparsely documented code really do. If you've got tests (which cover all paths through the code), there's less to think about - and less to err about, even years later...
Here are a few memes and my responses:
"TDD made me think about how it would fail, which made me a better programmer"
Given enough experience, being highly concerned with failure modes should naturally become part of your process anyway.
"Applications need to work properly"
This assumes you are able to test absolutely everything. You're not going to be any better at covering all possible tests correctly than you were at writing the functional code correctly in the first place. "Applications need to work better" is a much better argument. I agree with that, but it's idealistic and not quite tangible enough to motivate as much as I wish it would. Metrics/anecdotes would be great here.
"Worked great for my <library component X>"
I said in the question I saw value in these cases, but thanks for the anecdote.
"Think of the next developer"
This is probably one of the best arguments to me. However, it is quite likely that the next developer wouldn't practice TDD either, and it would therefore be a waste or possibly even a burden in that case. Back-door evangelism is what it amounts to there. I'm quite sure a TDD developer would really appreciate it, though.
How much are you going to appreciate projects done in deprecated must-do methodologies when you inherit one? RUP, anyone? Think of what TDD means to next developer if TDD isn't as great as everyone thinks it is.
"Refactoring is a lot easier"
Refactoring is a skill like any other, and iterative development certainly requires this skill. I tend to throw away considerable amounts of code if I think the new design will save time in the long run, and it feels like there would be an awful number of tests thrown away too. Which is more efficient? I don't know.
...
I would probably recommend some level of TDD to anyone new... but I'm still having trouble with the benefits for anyone who's been around the block a few times already. I will probably start adding automated tests to libraries. It's possible that after doing that, I'll see more value in doing it generally.
Motivated self interest.
In my case, sole developer translates to small business owner. I've written a reasonable amount of library code to (ostensibly) make my life easier. A lot of these routines and classes aren't rocket science, so I can be pretty sure they work properly (at least in most cases) by reviewing the code, doing some spot testing, and debugging into the methods to make sure they behave the way I think they do. Brute force, if you will. Life is good.
Over time, this library grows and gets used in more projects for different customers. Testing gets more time consuming. Especially cases where I'm (hopefully) fixing bugs and (even more hopefully) not breaking something else. And this isn't just for bugs in my code. I have to be careful adding functionality (customers keep asking for more "stuff") or making sure code still works when moved to a new version of my compiler (Delphi!), third party code, runtime environment or operating system.
Taken to the extreme, I could spend more time reviewing old code than working on new (read: billable) projects. Think of it as the angle of repose of software (how high can you stack untested software before it falls over :).
Techniques like TDD gives me methods and classes that are more thoughtfully designed, more thoroughly tested (before the customer gets them) and need less maintenance going forward.
Ultimately, it translates to less time doing maintenance and more time to spend doing things that are more profitable, more interesting (almost anything) and more important (like family).
We are all developers with a good track record. After all, we are all reading Stack Overflow. And many of us use TDD, and perhaps those people have a great track record. I get hired because people want someone who writes great test automation and can teach that to others. When working alone, I do TDD on my coding projects at home, because I found that if I don't, I end up spending time doing manual testing or even debugging, and who needs that. (Perhaps those people have only good track records. I don't know.)
When it comes to being a good automobile driver, everyone believes they are a “good driver.” This is a cognitive bias all drivers have. Programmers have their own biases. The reasons developers such as the OP don’t do TDD are covered in this Agile Thoughts podcast series. The podcast archive also has content on test automation concepts such as the test pyramid, and an intro about what is TDD and why write tests first starting with episode 9 in the podcast archive.

Resources