I am about to start planning a major refactoring of our codebase, and I would like to get some opinions and answers to some questions. I have seen quite a few discussions on similar topics (such as https://stackoverflow.com/questions/108141/how-do-i-work-effectively-with-very-messy-legacy-code and Strategy for large scale refactoring), but I have some specific questions (at the bottom):
We develop a complex application. There are some 25 developers working on the codebase. The total man-years put into the product to date are roughly 150.
The current codebase is a single project, built with ant. The high level goal of the project I'm embarking on is to modularize the codebase into its various infrastructures and applicative components.
There is currently no good separation between the various logical components, so it's clear that any modularization effort will need to include some API definitions and serious untangling to enable the separation.
Quality standards are low - there are almost no tests, and definitely no tests running as part of the build process.
Another very important point is that this project needs to take place in parallel to active product development and versions being shipped to customers.
Goals of project:
allow reuse of components across different projects
separate application from infrastructure, and allow them to evolve independently
improve testability (by creating APIs)
simplify developers' dev env (less code checked out and compiled)
My thoughts and questions:
What are your thoughts regarding the project's goals? Anything you would change?
Do you have experience with such projects? What would be your recommendations?
I'm very concerned about the lack of tests - hence I have no way of knowing that the refactoring process is not breaking anything as I go. This is a catch-22, because one of the goals of this project is to make our code more testable...
I was very influenced by Michael Feathers' Working Effectively with Legacy Code. According to it, a bottom-up approach is the way to solve my problem - don't jump head first into the codebase and try to fix it, but rather start small by adding unit tests around new code for several months, and watch how the code (and team) become much better, to the extent that abstractions will emerge, APIs will surface, etc., and essentially - the modularization will start happening by itself.
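To make that concrete, here is a minimal sketch of the kind of characterization test Feathers describes (JUnit; the class and values are hypothetical stand-ins for our code - the point is that the assertion pins whatever the code does today, bugs included):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical stand-in for one of our tangled legacy classes.
class LegacyPriceCalculator {
    double discountedPrice(double base, String tier) {
        // imagine a few hundred lines of tangled logic here
        return "GOLD".equals(tier) ? base * 0.925 : base;
    }
}

public class LegacyPriceCalculatorTest {
    // Characterization test: the expected value was captured by running the
    // existing code once and recording its output - not taken from any spec.
    @Test
    public void pinsCurrentDiscountBehavior() {
        LegacyPriceCalculator calc = new LegacyPriceCalculator();
        assertEquals(92.5, calc.discountedPrice(100.0, "GOLD"), 0.001);
    }
}
```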
Does anyone have experience with such a direction?
As seen in many other questions on this topic - the main problem here is managerial disbelief. "how is testing class by class (and spending a lot of time doing so) gonna bring us to a stable system? It's a nice theory which doesn't work in real life". Any tips on selling this?
Well I guess it's better now than later, but you've definitely got a task ahead of you. I was once on a team of three responsible for refactoring a product of similar size. It was procedural code, but I'll describe some of the issues we had that will apply similarly.
We started at the bottom, easing into it by picking functions that should have been highly reusable but weren't. We'd write a bunch of unit tests on the existing code (none existed at all!), but before long we faced our first big problem - the existing code had bugs that had been lying dormant.
Do we fix them? If we do, then we've gone beyond a refactoring. So we'd log an issue with the existing code hoping to get a fixed and freshly tested code base, but of course management decided there were more important priorities than fixing bugs that had never surfaced. Understandable.
So we thought we'd try fixing the bugs in our new code. Then we discovered that these bugs in the original code made other code work, so really were 'conceptual bugs' rather than 'functional bugs'. Well maybe. There were occasional intermittent spasms in the original software that had never been tracked down.
So then we changed tack and decided to keep the bugs in place, as a true refactoring should do. It's easy to unintentionally introduce bugs, it's far harder to do it intentionally!
The next problem was that the code was in such a mess that the initial unit tests we wrote had to change substantially to cater for the refactoring. In other words, two moving targets. Not good. Just writing the tests was taking ages and eroded our belief in the worthiness of the project. It really was something you just wanted to walk away from.
We found in the end we really had to tone down the extent of the refactoring if we were going to finish this millennium, which meant the codebase we dreamed of wouldn't be achieved. We declared that the most feasible solution was just to clean and trim the code and at least make it conceptually easier to understand for future developers to modify.
The reduced benefits of the limited refactoring were deemed not worth the effort by management, and given that similar reliability issues were being found in the hardware platform (an embedded project), the company decided it was their chance to renew the entire product, with the software written from scratch, in a new language, with objects. It was only the extensive system test specs in place from the original product that gave this a chance.
Clearly the absence of tests is going to make people nervous when you attempt to refactor the code. Where will anybody get any faith that your refactoring doesn't break the application? Most of the answers you'll get, I think, will be "this is gonna be very hard and not very successful", and this is largely because you are facing a huge manual task and no faith in the answer.
There are only two ways out.
Build a bunch of tests. Unfortunately, this will cost a lot of time and most managers don't see any value; after all, you've gotten along without them so far. Pointing back to the faith question won't help; you're still using a lot of time before anything useful happens. If they do let you build tests, you'll have the problem of evolving the tests as you refactor; they may not change functionality one bit, but as you build new APIs the tests will have to change to match the new APIs. That's additional work beyond refactoring the code base.
Automate the refactoring process. If you apply trustworthy automated transformations, you can argue (often unsuccessfully) that the refactored code preserves the original system function. The way to beat the unsuccessful argument is to write those tests (see the first method) and apply the refactoring process to both the application and the tests; as the application's structure changes, the tests have to change too. But they are just application code from the point of view of the automated machinery.
Not a lot of people do the latter; where do you get the tools that can do such things?
In fact, such tools exist. They are called program transformation tools and are used to carry out massive transformations on code.
Think of these as tools for literally refactoring in the large; because of scale, they tend not to be interactive.
It does take effort to configure them for the task at hand; you have to write custom rules to accomplish your custom desired result. You likely can't do this in a week, but it is a lot less work than manually modifying a large system. And you should consider that you have 150 man-years invested in the existing software; it took that long to make the mess. It seems reasonable that "some" effort, small in comparison, should be acceptable.
I only know of 3 such tools that have a chance of working on real code: TXL, Stratego/XT, and our tool, the DMS Software Reengineering Toolkit. The first two are academic products (although TXL has been used for commercial activities in the past); DMS is commercial.
DMS has been used for a wide variety of large-scale software analysis and massive transformation tasks. One task was automated translation between languages for the B-2 Stealth Bomber. Another, much closer to your refactoring problem, was the automated re-architecting of a large-scale component-based system (C++ components) from a legacy proprietary RTOS, with its idiosyncratic rules about how components are organized, to CORBA/RT, in which the component APIs had to be changed from ad hoc structures to CORBA-style facet and receptacle interfaces, as well as using CORBA/RT services in place of the legacy RTOS services. (These tasks were both done with 1-2 man-years of actual effort, by pretty smart and DMS-savvy guys.)
There's still the test-construction problem (both of the examples above had great system tests already). Here I'm going to go out on a limb. I believe there is hope in getting such tools to automate test generation by instrumenting running code to collect function input-output results. We've built all kinds of instrumentation for source code (obviously you have to compile it after instrumentation) and think we know how to do this. YMMV.
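In hand-rolled form, that instrumentation idea is nothing more than wrapping a legacy function, logging each (input, output) pair, and later replaying the log as regression tests. A toy sketch, with made-up names:

```java
import java.io.FileWriter;
import java.io.IOException;

// Hypothetical stand-in for a legacy routine we want to characterize.
class LegacyParser {
    static int parse(String s) { return s.trim().length(); }
}

public class IoRecorder {
    // Wrap the legacy call and append each (input, output) pair to a trace
    // file; the trace later supplies expected values for generated tests.
    public static int recordedParse(String input) throws IOException {
        int result = LegacyParser.parse(input);
        try (FileWriter log = new FileWriter("io-trace.csv", true)) {
            log.write('"' + input.replace("\"", "\"\"") + "\"," + result + '\n');
        }
        return result;
    }
}
```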
There is something you can do which is considerably less ambitious: identify the reusable parts of the code by finding out what has already been duplicated within it. Most software systems contain a lot of cloned code (our experience is 10-20%; I'm surprised by the PHP report of smaller numbers in another answer, and I suspect they are using a weak clone detector). Cloned code is a hint of a missing abstraction in the application software. If you can find the clones and see how they vary, you can pretty easily see how to abstract them into functions (or whatever) to make them explicit and reusable.
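As a tiny illustration of that abstraction step (hypothetical code): two near-miss clones that differ only in which field they sum collapse into one method, with the variation point made explicit as a parameter.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

class Order {
    double net, tax;
    double getNet() { return net; }
    double getTax() { return tax; }
}

class Totals {
    // Before: totalNet(orders) and totalTax(orders) were identical loops
    // summing different fields. After: one method, one explicit parameter.
    static double total(List<Order> orders, ToDoubleFunction<Order> field) {
        double sum = 0;
        for (Order o : orders) sum += field.applyAsDouble(o);
        return sum;
    }
    // Call sites become: total(orders, Order::getNet), total(orders, Order::getTax)
}
```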
Salion Inc. did clone detection and abstraction. The paper doesn't explore the abstraction activity; what Salion actually did was a periodic review of the detected clones, with manual remediation of the egregious ones, or of those where it made sense, into (often library) methods. The net result was that the code base actually shrank in size and the programmers became more effective because they had better ("more reusable") libraries.
They used our CloneDR, a tool for finding clones by using the program syntax as a guide. CloneDR finds exact clones and near misses (replacement of identifiers or statements) and provides a specific list of clone locations and clone parameterizations, regardless of layout and comments. You can see clone reports for a number of languages at the link. (I'm the originator and author of CloneDR, among my many hats.)
Regarding the "small clone percentage" for the PHP project discussed in another answer: I don't know what was being used as a clone detector. The only clone detector focused on PHP that I know of is PHPCPD, which IMHO is a terrible clone detector; it only finds exact clones, if I understand the claimed implementation. See the PHP example at our site for comparative purposes.
This is exactly what we've been doing for web2project for the past couple of years. We forked from an existing system (dotproject) that had terrible metrics: high cyclomatic complexity (low: 17, avg: 27, high: 195M), lots of duplicate code (8% of overall code), and zero tests.
Since the split, we've reduced duplicate code (2.1% overall), reduced the total code (200kloc to 155kloc), added nearly 500 unit tests, and improved cyclomatic complexity (low: 1, avg: 11, high: 145M). Yes, we still have a ways to go.
Our strategy is detailed in my slides here:
http://caseysoftware.com/blog/phpbenelux-2011-recap - Project Triage & Recovery; and here:
http://www.phparch.com/2010/11/codeworks-2010-slides/ - Unit Testing Strategies; and in various posts like this one:
http://caseysoftware.com/blog/technical-debt-doesn039t-disappear
And just to warn you.. it's not fun at first. It can be fun and satisfying once your metrics start improving, but that takes a while.
Good luck.
I'm in the process of doing a study on Test-Driven Development and one of the discussion points is the "Barrier to Entry" associated with TDD. Does anyone have any experience around this area, on any projects you've worked on that decided not to use TDD because the barrier to entry was too high?
From what I can tell the only barrier to entry is knowledge (and as such experience) of individual developers, with most not being entirely accustomed to the process and it being slightly alien. Financially it seems to be very appealing given most of the market leading tools are open source, freely available, well documented and well supported.
Thoughts/feelings appreciated.
Thanks,
EDIT - does anyone know of any high profile quotes of people advocating TDD? Would love to see how high it goes up the chain. Cheers.
Some barriers include:
An existing code base which doesn't lend itself to unit testing.
A problem domain that is hard to unit test meaningfully, such as GUI work or integrations with third party systems.
A perception of integration problems over unit problems (in other words, if it doesn't work end to end it doesn't do anything, so what is the point of testing the unit).
A mindset that wants to design ahead of time and have a clear system design, rather than have tests drive the design.
A political culture where design is done by a different person/group than development, and that design is not unit-test friendly.
An inability to get over the fact that TDD is not about testing for conformance (arguments like "the one who writes the tests shouldn't be the one who codes it, they will be too lenient on themselves" and such variants).
It isn't the way they have coded until now, so the shift is harder.
Sometimes a certain test can be hard to set up, so the method will get abandoned because it "feels" slower.
Design requirements that don't lend themselves to evolving design well, or at all (think nuclear plant control software, or other systems where actual lives depend on their functioning correctly).
If everyone isn't running the tests before checking in code, tests start to break often for the wrong reasons (that is, the intended behavior of the code changed, but the test didn't keep up, so the test is wrong, not the code), and so they can be perceived as a drag.
In terms of barriers to entry: effectively, because you are explicitly writing tests that must pass before code is considered complete, the lead time in the dev cycle to get functional code is longer. Now, when using TDD, you're effectively guaranteeing a certain level of quality in the code (whatever level of quality you choose to test against), and that is generally more than enough compensation for the lag in lead time; but strictly speaking, there IS a greater lead time to getting functional code using TDD.
Effectively, if you have coders that write bug-free code, TDD will be a drag on your development cycle. The value of TDD, of course, is that there aren't any coders who can always write bug-free code, and so the cost of fixing bugs has to be factored in somewhere; in TDD, the cost of the test infrastructure is front-loaded.
Note that this is not in any way a negative thing about TDD; I'm just saying, that front-loading COULD be considered to be a "barrier to entry". Personally, as a coder, I would say that the Return on Investment is more than worth the effort, and I think most experienced dev managers would as well.
Team and/or management buy-in is the biggest obstacle in some companies. If you're the lone developer trying to use TDD and you can't get others on the project interested, it can be very frustrating.
Of course that's not a financial barrier at all. The biggest perceived financial barrier is probably time. If you have a large code base that you need to write unit tests for, it can seem quite daunting. Your manager (or someone above them) will question why you want to spend time writing code that will not add features/functionality to the code. Many people don't realize that writing the tests up front (as you do in TDD) can actually save you time, both immediately and in the long run when you're maintaining that code.
I think one major barrier is how it requires you to change the way you think.
Before I tried TDD, I would create a class, say Employee, then I would stub in things like FirstName, LastName, Email, etc. Then I would write some logic and forget that I missed a few fields or something else. And before I knew it I had a pretty complex class without knowing if those fields were ever necessary.
Also, it's a complete change from how we are used to writing software. We are used to writing software as we receive features from the guys who sign our checks. We are not used to writing code which doesn't compile, making it compile, then making it work to make our tests pass.
The first time you do this, you feel a bit... well, silly and stupid. Why am I making my code intentionally fail? It seems illogical next to the "make it work" philosophy we've all been taught for so long.
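For anyone who hasn't seen the rhythm, here is a minimal red-green sketch (JUnit, with a hypothetical Employee class). The test comes first and doesn't even compile yet, which is exactly the "intentional failure" that feels so wrong at first:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class EmployeeTest {
    // Written *before* Employee exists; at this point the code doesn't even
    // compile, and that failing state is the first step of red-green-refactor.
    @Test
    public void storesTheNameItWasGivenAtConstruction() {
        Employee e = new Employee("Ada", "Lovelace");
        assertEquals("Ada", e.getFirstName());
        assertEquals("Lovelace", e.getLastName());
    }
}

// The class is then written only to make the test pass - no speculative
// Email field until a test actually needs one.
class Employee {
    private final String firstName, lastName;
    Employee(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
    String getFirstName() { return firstName; }
    String getLastName() { return lastName; }
}
```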
A few reasons why it has failed so far where I work:
Most of the projects at work are older apps. Not pre-.NET, but .NET 2.0 and in some cases .NET 1.0.
Some of these projects are not well factored, either because the technology wasn't there in 1.0, or it was built quickly because they needed something NOW..
As Jon pointed out, some things are still a PIA (pain-in-the-***) to unit test, UI, database, etc.
Expensive tools. If you are only allowed to use Microsoft tools, it's a high price tag to do this the "right way". We use ReSharper, so it really isn't a problem for us.
Time. I'm on a team of three guys supporting a department of 30 people. We are considered overhead, and much of our development consists of interfacing systems together.
Yes, the main barrier to entry is in your head or in the heads of other programmers.
In the beginning you don't know what to write in your tests.
The trick is to think about how your code will be used instead of focusing on how you're going to write it. Easier said than done...
When you start to "get it", it's a bit hard to know where to stop writing tests.
You have to remember that tests prove nothing, so you can't write tests to cover all cases; you have to select the most useful ones... and that's already a lot!
I've certainly seen plenty of resistance. The barriers I've encountered are:
Unit testing user interfaces (web or thick client) is tricky. I know there are lots of attempts to solve the problem, but I don't think any of them have made it really simple - because it's a naturally hard problem.
At the other end, although there are various ways of making it easier to test the code involved with the database, it's still tricky and time-consuming.
While good tests definitely speed up development overall, testing is a skill - and while you suck at it, unit testing may well be more trouble than it's worth, which means you never build up the skill...
Managers often see it as an optional extra to development - a nice to have rather than critical. This means it's the first thing to go when the project inevitably has a resource squeeze.
I wrote a long-ish article about this a few weeks back, "Why I write tests first".
I think the biggest barrier is building the discipline to start with tests first, but I don't believe the TDD (or any practice for that matter) should be approached as an always, absolutely, 100% of the time solution.
TDD is a tool in each developer's arsenal. I tend to think it works well for me most of the time. A developer who isn't as accustomed to writing tests (first or otherwise) may find it difficult to get anything done if TDD is forced on them, because they can't think in terms of writing the test first.
I consider myself an experienced test-writer, but I can't always think in terms of tests. Some problems don't lend themselves well to it, or at least my head doesn't get wrapped around them some days. And some types of code (such as UI and client-side code) don't lend themselves well to always writing tests.
If you have a team of developers that do not write tests as a matter of habit, I'd push that first. I have no problem requiring that all new code have accompanying unit tests where possible/practical. Once testing is a discipline, converting developers to TDD individually or as a team is much easier.
One non-obvious barrier (non-obvious to me, at least) is the build infrastructure. If developers don't have control over the build process, or if the infrastructure is too baroque to be manageable, then integrating tests into the build process is going to be shunted to the side in the name of "efficiency". (Of course, in these situations it's the build infrastructure that should be shunted aside in the name of efficiency.)
The IT department I work in as a programmer revolves around a 30+ year old code base (Fortran and C). The code is in a poor condition partially as a result of 30+ years of ad-hoc poorly thought out changes but I also suspect a lot of it has to do with the capabilities of the programmers who made the changes (and who incidentally are still around).
The business that depends on the software operates 363 days a year and 20 hours a day. Unfortunately, there are numerous outages. This is the first place I have worked where there are developers on call to apply operational code fixes to production systems. When I first started, there was actually a copy of the source code and development tools on the production servers so that on-the-fly changes could be applied; thankfully that practice has now been stopped.
I have hinted a couple of times to management that the costs of the downtime, having developers on call, extra operational staff, unsatisfied customers, etc. are costing the business a lot more in the medium, and possibly even short, term than it would cost to launch a wholehearted effort to rewrite/refactor/replace the whole thing (the code base is about 300k lines).
Ideally there'd be some external consultancy that could come in and run the rule over the quality of the code and the costs involved in keeping it running vs. rewriting/refactoring/replacing it. The question I have is: how should a business go about doing that kind of cost analysis on software AND be able to have confidence in that analysis? The first IT consultants down the street may claim to be able to do the analysis, but how could management be made to feel comfortable with it over what they are being told by internal staff?
We recently decided to completely rewrite large portions of our business code from scratch, and it has not gone as well as we had hoped. I've seen a lot of quotes saying you should never try to rewrite anything from scratch, and now I see why. I would recommend starting small - don't try to rewrite the whole thing at once. Identify the large problem areas and focus on refactoring small portions of the system at a time. Since there is 30+ years worth of work in the system, it will take a long time to get it back to a reasonable state. We had about 5-8 years worth of work to rewrite, and it has been difficult. I can't imagine 30+ years of work!
First, the profile of the consultant you need is very specific. Unless you can find someone who worked in a similar domain with the same languages, don't hire him.
Second, there's a 99% probability (I like dramatic numbers) that the analysis will go as follows:
Consultant explores the application
Consultant understands 10% of the application
Time's up, time for the report
Consultant advises a complete rewrite (no refactoring, plain rewrite)
So you may as well save yourself what the consultant would cost.
You have only two solutions here:
Stick with the existing source code, but establish proper methods for fixing problems, so that you have a very long-running refactoring carried out progressively by those who know the application
Get a secondary team to make a new application to replace the old one
If I talk about a secondary team, it's because you cannot bring just one architect to make the new application and have the old team working with him:
They're too busy on the old application
There will be friction, because the newcomer will undoubtedly underestimate the task at hand
I talk from experience, believe me.
If you go the "new application" way, don't put your hopes too high. You'll end up with an application that has less than half the functionality of the current one, simply because you cannot cram 30+ years of special-case and exceptional-situation fixes into freshly designed software.
Oh, also, if your developers happen to tell you they have a plan, by all means, hear them out. They most probably know what they are talking about.
The first thing that comes to mind is that you are prematurely addressing the rewrite/refactor/replace argument. The first two steps I would recommend would be:
Unit tests
QA
It's well within engineering scope to implement these. Unit tests are an essential preliminary step before any reasonable refactor or rewrite could possibly take place. By 'unit test' I mean wrapping each function call with corresponding code that proves the code works for all known conditions. In complex retrofits this may not actually happen at the most granular level, but any automated tests will help immensely.
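Concretely, "all known conditions" often just means one test per condition anyone has ever seen in the field. A minimal sketch, with hypothetical names and values:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Stand-in for the real legacy function being wrapped with tests.
class RateTable {
    static int rate(int load) {
        if (load == 0) return 0;
        return load < 500 ? 10 : 25;
    }
}

public class RateTableTest {
    // One test per known condition, including edge cases reported over the years.
    @Test public void normalLoad() { assertEquals(10, RateTable.rate(100)); }
    @Test public void zeroLoad()   { assertEquals(0,  RateTable.rate(0)); }
    @Test public void peakLoad()   { assertEquals(25, RateTable.rate(999)); }
}
```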
And QA - have an independent (and aggressive) quality assurance team that rigorously tests beta releases before production. Their test plans and test procedures become essential for any kind of replacement effort.
Once you've got the code under control, then you are in a position where the business can reasonably consider massive changes.
Just a note about your comment on external consultants - no consultancy will ever care enough about the code to provide realistic quality assurance. QA ends up joined at the hip with the business, defending the company's bottom line. It's ultimately an internal function, and an external consultant can't really provide much more than getting you started.
I think that your description provides all of the necessary information on code quality (lack thereof). The fact that so many support resources are required also indicates the high costs involved with maintaining the existing system.
As I answered here, a good approach to consider is refactoring one piece of the system at a time until everything works at an acceptable level. I agree with Joel about not throwing away existing code (see Things You Should Never Do). Parts of your code work, so you should leave those in place whenever possible, and focus on the sections that lead to downtime.
Andy also makes a great point about starting small as well.
Another thing to try is reviewing the processes around the system. When you do this, try to determine which failure situations are caused directly or indirectly by user action, and whether there are configuration or environment problems. If you are having trouble fixing the code directly, you can still prop it up by dealing with external issues more effectively.
Read the book Working Effectively with Legacy Code (also see the short PDF version) and surround the code with automated tests, as instructed in that book.
Refactor the system little by little. If you rewrite some parts of the code, do it a small subsystem at a time. Don't try to make a Grand Redesign.
The code has been around for 30 years?
Development paradigms have shifted substantially in the last three decades in many ways, and most relevant to your predicament, I feel, is the amount of time (in man-days) required to build something that takes input, processes it, and produces output.
300,000 lines of code 30 years ago could probably fit into 100,000 lines or fewer today, while expending fewer man-hours(?). This could seem optimistic/ridiculous to some, but on the other hand it is achievable, depending on the type of application in question. You have given no indication as to the classification of the system - is it a real-time manufacturing process control system of sorts, with sensors and actuators tied to it? An airline booking system? Does it post-process some backlog of data? In other words, could it be rebuilt in something like Java, and quickly, with an aggressive, smallish team? Have the requirements been documented, and if so, do they need updating or redeveloping from scratch? Is human safety a factor?
Just as a quick sanity check, I think whether or not you should rebuild depends on (in no particular order):
Number of code dudes required.
Level of expertise of said dudes.
Which languages do not fit.
Which languages do fit.
How much it costs to use the chosen language(s) in terms of hardware and software.
How much does the business depend on this to stay alive.
Is it really too much downtime, or are you just nitpicking? (Maybe they really don't care, but pretend to.)
Good luck with that!
What is statistical debugging? I haven't found a clear, concise explanation yet, but the term certainly sounds impressive.
Is it just a research topic, or is it being used somewhere, for actual development? In other words: Will it help me find bugs in my program?
I created statistical debugging, along with various wonderful collaborators across the years. I wish I'd noticed your question months ago! But if you are still curious, perhaps this late answer will be better than nothing.
At a very high level, statistical debugging is the idea of using statistical models of program success/failure to track down bugs. These statistical models expose relationships between specific program behaviors and eventual success or failure of a run. For example, suppose you notice that there's a particular branch in the program that sometimes goes left, sometimes right. And you also notice that runs where the branch goes left are fine, but runs where the branch goes right are 75% more likely to crash. So there's a statistical correlation here which may be worth investigating more closely. Statistical debugging formalizes and automates that process of finding program (mis)behaviors that correlate with failure, thereby guiding developers to the root causes of bugs.
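In toy form, the bookkeeping behind that is just counting. The sketch below is illustrative only - it is not the actual CBI or Holmes machinery:

```java
import java.util.HashMap;
import java.util.Map;

// For each instrumented predicate (e.g. "branch went right"), count how often
// runs fail when the predicate was observed true, and compare against the
// overall failure rate. Predicates with a large positive gap are bug suspects.
public class PredicateStats {
    private final Map<String, int[]> counts = new HashMap<>(); // {trueAndFailed, trueTotal}
    private int failedRuns, totalRuns;

    public void recordRun(Map<String, Boolean> observed, boolean failed) {
        totalRuns++;
        if (failed) failedRuns++;
        observed.forEach((predicate, wasTrue) -> {
            if (wasTrue) {
                int[] c = counts.computeIfAbsent(predicate, k -> new int[2]);
                c[1]++;
                if (failed) c[0]++;
            }
        });
    }

    // How much more often do runs fail when this predicate is true?
    public double lift(String predicate) {
        int[] c = counts.getOrDefault(predicate, new int[2]);
        double whenTrue = c[1] == 0 ? 0 : (double) c[0] / c[1];
        double baseline = totalRuns == 0 ? 0 : (double) failedRuns / totalRuns;
        return whenTrue - baseline;
    }
}
```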
Getting back to your original question:
Is it just a research topic, or is it being used somewhere, for actual development?
It is mostly a research topic, but it is out there in the "real" world in two ways:
The public deployment of the Cooperative Bug Isolation Project hunts for bugs in various Open Source programs running under Fedora Linux. You can download pre-instrumented packages and every time you use them you're feeding us data to help us find bugs.
Microsoft has released Holmes, an implementation of statistical debugging for .NET. It's nicely integrated into Visual Studio and should be a very easy way for you to use statistical debugging to help find your own bugs in your own code. I've worked closely with Microsoft Research on Holmes, and these are good smart people who know how to put out high-quality tools.
One warning to keep in mind: statistical debugging needs ample raw data to build good statistical models. In CBI's public deployment, that raw data comes from real end users. With Holmes, I think Microsoft assumes that the raw data will come from in-house automated unit tests and manual tests. What won't work is code with no runs at all, or with only failing runs but no successful counterexamples. Statistical debugging works off of the contrast between good and bad runs, so you need to feed it both. If you want bug-hunting tools without runs, then you'll need some sort of static analysis. I do research on that too, but that's not statistical debugging. :-)
I hope this helped and was not too long. I'm happy to answer any follow-up questions. Happy bug-hunting!
that's when you ship software saying "well, it probably works..."
;-)
EDIT: it's a research topic where machine learning and statistical clustering are used to try to find patterns in programs that are good predictors of bugs, to identify where more bugs are likely to hide.
It sounds like statistical sampling. When you buy a product, there's a good chance that not every single product coming off the "assembly line" has been checked for quality.
Statistical sampling calls for checking a certain percentage of products to almost ensure they're all problem-free. It minimizes the effort at the risk of some problems sneaking through and is absolutely necessary where the testing process is a destructive one - if you carry out destructive testing on 100% of your production line, that's not going to leave much for distribution :-)
To be honest, unless you're checking every single execution path and every single possible input value, you're already doing this in your testing. The amount of effort required to test everything, for any but the most simplistic systems, is not worth it. The extra cost would make your product uncompetitive.
Note that statistical sampling doesn't just involve testing every 100th unit. There are ways to target the sampling to improve the chance of catching problems. For example, if historical data suggests most errors are introduced at a specific phase, target that phase. If one of your developers is more problematic than others, check his stuff more closely.
From what I can see from a cursory glance at some research papers, statistical debugging is just that - targeting areas based on past history of problems.
I know we already do this for our software. Since any bugs that get fixed have to pass unit and system tests that replicate the problem (and our TDD says these tests should be written before trying to fix the bug), those tests are automatically added to the regression test suite so that those areas that cause more problems naturally are tested more often in the future.
I've inherited a project where the class diagrams closely resemble a spider web on a plate of spaghetti. I've written about 300 unit tests in the past two months to give myself a safety net covering the main executable.
I have my library of agile development books within reach at any given moment:
Working Effectively with Legacy Code
Refactoring
Code Complete
Agile Principles Patterns and Practices in C#
etc.
The problem is everything I touch seems to break something else.
The UI classes have business logic and database code mixed in. There are mutual dependencies between a number of classes. There are a couple of god classes that break every time I change any of the other classes. There's also a mutant singleton/utility class with about half instance methods and half static methods (though ironically the static methods rely on the instance and the instance methods don't).
My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes.
I'm sometimes tempted to think they used some form of weak obfuscation for either job security or as a last farewell before handing the code over.
Are there any good resources for detangling this mess? The books I have are helpful but only seem to cover half the scenarios I'm running into.
It sounds like you're tackling it in the right way.
Test
Refactor
Test again
Unfortunately, this can be a slow and tedious process. There's really no substitute for digging in and understanding what the code is trying to accomplish.
One book that I can recommend (if you don't already have it filed under "etc.") is Refactoring to Patterns. It's geared towards people who are in your exact situation.
I'm working in a similar situation.
If it is not a small utility but a big enterprise project, then it is:
a) too late to fix it,
b) beyond the capabilities of a single person to attempt a), and
c) only fixable by a complete rewrite, which is out of the question.
Refactoring can in many cases only be attempted in your private time, at your personal risk. If you don't get an explicit mandate to do it as part of your daily job, then you're likely not to get any credit for it. You may even be criticized for "pointlessly wasting time on something that has worked perfectly well for a long time already".
Just continue hacking it the way it has been hacked before, receive your paycheck and so on. When you get completely frustrated or the system reaches the point of being non-hackable any further, find another job.
EDIT: Whenever I attempt to address the question of the true architecture and doing things the right way, I usually get LOL'd at directly by the responsible managers, who say something like "I don't give a damn about good architecture" (attempted translation from German). I have personally brought one very bad component to the point of non-hackability, while of course having given advance warnings months ahead. They then had to cancel some features promised to customers because it was no longer doable. No one touches it anymore...
I've worked this job before. I spent just over two years on a legacy beast that is very similar. It took two of us over a year just to stabilize everything (it's still broke, but it's better).
First thing -- get exception logging into the app if it doesn't exist already. We used FogBugz, and it took us about a month to get reporting integrated into our app; it wasn't perfect right away, but it was reporting errors automatically. It's usually pretty safe to implement try-catch blocks in all your events, and that will cover most of your errors.
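As a sketch of that first step (shown in Java here; the app in question was .NET with FogBugz, so treat this purely as the shape of the idea): one global handler that at least writes every unhandled exception somewhere you can read it later.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Minimal global exception logging: before fixing anything, make sure every
// unhandled error is recorded somewhere.
public class CrashLog {
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            try (PrintWriter log = new PrintWriter(new FileWriter("crash.log", true))) {
                log.println(new java.util.Date() + " on thread " + thread.getName());
                error.printStackTrace(log);
            } catch (IOException ignored) {
                // last resort: nowhere left to report to
            }
        });
    }
}
// Call CrashLog.install() once at startup; individual event handlers can
// still add their own try-catch blocks for anything that must not crash.
```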
From there fix the bugs that come in first. Then fight the small battles, especially those based on the bugs. If you fix a bug that unexpectedly affects something else, refactor that block so that it is decoupled from the rest of the code.
It will take some extreme measures to rewrite a big, critical-to-company-success application, no matter how bad it is. Even if you get permission to do so, you'll be spending too much time supporting the legacy application to make any progress on the rewrite anyway. If you do many small refactorings, eventually either the big ones won't be that big, or you'll have really good foundation classes for your rewrite.
One thing to take away from this is that it is a great experience. It will be frustrating, but you will learn a lot.
I have (once) come across code that was so insanely tangled that I couldn't fix it with a functional duplicate in a reasonable amount of time. That was sort of a special case though, as it was a parser and I had no idea how many clients might be "using" some of the bugs it had. Rendering hundreds of "working" source files erroneous was not a good option.
Most of the time it is eminently doable, just daunting. Read through that refactoring book.
I generally start fixing bad code by moving things around a bit (without actually changing implementation code more than required) so that modules and classes are at least somewhat coherent.
When that is done, you can take your more coherent class and rewrite its guts to perform the exact same way, but this time with sensible code. This is the tricky part with management, as they generally don't like to hear that you are going to take weeks to code and debug something that will behave exactly the same (if all goes well).
During this process I guarantee you will discover tons of bugs, and outright design stupidities. It's OK to fix trivial bugs while recoding, but otherwise leave such things for later.
Once this is done with a couple of classes, you will start to see where things can be modularized better, designed better, etc. Plus it will be easier to make such changes without impacting unrelated things because the code is now more modular, and you probably know it thoroughly.
Mostly, that sounds pretty bad. But I don't understand this part:
"My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes."
That sounds pretty close to a way I frequently write things. What's wrong with this? What's the correct way?
If your refactorings are breaking code, particularly code that seems to be unrelated, then you're trying to do too much at a time.
I recommend a first-pass refactoring where all you do is ExtractMethod: the goal is simply to name each step in the code, without any attempts at consolidation whatsoever.
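A tiny illustration of that naming-only pass (hypothetical code): same statements, same order, each step merely gets a name.

```java
import java.util.List;

class OrderProcessor {
    private double total;

    // Before, process() was one anonymous blob; after Extract Method, it
    // reads as a table of contents. No consolidation yet, just names.
    void process(List<Double> lineItems) {
        validateFields(lineItems);  // was ~30 inline lines of checks
        computeTotal(lineItems);    // was ~40 inline lines of arithmetic
        persist();                  // was ~20 inline lines of database calls
    }

    private void validateFields(List<Double> lineItems) {
        if (lineItems.isEmpty()) throw new IllegalArgumentException("empty order");
    }

    private void computeTotal(List<Double> lineItems) {
        total = lineItems.stream().mapToDouble(Double::doubleValue).sum();
    }

    private void persist() {
        System.out.println("saving order, total=" + total); // stand-in for real I/O
    }
}
```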
After that, think about breaking dependencies, replacing singletons, consolidation.
If your refactorings are breaking things, then it means you don't have adequate unit test coverage - as the unit tests should have broken first. I recommend you get better unit test coverage second, after getting exception logging into place.
I then recommend you do small refactorings first - Extract Method to break large methods into understandable pieces; Introduce Variable to remove some duplication within a method; maybe Introduce Parameter if you find duplication between the variables used by your callers and the callee.
And run the unit test suite after each refactoring or set of refactorings. I'd say run them all until you gain confidence about which tests will need to be rerun every time.
No book will be able to cover all possible scenarios. It also depends on what you'll be expected to do with the project and whether there is any kind of external specification.
If you'll only have to do occasional small changes, just do those and don't bother starting to refactor.
If there is a specification (or you can get someone to write it), consider a complete rewrite if it can be justified by the foreseeable amount of changes to the project.
If "the implementation is the specification" and there are a lot of changes planned, then you're pretty much hosed. Write LOTS of unit tests and start refactoring in small steps.
Actually, unit tests are going to be invaluable no matter what you do (if you can write them to an interface that's not going to change much with refactorings or a rewrite, that is).
See blog post Anatomy of an Anti-Corruption Layer, Part 1 and Anatomy of an Anti-Corruption Layer, Part 2.
It cites Eric Evans, Domain-Driven Design: Tackling Complexity in the Heart of Software:
Access the crap behind a facade
You could extract and then refactor some part of it, to break the dependencies and isolate layers into different modules, libraries, assemblies, directories. Then you re-inject the cleaned parts into the application with a strangler application strategy. Lather, rinse, repeat.
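A sketch of what the facade can look like on day one (names hypothetical): new code talks only to the interface, and cleaned-up implementations get swapped in behind it, one strangler step at a time.

```java
// "Access the crap behind a facade": callers see only this interface.
public interface CustomerStore {
    Customer findByEmail(String email);
}

class Customer {
    final String email, name;
    Customer(String email, String name) { this.email = email; this.name = name; }
}

// The first implementation simply delegates to the old mess, unchanged.
class LegacyCustomerStore implements CustomerStore {
    @Override
    public Customer findByEmail(String email) {
        // stand-in for the original UI/business/database tangle
        return new Customer(email, "looked up the old way");
    }
}
// Strangler step: once a cleaned-up CustomerStore implementation exists,
// re-point callers to it one module at a time. Lather, rinse, repeat.
```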
Good luck, that is the tough part of being a developer.
I think your approach is good, but you need to focus on delivering business value (the number of unit tests is not a measure of business value, but it may give you an indication of whether you are on or off track). It's important to identify the behaviors that need to be changed, prioritize them, and focus on the top ones.
The other piece of advice is to remain humble. Realize that if you had written something this large under real deadlines and someone else saw your code, they would probably have problems understanding it as well. There is a skill in writing clean code, and there is a more important skill in dealing with other people's code.
The last piece of advice is to try to leverage the rest of your team. Past members may know information about the system that you can learn from. Also, they may be able to help test behaviors. I know the ideal is to have automated tests, but if someone can help by verifying things for you manually, consider getting their help.
I particularly like the diagram in Code Complete, in which you start with just legacy code, a rectangle of fuzzy grey texture. Then when you replace some of it, you have fuzzy grey at the bottom, solid white at the top, and a jagged line representing the interface between the two.
That is, everything is either 'nasty old stuff' or 'nice new stuff'. One side of the line or the other.
The line is jagged, because you're migrating different parts of the system at different rates.
As you work, the jagged line gradually descends, until you have more white than grey, and eventually just grey.
Of course, that doesn't make the specifics any easier for you. But it does give you a model you can use to monitor your progress. At any one time you should have a clear understanding of where the line is: which bits are new, which are old, and how the two sides communicate.
You might find the following post useful:
http://refactoringin.net/?p=36
As it is said in the post, don't discard a complete rewrite that easily. Also, if at all possible, try to replace whole layers or tiers with third-party solutions (for example, an ORM for persistence) or with new code. But most important of all, try to understand the logic (problem domain) behind the code.
I'm a contract programmer with lots of experience. I'm used to being hired by a client to go in and do a software project of one form or another on my own, usually from nothing. That means a clean slate, almost every time. I can bring in libraries I've developed to get a quick start, but they're always optional (and depend on getting the right IP clauses in the contract). Many times I can specify or even design the hardware platform... so we're talking serious freedom here.
I can see uses for constructing automated tests for certain code: Libraries with more than trivial functionality, core functionality with a high number of references, etc. Basically, as the value of a piece of code goes up through heavy use, I can see it would be more and more valuable to automatically test that code so that I know I don't break it.
However, in my situation, I find it hard to rationalize anything more than that. I'll adopt things as they prove useful, but I'm not about to blindly follow anything.
I find many of the things I do in 'maintenance' are actually small design changes. In this case, the tests would not have saved me anything and now they'd have to change too. A highly iterative, stub-first design approach works very well for me. I can't see actually saving myself that much time with more extensive tests.
Hobby projects are even harder to justify... they're usually anything from weekenders up to a say month long. Edge-case bugs rarely matter, it's all about playing with something.
Reading questions such as this one, the most-voted response seems to say that in that poster's experience/opinion, TDD actually wastes time if you've got fewer than 5 people (even assuming a certain level of competence/experience with TDD). However, that appears to cover initial development time, not maintenance. It's not clear how TDD stacks up over the entire life cycle of a project.
I think TDD could be a good step in the worthwhile goal of improving the quality of the products of our industry as a whole. Idealism on its own is no longer all that effective at motivating me, though.
I do think TDD would be a good approach in large teams, or any size team containing at least one unreliable programmer. That's not my question.
Why would a sole developer with a good track record adopt TDD?
I'd love to hear of any kind of metrics done (formally or not) on TDD... focusing on solo developers or very small teams.
Failing that, anecdotes of your personal experiences would be nice, too. :)
Please avoid stating opinion without experience to back it. Let's not make this an ideology war. And skip the "greater employment options" argument. This is simply an efficiency question.
I'm not about to blindly follow anything.
That's the right attitude. I use TDD all the time, but I don't adhere to it as strictly as some.
The best argument (in my mind) in favor of TDD is that you get a set of tests you can run when you finally get to the refactoring and maintenance phases of your project. If this is your only reason for using TDD, then you can write the tests any time you want, instead of blindly following the methodology.
The other reason I use TDD is that writing tests gets me thinking about my API up front. I'm forced to think about how I'm going to use a class before I write it. Getting my head into the project at this high level works for me. There are other ways to do this, and if you've found other methods (there are plenty) to do the same thing, then I'd say keep doing what works for you.
I find it even more useful when flying solo. With nobody around to bounce ideas off of and nobody around to perform peer reviews, you will need some assurance that your code is solid. TDD/BDD will provide that assurance for you. TDD is a bit controversial, though. Others may completely disagree with what I'm saying.
EDIT: Might I add that if done right, you can actually generate specifications for your software at the same time you write tests. This is a great side effect of BDD. You can make yourself look like super developer if you're cranking out solid code along with specs, all on your own.
Ok my turn... I'd do TDD even on my own (for non-spike/experimental/prototype code) because
Think before you leap: it forces me to think about what I want to get done before I start cranking out code. What am I trying to accomplish here.. 'If I assume I already had this piece.. how would I expect it to work?' Encourages interface-in design of objects.
Easier to change: I can make modifications with confidence.. 'I didn't break anything in steps 1-10 when I changed step 5.' Regression testing is instantaneous.
Better designs emerge: I've found better designs emerging without me investing effort in a design activity. Test-first + refactoring lead to loosely coupled, minimal classes with minimal methods.. no overengineering.. no YAGNI code. The classes have better public interfaces and small methods, and are more readable. This is kind of a zen thing.. you only notice you got it when you 'get it'.
The debugger is not my crutch anymore: I know what my program does.. without having to spend hours stepping thru my own code. Nowadays if I spend more than 10 mins with the debugger.. mental alarms start ringing.
Helps me go home on time: I have noticed a marked decrease in the number of bugs in my code since TDD.. even if the assert is just a console trace and not an xUnit-type automated test.
Productivity / Flow: it helps me to identify the next discrete baby-step that will take me towards done... keeps the snowball rolling. TDD helps me get into a rhythm (or what XPers call flow) quicker. I get a bigger chunk of quality work done per unit time than before. The red-green-refactor cycle turns into... a kind of perpetual motion machine.
I can prove that my code works at the touch of a button
Practice makes perfect: I find myself learning and spotting dragons faster.. with more TDD time under my belt. Maybe it's dissonance.. but I feel that TDD has made me a better programmer even when I don't go test-first. Spotting refactoring opportunities has become second nature...
I'll update if I think of any more.. this is what I came up with in the last 2 mins of reflection.
I'm also a contract programmer. Here are my 12 Reasons Why I Love Unit Tests.
My best experience with TDD is centered around the pyftpdlib project. Most of the development is done by the original author, and I've made a few small contributions, but it's essentially a solo project. The test suite for the project is very thorough, and tests all the major features of the FTPd library. Before checking in changes or releasing a version, all tests are checked, and when a new feature is added, the test suite is always updated as well.
As a result of this approach, this is the only project I've ever worked on that didn't have showstopper bugs appear after a new release, have changes checked in that broke a major feature, etc. The code is very solid and I've been consistently impressed with how few bug reports have been opened during the life of the project. I (and the original author) attribute much of this success to the comprehensive test suite and the ability to test every major code path at will.
From a logical perspective, any code you write has to be tested, and without TDD you'll be testing it yourself manually. On the flip side of pyftpdlib, the worst code I've seen, by number of bugs and frequency of major issues, is code that is/was solely being tested by the developers and QA trying out new features manually. Things don't get tested because of time crunch or because they fall through the cracks. Old code paths are forgotten, even the oldest stable features end up breaking, and major releases ship with important features non-functional, etc. Manual testing is critically important for verification and for some randomization of testing, but based on my experiences I'd say that it's essential to have both manual testing and a carefully constructed unit test framework. Between the two approaches the gaps in coverage are smaller, and your likelihood of problems can only be reduced.
It does not matter whether you are the sole developer or not. You have to think of it from the application's point of view. All applications need to work properly, all applications need to be maintained, and all applications need to be less buggy. There are of course certain scenarios where a TDD approach might not suit you, such as when a deadline is approaching very fast and there is no time to perform unit testing.
Anyways, TDD does not depend on a solo or a team environment. It depends on the application as a whole.
I don't have an enormous amount of experience, but I have had the experience of seeing sharply-contrasted approaches to testing.
In one job, there was no automated testing. "Testing" consisted of poking around in the application, trying whatever popped in your head, to see if it broke. Needless to say, it was easy for flat-out-broken code to reach our production server.
In my current job, there is lots of automated testing and a full CI system. Now when code gets broken, it is immediately obvious. Not only that, but as I work, the tests really document which features are working in my code and which aren't yet. It gives me great confidence to be able to add new features, knowing that if I break existing ones, it won't go unnoticed.
So, to me, it depends not so much on the size of the team, but the size of the application. Can you keep track of every part of the application? Every requirement? Every test you need to run to make sure the application is working? What does it even mean to say that the application is "working", if you don't have tests to prove it?
Just my $0.02.
Tests allow you to refactor with confidence that you are not breaking the system. Writing the tests first allows the tests to define what working behavior is for the system. Any behavior that isn't defined by the tests is by definition a by-product and allowed to change when refactoring. Writing tests first also drives the design in good directions. To support testability, you find that you need to decouple classes, use interfaces, and follow good patterns (Inversion of Control, for instance) to make your code easily testable. If you write tests afterwards, you can't be sure that you've covered all the behavior expected of your system in the tests. You also find that some things are hard to test because of the design - since it was likely developed without testing in mind - and you are tempted to skimp on or omit tests.
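A minimal sketch of that decoupling (hypothetical names): because the service depends on an interface rather than the system clock, a test can inject a fake and pin the behavior exactly.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// The service depends on an interface, not on System.currentTimeMillis(),
// so a test can inject a fake clock - Inversion of Control in miniature.
interface Clock {
    long now();
}

class SessionService {
    private final Clock clock;
    SessionService(Clock clock) { this.clock = clock; }

    boolean isExpired(long startedAtMillis, long ttlMillis) {
        return clock.now() - startedAtMillis > ttlMillis;
    }
}

public class SessionServiceTest {
    @Test
    public void expiresOnceTtlHasPassed() {
        SessionService service = new SessionService(() -> 10_000L); // fake clock
        assertTrue(service.isExpired(0L, 5_000L));
    }
}
```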
I generally work solo and mostly do TDD -- the cases where I don't are simply where I fail to live up to my practices or haven't yet found a good way that works for me to do TDD, for example with web interfaces.
TDD is not about testing; it's about writing code. As such, it provides a lot of benefits even to a single developer. For many developers it is a mind shift toward writing more robust code. For example, how often do you think "Now, how can this code fail?" after writing code without TDD? For many developers, the answer to that question is never. For TDD practitioners it shifts the mindset to doing things like checking whether objects or strings are null before doing something with them, because you are writing tests specifically to do that (break the code).
Another major reason is change. Anytime you deal with a customer, they can never seem to make up their minds. The only constant is change. TDD helps as a "safety net" to find all the other areas that could break. Even on small projects this can keep you from burning up precious time in the debugger.
I could go on and on, but I think saying that TDD is more about writing code than anything else should be enough to justify its use as a sole developer.
I tend to agree with the validity of your point that the overhead of TDD for 'one developer' or 'hobby' projects doesn't justify the expense.
You have to consider, however, that most best practices are relevant and useful only if they are consistently applied over a long period of time.
For example, TDD saves you testing/bug-fixing time in the long run, not within 5 minutes of creating the first unit test.
You're a contract programmer, which means that you will leave your current project when it is finished and switch to something else, most likely at another company. Your current client will have to maintain and support your application. If you do not leave the support team a good framework to work with, they will be stuck. TDD will help the project stay sustainable. It will increase the stability of the code base, so other people with less experience will not be able to do too much damage when trying to change it.
The same applies to hobby projects. You may tire of one and want to pass it on to someone else. You might become commercially successful (think Craigslist) and have 5 more people working alongside you.
Investment in proper process always pays off, even if only in gained experience. Most of the time, though, you will be grateful that when you started a new project you decided to do it properly.
You have to consider OTHER people when doing something. You have to think ahead, plan for growth, plan for sustainability.
If you don't want to do that - stick to the cowboy coding, it's much simpler this way.
P.S. The same thing applies to other practices:
If you don't comment your code and you have a perfect memory, you'll be fine, but someone else reading your code will not be.
If you don't document your discussions with the customer, somebody else will not know anything about a crucial decision you made.
etc ad infinitum
I no longer refactor anything without a reasonable set of unit tests.
I don't do full-on TDD with unit tests first and code second. I do CALTAL -- Code A Little, Test A Little -- development. Generally, code goes first, but not always.
When I find that I've got to refactor, I make sure I've got enough tests and then I hack away at the structure with complete confidence that I don't have to keep the entire old-architecture-becomes-new-architecture plan in my head. I just have to get the tests to pass again.
I refactor the important bits. Get the existing suite of tests to pass.
Then I realize I forgot something, and I'm back to CALTAL development on the new stuff.
Then I see things I forgot to delete -- but are they really unused everywhere? Delete 'em and see what fails in the testing.
Just yesterday -- part way through a big refactoring -- I realized that I still didn't have the exact right design. But the tests still had to pass, so I was free to refactor my refactoring before I was even done with the first refactoring. (whew!) And it all worked nicely because I had a set of tests to validate the changes against.
For flying solo TDD is my copilot.
TDD lets me more clearly define the problem in my head. That helps me focus on implementing just the functionality that is required, and nothing more. It also helps me create a better API, because I'm writing a "client" before I write the code itself. I can also refactor without having to worry about breaking anything.
I'm going to answer this question quite quickly, and hopefully you will start to see some of the reasoning, even if you still disagree. :)
If you are lucky enough to be on a long-running project, then there will be times when you want to, for example, write your data tier first, then maybe the business tier, before moving on up the stack. If your client then makes a requirement change that requires re-work on your data layer, a set of unit tests on the data layer will ensure that your methods don't fail in undesirable ways (assuming you update the tests to reflect the new requirements). However, you are likely to be calling the data layer method from the business layer as well, and possibly in several places.
Let's assume the business layer calls that data-layer method in 3 places, but you only modify 2 of them. At the third call site, you may still be getting data back from your data layer that appears valid but breaks assumptions you coded months before. Unit tests at this level (and above) should have been designed to spot broken assumptions, and by failing they should highlight a section of code that needs to be revisited.
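One way to pin such an assumption down is a test that states it outright. The sketch below is hypothetical (Account, findOverdueAccounts, and the "overdue accounts are always active" rule are all invented): if a later data-layer change starts returning inactive accounts, this test fails loudly instead of the forgotten third call site silently misbehaving.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;
    import java.util.Arrays;
    import java.util.List;

    public class OverdueReportTest {

        // Minimal stand-in for the real domain object.
        static class Account {
            final boolean active;
            Account(boolean active) { this.active = active; }
        }

        // Stand-in for the data-layer call; imagine a database query here.
        static List<Account> findOverdueAccounts() {
            return Arrays.asList(new Account(true), new Account(true));
        }

        // Pins down the business layer's assumption: overdue lookups
        // never include inactive accounts. If the data layer changes,
        // this fails instead of silently corrupting a report.
        @Test
        public void overdueAccountsAreAlwaysActive() {
            assertTrue(findOverdueAccounts().stream().allMatch(a -> a.active));
        }
    }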
I'm hoping that this very simplistic example will be enough to get you thinking about TDD a little more, and that it might create a spark that makes you consider using it. Of course, if you still don't see the point, and you are confident in your own abilities to keep track of many thousands of lines of code, then I have no place to tell you you should start TDD.
The point about writing the tests first is that it enforces the requirements and design decisions you are making. When I modify the code, I want to make sure those are still enforced, and it is easy enough to "break" something without getting a compiler or run-time error.
I have a test-first approach because I want to have a high degree of confidence in my code. Granted, the tests need to be good tests or they don't enforce anything.
I've got some pretty large code bases that I work on and there is a lot of non-trivial stuff going on. It is easy enough to make changes that ripple and suddenly X happens when X should never happen. My tests have saved me on several occasions from making a critical (but subtle) error that might have gone unnoticed by human testers.
When the tests do fail, they are opportunities to look at them and the production code and make sure that it is correct. Sometimes the design changes and the tests will need to be modified. Sometimes I'll write something that passes 99 out of 100 tests. That 1 test that didn't pass is like a co-worker reviewing my code (in a sense) to make sure I'm still building what I'm supposed to be building.
I feel that as a solo developer on a project, especially a larger one, you tend to be spread pretty thin.
You are in the middle of a large refactoring when all of a sudden a couple of critical bugs are detected that for some reason did not show up during pre-release testing. You have to drop everything and fix them, and after spending two weeks tearing your hair out you can finally get back to whatever you were doing before.
A week later one of your largest customers realizes that they absolutely must have this cool new shiny feature or otherwise they won't place the order for those 1M units they should have already ordered a month ago.
Now, three months later you don't even remember why you started refactoring in the first place let alone what the code you are refactoring was supposed to do. Thank god you did a good job writing those unit tests because at least they tell you that your refactored code is still doing what it was supposed to do.
Lather, rinse, repeat.
..story of my life for the past 6 months. :-/
A sole developer should use TDD on his project (track record does not matter), since eventually the project could be passed to some other developer, or more developers could be brought in.
New people will have an extremely hard time working with the code without tests. They will break things.
Does your client own the source code when you deliver the product? If you can convince them that delivering the product with unit tests adds value, then you are up-selling your services and delivering a better product. From the client's perspective, test coverage not only ensures quality, it allows future maintainers to understand the code much more readily since the tests isolate functionality from the UI.
I think TDD as a methodology is not just about "having tests when making changes", so it does not depend on team size or project size. It's about noting your expectations about what a piece of code/an application does BEFORE you start to really think about HOW the noted behaviour is implemented. The main focus of TDD is not only having tests in place for written code but writing less code, because you just do what makes the test green (and refactor later).
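As an illustration of "just do what makes the test green" (PriceFormatter and its single formatting rule are invented for this sketch), the test is written first and the production code does no more than that one test demands:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceFormatterTest {

        // Red: this test is written first and states the expectation.
        @Test
        public void formatsCentsAsEuros() {
            assertEquals("1.99 EUR", PriceFormatter.format(199));
        }
    }

    // Green: just enough code to pass. Rounding, negative amounts, and
    // other currencies are not yet specified by any test, so not yet written.
    class PriceFormatter {
        static String format(int cents) {
            return cents / 100 + "." + String.format("%02d", cents % 100) + " EUR";
        }
    }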
If you're like me and find it quite hard to think about what a part of (or the whole) application does WITHOUT thinking about how to implement it, I think it's fine to write your tests after your code, thus letting the code "drive" the tests.
If your question isn't so much about test-first (TDD) versus test-after (good coding?), I think testing should be standard practice for any developer, whether alone or in a big team, who creates code that stays in production longer than three months. In my experience, that's the time span after which even the original author has to think hard about what those twenty lines of complex, super-optimized, but sparsely documented code really do. If you've got tests (which cover all paths through the code), there's less to think about -- and less to err about, even years later...
Here are a few memes and my responses:
"TDD made me think about how it would fail, which made me a better programmer"
Given enough experience, being highly concerned with failure modes should naturally become part of your process anyway.
"Applications need to work properly"
This assumes you are able to test absolutely everything. You're not going to be any better at covering all possible tests correctly than you were at writing the functional code correctly in the first place. "Applications need to work better" is a much better argument. I agree with that, but it's idealistic and not quite tangible enough to motivate as much as I wish it would. Metrics/anecdotes would be great here.
"Worked great for my <library component X>"
I said in the question I saw value in these cases, but thanks for the anecdote.
"Think of the next developer"
This is probably one of the best arguments to me. However, it is quite likely that the next developer wouldn't practice TDD either, and it would therefore be a waste or possibly even a burden in that case. Back-door evangelism is what it amounts to there. I'm quite sure a TDD developer would really appreciate it, though.
How much are you going to appreciate projects done in deprecated must-do methodologies when you inherit one? RUP, anyone? Think of what TDD means to the next developer if TDD isn't as great as everyone thinks it is.
"Refactoring is a lot easier"
Refactoring is a skill like any other, and iterative development certainly requires this skill. I tend to throw away considerable amounts of code if I think the new design will save time in the long run, and it feels like there would be an awful number of tests thrown away too. Which is more efficient? I don't know.
...
I would probably recommend some level of TDD to anyone new... but I'm still having trouble with the benefits for anyone who's been around the block a few times already. I will probably start adding automated tests to libraries. It's possible that after doing that, I'll see more value in doing it generally.
Motivated self interest.
In my case, sole developer translates to small business owner. I've written a reasonable amount of library code to (ostensibly) make my life easier. A lot of these routines and classes aren't rocket science, so I can be pretty sure they work properly (at least in most cases) by reviewing the code, doing some spot testing, and debugging into the methods to make sure they behave the way I think they do. Brute force, if you will. Life is good.
Over time, this library grows and gets used in more projects for different customers. Testing gets more time consuming. Especially cases where I'm (hopefully) fixing bugs and (even more hopefully) not breaking something else. And this isn't just for bugs in my code. I have to be careful adding functionality (customers keep asking for more "stuff") or making sure code still works when moved to a new version of my compiler (Delphi!), third party code, runtime environment or operating system.
Taken to the extreme, I could spend more time reviewing old code than working on new (read: billable) projects. Think of it as the angle of repose of software (how high can you stack untested software before it falls over :).
Techniques like TDD give me methods and classes that are more thoughtfully designed, more thoroughly tested (before the customer gets them), and need less maintenance going forward.
Ultimately, it translates to less time doing maintenance and more time to spend doing things that are more profitable, more interesting (almost anything) and more important (like family).
We are all developers with a good track record. After all, we are all reading Stack Overflow. And many of us use TDD, and perhaps those people have a great track record. I get hired because people want someone who writes great test automation and can teach that to others. When working alone, I do TDD on my coding projects at home because I found that if I don't, I spend time doing manual testing or even debugging, and who needs that. (Perhaps those people have only good track records. I don't know.)
When it comes to being a good automobile driver, everyone believes they are a "good driver." This is a cognitive bias all drivers have. Programmers have their own biases. The reasons developers such as the OP don't do TDD are covered in this Agile Thoughts podcast series. The podcast archive also has content on test automation concepts such as the test pyramid, plus an intro to what TDD is and why you write tests first, starting with episode 9 in the podcast archive.