I've been reading about TDD lately, and it is advocated because it supposedly results in code that is more testable and less coupled (and a bunch of other reasons).
I haven't been able to find much in the way of practical examples, except for a Roman numeral conversion and a number-to-English converter.
In both examples I observed the typical red-green-refactor cycle of TDD, as well as the application of the rules of TDD. However, this seemed like a big waste of time compared with what I would normally do: observe a pattern, implement it in code, and then write tests for it afterwards. Or possibly write a stub for the code, write the unit tests, and then write the implementation - which might arguably be TDD - but not this continuous case-by-case refactoring.
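For concreteness, a session like the Roman numeral kata converges on something like the following (my own minimal Python sketch, not code from the examples I read; the tests are written and seen failing first, and the table-driven loop only appears after several refactor steps):

```python
import unittest

def to_roman(n):
    """Convert a positive integer to a Roman numeral string."""
    # After several red-green-refactor rounds, the special cases
    # collapse into one table-driven loop.
    numerals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = []
    for value, symbol in numerals:
        while n >= value:
            result.append(symbol)
            n -= value
    return "".join(result)

class TestToRoman(unittest.TestCase):
    # Each of these was written (and seen failing) before the code above.
    def test_one(self):
        self.assertEqual(to_roman(1), "I")

    def test_four(self):
        self.assertEqual(to_roman(4), "IV")

    def test_mcmxcix(self):
        self.assertEqual(to_roman(1999), "MCMXCIX")

if __name__ == "__main__":
    unittest.main()
```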
TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture. My opinion so far is that the benefits of TDD can be achieved by a proper architectural design, although admittedly not everyone can do this reasonably well.
So here I have two questions:
Am I right in understanding that using TDD pretty much doesn't allow you to design first (see the rules of TDD)?
Is there anything that TDD gives you that you can't get from doing proper design before you start coding?
Well, I was in your shoes some time ago and had the same questions. Since then I have done quite a bit of reading about TDD and decided to mess with it a little.
I can summarize my experience about TDD in these points:
TDD is unit testing done right, ATDD/BDD is TDD done right.
Whether you design beforehand or not is totally up to you. Just make sure you don't do BDUF (Big Design Up Front). Believe me, you will end up changing most of it midway, because you can never fully understand the requirements until your hands get dirty.
OTOH, you can do enough design to get you started. Class diagrams, sequence diagrams, domain models, actors and collaborators are perfectly fine as long as you don't get hung up in the design phase trying to figure everything out.
Some people don't do any design at all. They just let the code talk and concentrate on refactoring.
IMHO, balance your approach. Do some design till you get the hang of it then start testing. When you reach a dead end then go back to the white board.
One more thing, some things can't be solved by TDD like figuring out an algorithm. This is a very interesting post that shows that some things just need to be designed first.
Unit testing is hard when you already have the code. TDD forces you to think from your API users' perspective. This way you can decide early on whether the public interface of your API is usable or not. If you decide to do unit testing after implementing everything, you will find it tedious, and most probably it will cover only some cases; I know some people who will write only passing test cases just to get the feature done. I mean, who wants to break his own code after all that work?
TDD breaks this mentality. Tests are first class citizens. You aren't allowed to skip tests. You aren't allowed to postpone some tests till the next release because we don't have enough time.
Finally, to answer your question whether there is anything that TDD gives you that you can't get from doing proper design before you start coding, I would say commitment.
As long as you're doing TDD you are committed to applying good OO principles, so that your code is testable.
To answer your questions:
"Test Driven Development" (TDD) is often referred to as "Test Driven Design", in that this practice will result in a good design of the code. When you have written a failing unit test, you are forced into a test driven design approach, so that you can implement just what is needed to make the test pass i.e. you have to consider the design of the code you are writing to make the test pass.
When using a TDD approach a developer will implement the minimum amount of code required to pass the test. Doing proper design beforehand usually results in waste if the requirements change once the project has started.
You say "TDD appears to incite developers to jump right into the code and build their implementation inductively rather than designing a proper architecture" - If you are following an Agile approach to your software development, then you do still need to do some up front architectural investigation (e.g. if you were using the Scrum methodology you would create a development "Spike" at the start of a project) to ascertain what the minimum amount of architecture needed to start the project. At this stage you make decisions based on what you know now e.g. if you had to work with a small dataset you'd choose to use a regular DB, if you have a huge DB you might to choose to use a NoSQL big data DB etc.
However, once you have a general idea of the architecture, you should let the design evolve as the project progresses, leaving further architectural decisions as late in the process as possible; invariably, as a project progresses, the architecture and requirements will change.
Furthermore, this rather popular post on SO will give you even more reasons why TDD is definitely worth the effort.
The idea of TDD is great, but I'm trying to wrap my head around how to implement a complex system if a design is not proposed upfront.
For example, let's say I have multiple services for a payment processing application. I'm not sure I understand how development would/can proceed across multiple developers if there is not a somewhat solid design upfront.
It would be great if someone could provide an example and high-level steps to putting together a system in this manner. I can see how TDD can lead to simpler and more robust code; I'm just not sure how it can 1) bring different developers together around a common architectural vision and 2) result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code (e.g. accept different payment methods or pricing models based on a long-term development roadmap).
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
Clearly I'm probably missing something that TDD gurus have discovered...
IMHO, it depends on the team's composition and appetite for risk.
If the team consists of several experienced and good designers, you need a less formal 'architecture' phase. It could be just a back-of-the-napkin doodle or a couple of hours on the whiteboard followed by furious coding to prove the idea. If the team is distributed and/or contains lots of less skilled designers, you'd need to put more time/effort (thinking and documenting) into the design phase before everyone goes off on their own path.
The next item that I can think of is to be risk-first. Continually assess the risks to your project, calculate your exposure/impact and have mitigation plans. Focus on risky and difficult-to-reverse decisions first. If a decision is easily reversible, spend less time on it.
Skilled designers are able to evolve the architecture in tiny steps; if you have them, you can tone down the rigor of an explicit design phase.
TDD can necessitate some upfront design, but definitely not big design upfront, because no matter how perfect you think your design is before you start writing code, most of the time it won't pass the reality check TDD forces on it and will blow up to pieces halfway through your TDD session (or your code will blow up if you absolutely want to bend it to your original plan).
The great force of TDD is precisely that it lets your design emerge and be refined as you write tests and refactor. Therefore you should start small and simple, making as few assumptions as possible about the details beforehand.
Practically, what you can do is sketch out a couple of UML diagrams with your pair (or the whole team if you really need a consensus on the big picture of what you're going to write) and use these diagrams as a starting point for your tests. But get rid of these models as soon as you've written your first few tests, because they would do more harm than good, misleading you to stick to a vision that is no longer true.
First of all, I don't claim to be a TDD guru, but here are some thoughts based on the information in your question.
My thoughts on #1 above: As you have mentioned, you need to have an architectural design up-front - I can't think of a methodology that can be successful without this. The architecture provides your team with the cohesion and vision. You may want to do just-enough-design up front, but that depends on how agile you want to be. The team of developers needs to know how they are going to put together the various components of the system before they start coding, otherwise it will just be one big hackfest.
It would be great if someone can provide an example and high level steps to putting together a system in this manner
If you are putting together a system that is composed of services, then I would start by defining the service interfaces and any messages that they will exchange. This defines how the various components of your system will interact (this would be an example of your up-front design). Once you have this, you can allocate various development resources to build the services in parallel.
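For example (a hypothetical sketch; the names and message shapes are illustrative, not from any real payment system):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

# Agree on the messages and the service contract up front, so that
# separate developers can build and test implementations in parallel.
@dataclass
class PaymentRequest:
    order_id: str
    amount_cents: int
    currency: str

@dataclass
class PaymentResult:
    success: bool
    transaction_id: Optional[str] = None
    error: Optional[str] = None

class PaymentService(ABC):
    @abstractmethod
    def charge(self, request: PaymentRequest) -> PaymentResult:
        """Attempt to charge the given request."""
```

Each team can then write tests against PaymentService without waiting for the other services to exist.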
As for #2; one of the advantages of TDD is that it presents you with a "safety net" during refactoring. Since your code is covered with unit tests, when you come to change some code, you will know pretty soon if you have broken something, especially if you are running continuous integration (which most people do with a TDD approach). In this case you either need to adapt your unit tests to cover the new behavior OR fix your code so that your unit tests pass.
result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code
This is just down to your design, e.g. using a strategy pattern to allow you to abstract and replace behavior. TDD does not prescribe that your design has to suffer. It just asks that you only do what is required to satisfy some functional requirement. If the requirement is that the system must be able to adapt to new payment methods or pricing models, then that becomes a point of your design. TDD, if done correctly, will make sure that you are satisfying your requirements and that your design is on the right lines.
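As a rough illustration of that strategy-pattern point (hypothetical names; a sketch, not a prescription):

```python
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    """Each pricing model is one interchangeable strategy."""
    @abstractmethod
    def total_cents(self, unit_cents: int, quantity: int) -> int: ...

class FlatPricing(PricingStrategy):
    def total_cents(self, unit_cents, quantity):
        return unit_cents * quantity

class BulkDiscountPricing(PricingStrategy):
    """10% off every unit beyond the tenth."""
    def total_cents(self, unit_cents, quantity):
        discounted = max(0, quantity - 10)
        full_price = quantity - discounted
        return unit_cents * full_price + (unit_cents * 9 // 10) * discounted

def checkout_total(pricing: PricingStrategy, unit_cents: int, quantity: int) -> int:
    # Callers depend only on the interface; a new pricing model later
    # means adding a class, not refactoring every call site.
    return pricing.total_cents(unit_cents, quantity)
```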
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
One of the problems of software design is that it is a wicked problem, which means that refactoring is pretty much inevitable. Yes, refactoring is risky in production systems, but you can mitigate that risk, and TDD will help you. You also need to have a supple design and a system with low coupling. TDD will help reduce your coupling since you are designing your code to be testable. And one of the by-products of writing testable code is that you reduce your dependencies on other parts of the system; you tend to code to interfaces, which allows you to replace an implementation with a mock or stub. A good example of this is replacing a call to a database with a mock/stub that returns some known data - you don't want to hit a database in your unit tests. I guess I can mention here that a good mocking framework is invaluable with a TDD approach (Rhino Mocks and Moq are both open source).
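The C# frameworks mentioned above have a direct analogue in Python's standard library; a minimal sketch of stubbing out the database call (the names are mine, for illustration only):

```python
from unittest.mock import Mock

class OrderService:
    def __init__(self, repository):
        # The repository is injected, so tests can substitute a fake.
        self.repository = repository

    def total_for_customer(self, customer_id):
        orders = self.repository.find_orders(customer_id)
        return sum(order["amount"] for order in orders)

def test_total_for_customer():
    repo = Mock()
    # The mock stands in for the database and returns known data.
    repo.find_orders.return_value = [{"amount": 10}, {"amount": 5}]
    assert OrderService(repo).total_for_customer("c42") == 15
    repo.find_orders.assert_called_once_with("c42")
```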
I am sure there are some real TDD gurus out there who can give you some pearls of wisdom... Personally, I wouldn't consider starting a new project without a TDD approach.
We all know that refactoring is good, and I love it as much as the next guy, but do you have real cases where it is better not to refactor?
Something like time-critical stuff or synchronization? Technical or human reasons are equally welcome. Real-case scenarios and experiences are a plus.
Edit: from the answers thus far, it looks like the only reason not to refactor is money. My question mostly relates to something like this: suppose you would like to perform "extract method", but if you add the additional function call, you will make the code slightly slower and hinder a very strict synchronization. Just to give you an idea of what I mean.
Another reason I sometimes heard is that "others used to the current code layout will get annoyed by your changes". Of course, I doubt this is a good reason.
I'm a big fan of refactoring to keep code clean and maintainable. But you generally want to shy away from refactoring production modules that work fine and don't require change. However, when you do need to work on a module to fix bugs or introduce a new feature, some refactoring is usually worth it and won't cost much since you're already committed to doing a full set of tests and going through the release process. (Unit tests are very helpful, but are only part of the full test suite, as other posters noted.)
More significant refactorings may make it harder for others to find their way around the new code, and they may then react unfavorably to refactoring. To minimize this, bring other team members in on the process using an approach like pair programming.
Update (8/10): Another reason to not refactor is when you aren't approaching the existing code base with proper humility and respect. With these qualities you'll tend to be conservative and do only refactorings that really do make a difference. If you approach the code with too much arrogance, you may wind up just making changes instead of refactoring. Is that new method name really clearer, or did the old one have a name with a very specific meaning in your application domain? Did you really need to mechanically reformat that source file to your personal style, when the existing style met project guidelines? Again pair programming can help.
To reinforce the other answer (and touch on issues you mention): do not refactor a part of the code until it's well covered by all relevant kinds of testing. This doesn't mean "don't refactor it" -- the emphasis is on "add the necessary tests" (doing unit tests properly may well require some refactoring, particularly the introduction of the factory and/or dependency-injection design patterns in code that's currently bolted solidly to concrete dependencies).
Note that this does cover your second paragraph's issues: if a section of the code is time-critical, it should be well covered by "load tests", which, like the more usual correctness tests, should cover both specific units (performance-wise -- correctness-checking is other tests' business) and end-to-end operations -- the equivalent of unit tests and integration tests, if one were talking about correctness rather than performance.
Multi-tasking code with subtle sync issues can be a nightmare, as no test can really make you entirely confident about it -- no other refactoring (that might in any way affect any fragile sync that just appears to be working now) should be considered BEFORE one intended to make the synchronization much, MUCH more robust and sound (message-passing through guaranteed-threadsafe queues being BY FAR my favorite design pattern in this regard ;-).
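A minimal Python illustration of that message-passing style (my own sketch, under the assumption that workers share nothing and communicate only via queues):

```python
import queue
import threading

def worker(inbox: queue.Queue, results: queue.Queue) -> None:
    # No shared mutable state, no locks: the thread-safe queues are
    # the only point of contact between threads.
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down cleanly
            break
        results.put(item * item)  # stand-in for real work

inbox, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, results))
t.start()
for n in range(5):
    inbox.put(n)
inbox.put(None)
t.join()
print(sorted(results.get() for _ in range(5)))  # [0, 1, 4, 9, 16]
```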
Hmmm - I disagree with the above (first response). Given code with no tests, you may refactor it to make it more testable.
You do not refactor code when you cannot test the resulting code in time to deliver it such that it is still valuable to the recipient.
You do not refactor code when your refactoring will not improve the quality of the code. Quality is not subjective, although at times, design may be.
You do not refactor code when there is no business justification for making an alteration.
There are probably more, but hopefully you get the idea...
As Martin Fowler writes, you shouldn't refactor if a deadline is near. That time in the project is better spent flushing out bugs than improving the design (refactoring). Do the refactoring you omitted directly after the deadline has passed.
Refactoring is not good in and of itself. Rather, its purpose is to improve code quality so that it can be maintained more cheaply and with less risk of adding defects. For actively developed code, the benefits of refactoring are worth the cost. For frozen code that there is no intention to do any further work on, refactoring yields no benefit.
Even for live code, refactoring has its own risks, which unit tests can minimize. It also has its own place in the development cycle, which is towards the front, where it's less disruptive. The best time and place for refactoring is just before you start to make major changes to some otherwise brittle code.
When it is not cost-effective. There's a guy at the place I work who loves refactoring. Making code perfect makes him very happy. He can check out a current project tree and go to town on it, moving functions and classes around and tightening things up so they look great, have better flow, and are more extensible in the future.
Unfortunately, it's not worth the money. If he spends a week refactoring some classes into more functional units that may be easier to work with in the future, that's a week's worth of salary lost to the company with no noticeable bottom-line improvements.
Code will never, ever be absolutely perfect. You learn to live with it, and keep your hands off something that could be done better, but perhaps isn't worth the time.
If the code seems very difficult to refactor without breaking, that's the most important code to refactor!
If there aren't any tests, write some as you refactor.
Honestly, the one case is where you are forbidden to touch some code by management/customer/SomeoneImportant, and when that happens I consider the project broken.
Here is my experience:
Don't refactor:
When you don't have a test suite accompanying the code you want to refactor. You might want to develop the test suite first instead.
When your manager doesn't really care about the maintainability and extensibility of the current code base, and instead cares much more about whether the product can be delivered on schedule, especially on a project with a short and tight schedule.
If you stick to the principle that everything you do should add value for the client/business, then the times you should not refactor are the following:
Code that works and no new development is planned.
Code that is good enough / works and refactoring simply represents gold plating.
The cost of refactoring is higher than living with the existing code.
The cost of refactoring is higher than rewriting the code from scratch.
Some of the other answers say that you should not refactor code that does not have unit tests. If code needs refactoring, you should refactor it; you must, however, write tests first. If the code is written in a way that makes it difficult to test, it should be rewritten (in a perfect world).
When you've got other stuff to build. I always feel like refactoring an existing system when I'm supposed to be doing something else.
There's always a balance to be had between fixing or adding to code and refactoring. However, this balance is so far in favor of refactoring that I don't think I've ever been on a team that refactored too much. Chances are, if you think you're erring on the side of refactoring too much, you're right on the money.
Of course, the biggest determining factor is how close the deadline is. If a deadline is imminent, requirements come first.
Isn't the need to refactor code largely based on the propensity of people to cut and paste code rather than thinking the solution through, and doing the factoring in advance? In other words, whenever you feel the need to cut & paste some code, merely make that chunk of code a function, and document it.
I have had to maintain way too much code where people found it easier to cut and paste a whole function, only to make one or two trivial changes, which could easily have been parametrized. But like many others' experience, trying to refactor some of this code would have taken a LOT of time and been very risky.
I have 4 projects wherein a 10K line collection of functions was merely copied and modified as needed. This is a horrid maintenance nightmare. Especially when the code has LOTS of problems, e.g. hard-wired endianness assumptions, tons of global variables, etc. I feel bile in my throat just thinking about it.
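A tiny hypothetical before/after of the parametrizing I mean:

```python
# Before: a whole function copy-pasted for one trivial difference.
def export_rows_eu(rows):
    return "\n".join(";".join(map(str, row)) for row in rows)

def export_rows_us(rows):
    return "\n".join(",".join(map(str, row)) for row in rows)

# After: one function; the difference becomes a parameter, and any
# future bug fix lands in exactly one place.
def export_rows(rows, delimiter=","):
    return "\n".join(delimiter.join(map(str, row)) for row in rows)
```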
Don't refactor if you don't have the time to test the refactored code before release. Refactoring can introduce bugs. If you have well-tested and relatively bug-free code, why take the risk? Wait until the next development cycle.
If you're stuck maintaining an old, flaky code base with no future beyond keeping it running until management can bite the bullet and do a rewrite, then refactoring is a lose-lose situation. First the developer loses, because refactoring bad, flaky code is a nightmare; and secondly the business loses, because as the developer attempts to refactor, the software breaks in unexpected and unforeseen ways.
When you don't really know what the code is doing in the first place. And yes, I have seen people ignore that rule.
It's just a cost-benefit tradeoff. Estimate the cost to refactor, estimate the benefits, determine if you actually have the time to refactor given other tasks, determine if refactoring is the best time-benefit tradeoff. There may be other tasks more worth doing.
I would like to know the reasons that we do refactoring and justify it. I read a lot of people upset over the idea of refactoring. Refactoring was variously described as:
A result of insufficient upfront design.
Undisciplined hacking.
A dangerous activity that needlessly risked destabilizing working code.
A waste of resources.
What are the responsible reasons that lead us to refactor our code?
I also found a similar question here: how-often-should-you-refactor, but it doesn't provide the reasons for refactoring.
Why do we refactor?
Because there's no actual substitute for writing code. No amount of upfront planning or experience can substitute for actually writing the code. This is what an entire generation (called waterfall) learned the hard way.
Once you start writing the code and are in the middle of it, you reason about the way it works at a lower level, and you notice things (performance, usability or correctness issues) that escaped the higher design view.
Refactoring is perfecting.
Ask yourself: why do painters do multiple strokes with the brush on the same spot?
Refactoring is the way to pay the technical debt.
I'd like to briefly address three of your points.
1. "A result of insufficient up-front design"
Common sense (and several books and bloggers) tell us we should strive for the simplest, cleanest design possible to address a given problem. While it's quite possible that some code is written without sufficient work on developing an understanding of the requirements and the problem domain, it's probably more common that "poor code" wasn't "poor" when it was written; rather, it is no longer sufficient.
Requirements change, and designs have to support additional features and capabilities. It's not unreasonable to anticipate some future changes up-front, but McConnell et al. rightly caution against high-level, overly-flexible designs when there's no clear and present need for such an approach.
3. "A dangerous activity that needlessly risks destabilising working code"
Well, yes, if done improperly. Before you seek to make any significant modification to a working system, you should put in place proper measures to ensure that you're not causing any harm - a sort of "developmental Hippocratic oath", almost.
Typically, this will be done by a mixture of documentation and testing, and more often than not, the code wins out, because it's the most up-to-date description of the actual behaviour. In practical terms, this translates into having decent coverage with a unit test suite, so that if refactoring does introduce unexpected problems, these are identified and resolved.
Obviously, when you seek to refactor, you're going to break a certain number of tests, not least because you're trying to fix some broken code contracts. It is, however, perfectly possible to refactor with impunity, provided you have that mechanism in place to spot the accidental mistakes.
4. "A waste of resources"
Others have mentioned the concept of technical debt, which is, briefly, the idea that over time, the complexity of such systems builds up, and that some of that build-up has to be reduced, by refactoring and other techniques, in order to reasonably facilitate future development. In other words, sometimes you have to bite the bullet and go ahead with that change you've been putting off, because otherwise you'll be making a bad situation appallingly worse when you come to add something new in that area.
Obviously, there's a time and a place to pay off such things; you wouldn't try and repay a loan until you had the cash to do it, and you can't afford to go around refactoring willy nilly during a critical stage in development. Nevertheless, by making the decision to address some of the problems in your code base, you save future development time, and thus money, and maybe even further into the future, avoid the cost of having to abandon or completely rewrite some component that is beyond your understanding.
In order to keep a maintainable code base?
Code is more read than written, so it is necessary to have a code-base that is readable, understandable and maintainable. When you see something that is poorly written or designed, it can be refactored to improve the design of the code.
You clean your house also regularly, don't you? Although it may be considered a waste of time, it is necessary in order to keep your house clean, so that you have a nice environment to live in.
You may need to refactor if your code is
Inefficient
Buggy
Hard to extend
Hard to maintain
It all boils down to the original code not being very good, so you improve it.
If you have reasonable unit tests it shouldn't be dangerous at all.
Because hindsight is easier than foresight.
Software is one of the most complex things created by humans, so it is not easy to consider everything beforehand. For large projects it can even be impossible for the team (at least for one consisting of humans ;) ) to consider everything before they actually start developing it.
Another reason is that software isn't constructed, it's growing. That means software can and has to adapt to ever changing requirements and environments.
As Martin Fowler says, the only thing surprising about the requirements for software changing is that anyone is surprised by it.
The requirements will change, and new features will be requested. This is a good thing. Enhancement efforts succeed most of the time, and when they fail, they fail small, so there is budget to do more. Big up-front design projects fail often (one statistic puts the failure rate at 66%), so avoid them. The way to avoid them is to design enough for the first version, and as enhancements are added, refactor to the point where it looks like the system intended to do that in the first place. The lifespan of a project that can do this (there are issues when you publish data formats or APIs - once you go live you can't always be pristine anymore) is indefinite.
In response to the four points, I would say that a process that shuns refactoring demands:
A static world where nothing changes, so that the upfront design can hit a non-moving target perfectly.
Will result in ugly hacks to work around design flaws that aren't being refactored.
Will lead to dangerous code duplication as the fear of changing existing code sets in.
Will waste resources over-engineering the problem and building large design artifacts in anticipation of requirements that never end up getting built, causing large amounts of code and complication to drag the project down while not providing any value.
One caveat, though. If you don't have the proper support, in an automated tool for simple cases, and thorough unit tests in the more complicated cases, it will hurt, there will be new bugs introduced, and you will develop a (quite rational) fear of doing it more. Refactoring is a great tool, but it requires safety equipment.
Another scenario where you need refactoring is TDD. The textbook approach for TDD is to write only the code you need to pass the test and then refactor it to something nicer afterwards.
...because coding is like gardening. Your codebase grows and your domain changes as time passes. What was a good idea back then often looks like a poor design now, and what is a good design now may well not be optimal in the future.
Code should never be considered a permanent artifact nor should it be considered too sacred to touch. Confidence should be garnered through testing and refactoring is a mechanism to facilitate change.
While a lot of other people have already said perfectly valid reasons, here's mine:
Because it's fun. It's like beating your own time in a steeplechase, having the stronger bicep in arm wrestling, or improving your high score in a game of your choice.
A straightforward answer is: requirements change. No matter how elegant your design is, some requirements will not fit it later on.
Poor understanding of the requirements:
If developers don't have a clear understanding of the requirements, the resulting design and code cannot satisfy the customer. Later, as the requirements become clearer, refactoring becomes essential.
Supporting new requirements.
If a component is old, in most cases it will not be able to handle radical new requirements. It then becomes essential to go for refactoring.
Lots of bugs in the existing code.
If you have spent long hours in the office fixing quite a few nasty bugs in a particular component, it becomes a natural choice for refactoring at the earliest opportunity.
Upfront: refactoring does not need to be dangerous when a) it is supported by tools and b) you have a test suite that you can run after the refactoring in order to check the functioning of your software.
One of the main reasons for refactoring is that at some point you find out that code is used by more than one code path and you don't want to duplicate (copy & paste) it but reuse it. This is especially important in cases where you find an error in that code. If you have refactored the duplicated code into a method of its own, you can fix that method and be done. If you copy & paste code around, there is a high chance that you won't fix all the places where this code occurs (just think of projects with several members and thousands of lines of code).
You should of course not do refactoring just for the sake of refactoring - then it really is a waste of resources.
For whatever reason, when I create or find a function that scrolls off the screen, I know it's time to sit back and consider whether it should be refactored or not - if I'm having to scroll the whole page to take in the function as a whole, chances are it's not a shining example of readability or maintainability.
To make insane stuff sane.
I mainly refactor when the code has suffered so much under copy + paste and a lack of architectural guidance that the act of understanding the code is akin to re-organising it and removing the duplication.
It is human to err, and you're ALWAYS going to make mistakes when you develop software. Creating a good design from the beginning helps a lot, and having skilled programmers on the team is also a good thing, but they will invariably make mistakes, and there will be code that is hard to read, tightly coupled or non-functional, etc. Refactoring is a tool to mend these flaws when they've already occurred. You should never stop working on preventing these things from happening to begin with, but when they do happen, you can fix them.
Refactoring to me is like cleaning my desk; it creates a better working environment because over time it will get messy.
I refactor because, without refactoring, it becomes harder and harder to add new features to a codebase over time. If I have features A, B, and C to add, feature C will be finished sooner, with less pain and suffering on my part, if I take time to refactor after features A and B. I'm happier, my boss is happier, and our customers are happier.
I think it's worth restating, in any conversation involving refactoring, that refactoring is verifiably behavior-preserving. If at the end of your "refactoring" your program has different outputs, or if you only think, but can't prove, that it has the same outputs, then what you've done isn't refactoring. (That doesn't mean it's worthless or not worth doing -- maybe it's an improvement. But it's not refactoring and shouldn't be confused with it.)
Refactoring is a central component in any agile software development methods.
Unless you fully understand all the requirements and technical limitations of your project you can't have a complete upfront design. In this case instead of using a traditional waterfall approach you're probably better off with an agile method - agile methods focus on adapting quickly to changing realities. And how would you adapt your source code without refactoring?
I've found code design and implementation, particularly with unfamiliar and large projects to be a learning process.
The scope and requirements of a project change over time, which has consequences on the design. It may be that after spending some time implementing your product you discover that your planned design is not optimal. Perhaps new requirements were added by the client. Or perhaps you're adding additional functionality to an older product and you need to refactor the code in order to sufficiently provide this functionality.
In my experience code has been written poorly and the refactoring has become necessary to prevent the product from failing and to ensure it is maintainable/extendable.
I believe an iterative design process, with prototyping early on is a good way to minimise refactoring later on. This also allows you to experiment with differing implementations to determine which is most suitable.
Not only that, but new ideas and methods for what you're doing may become available. Why stick with old, fallible code that could become problematic if it can be improved?
In short, projects will change over time, which necessitates changes in structure to ensure the product meets new requirements.
From my own personal experience, I refactor because I find that if I try to make software the way I want it made from the first go, it takes a very long time to create anything.
Therefore I value the pragmatism of developing software over clean code. Once I have something running, I then begin to refactor it into the way it should be. Needless to say, the code never devolves into a piece of unreadable tripe.
Just a side note - I did my degree in software engineering after reading some material from Steve McConnell as a teen. I love design patterns, good code reuse, nicely thought-out designs and so on. But I find when working on my own projects that designing things initially from that point of view just doesn't work unless I'm an absolute expert with the technology I'm using (which is never the case).
Refactoring is done to help make code easier to understand/document.
To give a method a better name - perhaps the previous one wasn't clear or was incorrect.
To give variables more descriptive / better names.
Break up a really long method into many smaller methods representing the steps involved in solving the problem.
Move classes to a new package(namespace) to assist organisation.
Reduce duplicate code.
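For instance, the "break up a really long method" item might look like this (a hypothetical sketch):

```python
# Before: one function mixing validation, computation, and output.
def process_order_before(order):
    if not order.get("items"):
        raise ValueError("empty order")
    total = sum(item["price"] * item["qty"] for item in order["items"])
    print(f"Order total: {total}")

# After extracting and naming each step:
def validate_order(order):
    if not order.get("items"):
        raise ValueError("empty order")

def order_total(order):
    return sum(item["price"] * item["qty"] for item in order["items"])

def process_order(order):
    validate_order(order)
    print(f"Order total: {order_total(order)}")
```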
Does point number one even matter? If you're refactoring, the up-front design was obviously flawed. Don't waste time worrying about the flaws in the original design; it's old news. What matters is what you have now, so spend that time refactoring.
I refactor because proper refactoring makes maintenance SO much easier. I've had to maintain a TON of bad, awful code and I don't want to hand down any that I've written for someone else to maintain.
Maintenance costs of smelly code will almost always be higher than maintenance costs for sweet smelling code.
I refactor because:
Often my code is far from optimal first time around.
Hindsight is often 20-20.
My code will be easier to maintain for the next guy.
I have professional pride in the work I leave behind.
I believe time spent now can save a lot more time (and money) further down the track.
All your points are common descriptors of why people do refactor. I would say that the reason people should refactor lies within point #1: A Big Design Up Front (BDUF) is almost always imperfect. You learn about the system as you build it. In trying to anticipate what could happen you often end up building complex solutions to deal with things that never actually happen. (YAGNI - You ain't gonna need it).
Instead of the BDUF approach, a better solution is therefore to design the parts of the system you know you are going to need. Follow the single responsibility principle, and use inversion of control/dependency injection so that you can replace parts of your system when needed.
Write tests for your components. And then, when the requirements for your system change or you discover flaws in your initial design, you can refactor and extend your code. Since you have your unit tests and integration tests in place, you will know if and when the refactoring breaks something.
There is a difference between large refactorings (restructuring modules, class hierarchies, interfaces) and "unit" refactorings - within methods and classes.
Whenever I touch a piece of code I do a unit refactoring - renaming variables, extracting methods; because actually seeing the code in front of me gives me more information to make it better. Sometimes refactoring also helps me to better understand what the code is doing. It's like writing or painting, you extract a fuzzy idea out of your head; put a rough skeleton onto paper; then into code. You then refine the rough idea in the code.
With modern refactoring tools like ReSharper in C#, this kind of unit refactoring is extremely easy, quick & low risk.
Large refactorings are harder, break more things, and require communication with your team members. It will become clear to everyone when these need to happen - because requirements have changed so much that the original design no longer works - and then they should be planned like a new feature.
My last rule - only refactor code that you are actually working on. If code's functionality doesn't need to be changed, then it's good enough & doesn't need further work.
Avoid refactoring just for refactoring's sake; that's just refactorbating!
I'm a contract programmer with lots of experience. I'm used to being hired by a client to go in and do a software project of one form or another on my own, usually from nothing. That means a clean slate, almost every time. I can bring in libraries I've developed to get a quick start, but they're always optional. (and depend on getting the right IP clauses in the contract) Many times I can specify or even design the hardware platform... so we're talking serious freedom here.
I can see uses for constructing automated tests for certain code: Libraries with more than trivial functionality, core functionality with a high number of references, etc. Basically, as the value of a piece of code goes up through heavy use, I can see it would be more and more valuable to automatically test that code so that I know I don't break it.
However, in my situation, I find it hard to rationalize anything more than that. I'll adopt things as they prove useful, but I'm not about to blindly follow anything.
I find many of the things I do in 'maintenance' are actually small design changes. In this case, the tests would not have saved me anything and now they'd have to change too. A highly iterative, stub-first design approach works very well for me. I can't see actually saving myself that much time with more extensive tests.
Hobby projects are even harder to justify... they're usually anything from weekend projects up to, say, a month long. Edge-case bugs rarely matter; it's all about playing with something.
Reading questions such as this one, the most-voted response seems to say that, in that poster's experience/opinion, TDD actually wastes time if you've got fewer than 5 people (even assuming a certain level of competence/experience with TDD). However, that appears to cover initial development time, not maintenance. It's not clear how TDD stacks up over the entire life cycle of a project.
I think TDD could be a good step in the worthwhile goal of improving the quality of the products of our industry as a whole. Idealism on its own is no longer all that effective at motivating me, though.
I do think TDD would be a good approach in large teams, or any size team containing at least one unreliable programmer. That's not my question.
Why would a sole developer with a good track record adopt TDD?
I'd love to hear of any kind of metrics done (formally or not) on TDD... focusing on solo developers or very small teams.
Failing that, anecdotes of your personal experiences would be nice, too. :)
Please avoid stating opinion without experience to back it. Let's not make this an ideology war. Also, skip the "greater employment options" argument. This is simply an efficiency question.
I'm not about to blindly follow anything.
That's the right attitude. I use TDD all the time, but I don't adhere to it as strictly as some.
The best argument (in my mind) in favor of TDD is that you get a set of tests you can run when you finally get to the refactoring and maintenance phases of your project. If this is your only reason for using TDD, then you can write the tests any time you want, instead of blindly following the methodology.
The other reason I use TDD is that writing tests gets me thinking about my API up front. I'm forced to think about how I'm going to use a class before I write it. Getting my head into the project at this high level works for me. There are other ways to do this, and if you've found other methods (there are plenty) to do the same thing, then I'd say keep doing what works for you.
I find it even more useful when flying solo. With nobody around to bounce ideas off of and nobody around to perform peer reviews, you will need some assurance that your code is solid. TDD/BDD will provide that assurance for you. TDD is a bit controversial, though. Others may completely disagree with what I'm saying.
EDIT: Might I add that if done right, you can actually generate specifications for your software at the same time you write tests. This is a great side effect of BDD. You can make yourself look like super developer if you're cranking out solid code along with specs, all on your own.
OK, my turn... I'd do TDD even on my own (for non-spike/experimental/prototype code) because:
Think before you leap: it forces me to think about what I want to get done before I start cranking out code. What am I trying to accomplish here? 'If I assume I already had this piece... how would I expect it to work?' Encourages interface-in design of objects.
Easier to change: I can make modifications with confidence... 'I didn't break anything in steps 1-10 when I changed step 5.' Regression testing is instantaneous.
Better designs emerge: I've found better designs emerging without me investing effort in a design activity. Test-first + refactoring lead to loosely coupled, minimal classes with minimal methods... no overengineering... no YAGNI code. The classes have better public interfaces, small methods, and are more readable. This is kind of a zen thing... you only notice you got it when you 'get it'.
The debugger is not my crutch anymore: I know what my program does... without having to spend hours stepping through my own code. Nowadays, if I spend more than 10 minutes with the debugger, mental alarms start ringing.
Helps me go home on time: I have noticed a marked decrease in the number of bugs in my code since TDD... even if the assert is just a console trace and not an xUnit-type automated test.
Productivity / Flow: it helps me to identify the next discrete baby-step that will take me towards done... keeps the snowball rolling. TDD helps me get into a rhythm (or what XPers call flow) quicker. I get a bigger chunk of quality work done per unit time than before. The red-green-refactor cycle turns into... a kind of perpetual motion machine.
I can prove that my code works at the touch of a button
Practice makes perfect: I find myself learning and spotting dragons faster... with more TDD time under my belt. Maybe it's dissonance... but I feel that TDD has made me a better programmer even when I don't go test-first. Spotting refactoring opportunities has become second nature...
I'll update if I think of any more... this is what I came up with in the last 2 minutes of reflection.
I'm also a contract programmer. Here are my 12 Reasons Why I Love Unit Tests.
My best experience with TDD is centered around the pyftpdlib project. Most of the development is done by the original author, and I've made a few small contributions, but it's essentially a solo project. The test suite for the project is very thorough, and tests all the major features of the FTPd library. Before checking in changes or releasing a version, all tests are checked, and when a new feature is added, the test suite is always updated as well.
As a result of this approach, this is the only project I've ever worked on that didn't have showstopper bugs appear after a new release, have changes checked in that broke a major feature, etc. The code is very solid and I've been consistently impressed with how few bug reports have been opened during the life of the project. I (and the original author) attribute much of this success to the comprehensive test suite and the ability to test every major code path at will.
From a logical perspective, any code you write has to be tested, and without TDD then you'll be testing it yourself manually. On the flip side to pyftpdlib, the worst code by number of bugs and frequency of major issues, is code that is/was solely being tested by the developers and QA trying out new features manually. Things don't get tested because of time crunch or falling through the cracks. Old code paths are forgotten and even the oldest stable features end up breaking, major releases end up with important features non-functional. etc. Manual testing is critically important for verification and some randomization of testing, but based on my experiences I'd say that it's essential to have both manual testing and a carefully constructed unit test framework. Between the two approaches the gaps in coverage are smaller, and your likelihood of problems can only be reduced.
It does not matter whether you are the sole developer or not. You have to think of it from the application's point of view: the application needs to work properly, needs to be maintained, and needs to be as bug-free as possible. There are of course certain scenarios where a TDD approach might not suit you, such as when the deadline is approaching very fast and there is no time to perform unit testing.
Anyway, TDD does not depend on a solo or a team environment. It depends on the application as a whole.
I don't have an enormous amount of experience, but I have had the experience of seeing sharply-contrasted approaches to testing.
In one job, there was no automated testing. "Testing" consisted of poking around in the application, trying whatever popped in your head, to see if it broke. Needless to say, it was easy for flat-out-broken code to reach our production server.
In my current job, there is lots of automated testing, and a full CI-system. Now when code gets broken, it is immediately obvious. Not only that, but as I work, the tests really document what features are working in my code, and what haven't yet. It gives me great confidence to be able to add new features, knowing that if I break existing ones, it won't go unnoticed.
So, to me, it depends not so much on the size of the team, but the size of the application. Can you keep track of every part of the application? Every requirement? Every test you need to run to make sure the application is working? What does it even mean to say that the application is "working", if you don't have tests to prove it?
Just my $0.02.
Tests allow you to refactor with confidence that you are not breaking the system. Writing the tests first allows the tests to define what is working behavior for the system. Any behavior that isn't defined by the tests is by definition a by-product and allowed to change when refactoring. Writing tests first also drives the design in good directions. To support testability, you find that you need to decouple classes, use interfaces, and follow good patterns (Inversion of Control, for instance) to make your code easily testable. If you write tests afterwards, you can't be sure that you've covered all the behavior expected of your system in the tests. You also find that some things are hard to test because of the design -- since it was likely developed without testing in mind -- and you are tempted to skimp on or omit tests.
I generally work solo and mostly do TDD -- the cases where I don't are simply where I fail to live up to my practices or haven't yet found a good way that works for me to do TDD, for example with web interfaces.
TDD is not about testing; it's about writing code. As such, it provides a lot of benefits to even a single developer. For many developers it is a mind shift towards writing more robust code. For example, how often do you think "Now how can this code fail?" after writing code without TDD? For many developers, the answer is never. For TDD practitioners it shifts the mindset to doing things like checking whether objects or strings are null before doing something with them, because you are writing tests specifically to do that (break the code).
Another major reason is change. Any time you deal with a customer, they can never seem to make up their minds. The only constant is change. TDD helps as a "safety net" to find all the other areas that could break. Even on small projects this can keep you from burning up precious time in the debugger.
I could go on and on, but I think saying that TDD is more about writing code than anything else should be enough to justify its use as a sole developer.
I tend to agree with the validity of your point about the overhead of TDD for 'one developer' or 'hobby' projects not justifying the expenses.
You have to consider however that most best practices are relevant and useful if they are consistently applied for a long period of time.
For example TDD is saving you testing/bugfixing time in a long run, not within 5 minutes after you've created the first unit test.
You're a contract programmer, which means that you will leave your current project when it is finished and switch to something else, most likely at another company. Your current client will have to maintain and support your application. If you do not leave the support team a good framework to work with, they will be stuck. TDD will help the project to be sustainable. It will increase the stability of the code base, so other people with less experience will not be able to do too much damage trying to change it.
The same applies to hobby projects. You may get tired of one and want to pass it to someone else. You might become commercially successful (think Craigslist) and end up with 5 more people working besides you.
Investment in proper process always pays off, even if only in gained experience. And most of the time you will be grateful that, when you started a new project, you decided to do it properly.
You have to consider OTHER people when doing something. You have to think ahead, plan for growth, plan for sustainability.
If you don't want to do that - stick to the cowboy coding, it's much simpler this way.
P.S. The same thing applies to other practices:
If you don't comment your code and you have an ideal memory, you'll be fine, but someone else reading your code will not be.
If you don't document your discussions with the customer, somebody else will not know anything about a crucial decision you made.
etc ad infinitum
I no longer refactor anything without a reasonable set of unit tests.
I don't do full-on TDD with unit tests first and code second. I do CALTAL -- Code A Little, Test A Little -- development. Generally, code goes first, but not always.
When I find that I've got to refactor, I make sure I've got enough tests and then I hack away at the structure with complete confidence that I don't have to keep the entire old-architecture-becomes-new-architecture plan in my head. I just have to get the tests to pass again.
I refactor the important bits. Get the existing suite of tests to pass.
Then I realize I forgot something, and I'm back to CALTAL development on the new stuff.
Then I see things I forgot to delete -- but are they really unused everywhere? Delete 'em and see what fails in the testing.
Just yesterday -- part way through a big refactoring -- I realized that I still didn't have the exact right design. But the tests still had to pass, so I was free to refactor my refactoring before I was even done with the first refactoring. (whew!) And it all worked nicely because I had a set of tests to validate the changes against.
For flying solo TDD is my copilot.
TDD lets me more clearly define the problem in my head. That helps me focus on implementing just the functionality that is required, and nothing more. It also helps me create a better API, because I'm writing a "client" before I write the code itself. I can also refactor without having to worry about breaking anything.
I'm going to answer this question quite quickly, and hopefully you will start to see some of the reasoning, even if you still disagree. :)
If you are lucky enough to be on a long-running project, then there will be times when you want to, for example, write your data tier first, then maybe the business tier, before moving on up the stack. If your client then makes a requirement change that requires re-work on your data layer, a set of unit tests on the data layer will ensure that your methods don't fail in undesirable ways (assuming you update the tests to reflect the new requirements). However, you are likely to be calling the data layer method from the business layer as well, and possibly in several places.
Let's assume you have 3 calls to a method in the business layer, but you only modify 2. In the third method, you may still be getting data back from your data layer that appears to be valid, but may break some of the assumptions you coded months before. Unit tests at this level (and above) should have been designed to spot broken assumptions, and in failing they should highlight to you that there is a section of code that needs to be revisited.
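As a very small sketch of what I mean (hypothetical names; Python for brevity):

```python
# Business-layer code written months ago against an assumption about
# the data layer's return shape.
def get_active_users(data_layer):
    users = data_layer.fetch_users()
    return [u for u in users if u["status"] == "active"]

def test_get_active_users_filters_on_status():
    class StubDataLayer:
        def fetch_users(self):
            return [{"name": "ann", "status": "active"},
                    {"name": "bob", "status": "disabled"}]

    # If a data-layer rework later changes the shape (say, a status
    # enum instead of a string), this fails loudly instead of the
    # third, un-modified call site misbehaving in production.
    assert get_active_users(StubDataLayer()) == [
        {"name": "ann", "status": "active"}
    ]
```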
I'm hoping that this very simplistic example will be enough to get you thinking about TDD a little more, and that it might create a spark that makes you consider using it. Of course, if you still don't see the point, and you are confident in your own abilities to keep track of many thousands of lines of code, then I have no place to tell you you should start TDD.
The point about writing the tests first is that it enforces the requirements and design decisions you are making. When I mod the code, I want to make sure those are still enforced and it is easy enough to "break" something without getting a compiler or run-time error.
I have a test-first approach because I want to have a high degree of confidence in my code. Granted, the tests need to be good tests or they don't enforce anything.
I've got some pretty large code bases that I work on and there is a lot of non-trivial stuff going on. It is easy enough to make changes that ripple and suddenly X happens when X should never happen. My tests have saved me on several occasions from making a critical (but subtle) error that might have gone unnoticed by human testers.
When the tests do fail, they are opportunities to look at them and the production code and make sure that it is correct. Sometimes the design changes and the tests will need to be modified. Sometimes I'll write something that passes 99 out of 100 tests. That 1 test that didn't pass is like a co-worker reviewing my code (in a sense) to make sure I'm still building what I'm supposed to be building.
I feel that as a solo developer on a project, especially a larger one, you tend to be spread pretty thin.
You are in the middle of a large refactoring when, all of a sudden, a couple of critical bugs are detected that for some reason did not show up during pre-release testing. You have to drop everything and fix them, and after two weeks of tearing your hair out you can finally get back to whatever you were doing before.
A week later one of your largest customers realizes that they absolutely must have this cool new shiny feature or otherwise they won't place the order for those 1M units they should have already ordered a month ago.
Now, three months later you don't even remember why you started refactoring in the first place let alone what the code you are refactoring was supposed to do. Thank god you did a good job writing those unit tests because at least they tell you that your refactored code is still doing what it was supposed to do.
Lather, rinse, repeat.
...story of my life for the past 6 months. :-/
A sole developer should use TDD on their project (track record does not matter), since eventually the project could be passed to some other developer, or more developers could be brought in.
New people will have an extremely hard time working with the code without the tests. They will break things.
Does your client own the source code when you deliver the product? If you can convince them that delivering the product with unit tests adds value, then you are up-selling your services and delivering a better product. From the client's perspective, test coverage not only ensures quality, it allows future maintainers to understand the code much more readily since the tests isolate functionality from the UI.
I think TDD as a methodology is not just about "having tests when making changes", so it does not depend on team size or project size. It's about noting your expectations about what a piece of code/an application does BEFORE you start to really think about HOW the noted behaviour is implemented. The main focus of TDD is not only having tests in place for written code, but writing less code, because you do just what makes the test green (and refactor later).
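A tiny red/green example of that "write less code" point (hypothetical names again; a sketch rather than a recipe): the expectations are noted first, and the production code does just enough to turn them green -- tiered discounts, currency handling and the rest only get built once a failing test demands them.

```python
import unittest


class TestBulkDiscount(unittest.TestCase):
    # The expectations, noted down before thinking about the HOW.
    def test_no_discount_below_ten_items(self):
        self.assertEqual(discounted_total(unit_price=5.0, quantity=9), 45.0)

    def test_ten_percent_off_from_ten_items(self):
        self.assertEqual(discounted_total(unit_price=5.0, quantity=10), 45.0)


def discounted_total(unit_price: float, quantity: int) -> float:
    # Green: the simplest code that satisfies the expectations above.
    total = unit_price * quantity
    if quantity >= 10:
        total *= 0.9
    return total


if __name__ == "__main__":
    unittest.main()
```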
If you're like me and find it quite hard to think about what a part/the whole application does WITHOUT thinking about how to implement it, I think it's fine to write your tests after your code, thus letting the code "drive" the tests.
If your question isn't so much about test-first (TDD) versus test-after (good coding?), I think testing should be standard practice for any developer, whether alone or in a big team, who creates code that stays in production longer than three months. In my experience, that's the time span after which even the original author has to think hard about what those twenty lines of complex, super-optimized, but sparsely documented code really do. If you've got tests (which cover all paths through the code), there's less to think about - and less to err about, even years later...
Here are a few memes and my responses:
"TDD made me think about how it would fail, which made me a better programmer"
Given enough experience, being highly concerned with failure modes should naturally become part of your process anyway.
"Applications need to work properly"
This assumes you are able to test absolutely everything. You're not going to be any better at writing all the possible tests correctly than you were at writing the functional code correctly in the first place. "Applications need to work better" is a much better argument. I agree with that, but it's idealistic and not quite tangible enough to motivate as much as I wish it would. Metrics/anecdotes would be great here.
"Worked great for my <library component X>"
I said in the question I saw value in these cases, but thanks for the anecdote.
"Think of the next developer"
This is probably one of the best arguments to me. However, it is quite likely that the next developer wouldn't practice TDD either, and it would therefore be a waste or possibly even a burden in that case. Back-door evangelism is what it amounts to there. I'm quite sure a TDD developer would really appreciate it, though.
How much are you going to appreciate projects done in deprecated must-do methodologies when you inherit one? RUP, anyone? Think of what TDD means to the next developer if TDD isn't as great as everyone thinks it is.
"Refactoring is a lot easier"
Refactoring is a skill like any other, and iterative development certainly requires this skill. I tend to throw away considerable amounts of code if I think the new design will save time in the long run, and it feels like an awful lot of tests would be thrown away too. Which is more efficient? I don't know.
...
I would probably recommend some level of TDD to anyone new... but I'm still having trouble with the benefits for anyone who's been around the block a few times already. I will probably start adding automated tests to libraries. It's possible that after doing that, I'll see more value in doing it generally.
Motivated self-interest.
In my case, sole developer translates to small business owner. I've written a reasonable amount of library code to (ostensibly) make my life easier. A lot of these routines and classes aren't rocket science, so I can be pretty sure they work properly (at least in most cases) by reviewing the code, doing some spot testing, and debugging into the methods to make sure they behave the way I think they do. Brute force, if you will. Life is good.
Over time, this library grows and gets used in more projects for different customers. Testing gets more time-consuming, especially in cases where I'm (hopefully) fixing bugs and (even more hopefully) not breaking something else. And this isn't just for bugs in my code. I have to be careful adding functionality (customers keep asking for more "stuff") or making sure code still works when moved to a new version of my compiler (Delphi!), third-party code, runtime environment or operating system.
Taken to the extreme, I could spend more time reviewing old code than working on new (read: billable) projects. Think of it as the angle of repose of software (how high can you stack untested software before it falls over :).
Techniques like TDD give me methods and classes that are more thoughtfully designed, more thoroughly tested (before the customer gets them), and in need of less maintenance going forward.
Ultimately, it translates to less time doing maintenance and more time to spend doing things that are more profitable, more interesting (almost anything) and more important (like family).
We are all developers with a good track record. After all, we are all reading Stackoverflow. And many of us use TDD, and perhaps those people have a great track record. I get hired because people want someone who writes great test automation and can teach that to others. When working alone, I do TDD on my coding projects at home because I found that if I don't, I spend time doing manual testing or even debugging, and who needs that? (Perhaps those people have only good track records. I don't know.)
When it comes to being a good automobile driver, everyone believes they are a "good driver." This is a cognitive bias all drivers have. Programmers have their own biases. The reasons developers such as the OP don't do TDD are covered in this Agile Thoughts podcast series. The podcast archive also has content on test automation concepts such as the test pyramid, plus an intro to what TDD is and why to write tests first, starting with episode 9 in the archive.