Debugging is a bad smell - how to persuade them? - debugging

I've been working on a project that can't be described as 'small' anymore (40+ months), with a team that can't be defined as 'small' anymore (~30 people). We've been using Agile/Scrum (1) practices all along, and a healthy dose of TDD.
I'm not sure if I picked this up from Agile or TDD, more likely a combination of the two, but I'm now clearly in the camp of people who look at debugging as a bad smell. By 'debugging' I'm not referring to the more abstract concept of figuring out what might be wrong with the system, but to the specific activity of running the system in Debug mode and stepping through the code to figure out details that are otherwise inscrutable.
Since I'm fairly convinced, this question is not about whether debugging is a bad smell or not. Rather, I'd like to know how I can persuade my team-mates about this.
People that believe debugging mode is the 'standard' mode tend to write code that can be understood only by debugging through it, which leads to a lot of wasted time: every time you work on an item on top of code developed by someone else, you first have to spend a considerable amount of time debugging it (and, since there's no bug involved, the term is becoming increasingly ridiculous) - and then silos happen. So I'd love to convince a few of my team-mates that avoiding debug mode is a Good Thing (2). Since they are used to living in Debug mode, however, they don't seem to see the problem; to them, spending hours debugging someone else's code before they even start doing anything related to their new item is the norm; they don't see anything wrong with it. Plus, as they spend time 'figuring it out', they know the developer who worked on that area will eventually become available, and the item will be passed on to them (leading to yet another silo).
Help me come up with a plan to turn them from the Dark Side!
Thanks in advance.
(1) Also referred to as SCRUM (all caps). Capitalization arguments aside, I think an asterisk after the term must be used since - unsurprisingly - our organization 'tweaked' the Agile and Scrum process to fit the perceived needs of all stakeholders involved. So, in all honesty, I won't pretend this has been 100% according to theory, but that's beside the point of my question.
(2) Yes, there will always be times when we'll have to get in debug mode, I'm not trying to absolutely avoid it, just.. trying to minimize the number of times we have to dive into it.

If you want to persuade your coworkers that your programming practices are better, first demonstrate through your productivity that you are more effective than they are, at least for some tasks. Then they'll believe you when you explain how you get so much done.
It's also sometimes easier to focus on something concrete. Do your coworkers even talk in terms of "code smell"? Perhaps you could focus on specifics like "When the ABC module fails, it takes forever to debug it; it's much faster to use technique XYZ. Here, let me demonstrate." Then afterwards you can mention your basic principle, which is: yes, the debugger is a useful tool, but there are usually more useful ones.

This is a cross-post, because the first time around it was more of an aside on someone else's answer to a different question. To this question it's a direct answer.
Debugging degrades the quality of the code we produce because it allows us to get away with a lower level of preparation and less mental discipline. I learnt this from an accidental controlled experiment in early 2000, which I now relate:
I took on a contract as a Delphi coder, and the first task assigned was to write a template engine conceptually similar to a reporting engine - using Java, a language with which I was unfamiliar.

Bizarrely, the employer was quite happy to pay me contract rates to spend months becoming proficient with a new language, but wouldn't pay for books or debuggers. I was told to download the compiler and learn using online resources (Java Trails were pretty good).

The golden rule of arts and sciences is that whoever has the gold makes the rules, so I proceeded as instructed. I got my editor macros rigged up so I could launch the Java compiler on the current edit buffer with a single keystroke, I found syntax-colouring definitions for my editor and I used regexes to parse the compiler output and put my cursor on the reported location of compile errors. When the dust settled, I had a little IDE with everything but a debugger.
To trace my code I used the good old-fashioned technique of inserting writes to the console that logged position in the code and the state of any variables I cared to inspect. It was crude, it was time-consuming, it had to be pulled out once the code worked, and it sometimes had confusing side-effects (e.g. forcing initialisation earlier than it might otherwise have occurred, resulting in code that only works while the trace is present).
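For readers who haven't worked this way, the technique looks something like the following minimal sketch (the class, method and trace format are invented here for illustration, not taken from the original project):

    // Debugger-free tracing: each method logs where it is and the state
    // it cares about; the trace lines are pulled out once the code works.
    public class TemplateEngine {
        private static final boolean TRACE = true; // flip off for release

        private static void trace(String where, Object state) {
            if (TRACE) {
                System.err.println("[TRACE] " + where + ": " + state);
            }
        }

        public String render(String template, java.util.Map<String, String> values) {
            trace("render:enter", "template=" + template);
            String result = template;
            for (java.util.Map.Entry<String, String> e : values.entrySet()) {
                result = result.replace("${" + e.getKey() + "}", e.getValue());
                trace("render:substitute", e.getKey() + " -> " + e.getValue());
            }
            trace("render:exit", result);
            return result;
        }
    }

Note that even this tiny example shows the side-effect risk mentioned above: building the trace strings forces values to be evaluated whether or not tracing is on.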
Under these conditions my class methods got shorter and more and more sharply defined, until typically they did exactly one very well defined operation. They also tended to be specifically designed for easy testing, with simple and completely deterministic output so I could test them independently.

The long and the short of it is that when debugging is more painful than designing, the path of least resistance is better design.

What turned this from an observation to a certainty was the success of the project. Suddenly there was budget and I had a "proper" IDE with an integrated debugger. Over the course of the next two weeks I noticed a reversion to prior habits, with "sketch" code made to work by iterative refinement in the debugger. Having noticed this I recreated some earlier work using a debugger in place of thoughtful design. Interestingly, taking away the debugger slowed development only slightly, and the finished code was vastly better quality, particularly from a maintenance perspective.

Don't get me wrong: there is a place for debuggers. Personally, I think that place is in the hands of the team leader, to be brought out in times of dire need to figure out a mystery, and then taken away again before people lose their discipline.

People won't want to ask for it because that would be an admission of weakness in front of their peers, and the act of explaining the need and the surrounding context may well induce peer insights that solve the problem - or even better designs free from the problem.
So, FOR, I not only agree with your position, I have real data from a controlled experiment to support it. It is, however, a rather small sample. More elaborate tests are required before my conclusions are supportable.
Why don't you take what I've said to your team and suggest trials? You have more data than they do (I just gave it to you), and in order to have a credible basis for disagreeing with you they basically have to test the idea - and the only way to do that is to give your idea a go.
You should be ready for it to all fall apart, though, because the whole thing is predicated on the assumption that the developers have the talent and experience to rise to the challenge of stronger design in the absence of step-through debugging.
Step-through debugging was created to make debugging easier. The direct effect of lowering the bar is that people with less talent can participate - if you build a tool that even jackasses can use, you will get jackasses using it -- a lot of them, if the newly accessible activity is well-remunerated.
This causes an exodus of people with talent because they generally use that talent to do rare and precious things in order to be well paid without working too hard, and the market doesn't want to pay for excellence because it cannot distinguish talent well enough to know when paying for it is justified.
Another thought: more recent work with problems on production servers, where it was impossible to install a debugger, has shown the importance of having a codebase for which maintenance doesn't depend on the availability of a debugger. Code that's grown in the absence of debuggers is much less hassle. Choose not to use them when you can change your mind, and then when you can't change your mind it won't be so awful.

Since I'm fairly convinced, this question is not about whether debugging is a bad smell or not.
Well, your local Church might be a more appropriate place for your question, then.
That aside, convince them by arguments. You might want to reconsider your fundamentalist stance, however, because this is the very opposite of persuasive. One thing you might want to do is drop the term "debugging" in your whole discussion and replace it by "stepping through the code" or the like, emphasizing that you oppose the uninformed guesswork/patchwork practice of probing rather than an informed reflection about the code.
(I would still disagree with you, but that's beside the point, since you didn't want a discussion.)

I think the real problem here is
People that believe debugging mode is the 'standard' mode tend to write code that can be understood only by stepping through it
This, if true, should be self-evidently wrong, and there should be no need to discuss it. If it's not evident, it's because they don't see how the badly written code could be improved. Show them: do code reviews where you show how that code could be refactored in a way that is clear without stepping through it.
Code stepping will automatically diminish once better code is written, it just doesn't work the other way around. People will still write bad code and if they avoid stepping through it that will only lead to more wasted time (damn I wish I could step through this spaghetti mess), not to better code.

There is something wrong here, but it's hard to put my finger on it. Perhaps the real issue is that the code has other smells that make it difficult to readily understand. I agree that with TDD one ought to use the debugger less rather than more, since you'll be developing the code in small increments. But, if you can't look at the code and understand it, perhaps it's because the design is too coupled -- there are too many interrelated classes required to make things work.
If the code really needs to be so complex that observation won't suffice, then maybe you need to invest in some good commenting, explaining what is happening -- though I would prefer to see things refactored to the point where comments are not needed. My suspicion is that the debugger may be a symptom rather than the problem.
I know that for me, switching from traditional, code-first development to test-first development has resulted in less time spent debugging... and it's not something I miss. Typically I'll only involve the debugger when it's not obvious why the code I just wrote to pass a test didn't.

This is going to sound like the argument you said you don't want to have, but I think if you want to convince your teammates, you're going to have to make a stronger case. I don't understand your objection. I frequently step through code I'm trying to understand with the debugger. It's a great way to see what's going on. You have not established your claim that people who use the debugger in this way tend to write code which is otherwise difficult to understand. The only convincing way to do so would be through some kind of case/control study which tried to measure and compare the readability of code written by people with varying approaches to the debugger. And you have not even told a plausible story explaining why you think using a tool to understand code execution tends to lead to sloppier code construction. For me it's a complete non sequitur.

A "plan" to convince them of the advantage of another approach is by establishing metrics linked to the number of time you debug the same function for different bugs.
By analysis the trend of that metric, you may convince them that non-regression tests are more useful to spend time writing, and will help them to debug more efficiently.
That way, you do not write completely off the "debug" habit, but you convince them of establishing a solid set of test, allowing them to focus on really useful debug session, if needed.
Should you consider this course of action (metrics), you should know its implementation involves the all hierarchy (stakeholder, project manager, architect, developers). They all need to be implicated in those metrics in order to act on them.
Regarding developers, you could try to suggest:
some new ways of closing a bug case: close it only with a test scenario that reproduces the bug, meaning there is an independent test they can run before, if needed, launching a debug session (see the sketch after this list)
a clear relationship between those metrics and their evaluation by management (it would be a bad practice to debug the same function over and over)
a larger involvement in architectural decisions: sometimes, knowing some functional or applicative features rather than just classes and code can prompt a developer to think more in terms of black-box tests rather than white-box ones (which more easily lead to debug sessions)
participation in the "operational architecture" process (where you need to deploy your app and run full front-to-back integration tests). Again, a larger picture of the whole system can help a developer get more interested in features rather than 'lines of code'
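For the first point, here is a minimal sketch of a bug-closing regression test, assuming JUnit 4; the calculator class, the rounding bug and the bug number are all invented for illustration:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InvoiceCalculatorBug1234Test {

        // Minimal stand-in for the real class, included so the sketch is
        // self-contained; after the fix it rounds once, on the total.
        static class InvoiceCalculator {
            private double total;
            void addLine(double amount) { total += amount; }
            double total() { return Math.round(total * 100.0) / 100.0; }
        }

        // Hypothetical bug #1234: totals were rounded per line instead of
        // per invoice. This test reproduces it and guards against regression.
        @Test
        public void totalIsRoundedOncePerInvoice() {
            InvoiceCalculator calc = new InvoiceCalculator();
            calc.addLine(0.125);
            calc.addLine(0.125);
            // Per-line rounding gave 0.13 + 0.13 = 0.26; the fix gives 0.25.
            assertEquals(0.25, calc.total(), 0.0001);
        }
    }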

I think a better phrasing of this question would be "Is non-TDD a code smell?" TDD seems to lead to less time spent in the debugger due to more time spent writing/failing/passing tests. Without TDD, you are more likely to spend time in the debugger to diagnose errors.
At least within Visual Studio, using the debugger is not that painful, so the challenge for you would be to explain to your teammates how TDD would make their development more enjoyable, productive and successful. Just avoiding the debugger is probably not reason enough for a team to switch their development methodology.

Right on, roadwarrior.
Debugging isn't the problem; it's poorly commented and/or documented code and bad architecture. I work on a smaller team, but when a bug does surface, I do step through the code. Frequently it's a very small job, because the app is well planned out and the docs on the code are clear.
That said, let's get to my point. Want the team to not debug? Comment, comment, comment. Nothing beats down the urge to debug faster. Sure, they'll still do it, but they'll be more likely to step over well-documented code.
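For instance, something like this invented method, just to show the style of commenting that makes stepping through unnecessary - the contract is written down, so a reader never has to run the code to learn it:

    class Billing {
        /**
         * Applies the late-payment penalty to an invoice total.
         *
         * @param total    invoice total in cents; must be non-negative
         * @param daysLate days past the due date; 0 or less means on time
         * @return the new total in cents, never less than the original
         */
        static long applyLatePenalty(long total, int daysLate) {
            if (daysLate <= 0) {
                return total; // on time: nothing to do
            }
            // 1% per late day, capped at 10%.
            long penaltyPct = Math.min(daysLate, 10);
            return total + total * penaltyPct / 100;
        }
    }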
Oh, and though it should go without saying, I'll do it anyway: don't have bugs in your code. :)

I agree with those above who expressed the relative irrelevance of this "debugger issue."
IMO, the 2 most important goals of a developer are:
1) Make the software do what it's supposed to do.
2) Write the code so that a maintenance developer 2 years down the road enjoys the experience of changing existing features or adding new ones.

Before you make a plan, you should decide how important this change is to you. Although I agree that debugging is a smell, it is also a very well accepted and ingrained practice for developers, so convincing them that they should stop doing it won't be easy or quick - and for good reasons. How much energy do you want to put into this topic?
Second, why do you want to persuade them in the first place? If your motivation is to help them, is it really their top priority problem? When you help people in ways they want to be helped, change becomes easy.
Once you have decided that you want to go on with your change initiative, you need to take into account that different people are convinced by different things. Some people will already be convinced by trying something new and exciting. Some will be convinced by numbers (metrics). Some by getting told about it while eating their favorite type of cookie (seriously!), some by hearing about it from their favorite guru. Some by reading about it in a magazine. Some by seeing that "everyone else is doing it, too". And so on.
There is an insightful interview with Linda Rising on this topic at InfoQ: http://www.infoq.com/interviews/Linda-Rising-Fearless-Change. She can say it much better than me. The book is quite good, too.
Whatever you do, don't press too much, but also don't give up. Change can happen - especially if you take resistance as a resource - and sometimes it happens at unexpected times, so always keep a sense of wonder.

#FOR: You have a second problem too; here it is:
sadly it doesn't seem the devs are interested in being more productive (they get paid the same anyway)
How do you intend to make them want to be more productive when there is nothing (visible) for them to gain?

Designing software by debugging is a good practice.
The number of environments supporting this way of developing is very small: the best known is Smalltalk. In Smalltalk, you can write a test describing your object's protocol without the methods being implemented. Running this test will then trigger the debugger, and you can add the method to the right class in the debugger, and can continue stepping through the code until all functionality is implemented and the test is green.
This needs a compiler to be available at run-time, and first-class invocations. It offers a very short feedback cycle, and is one of the primary reasons for Smalltalk's productivity.

Related

What are some good strategies to fix bugs as code becomes more complex?

I'm "just" a hobbyist programmer, but I find that as my programs get longer and longer the bugs get more annoying--and harder to track. Just when everything seems to be running smoothly, some new problem will appear, seemingly spontaneously. It may take me a long time to figure out what caused the problem. Other times I'll add a line of code, and it'll break something in another unit. This can get kind of frustrating if I thought everything was working well.
Is this common to everyone, or is it more of a newbie kind of thing? I hear about "unit testing," "design frameworks," and various other concepts that sound like they would decrease bugginess, make my apps "robust," and everything easy to understand at a glance :)
So, how big a deal are bugs to people with professional training?
Thanks -- Al C.
The problem of "make a fix, cause a problem elsewhere" is very well known, and is indeed one of the primary motivations behind unit testing.
The idea is that if you write exhaustive tests for each small part of your system independently, and run them on the entire system every time you make a change anywhere, you will see the problem immediately. The main benefit, however, is that in the process of building these tests you'll also be improving your code to have fewer dependencies.
The typical solution to this sort of problem is to reduce coupling; make different parts less dependent on one another. More experienced developers sometimes have habits or design skills to build systems in this manner. For example, we use interfaces and implementations rather than classes; we use model-view-controller for user interfaces, etc. In addition, we can use tools that help further reduce dependencies, like dependency injection and aspect-oriented programming.
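As a minimal sketch of that idea in Java (all names invented): a class that depends on an interface rather than a concrete collaborator can be tested without the real thing.

    // ReportGenerator depends on the DataSource abstraction, not on a
    // concrete database class, so a test can inject a fake.
    interface DataSource {
        java.util.List<String> fetchRows();
    }

    class ReportGenerator {
        private final DataSource source;

        ReportGenerator(DataSource source) { // dependency injected here
            this.source = source;
        }

        int rowCount() {
            return source.fetchRows().size();
        }
    }

    // In a test, no database is needed:
    //   ReportGenerator gen =
    //       new ReportGenerator(() -> java.util.Arrays.asList("a", "b"));
    //   assert gen.rowCount() == 2;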
All programmers make mistakes. Good and experienced programmers build their programs so that it is easier to find the mistakes and restrict their effects.
And it is a big deal for everyone. Most companies spend more time on maintenance than on writing new code.
Are you automating your tests? If you do not, you're signing up for creating bugs without finding them.
Are you adding tests for bugs as you fix them? If you do not, you are signing up for creating the same bugs over and over.
Are you writing unit tests? If not, you are signing up for long debugging sessions when a test fails.
Are you writing your unit tests first? If not, your unit tests will be hard to write when your units are tightly coupled.
Are you refactoring mercilessly? If not, every edit will become more difficult and more likely to introduce bugs. (But make sure you have good tests, first.)
When you fix a bug, are you fixing the entire class of bugs? Don't just fix the bug; don't just fix similar bugs throughout your code; change the game so you can never create that kind of bug again.
Bugs are a big deal to everyone. I've always found that the more I program, the more I learn about programming in general. I cringe at the code I wrote a few years back!! I started out as a hobbyist and liked it so much that I went to engineering college to get a Computer Science Engineering major (I am in my final semester). These are the things that I have learned :
I take time to actually design what I am going to write and document the design. It really eliminates a lot of problems down the line. Whether the design is as simple as writing down a few points on what I am going to write, or full-blown UML modeling (:( ), doesn't matter. It's the clarity of thought and purpose, and having material to look back at when I come back to the code after a while, that matter the most.
No matter what language I write in, keeping my code simple and readable is important. I think that it is extremely important not to overcomplicate the code and at the same time not to oversimplify it. (Hard learned lesson!!)
Efficiency optimizations and fancy tricks should be applied at the end, and only when necessary. I apply them only if I really know what I am doing, and I always test my code!
Learning language-dependent details helps me keep my code bug-free. For instance, I learned that scanf() is evil in C!
Others have already commented on the zen of writing tests. I would like to add that you should always do regression tests. (i.e. write new code, then test all parts of your code to see if anything breaks.)
Keeping a mental picture of code is hard at times, so I always document my code.
I use methods to make sure that there is a bare minimum dependence between different parts of my code. Interfaces, class hierarchies etc. (Decoupled design)
Thinking before I code and being disciplined in whatever I write is another crucial skill. I know people who don't format their code so it's readable (shudder!).
Reading other people's source to learn best practices is good. Making my own list is better! When working in a team, there must be a common set of them.
Don't be paralyzed by analysis. Write tests, then code, then execute and test. Rinse, wash, repeat!
Learning to read over my own code and combing it for mistakes is important. Improving my arsenal of debugging skills was a great investment. I keep them sharp by helping my classmates fix bugs regularly.
When there is a bug in my code, I assume it's my mistake, not the computer's, and work from there. That is a state of mind that really helps me.
A fresh pair of eyes aids in debugging. Programmers tend to miss even the most obvious errors in their own code when exhausted. Having someone to show your code to is great.
Having someone to throw ideas at and not be judged by is important. I talk to my mom (who is not a programmer), throw ideas at her and find solutions. She helps me bounce my ideas back and forth and refine them. If she is unavailable, I talk to my pet cat.
I am not so discouraged by bugs anymore. I've learned to love removing bugs almost as much as programming.
Using version control has really helped me manage different ideas I get while coding. That helps reduce errors. I recommend using git or any other version control system you might like.
As Jay Bazzuzi said - Refactor code. I just added this point after reading his answer, to keep my list complete. All credit goes to him.
Try to write reusable code. Reuse code, both yours and from libraries. Using libraries which are bug free to do some common tasks really reduces bugs (sometimes).
I think the following quote says it best - "If debugging is the art of removing bugs, programming must be the art of putting them in."
No offense to anyone who disagrees. I hope this answer helps.
Note
As others, like Peter, have pointed out, use Object Oriented Programming if you are writing a large amount of code. There is a limit to code length after which it becomes harder and harder to manage if written procedurally. I like procedural for smaller stuff, like playing with algorithms.
There are two ways to write error-free programs; only the third one works. ~Alan J. Perlis
The only way for errors to occur in a program is by being put there by the author. No other mechanisms are known. Programs can't acquire bugs by sitting around with other buggy programs. ~Harlan Mills
Obviously, bugs are a big deal to any programmer. Just look through the list of questions on Stack Overflow to see this illustrated.
The difference between a hobbyist and an experienced professional is that the pro will be able to use his experience to code in a more "defensive" way, avoiding many types of bugs in the first place.
All the other answers are great. I'll add two things.
Source control is mandatory. I'm assuming you're on Windows here. VisualSVN Server is free and takes maybe 4 clicks to install. TortoiseSVN is also free and it integrates into Windows Explorer, getting around the VS Express limitation of no add-ins. If you create too many bugs, you can revert your code and start over. Without source control, this is next to impossible. Plus you can sync your code if you have a laptop and a desktop.
People are going to recommend many techniques like unit testing, mocking, Inversion of Control, Test Driven Development, etc. These are great practices, but don't try to cram it all into your head too quickly. You have to write code to get better at writing code, so work these techniques slowly into your code writing. You have to crawl before you walk and walk before you can run.
Best of luck in your coding adventures!
This is a common newbie thing. As you get more experience, of course, you'll still have bugs, but they'll be easier to find and fix because you'll learn how to make your code more modular (so that changing one thing doesn't have ripple effects everywhere else), how to test it, and how to structure it to fail fast, close to the source of the problem, rather than in some arbitrary place. One very basic but useful thing that doesn't require complex infrastructure to implement is to check the inputs of all functions that have non-trivial preconditions with asserts. This has saved me several times in cases where I would have otherwise gotten weird segfaults and arbitrary behavior that would have been near impossible to debug.
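A minimal Java sketch of that last idea (the method and its rule are invented; note the JVM only checks assert statements when run with the -ea flag):

    // Fail fast at the function boundary, instead of crashing somewhere
    // far from the real mistake.
    class Stats {
        static double averageOfPositives(double[] values) {
            assert values != null : "values must not be null";
            assert values.length > 0 : "values must not be empty";
            double sum = 0;
            for (double v : values) {
                assert v > 0 : "expected a positive value, got " + v;
                sum += v;
            }
            return sum / values.length;
        }
    }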
If bugs weren't a problem then I'd be able to write a 100,000 line program in 10 minutes!
Your question is like, "As an amateur doctor, I worry about my patients' health: sometimes when I'm not careful enough, they sicken. Is patients' health a problem for you professional doctors too?"
Yes: it's the central problem, even the only problem (for any sufficiently all-inclusive definition of 'bug').
Bugs are common to everyone -- professional or not.
The larger and more distributed the project, the more careful one must be. One look at any open source bug database (e.g. https://bugzilla.mozilla.org/) will confirm this for you.
The software industry has evolved various programming styles and standards, which when used right, make wrong code easier to spot or limited in its impact.
Therefore, training has a very positive effect on code quality... But at the end of the day, bugs still sneak through.
If you're just a hobbyist programmer, learning full bore TDD and OOP may involve more time than you're willing to put in. So, going on the assumption that you don't want to put in the time on them, a few easily digestible suggestions to cut down on bugs are:
Keep each function doing one thing. Be suspicious of a function more than, say, 10 lines long. If you think you can break it into two functions, you probably should. Something that will help you control this is naming your functions according to exactly what they are doing. If you find that your names are long and unwieldy, then your function is probably doing too many things.
Turn magic strings into constants. That is, instead of using:
people["mom"]
use instead
var mom = "mom";
people[mom]
Design your functions to either do something (command) or get something (query), but not both.
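A minimal sketch of that command/query separation, with invented names:

    class Account {
        private long cents;

        // Command: does something, returns nothing.
        void deposit(long amountInCents) {
            cents += amountInCents;
        }

        // Query: gets something, changes nothing.
        long balanceInCents() {
            return cents;
        }
    }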
An extremely short and digestible take on OOP is here http://www.holub.com/publications/notes_and_slides/Everything.You.Know.is.Wrong.pdf. If you get this, you've got the gist of OOP and are quite frankly ahead of a lot of professional programmers.
The prevailing wisdom seems to be that the average programmer creates 12 bugs per 1000 lines of code - depends on who you ask for the exact number, but it's always per lines of code - so, the bigger the program, the more the bugs.
Subpar programmers tend to create way more bugs.
Newbies are often trapped by idiosyncrasies of the language, and lacking experience tends towards more bugs too. As you go on, you will get better, but never will you create bug-free code... well I still have bugs, even after 30 years, but that could be just me.
Nasty bugs happen to everyone from pros to hobbyists. Really good programmers get asked to track down really nasty bugs. It's part of the job. You'll know you've made it as a software developer when you stare at a nasty bug for two days and in frustration you shout, "Who wrote this crap!?!?" ... only to realize it was you. :-)
Part of the skill of a software developer is the ability to keep a large set of interrelated items straight in his/her head. It sounds like you're discovering what happens when your mental model of the system breaks down. With practice you will learn to design software that doesn't feel so brittle. There are tons of books, blogs, etc. out there on the subject of software design. And Stack Overflow of course for specific questions.
All that said, here's a couple of things you can do:
A good debugger is invaluable. Often you have to step through your code line by line to figure out what went wrong.
Use a garbage-collected language such as Python or Java if it makes sense for your project. GC will help you focus on making things work instead of getting bogged down by maddening memory errors.
If you write C++, learn to love RAII.
Write LOTS of code. Software is somewhat of an art form. Lots of practice will make you better at it.
Welcome to Stack Overflow!
What really changed my odds against code complexity and bugs was using a coding standard - how to place brackets and so on. It may seem like a boring and useless thing, but it really unifies all the code and makes it much easier to read and maintain. So, do you use a coding standard?
If you're not well organized, your codebase will become your very own Zebra Puzzle. Adding more code is like adding more people/animals/houses to your puzzle, and soon you have 150 various animals, people, houses and cigarette brands in your puzzle and you realize that it just took you a week to add 3 lines of code because everything is so inter-related that it takes forever to make sure the code still executes how you want it to.
The most popular organizational paradigm seems to be Object Oriented Programming, if you can break your logic down into small units which can be constructed and used independently of each other, then you will find bugs far less painful when they occur.

How to justify to your colleagues that they produce crappy code?

I am finding it somewhat difficult to carry on working in my current job.
The codebase has become a bit wild lately (but definitely not the worst I've seen), and I'm having a hard time dealing with some parts of the code. I could be stupid, but most likely it's just that it demotivates me a lot to start working on something that is hard to reason about.
My boss is already aware of my thoughts - I expressed what it feels like to work like this. He asked me to provide examples of what was wrong. When I pointed out two or three small issues, he said "yeah, ok" but that refactoring costs him a lot of money, and that we have to get the product out (not the first time I hear this).
I have to admit that the examples were not the most compelling, but the problem is actually tough to explain. It's made up of a lot of tiny "bad decisions" throughout the codebase. (I also grant that this issue is absolutely subjective.) For instance: bad naming, dealing with nulls, boilerplate, not making code reusable (or the opposite), and so on. It can be tiring to re-think someone else's code just to justify that I would have done it differently.
Do you have thoughts on how to deal with this?
I am a bit fed up with having to hack around a quick 'n dirty codebase every time!
Sometimes your fellow programmers do things very differently than you, and things you might feel are way wrong might actually have positive aspects. We all have schools we come from. I've come across programmers who complain about things I don't understand just as often as I myself have felt something needed complaining about.
Make sure you can reduce what you complain about to a concrete disadvantage, if for no other reason than so you can motivate middle management to make improvements. Things that are hard to reduce to measurable facts usually originate from differences in taste/style rather than quality (there are books to read about this subject). The answer posted by smacl has good and concrete advice!
If you can reduce your concern to a real disadvantage, then I really do not agree when people say that one has to "accept" situations like this. I've been exposed to this problem more than once, and let me tell you, refactoring is not the solution to the problem. Refactoring only fixes the symptoms.
Accepting a situation like this is the same as saying "bad-quality product lines and expensive, frustrating maintenance are something my company can live with". This is of course seldom the case. However, management (i.e. those with the go/no-go on which projects to prioritize) are very often not technically aware of what the problems are, or why development is expensive. They shouldn't have to be, for that matter.
That's why you need a development organization with technical leads, chief architects, a good organisational structure and tiered model etc. Experienced software professionals who have seen where the road leads to if you ignore certain aspects of development. It's about changing the "culture" of your team(s).
Either you stick with your company and try to change how you do things from the roots, or you find another place to work and make sure you find out during the interview exactly how they work in every-day development.
Good luck
I recently faced a very similar problem and a friend gave me some advice that helped a great deal. He said: "keep yourself out of it."
What he meant was, that you must communicate the problems because they are real, costly problems with consequences in terms of time and money. But when you do communicate, talk only about the consequences for the organization. Do not mention the consequences to you, because then it just sounds like whining and will be ignored.
For example:
Not keeping yourself out of it:
"The other developers use these obscure, misleading identifiers and then I have to spend hours going over the code trying to discover what they meant. It's taking up a lot of my time."
Keeping yourself out of it:
"It would be very helpful and cost effective to do some refactoring of class and variable names and also establish some coding standards around identifiers. The immediate payoff will be an easier-to-understand codebase for everyone, leading to better productivity. The longer-term payoff will be that later we'll be able to modify the code and fix things faster. If a critical bug is discovered right before a release, an understandable codebase will be really important."
I hope that helps.
1) Make the problem more visible and get management buy-in
Keep a very detailed diary of the time spent on various coding tasks over the period of about a month. At the end of the month analyse and summarise the contents for your boss, i.e. time wasted and hence money wasted, to illustrate that change of some form is necessary.
2) Think of a cost effective way of moving forward
For example: rather than refactoring the entire code base, separate interfaces from implementations, and enforce tighter standards, including unit tests, naming conventions, etc., at the interface layer. Thus each programmer can have confidence in using code that they have not written. While this is sweeping the crap under the carpet to a certain extent, it is a good way of preparing for larger-scale refactoring.
It is important from a management perspective that workflow is not interrupted, and positive results are visible, so plan accordingly.
3) Agree on longer-term improvements with your co-workers
Sit down and agree on reasonable coding standards for future code with the other programmers.
Perhaps you could set up monthly meetings, and at those meetings you could demonstrate good and bad code. Obviously you don't want to point fingers, so you'd want to use generic code examples based on stuff you saw in your project. This way you can constructively gather support from others for your style. You might want to compile these after the meetings so people can easily reference them.
I think it is really easy to point out issues and complain, but mentoring people and helping them change requires effort. It isn't an easy task, but if you are having trouble staying motivated at your job, perhaps this would give you a nice burst of motivation. You might learn some things along the way.
You'll find that this is common-place. What you can do is accept that things are done differently by different people. As you fix bugs or add features, you'll get a brief window into a sub-section of the application that you can improve. When you work on the code, you can make it better, and they don't need to know that you're piecemeal improving the code.
Be very careful though. Sometimes code is written in a way that looks 'hacked', but solves a bug that is not easy to discern. Especially if it is older code which has been tried and tested.
On another note, complaining will only get you viewed as a complainer. Think about what outcome you want, and what actions will most likely produce that outcome. You will always hear the answer 'No' when you ask, 'Can I do X days of work for absolutely no noticeable result?'
You could quit and hope to find something better.
Or, you could stick it out and try to improve the code that you can control, when you can control it. No matter how well-intentioned the developers are, if there is more than one developer, the code base will be "ugly" by a competent developer's standards. Work with the other developers to improve their abilities and refactor code as you make enhancements.
For starters:
Enforce the use of static code analysis tools. Every language has a few well known tools.
Show some before-and-after refactored code examples, and explain why you think the result is better (see the sketch below). Try not to put any one person on the spot.
Code reviews by experienced developers.
Keep in mind, some developers can't be helped no matter how much you try...
If someone critiques your code be polite and open minded, you might learn something.
Cyclomatic complexity / number of changesets/bugs. Complex code is more likely to break, causing more bugs, which cause more changes, which cost more money!
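To make the before-and-after point concrete, here is the kind of small pair that works well in such a session (the shipping example is entirely invented):

    interface Order {
        boolean isPaid();
        boolean isInternational();
    }

    class ShippingLabels {
        // Before: nested conditionals; every branch adds complexity.
        static String labelBefore(Order o) {
            if (o != null) {
                if (o.isPaid()) {
                    if (o.isInternational()) {
                        return "INTL";
                    } else {
                        return "DOMESTIC";
                    }
                } else {
                    return "HOLD";
                }
            } else {
                throw new IllegalArgumentException("order is null");
            }
        }

        // After: guard clauses, same behaviour, far easier to read.
        static String labelAfter(Order o) {
            if (o == null) throw new IllegalArgumentException("order is null");
            if (!o.isPaid()) return "HOLD";
            return o.isInternational() ? "INTL" : "DOMESTIC";
        }
    }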
99% of the time you never get to choose the people you work with. Not all relationships work out, be they work or otherwise.
It would be best if your project was broken up enough so that each developer can contribute to a spec of what the other needs, so programmers don't step on each other's toes.
Getting people to change their coding style is hard. It takes a cast-iron technical lead committed to such things, who will back you when you bring it up. Management types can't do this; leadership needs to provide the technical details.
It sounds to me like you don't have a problem with the code so much as your coworkers. It will probably be very difficult for you to force the changes you want to see. Your best bet would probably be to start updating your resume and keep your eyes open for other opportunities.
I think that once you're in the middle of the weeds, you do not really have a good chance of getting things done right, you just have to get them done. I would say most developers do not like firefighting and want the ideal code base, but in my opinion this requires you to spend the time up front planning the system out.
I'd recommend trying to work with your manager to ensure that the areas you feel are lacking now are not lacking in the next project. Maybe it's putting you in the lead, having more code reviews with peers, or maybe it is further training for the entire team.
Either way, I think this is something that most of us go through. I do agree with the other person advising some caution on this. I know that code I wrote yesterday seemed great at the time, and looking back on it, I can probably find 10 other ways to do it and make it look cleaner.
Have you considered maybe adding FxCop to the automated builds to enforce coding style? Other than that, you could try suggesting TDD, which would give the power to whoever writes the test to enforce that the interfaces for each class are structured in a particular way.
Off the top of my head, that's all that I can think of.
Things in life are not perfect and if you start nitpicking, feathers will be ruffled and relationships soured.
The best method is to pick your battles carefully. If something is small enough ignore it and live with it. If it is big and worthwhile (i.e. the management sees ROI in backing you) go for it.
This is apt for your situation...
God, grant me the serenity to accept the things I cannot change, courage to change the things I can, and the wisdom to know the difference.
One thing I try to do, and it may help you: if a part of the code is bad, and the idea you propose to fix it is agreed to be best but the "no time" excuse is given, why don't you rewrite it - say, on your own time? If you decide on sticking around at that job for a while, it will only help you, and you will learn and become a better programmer.
Note that it is a good idea, and I would even say required, to do a complete code review of that change before check-in, and you should try to time the check-in so that it comes before a complete regression test cycle for a release. That way your refactoring is completely tested out. Over a period of 6 months or so, it will start showing a beneficial impact, and you can then ask for time allocation for this, with proof to back it up.
The only thing that has a chance of convincing management is demonstrating that the things you are citing as perceived problems become actual problems.
To try to take advantage of this, try to keep the "complainer" tone down to a minimum, that is, focus on how this affects the bottom line rather than how it makes you feel. Point out possible consequences of poor decisions that you see being made. If those consequences come to pass, and they cost more than an up-front fix would have, gently remind management that you foresaw the difficulty and provide a helpful suggestion as to how future similar costs can be avoided with a little up-front effort.
The problem is, in many organizations, the problems will never cause enough of an issue for management to care, or if they do, management won't see the connection between your perception of the problem and the actual problem as it occurs. In these cases, you end up seeming like a needlessly persnickety technical person, which isn't a reputation you want to have.
So my advice is, pick your battles. If there is something very egregious that others are about to let slip, then you can speak up and perhaps be vindicated later. For the little details that just grind away at you, I'm afraid there's not much you can do but put up with it.
Show them their own forgotten code disguised as yours for critique.
Take an old piece of their code they have forgotten about
Pretend you wrote it
Ask them to figure out something with it
Make sure they point out how bad the code is for whatever reason
Add your own items. Brainstorm what should be done, since it's "your" fault.
Let them know you didn't know how to bring it up without offending them - but it's their code.
If they recall that they wrote it, they might catch on..
If you have a good relationship with your manager, you might be able to use this to work yourself into a "Senior" or "Lead" Developer role. You could propose that it would be best if one person on the team takes technical leadership of the code base. It would be your job to review the code of others and ask them to make improvements when you feel it is necessary. If you go this route, just make sure to take it slowly. If you ask for a lot very quickly, then you could end up pissing off all the other developers.

How not to rush yourself? [closed]

I often find that I do less-than-complete work on a feature, especially in the design phase. I detect several reasons:
I'm over-optimistic
I feel the need to provide quick solutions, so sometimes I fool myself into thinking the design is fool-proof when in fact it's still full of holes, just to get the job done faster. Of course I end up paying dearly later.
I've been aware of this behavior of mine for some time, yet I still find I don't manage to compensate. Have you encountered similar problems? How do you approach solving them?
I use a couple of techniques. The first is a simple paper to-do list. In the morning I write down my tasks for the day. I try to work on a task until I can cross it off. I cross it off only when I'm done to my own satisfaction. My to-do list helps me stay focused. When an interruption comes in, I can consciously choose whether it is important enough to interrupt what I'm doing now.
The second technique I use is to give up on the idea of "done" for a design. Instead, I focus on what I've started calling "successions", where a design goes through predictable stages. Each stage supports the current functionality well and will be succeeded at some point by the next stage. This lets me do a good job, a job I can be proud of, without over-designing.
I have the intuition that there is a small catalog of such successions (like http://www.threeriversinstitute.org/FirstOneThenMany.html) that would cover most of design. In the meantime, I try to remember that "sufficient to the day are the troubles thereof".
I run into this problem a lot.
My solution is a notebook. (The old fashioned paper kind).
I write out how I'm planning to implement the solution as a bulleted overview list, and then I try to flesh out each point on the list.
Often, during that process, I come across issues I hadn't thought of.
Of course, the 80/20 rule still applies... I still come across things when I'm actually doing the implementation that hadn't occurred to me, but with experience these tend to diminish.
EDIT: If I'm still not sure at the end of this process, I put together a throwaway prototype testbed... It's important to make sure it's throwaway, because otherwise you run the risk of including some nasty hacks in your real codebase.
It's very common to miss edge-cases and detail when you're in the planning phase of a project, especially in the software development field. Please don't feel that this is a personal failing; it's something endemic.
To counter this, many software development methodologies have emerged. Most recently there has been a shift by many development teams to 'agile' methods, where there is a focus on rapid development with little up-front technical design (after all, many complexities are only discovered when you actually begin developing). I'm currently using the Scrum system, which has been excellent in my small team:
http://en.wikipedia.org/wiki/Agile_methods
http://en.wikipedia.org/wiki/Scrum_%28development%29
If you find that your organisation will not accept what they may regard as a radical shift in approach, it may be worth investigating whether they will agree to the development of a prototype system. This means that you could code up a feature to investigate the technologies involved and judge whether it's feasible, without having to commit to full development, a quality bar, testing schedules etc. The prototype should be thrown away once the feasibility has been proved or disproved, then proper development may begin, including all that you've learned in the process.
If your problem is more related to time management, then I'd recommend the Getting Things Done approach (http://en.wikipedia.org/wiki/Getting_things_done). This is pragmatic and simple, concentrating on making you productive without overloading you with information that isn't immediately relevant to your current work. I've found that I get overwhelmed with project/feature ideas at times and it really helps to write everything down and file it for a later time when I have the resources available to work effectively.
I hope this helps and best of luck!
Communication.
The best way to not rush yourself into programming mistakes is communication. Yes, good ol' fashioned accountability. If another person in the office is involved in the process, the outcome will be better. If a programmer just takes on the task without any concern for anybody else, then there is a higher possibility of mistakes.
Accountability Checklist:
How do we support this?
Who needs to know what has changed?
Why are we doing this in the first place?
Will there be anybody who doesn't want this changed?
Will someone else understand how I did this?
How will the user perceive and use this change?
A sceptical comrade is usually good enough to help. Functional specifications are good; they usually answer all of these questions. But sometimes a conversation with another person can help you work through them, and you can get changes out the door faster.
I have learned, through years of mistakes (though still making them), that almost anything I want to use repeatedly, or distribute, needs to be designed properly. So getting burned enough times will end your optimism.
When getting pressure from management, I tell them I will have to put in the thought anyway, so I should do it when it's cheap. I think on paper as well, so I can actually prove that I'm doing something and it keeps my fingers on the keyboard, both of which provides a soothing effect to management. ;-)
At the risk of sounding obvious - be pessimistic. I had a few experiences where I thought "that should take a few hours" and it ended up taking a couple of days because of all the little things that pop up unexpectedly.
By far the best way I've found to manage things is to (much like Andrew's answer) write out the design and requirements as a starting point. Then I go through and look for weak points in the design, gotchas and additional use cases etc. I try to look at this as a critical exercise - there's no code written yet, so this is the time to be totally ruthless and look for every weak point. Look for error conditions you'll have to handle, and whatever amount of time you think it will take to complete each feature/function, pad that amount by a lot. I've had times where I've doubled my initial estimate and still not been that far off the mark.
It's very hard as a programmer to realistically project debugging time - writing the code is easy to estimate, but debugging that into functioning, valid code is something else entirely. Therefore I find there's no exact science to it but I just pad tasks by a whole bunch, so that I have plenty of breathing room for debugging.
See also Evidence Based Scheduling which is a fascinating concept in scheduling developed by FogCreek for their FogBugz product.
You and the rest of the world.
You need a more detailed design, a more accurate estimate, and the willingness to accept that sometimes the optimal solution is not necessarily the best solution (e.g., you could code some loop in assembler to get optimal performance, but that's going to take a lot longer than just doing
for (i=1; i<=10; i++) {}
). Is the time spent doing it really worth it for an accounting package, as opposed to a missile system?
I like designing, but over time I've found that much up-front design is a lot like building castles in the sky - too much speculation, however well-educated, missing critical feedback from actually implementing and using the design.
So today I'm much more into accepting that while implementing a design I will learn a lot of new stuff about it, and need to feed that learning back into the design. Doing that is a skill that is fun to learn, including the skills to keep a design flexible by keeping it simple, free of duplication and cohesive and decoupled, of changing the design in small, controlled steps (=refactoring), and writing the necessary extensive suite of automated tests that make this kind of changes safe.
This seems to be a much more effective approach to me than getting better at "up-front design speculation" - and additionally it makes me equally well prepared for the inevitable moment when the design needs to be changed due to a simply unforeseeable change in the requirements.
Divide, divide, divide. List all the steps that will be required to finish the project, then list all the steps those steps will require to be concluded, and so on until you reach atomic items you are absolutely sure you can finish in a day or less. Add up the durations of all these items to arrive at a length of time.
Then double it. Now you have a number that, if depressing, is at least somewhat realistic.
If possible "Sleep on your design" before publishing it. I find after I leave work, I usually think of things I have missed. This usually happens while I am lying in bed before falling asleep or even while showering the next day.
I also find it valuable to have a peer/friend that I trust review what I have before distributing it. Somebody else almost always sees something I didn't think of or miscommunicated.
I like to do as others stated here: write down in pseudo code what the flow of your app will be. This immediately highlights some detailed areas that may require further attention that were not apparent up front.
Pseudo code is also readable to business users who can verify your approach meets their needs.
Using pseudo code also creates a nice set of methods that could be put to use as an interface in the final solution. Once the pseudo code is fairly tight, look for patterns and review some common GoF patterns. They do not have to be perfect, but using them will shield you from having to rewrite the code later during the revisions that are bound to come along.
Just taking an hour or two to write pseudo code yields some invaluable time-saving pieces later on:
1. An object model emerges
2. The program's flow is clearly defined for others
3. It can be used as documentation of your design with some refinement
4. Comments are easier to add and will be clearer for someone else reviewing your code.
Best of luck to you!
I've found that the best way to make sure you've chosen a good design is to make sure that you understand the problem, know the limitations you have, and know what things are must-haves vs. nice-to-haves.
Understanding the problem will involve talking to the people who have the need and keeping them anchored to what needs to get done first instead of how they think it ought to get done. Once you know what actually has to happen, you can go back and talk over requirements about how.
Knowing your limitations may be quite easy: needs to run on the iPhone; has to be a web application; needs to integrate with the already-existing Java code and deployment setup; and so on. It may be quite difficult: you don't know what the potential size of your user base is (hundreds? thousands? millions?); you don't know whether you'll need to localize it (though if you're not sure, assume you will have to).
Must-haves vs. nice-to-haves: this is possibly the most difficult part. Users very often have emotional attachments to "requirements" ("It should look just like Excel") that are not actually part of the "has to happen" stuff. You often have to juggle functionality vs. desires to get an acceptable implementation. You can't always give everyone a pony.
Make sure you write all this down! Even if it evolves along the way, or the design is small, having a "this is what we're planning to do now" guide to refer to when you need to make a decision about committing resources makes it easier to restrain yourself from implementing a really cool whiz-bang feature instead of a boring must-do.
Since you recognize that you feel the need to provide a quick solution, perhaps it will slow you down to realize that you can probably solve the problem faster and deliver it sooner if you spend more time up front on design. For instance, if you spend 3 hours designing and 30 hours writing code, then spending 6 hours designing might mean you only need to spend 10 hours writing code. (These are not actual figures, just examples.) You might try to quantify this for yourself on the next few projects you do. Do a couple where you behave as you normally would and see what ratio of design / code-writing / testing-and-debugging you actually hit. Then on the next project deliberately increase the percentage of time you spend on the design phase and see if it shortens the time needed for the other phases. You will have to try this over several projects to get a true baseline, since the projects may be quite different. Treat it as an experiment: see whether spending 20%, 50%, or 100% more time on design improves your performance on the other phases and thus lets you deliver a product faster.
Remember: the later in the process you find a problem with a design, the harder (and more time-consuming) it is to fix.

What are some reasons why a sole developer should use TDD? [closed]

I'm a contract programmer with lots of experience. I'm used to being hired by a client to go in and do a software project of one form or another on my own, usually from nothing. That means a clean slate, almost every time. I can bring in libraries I've developed to get a quick start, but they're always optional (and depend on getting the right IP clauses in the contract). Many times I can specify or even design the hardware platform... so we're talking serious freedom here.
I can see uses for constructing automated tests for certain code: Libraries with more than trivial functionality, core functionality with a high number of references, etc. Basically, as the value of a piece of code goes up through heavy use, I can see it would be more and more valuable to automatically test that code so that I know I don't break it.
However, in my situation, I find it hard to rationalize anything more than that. I'll adopt things as they prove useful, but I'm not about to blindly follow anything.
I find many of the things I do in 'maintenance' are actually small design changes. In this case, the tests would not have saved me anything and now they'd have to change too. A highly iterative, stub-first design approach works very well for me. I can't see actually saving myself that much time with more extensive tests.
Hobby projects are even harder to justify... they're usually anything from weekenders up to a say month long. Edge-case bugs rarely matter, it's all about playing with something.
Reading questions such as this one, the most-voted response seems to say that in that poster's experience/opinion TDD actually wastes time if you've got fewer than 5 people (even assuming a certain level of competence/experience with TDD). However, that appears to cover initial development time only, not maintenance. It's not clear how TDD stacks up over the entire life cycle of a project.
I think TDD could be a good step in the worthwhile goal of improving the quality of the products of our industry as a whole. Idealism on its own is no longer all that effective at motivating me, though.
I do think TDD would be a good approach in large teams, or any size team containing at least one unreliable programmer. That's not my question.
Why would a sole developer with a good track record adopt TDD?
I'd love to hear of any kind of metrics done (formally or not) on TDD... focusing on solo developers or very small teams.
Failing that, anecdotes of your personal experiences would be nice, too. :)
Please avoid stating opinion without experience to back it. Let's not make this an ideology war. Also, skip the 'greater employment options' argument. This is simply an efficiency question.
I'm not about to blindly follow anything.
That's the right attitude. I use TDD all the time, but I don't adhere to it as strictly as some.
The best argument (in my mind) in favor of TDD is that you get a set of tests you can run when you finally get to the refactoring and maintenance phases of your project. If this is your only reason for using TDD, then you can write the tests any time you want, instead of blindly following the methodology.
The other reason I use TDD is that writing tests gets me thinking about my API up front. I'm forced to think about how I'm going to use a class before I write it. Getting my head into the project at this high level works for me. There are other ways to do this, and if you've found other methods (there are plenty) to do the same thing, then I'd say keep doing what works for you.
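A minimal sketch of that "client first" habit in Python's unittest - the Wallet class and its API are invented for illustration:

    import unittest

    # Written first: this is how I *want* to use the class, decided
    # before a single line of Wallet existed.
    class WalletTest(unittest.TestCase):
        def test_deposit_increases_balance(self):
            w = Wallet()
            w.deposit(100)
            self.assertEqual(w.balance, 100)

    # The minimal class the test talked me into writing.
    class Wallet:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

    if __name__ == "__main__":
        unittest.main()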
I find it even more useful when flying solo. With nobody around to bounce ideas off of and nobody around to perform peer reviews, you will need some assurance that your code is solid. TDD/BDD will provide that assurance for you. TDD is a bit controversial, though. Others may completely disagree with what I'm saying.
EDIT: Might I add that if done right, you can actually generate specifications for your software at the same time you write tests. This is a great side effect of BDD. You can make yourself look like a super developer if you're cranking out solid code along with specs, all on your own.
OK, my turn... I'd do TDD even on my own (for non-spike/experimental/prototype code) because:
Think before you leap: forces me to think about what I want to get done before I start cranking out code. What am I trying to accomplish here? 'If I assume I already had this piece... how would I expect it to work?' Encourages interface-in design of objects.
Easier to change: I can make modifications with confidence. 'I didn't break anything in steps 1-10 when I changed step 5.' Regression testing is instantaneous.
Better designs emerge: I've found better designs emerging without me investing effort in a design activity. test-first + Refactoring lead to loosely coupled, minimal classes with minimal methods.. no overengineering.. no YAGNI code. The classes have better public interfaces, small methods and are more readable. This is kind of a zen thing.. you only notice you got it when you 'get it'.
The debugger is not my crutch anymore: I know what my program does without having to spend hours stepping through my own code. Nowadays, if I spend more than 10 minutes with the debugger, mental alarms start ringing.
Helps me go home on time: I have noticed a marked decrease in the number of bugs in my code since TDD... even if the assert is just a console trace and not an xUnit-type automated test.
Productivity / Flow: it helps me identify the next discrete baby-step that will take me towards done... keeps the snowball rolling. TDD helps me get into a rhythm (or what XPers call flow) quicker. I get a bigger chunk of quality work done per unit time than before. The red-green-refactor cycle turns into... a kind of perpetual motion machine (a compressed illustration follows this answer).
I can prove that my code works at the touch of a button
Practice makes perfect: I find myself learning and spotting dragons faster with more TDD time under my belt. Maybe dissonance... but I feel that TDD has made me a better programmer even when I don't go test-first. Spotting refactoring opportunities has become second nature...
I'll update if I think of any more... this is what I came up with in the last 2 minutes of reflection.
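As promised above, a compressed, invented red-green-refactor pass in Python's unittest (a bulk-discount rule chosen only for brevity):

    import unittest

    # RED: the failing test comes first and states the goal.
    class PriceTest(unittest.TestCase):
        def test_bulk_discount(self):
            self.assertAlmostEqual(price(quantity=10, unit=2.0), 18.0)

    # GREEN: the simplest thing that passes (10% off orders of 10 or more).
    def price(quantity, unit):
        total = quantity * unit
        return total * 0.9 if quantity >= 10 else total

    # REFACTOR: with the bar green, rename and restructure freely;
    # rerunning the suite is the instantaneous regression test.

    if __name__ == "__main__":
        unittest.main()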
I'm also a contract programmer. Here are my 12 Reasons Why I Love Unit Tests.
My best experience with TDD is centered around the pyftpdlib project. Most of the development is done by the original author, and I've made a few small contributions, but it's essentially a solo project. The test suite for the project is very thorough, and tests all the major features of the FTPd library. Before checking in changes or releasing a version, all tests are checked, and when a new feature is added, the test suite is always updated as well.
As a result of this approach, this is the only project I've ever worked on that didn't have showstopper bugs appear after a new release, have changes checked in that broke a major feature, etc. The code is very solid and I've been consistently impressed with how few bug reports have been opened during the life of the project. I (and the original author) attribute much of this success to the comprehensive test suite and the ability to test every major code path at will.
From a logical perspective, any code you write has to be tested, and without TDD you'll be testing it yourself manually. On the flip side to pyftpdlib, the worst code I've seen by number of bugs and frequency of major issues is code that is/was solely being tested by the developers and QA trying out new features manually. Things don't get tested because of time crunches or because they fall through the cracks. Old code paths are forgotten, even the oldest stable features end up breaking, and major releases ship with important features non-functional. Manual testing is critically important for verification and for some randomization of testing, but based on my experience I'd say it's essential to have both manual testing and a carefully constructed unit test suite. Between the two approaches the gaps in coverage are smaller, and your likelihood of problems can only be reduced.
It does not matter whether you are the sole developer or not. You have to think of it from the application's point of view: every application needs to work properly, needs to be maintained, and needs to be as bug-free as possible. There are of course certain scenarios where a TDD approach might not suit you, such as when a deadline is approaching very fast and there is no time to write unit tests.
Anyway, TDD does not depend on a solo or a team environment. It depends on the application as a whole.
I don't have an enormous amount of experience, but I have had the experience of seeing sharply-contrasted approaches to testing.
In one job, there was no automated testing. "Testing" consisted of poking around in the application, trying whatever popped in your head, to see if it broke. Needless to say, it was easy for flat-out-broken code to reach our production server.
In my current job, there is lots of automated testing, and a full CI-system. Now when code gets broken, it is immediately obvious. Not only that, but as I work, the tests really document what features are working in my code, and what haven't yet. It gives me great confidence to be able to add new features, knowing that if I break existing ones, it won't go unnoticed.
So, to me, it depends not so much on the size of the team, but the size of the application. Can you keep track of every part of the application? Every requirement? Every test you need to run to make sure the application is working? What does it even mean to say that the application is "working", if you don't have tests to prove it?
Just my $0.02.
Tests allow you to refactor with confidence that you are not breaking the system. Writing the tests first lets the tests define what counts as working behavior for the system. Any behavior that isn't defined by a test is, by definition, a by-product and allowed to change when refactoring. Writing tests first also drives the design in good directions: to support testability, you find that you need to decouple classes, use interfaces, and follow good patterns (Inversion of Control, for instance) to make your code easily testable.

If you write tests afterwards, you can't be sure that you've covered all the behavior expected of your system. You also find that some things are hard to test because of the design -- since it was likely developed without testing in mind -- and you are tempted to skimp on or omit tests.
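A small invented sketch of that pressure toward decoupling: because Notifier receives its mailer (Inversion of Control) instead of constructing one, a test can substitute a fake:

    # Notifier is handed its collaborator, so a test can inspect
    # what was "sent" without any real mail infrastructure.
    class Notifier:
        def __init__(self, mailer):
            self.mailer = mailer

        def notify(self, user, message):
            self.mailer.send(to=user, body=message)

    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    def test_notify_sends_mail():
        mailer = FakeMailer()
        Notifier(mailer).notify("alice", "hi")
        assert mailer.sent == [("alice", "hi")]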
I generally work solo and mostly do TDD -- the cases where I don't are simply where I fail to live up to my practices or haven't yet found a good way that works for me to do TDD, for example with web interfaces.
TDD is not about testing; it's about writing code. As such, it provides a lot of benefits to even a single developer. For many developers it is a mind-shift toward writing more robust code. For example, how often do you think "Now, how can this code fail?" after writing code without TDD? For many developers, the answer is never. For TDD practitioners the mindset shifts to doing things like checking whether objects or strings are null before doing something with them, because you are writing tests specifically to break the code.
Another major reason is change. Anytime you deal with a customer, they can never seem to make up their minds. The only constant is change. TDD helps as a "safety net" to find all the other areas that could break. Even on small projects this can keep you from burning up precious time in the debugger.
I could go on and on, but I think saying that TDD is more about writing code than anything else should be enough to justify its use as a sole developer.
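One invented example of that "how can this code fail?" mindset - a test that deliberately feeds in the hostile inputs (None, the empty string) you might otherwise forget to handle:

    def initials(full_name):
        # The None/empty guard exists because a test demanded it.
        if not full_name:
            return ""
        return "".join(part[0].upper() for part in full_name.split())

    def test_initials_survives_hostile_input():
        assert initials(None) == ""
        assert initials("") == ""
        assert initials("ada lovelace") == "AL"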
I tend to agree with the validity of your point about the overhead of TDD for 'one developer' or 'hobby' projects not justifying the expense.
You have to consider, however, that most best practices are relevant and useful only if they are consistently applied over a long period of time.
For example, TDD saves you testing/bug-fixing time in the long run, not within 5 minutes of creating your first unit test.
You're a contract programmer, which means that you will leave your current project when it is finished and switch to something else, most likely at another company. Your current client will have to maintain and support your application. If you do not leave the support team a good framework to work with, they will be stuck. TDD helps the project stay sustainable. It increases the stability of the code base, so other people with less experience will not be able to do too much damage trying to change it.
The same applies to hobby projects. You may tire of one and want to pass it to someone else. You might become commercially successful (think Craigslist) and end up with 5 more people working alongside you.
Investment in proper process always pays off, even if only in gained experience. Most of the time, though, you will simply be grateful that when you started a new project you decided to do it properly.
You have to consider OTHER people when doing something. You have to think ahead, plan for growth, plan for sustainability.
If you don't want to do that - stick to the cowboy coding, it's much simpler this way.
P.S. The same thing applies to other practices:
If you don't comment your code and you have an ideal memory, you'll be fine, but someone else reading your code will not be.
If you don't document your discussions with the customer, somebody else will not know anything about a crucial decision you made.
etc., ad infinitum.
I no longer refactor anything without a reasonable set of unit tests.
I don't do full-on TDD with unit tests first and code second. I do CALTAL -- Code A Little, Test A Little -- development. Generally, code goes first, but not always.
When I find that I've got to refactor, I make sure I've got enough tests and then I hack away at the structure with complete confidence that I don't have to keep the entire old-architecture-becomes-new-architecture plan in my head. I just have to get the tests to pass again.
I refactor the important bits. Get the existing suite of tests to pass.
Then I realize I forgot something, and I'm back to CALTAL development on the new stuff.
Then I see things I forgot to delete -- but are they really unused everywhere? Delete 'em and see what fails in the testing.
Just yesterday -- part way through a big refactoring -- I realized that I still didn't have the exact right design. But the tests still had to pass, so I was free to refactor my refactoring before I was even done with the first refactoring. (whew!) And it all worked nicely because I had a set of tests to validate the changes against.
For flying solo TDD is my copilot.
TDD lets me more clearly define the problem in my head. That helps me focus on implementing just the functionality that is required, and nothing more. It also helps me create a better API, because I'm writing a "client" before I write the code itself. I can also refactor without having to worry about breaking anything.
I'm going to answer this question quite quickly, and hopefully you will start to see some of the reasoning, even if you still disagree. :)
If you are lucky enough to be on a long-running project, then there will be times when you want to, for example, write your data tier first, then maybe the business tier, before moving on up the stack. If your client then makes a requirement change that requires re-work on your data layer, a set of unit tests on the data layer will ensure that your methods don't fail in undesirable ways (assuming you update the tests to reflect the new requirements). However, you are likely to be calling the data layer method from the business layer as well, and possibly in several places.
Let's assume you have 3 calls to a method in the business layer, but you only modify 2. In the third method, you may still be getting data back from your data layer that appears to be valid, but may break some of the assumptions you coded months before. Unit tests at this level (and above) should have been designed to spot broken assumptions, and in failing they should highlight to you that there is a section of code that needs to be revisited.
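A deliberately tiny Python sketch of that scenario - the layer functions are invented; the point is only that a test at the business level flags the broken assumption immediately:

    # Data layer, after the requirements change: it may now return None
    # when no rows match, where it previously always returned a list.
    def fetch_orders(customer_id):
        return None

    # Business-layer call site #3, written months ago on the old assumption.
    def total_due(customer_id):
        orders = fetch_orders(customer_id)
        return sum(o["amount"] for o in orders)

    def test_total_due_with_no_orders():
        # This test now fails (TypeError on None), highlighting exactly
        # the section of code that needs to be revisited.
        assert total_due(42) == 0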
I'm hoping that this very simplistic example will be enough to get you thinking about TDD a little more, and that it might create a spark that makes you consider using it. Of course, if you still don't see the point, and you are confident in your own abilities to keep track of many thousands of lines of code, then I have no place to tell you you should start TDD.
The point about writing the tests first is that it enforces the requirements and design decisions you are making. When I mod the code, I want to make sure those are still enforced and it is easy enough to "break" something without getting a compiler or run-time error.
I have a test-first approach because I want to have a high degree of confidence in my code. Granted, the tests need to be good tests or they don't enforce anything.
I've got some pretty large code bases that I work on and there is a lot of non-trivial stuff going on. It is easy enough to make changes that ripple and suddenly X happens when X should never happen. My tests have saved me on several occasions from making a critical (but subtle) error that might have gone unnoticed by human testers.
When the tests do fail, they are opportunities to look at them and the production code and make sure that it is correct. Sometimes the design changes and the tests will need to be modified. Sometimes I'll write something that passes 99 out of 100 tests. That 1 test that didn't pass is like a co-worker reviewing my code (in a sense) to make sure I'm still building what I'm supposed to be building.
I feel that as a solo developer on a project, especially a larger one, you tend to be spread pretty thin.
You are in the middle of a large refactoring when all of a sudden a couple of critical bugs are detected that for some reason did not show up during pre-release testing. You have to drop everything and fix them, and after spending two weeks tearing your hair out you can finally get back to whatever you were doing before.
A week later one of your largest customers realizes that they absolutely must have this cool new shiny feature or otherwise they won't place the order for those 1M units they should have already ordered a month ago.
Now, three months later you don't even remember why you started refactoring in the first place let alone what the code you are refactoring was supposed to do. Thank god you did a good job writing those unit tests because at least they tell you that your refactored code is still doing what it was supposed to do.
Lather, rinse, repeat.
..story of my life for the past 6 months. :-/
A sole developer should use TDD on his project (track record does not matter), since eventually the project could be passed to some other developer, or more developers could be brought in.
New people will have an extremely hard time working with the code without the tests. They will break things.
Does your client own the source code when you deliver the product? If you can convince them that delivering the product with unit tests adds value, then you are up-selling your services and delivering a better product. From the client's perspective, test coverage not only ensures quality, it allows future maintainers to understand the code much more readily since the tests isolate functionality from the UI.
I think TDD as a methodology is not just about "having tests when making changes", so it does not depend on team size or project size. It's about noting your expectations of what a piece of code/an application does BEFORE you start to really think about HOW the noted behaviour is implemented. The main focus of TDD is not only having tests in place for written code but writing less code, because you just do what makes the test green (and refactor later).
If you're like me and find it quite hard to think about what a part of (or the whole) application does WITHOUT thinking about how to implement it, I think it's fine to write your tests after your code and thus let the code "drive" the tests.
If your question isn't so much about test-first (TDD) vs. test-after (good coding?), I think testing should be standard practice for any developer, whether alone or in a big team, who creates code that stays in production longer than three months. In my experience that's the time span after which even the original author has to think hard about what those twenty lines of complex, super-optimized, but sparsely documented code really do. If you've got tests (which cover all paths through the code), there's less to think about - and less to err about, even years later...
Here are a few memes and my responses:
"TDD made me think about how it would fail, which made me a better programmer"
Given enough experience, being highly concerned with failure modes should naturally become part of your process anyway.
"Applications need to work properly"
This assumes you are able to test absolutely everything. You're not going to be any better at covering all possible tests correctly than you were at writing the functional code correctly in the first place. "Applications need to work better" is a much better argument. I agree with it, but it's idealistic and not quite tangible enough to motivate as much as I wish it would. Metrics/anecdotes would be great here.
"Worked great for my <library component X>"
I said in the question I saw value in these cases, but thanks for the anecdote.
"Think of the next developer"
This is probably one of the best arguments, to me. However, it is quite likely that the next developer wouldn't practice TDD either, and the tests would then be a waste or possibly even a burden. Back-door evangelism is what it amounts to. I'm quite sure a TDD developer would really appreciate it, though.
How much are you going to appreciate projects done in deprecated must-do methodologies when you inherit one? RUP, anyone? Think about what TDD means to the next developer if TDD isn't as great as everyone thinks it is.
"Refactoring is a lot easier"
Refactoring is a skill like any other, and iterative development certainly requires this skill. I tend to throw away considerable amounts of code if I think the new design will save time in the long run, and it feels like there would be an awful number of tests thrown away too. Which is more efficient? I don't know.
...
I would probably recommend some level of TDD to anyone new... but I'm still having trouble with the benefits for anyone who's been around the block a few times already. I will probably start adding automated tests to libraries. It's possible that after doing that, I'll see more value in doing it generally.
Motivated self-interest.
In my case, sole developer translates to small business owner. I've written a reasonable amount of library code to (ostensibly) make my life easier. A lot of these routines and classes aren't rocket science, so I can be pretty sure they work properly (at least in most cases) by reviewing the code, doing some spot testing, and debugging into the methods to make sure they behave the way I think they do. Brute force, if you will. Life is good.
Over time, this library grows and gets used in more projects for different customers. Testing gets more time consuming. Especially cases where I'm (hopefully) fixing bugs and (even more hopefully) not breaking something else. And this isn't just for bugs in my code. I have to be careful adding functionality (customers keep asking for more "stuff") or making sure code still works when moved to a new version of my compiler (Delphi!), third party code, runtime environment or operating system.
Taken to the extreme, I could spend more time reviewing old code than working on new (read: billable) projects. Think of it as the angle of repose of software (how high can you stack untested software before it falls over :).
Techniques like TDD give me methods and classes that are more thoughtfully designed, more thoroughly tested (before the customer gets them), and in need of less maintenance going forward.
Ultimately, it translates to less time doing maintenance and more time to spend doing things that are more profitable, more interesting (almost anything) and more important (like family).
We are all developers with a good track record. After all, we are all reading Stack Overflow. And many of us use TDD, and perhaps those people have a great track record. I get hired because people want someone who writes great test automation and can teach that to others. When working alone, I do TDD on my coding projects at home, because I found that if I don't, I spend time doing manual testing or even debugging, and who needs that. (Perhaps those people have only good track records. I don't know.)
When it comes to being a good automobile driver, everyone believes they are a "good driver." This is a cognitive bias all drivers have. Programmers have their own biases. The reasons developers such as the OP don't do TDD are covered in this Agile Thoughts podcast series. The podcast archive also has content on test automation concepts such as the test pyramid, plus an intro to what TDD is and why you write tests first, starting with episode 9 in the archive.

When is it good (if ever) to scrap production code and start over? [closed]

I was asked to do a code review and report on the feasibility of adding a new feature to one of our new products, one that I haven't personally worked on until now. I know it's easy to nitpick someone else's code, but I'd say it's in bad shape (while trying to be as objective as possible). Some highlights from my code review:
Abuse of threads: QueueUserWorkItem and threads in general are used a lot, and Thread-pool delegates have uninformative names such as PoolStart and PoolStart2. There is also a lack of proper synchronization between threads, in particular accessing UI objects on threads other than the UI thread.
Magic numbers and magic strings: Some Const's and Enum's are defined in the code, but much of the code relies on literal values.
Global variables: Many variables are declared global and may or may not be initialized depending on what code paths get followed and what order things occur in. This gets very confusing when the code is also jumping around between threads.
Compiler warnings: The main solution file contains 500+ warnings, and the total number is unknown to me. I got a warning from Visual Studio that it couldn't display any more warnings.
Half-finished classes: The code was worked on and added to here and there, and I think this led to people forgetting what they had done before, so there are a few seemingly half-finished classes and empty stubs.
Not Invented Here: The product duplicates functionality that already exists in common libraries used by other products, such as data access helpers, error logging helpers, and user interface helpers.
Separation of concerns: I think someone was holding the book upside down when they read about the typical "UI -> business layer -> data access layer" 3-tier architecture. In this codebase, the UI layer directly accesses the database, because the business layer is partially implemented but mostly ignored due to not being fleshed out fully enough, and the data access layer controls the UI layer. Most of the low-level database and network methods operate on a global reference to the main form, and directly show, hide, and modify the form. Where the rather thin business layer is actually used, it also tends to control the UI directly. Most of this lower-level code also uses MessageBox.Show to display error messages when an exception occurs, and most of it swallows the original exception. This of course makes it a bit more complicated to start writing unit tests to verify the functionality of the program before attempting to refactor it.
I'm just scratching the surface here, but my question is simple enough: Would it make more sense to take the time to refactor the existing codebase, focusing on one issue at a time, or would you consider rewriting the entire thing from scratch?
EDIT: To clarify a bit, we do have the original requirements for the project, which is why starting over could be an option. Another way to phrase my question is: Can code ever reach a point where the cost of maintaining it would become greater than the cost of dumping it and starting over?
Without any offense intended, the decision to rewrite a codebase from scratch is a common and serious management mistake that newbie software developers make.
There are many disadvantages to be wary of.
Rewrites stop new feature development cold for months or years. Few, if any, companies can afford to stand still for that long.
Most development schedules are difficult to nail down, and this rewrite will be no exception -- which amplifies the previous point with yet more delay.
Bugs that were fixed in the existing codebase through painful experience will be re-introduced. Joel Spolsky has more examples in this article.
Danger of falling victim to the Second-system effect -- in summary: "People who have designed something only once before try to do all the things they 'didn't get to do last time', loading the project up with all the things they put off while making version one, even if most of them should be put off in version two as well."
Once this expensive, burdensome rewrite is completed, the very next team to inherit the new codebase is likely to use the same excuses for doing another rewrite. Programmers hate learning someone else's code. No one writes perfect code because perfection is so subjective. Find me any real-world application and I can give you a damning indictment and rationale for doing a from-scratch rewrite.
Whether you ultimately rewrite from scratch or not, beginning a refactoring phase now is a good way both to really sit down and understand the problem -- so that the rewrite will go more smoothly if truly called for -- and to give the existing codebase an honest look to see if a rewrite is really needed.
To actually scrap and start over?
When the current code doesn't do what you would like it to do, and would be cost prohibitive to change.
I'm sure someone will now link Joel's article about Netscape throwing their code away and how it's oh-so-terrible and a huge mistake. I don't want to talk about it in detail, but if you do link that article, before you do so, consider this: the IE engine, the engine that allowed MS to release IE 4, 5, 5.5, and 6 in quick succession, the IE engine that totally destroyed Netscape... it was new. Trident was a new engine after they threw away the IE 3 engine because it didn't provide a suitable basis for their future development work. MS did that which Joel says you must never do, and it is because MS did so that they had a browser that allowed them to completely eclipse Netscape. So please... just meditate on that thought for a moment before you link Joel and say "oh you should never do it, it's a terrible idea".
A rule of thumb I've found useful: given a code base, if I have to re-write more than 25% of the code to make it work or to meet new requirements, it's better to re-write it from scratch.
The reasoning is that you can only patch a body of code so far; beyond a certain point, it's quicker to do over.
There's an underlying assumption that you have a mechanism (such as thorough unit and/or system tests) that will tell you whether your re-written version is functionally equivalent (where it needs to be) as the original.
If it requires more time to read and understand the code (if that is even possible) than it would to rewrite the entire application, I say scrap it and start over.
Be very careful with this:
Are you sure you aren't just being lazy and not bothering to read the code?
Are you being arrogant about the great code you will write compared to the rubbish anyone else produced?
Remember: tested, working code is worth a lot more than imaginary yet-to-be-written code.
In the words of our esteemed host and overlord, Joel - things you should never do - it's not always wrong to abandon working code, but you have to be sure about the reason.
I saw an application re-architected within 2 years of its introduction into production, and others rewritten in different technologies (one went from C++ to Java). Both efforts were not, to my mind, successful.
I prefer a more evolutionary approach to bad software. If you can "componentize" your old app such that you can introduce your new requirements and interface with the old code, you can ease yourself into the new environment without having to "sell" the zero-value (from a biz perspective) investment in rewriting.
Suggested approach - write unit tests for the functionality with which you wish to interface to 1) ensure the code behaves as you expect and 2) provide a safety net for any refactoring that you may wish to do on the old base.
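One way to build that safety net is a characterization test; a minimal Python sketch, where legacy_tax is a stand-in for whatever old routine you are about to refactor:

    # Characterization test: assert what the old code *currently* does
    # (warts and all), not what it should do, so a refactoring that
    # changes behavior fails loudly.
    def legacy_tax(amount):  # stand-in for the legacy routine
        return round(amount * 0.0825, 2)

    def test_pins_current_behavior():
        # Recorded outputs of the existing code.
        assert legacy_tax(100) == 8.25
        assert legacy_tax(19.99) == 1.65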
Bad code is the norm. I think IT gets a bad rap from business for favoring rewrites/rearchitecting/etc. They pay the money and "trust" us (as an industry) to deliver solid, extensible code. Sadly, business pressures frequently result in shortcuts that make the code unmaintainable. Sometimes it's bad programmers... sometimes bad situations.
To answer your rephrased question... can code maintenance costs ever exceed rewriting costs... the answer is clearly yes. I don't see anything in your examples, however, that lead me to believe this is your case. I think those issues can be addressed with tests and refactoring.
In terms of business value, I would think it's extremely rare that a real case can be made for a rewrite due solely to the internal state of the code. If the product's customer-facing and is currently live and bringing in money (i.e. is not a mothballed or unreleased product), then consider that:
You already have customers using it. They're familiar with it, and might have built some of their own assets around it. (Other systems that interface to it; products based on it; processes they'd have to change; staff they'd maybe have to retrain). All of this costs the customer money.
Re-writing it might cost less in the long term than making difficult changes and fixes. But you can't quantify that yet, unless your app is no more complex than Hello World. And a re-write means a re-test and a redeploy, and probably an upgrade path for your customers.
Who says the re-write will be any better? Can you honestly say your firm is writing sparkly code now? Have the practices that turned the original code to spaghetti been corrected? (Even if the main culprit was a single developer, where were his peers and management, ensuring quality through reviews, testing, etc.?)
In terms of technical reasons, I'd suggest it could be time for a major rewrite if the original has some technical dependencies that have become problematic. e.g. a third party dependency that's now out of support, etc.
In general though, I think the most sensible move is to refactor piece by piece (very small pieces if it's really that bad), and improve the internal architecture incrementally rather than in one big drop.
Two threads of thought on this one: Do you have the original requirements? Do you have confidence that the original requirements are accurate? What about test plans or unit tests? If you have those things in place it might be easier.
Putting on my customer hat: does the system work, or is it unstable? If you've got something unstable, you've got an argument for change; otherwise you're best off refactoring it bit by bit.
I think the line in the sand is when basic maintenance is taking 25% - 50% longer than it should. There comes a time when maintaining legacy code becomes too costly. A number of factors contribute to the final decision. Time and cost being the most important factors I think.
If there are clean interfaces and you can cleanly delineate module boundaries, then it might be worth refactoring it module by module or layer by layer in order to allow you to migrate existing customers forward into cleaner more stable codebases, and over time, after you've refactored every module, you will have rewritten everything.
But based on the code review, it doesn't sound like there would be any clean boundaries.
I wonder if the people who vote for scrapping and starting over have ever successfully refactored a large project, or at least seen a large project in poor condition that they think could use a refactoring?
If anything, I err on the opposite side: I've seen 4 large projects that were a mess, that I advocated refactoring as opposed to rewriting. On a couple, there was barely a single line of original code that remained, and major interfaces changed in significant ways, but the process never involved the entire project failing to function as well as it originally did, for any more than a week. (And top-of-trunk was never broken).
Perhaps a project exists that is so severely broken that to attempt to refactor it would be doomed to failure, or perhaps one of the previous projects I refactored would have been better served by a "clean re-write", but I'm not sure I'd know how to recognize it.
I agree with Martin. You really need to weigh the effort that will be involved in writing the app from scratch against the current state of the app and how many people use it, do they like it, etc. Often we may want to completely start from scratch, but the cost far outweighs the benefit. I come across bits of ugly looking code all the time, but I soon realize that some of these 'ugly' areas are really bug fixes and make the program work correctly.
I would try to consider the architecture of the system and see whether it is possible to scrap and rewrite specific well defined components without starting everything from scratch.
What would usually happen is that you can either do that (and then sell it to the customer/management), or you find out that the code is such a horrible and tangled mess that you become even more convinced that you need a rewrite and have more convincing arguments for it (including: "if we engineer it right, we will never need to scrap the whole thing and do a third rewrite").
Slow maintenance will eventually cause the kind of architectural drift that makes a rewrite more expensive later.
Scrap old code early and often. When in doubt, throw it out. The hard part is convincing non-technical folks of the cost-to-maintain.
So long as the value derived appears to be greater than the cost to operate and maintain, there's still positive value flowing from the software. The question surrounding a rewrite is this: "Will we get even more value from a rewrite?" Or alternatively, "How much more value will we get from a rewrite?" How many person-hours of maintenance will you save?
Remember, the rewrite investment is once only. The return on the rewrite investment lasts forever. Forever.
Focus the value question down to specific issues. You listed a bunch of them above. Stick with that.
"Will we get more value by reducing cost through
dropping the junk that we don't use
but still have to wade through?"
"Will we get more value from dropping the junk that's unreliable and breaks?"
"Will we get more value if we understand it -- not by documenting, but by replacing with something we built as a team?"
Do your homework. You'll have to confront the following show-stoppers.
These will originate somewhere in your executive foodchain from someone who'll respond as follows:
"Is it broken?" And when you say "It's not crashed as such," They'll say "It's not broke - don't fix it."
"You've done the code analysis, you understand it, you no longer need to fix it."
What's your answer to them?
That's only the first hurdle. Here's the worst possible situation. This doesn't always happen, but it does happen with alarming frequency.
Someone in your executive foodchain will have this thought:
"A rewrite doesn't create enough value. Rather than simply rewrite, let's expand it." The justification is that by creating enough value, users are more likely to buy in to the rewrite.
A project where scope is expanded -- artificially -- to add value is usually doomed.
Instead, do the smallest rewrite you can to replace the darn thing. Then expand to fit real needs and add value.
You can only give a definite yes to rewriting if you know completely how your application works (and by completely I mean it, not just having a general idea of how it should work) and you know more or less exactly how to make it better. In any other case it's a shot in the dark; it depends on too many things. Perhaps gradual refactoring would be safer, if it is possible.
If possible, I typically prefer to rewrite smaller portions of the code over time when I need to refactor a baseline. There are typically many smaller issues, such as magic numbers and poor commenting, that make the code look worse than it actually is. So, unless the baseline is just awful, keep the code and make improvements while you are maintaining it.
If refactoring requires a lot of work, I recommend laying out a small re-design plan/todo list that gives you a list of things to work on in order so that you can bring the baseline to a better state. Starting from scratch is always a risky move and you are not guaranteed that the code will be better when you are finished. Using this technique, you will always have a working system that improves over time.
Code with excessively high cyclomatic complexity (like over 100 in a large number of modules) is a good clue. Also, how many bugs does it have per KLOC? How critical are the bugs? How often are bugs introduced when bug fixes are made? If your answer is "a lot" (I can't remember the norms right now), then a rewrite is warranted.
As early as possible. Whenever you get a premonition that your code is slowly turning into an ugly beast that is very likely to consume your soul and give you headaches, and you know the problem is in the underlying structure of the code (so any fix would be a hack, e.g. introduce a global variable), then it's time to start over.
For some reason people don't like throwing away precious code, but if you feel you're better off starting over, you are probably right. Trust your instinct and remember that it wasn't a waste of time: it taught you one more way of NOT approaching the problem. You could (should) always use a version control system, so your baby is never really lost.
I do not have any experience with using metrics for this myself, but the article "Software Maintainability Metrics Models in Practice" discusses more or less the same question asked here for two case studies they did. It starts with the following editor's note:
In the past, when a maintainer received new code to maintain, the rule-of-thumb was "If you have to change more than 40 percent of someone else's code, you throw it out and start over." The Maintainability Index [MI] addressed here gives a much more quantifiable method to determine when to "throw it out and start over." This work was sponsored by the U.S. Air Force Information Warfare Center and the U.S. Department of Energy [DOE], Idaho Field Office, DOE Contract No. DE-AC07-94ID13223.
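For reference, the classic Maintainability Index the note refers to is usually given in this form (quoted from memory of the Oman/Hagemeister line of work, so treat the exact coefficients as an assumption to verify against the article):

    MI = 171 - 5.2\,\ln(\overline{V}) - 0.23\,\overline{G} - 16.2\,\ln(\overline{LOC})

where V-bar is the average Halstead Volume per module, G-bar the average cyclomatic complexity, and LOC-bar the average lines of code per module. Higher MI means more maintainable; 65 and 85 are the commonly quoted cut-offs between poor, moderate, and good.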
I think the rule was...
The first version is always a throw away
So, if you learned your lesson(s), or his/her lessons, then you can go ahead and write it fresh now that you understand your problem domain better.
Not that there aren't parts that can/should be kept. Tested code is the most valuable code, so if it isn't deficient in any real way other than style, no reason to toss it all out.
When is it good (if ever) to scrap production code and start over?
Never had to do this, but logic would dictate (to me, anyway) that once you pass the inflection point where you're spending more time reworking and fixing bugs in the existing code base than you are adding new functionality, it's time to trash the old stuff and get a fresh start.
If it requires more time to read and understand the code (if that is even possible) than it would to rewrite the entire application, I say scrap it and start over.
I have never completely thrown out code. Even when going from a FoxPro system to a C# system.
If the old system worked then why just throw it out?
I have come across a few really bad systems: threads used where they're not needed, horrible inheritance, and abuse of interfaces.
It is best to understand what the old code is doing and why it is doing it. Then change it so that it is not confusing.
Of course, if the old code doesn't work - I mean, can't even compile - then you might be justified in just starting over. But how often does that actually happen?
Yes, it totally can happen. I've seen money saved by doing it.
This is not a tech decision; it's a business decision. Code rewrites are a long-term gain, while "if it ain't totally broke..." is a short-term gain. If you are in a first-year startup that is focused on getting a product out the door, the answer is usually to just live with it. If you're in an established company, or the errors in the current system are creating more workload - and therefore costing the company more money - then they might go for it.
Present the problem as best you can to your GM, and use dollar values where you can. "I don't like dealing with it" means nothing. "It'll take twice the time to do everything until this is fixed" means a lot.
I think there are a number of issues here that depend largely on where you are at.
Is the software working well from a customer perspective? (If yes, be very careful about changes.) If the system is working, there would be little point re-writing it unless you were planning to expand the feature set. And are you planning to expand the features and customer base of the software? If so, then you have much more reason to change.
As much as anything, just trying to understand someone else's code, even when well written, can be difficult; when badly written I would imagine it's almost impossible. What you describe sounds like something that would be very difficult to expand.
I would take into consideration whether the application does what it is intended to do, whether you will ever be required to make modifications, and whether you are confident that the app has been thoroughly tested in all the scenarios it will be used in.
Do not invest the time if the app does not need alterations. However, if it doesn't function as you need, and you need to control the hours and time invested in making corrections, scrap it and re-write to standards that your team can support. There's nothing worse than terrible code that you have to support/decipher but still have to live with. Remember, Murphy's Law says it will be 10 at night when you have to make things work, and that is never productive.
Production code always has some value. The only case where I would truly throw it all out and start again is if we determine the intellectual property is irrevocably contaminated. For example if someone brought large amounts of code from a previous employer, or a large percentage of the code was ripped from a GPLd codebase.
I'm going to post this book every time I see a discussion on Refactoring. Everyone should read "Working Effectively with Legacy Code" by Michael Feathers. I found it to be an excellent book - if nothing else, it's a fun read, and motivational.
When the code has reached a point where it is not maintainable or extensible anymore. When it is full of short-term hacky fixes. It has lots of coupling. It has long (100+ line) methods. It has database access in the UI. It generates a lot of random, impossible-to-debug errors.
Bottom line: When maintaining it is more expensive (i.e. takes longer) than rewriting it.
I used to believe in just rewriting from scratch, but that is wrong.
http://www.joelonsoftware.com/articles/fog0000000069.html
Changed my mind.
What I would suggest is figuring out a way to properly refactor the code. Keep all existing functionality and test as you go. We have all seen horrible code bases, but it is important to keep the knowledge your application has accumulated over time.

Resources