I need some advice on how to work with legacy code.
A while ago, I was given the task of adding a few reports to a reporting app written in Struts 1, back in 2005. No big deal, but the code is quite messy. No use of ActionForms, and basically the code is one huge Action with a lot of if-else statements inside. Also, no one here has functional knowledge of this application. We just happened to have it in our contract.
I'm quite unhappy about this, and not sure how to proceed. This application is invisible: Few people (but all very important) use it, so they don't care whether my eyes bleed while reading the code, standards, etc.
However, I feel that there is technical debt to be paid. How should I proceed? Continue down the if-else road, try to do this requirement the right way while ignoring the rest of the project, or start a huge refactor and risk my deadline?
Legacy code is a big issue, and I'm sure people will not agree!
I would say that starting a big re-factor could be a mistake.
A big re-factor means doing a lot of work to make the code function exactly the way it does now. If you take this on on your own, there won't be a lot of visibility into what you are doing. If it works, no one will know about the hours of work you put in. If it does NOT work, and you end up with tidy code but add some bugs (and who has ever written code without adding some bugs), then you will get 'why did this change?' type questions.
I have nearly completed a project working on a 10-year-old code base. We have done quite a few bits of re-factoring along the way, but for each re-factor we made we could justify 'this specific change will make the actual task we are doing now easier', rather than 'this is now cleaner for future work'. We have found that as we worked on the code, fixing the issues we actually came up against one at a time, we cleaned up a lot of it without breaking it (much).
And I would say before you can re-factor much, you will need automated tests, so you can be fairly happy that you have put it back together right!
Most re-factoring is done to 'make maintenance and future development easier'. Your project sounds like there is not a lot of future development coming. That limits the advantage a re-factor will give the company.
Rule #1: If it ain't broke, don't fix it.
Rule #2: When in doubt, reread rule #1.
Unfortunately, legacy code can very rarely be described as "it ain't broke." Therefore we must tweak the existing code to correct a newly found bug, to modify behavior that was previously acceptable, or to add new functionality.
My experience has taught me that any refactoring must be done in 'infinitesimally' small increments. If you must break rule #2, I suggest that you start your search with the inner-most nested loop or IF structure and expand outward until you find a clean, logical separation point and create a new function/method/subroutine that contains only the guts of that loop or structure. This won't make anything more efficient but it should give you a clearer view of the underlying logic and structure. Once you have several new, smaller functions/methods/subroutines you can refactor and consolidate those into something more manageable.
Rule #3: Ignore my previous paragraph and reread the first two rules.
I agree with the other comments. If you don't have to, then don't do it. It usually costs far more than it's worth if the code base is more or less dead anyway.
On the other hand, if you feel that you cannot get your head around the code, then a refactor is probably unavoidable. If this is the case then, since it's a web application, can you create a solid suite of functional tests using Selenium? If so, this is the fastest and most rewarding test approach for such code and will catch the most bugs for the buck.
Second, start with the extract-method refactoring to create composed methods out of the big, difficult methods. Every time you think to yourself "this should have a comment to explain what it does", you should extract it into a method with a name that replaces the comment.
Once this has been done, if you still can't add the functionality required, you can go for more advanced refactorings, and perhaps even add some unit tests. But I usually find that I can add what is required or fix the bug in legacy code just by creating self-documenting code.
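To make that concrete, here is a rough sketch of the comment-to-method-name idea; the report and date handling is invented purely for illustration, not taken from the original app:

import java.time.LocalDate;

// Hypothetical example: a block whose intent needed a comment is extracted
// into a method whose name replaces the comment.
public class QuarterlyReportHelper {

    // Before extraction, the calling code read something like:
    //   // include the report only if it falls in the same quarter as 'today'
    //   if (reportDate.getYear() == today.getYear()
    //           && (reportDate.getMonthValue() - 1) / 3 == (today.getMonthValue() - 1) / 3) { ... }

    // After extraction, the comment is gone and the condition reads as prose:
    public boolean shouldInclude(LocalDate reportDate, LocalDate today) {
        return isSameQuarter(reportDate, today);
    }

    private boolean isSameQuarter(LocalDate a, LocalDate b) {
        return a.getYear() == b.getYear()
                && (a.getMonthValue() - 1) / 3 == (b.getMonthValue() - 1) / 3;
    }
}

Nothing runs faster afterwards, but the next person reading the if statement no longer needs the comment.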
In a few words: before making any modifications to legacy code, it's a good idea to start with automated unit tests.
This will give the developer an understanding of the key things: the dependencies this piece of code has, its input data, its output results, its boundary conditions, and so on.
When that's done, most likely you will better understand what this code does and how it works.
After that it makes sense (but is not a must) to clean the code up a bit, giving more accurate names to local variables and moving some functionality (repetitive code, if any) into functions with clear, human-friendly names.
A simple clean-up can make the code more readable and, with the unit tests in place, save the developer from regression issues.
Refactoring: make small changes, step by step, when you have the time and an understanding of the requirements and functionality, regularly unit testing the code.
But do not start with refactoring.
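As a minimal sketch of that test-first step (the DiscountCalculator class and its numbers are made up purely to show the shape of a characterization test, they are not from the app in the question):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A characterization test: it does not say what the legacy code *should* do,
// it records what it currently *does*, so later clean-ups can be checked against it.
public class DiscountCalculatorCharacterizationTest {

    // Stand-in for the legacy class under test; the real one would live in the app.
    static class DiscountCalculator {
        double applyDiscount(double amount) {
            return amount >= 100.0 ? amount * 0.95 : amount;  // observed legacy rule
        }
    }

    @Test
    public void typicalOrderKeepsItsCurrentDiscount() {
        // 100.0 is observed to yield 95.0 today; the test pins that fact down.
        assertEquals(95.0, new DiscountCalculator().applyDiscount(100.0), 0.001);
    }

    @Test
    public void zeroIsABoundaryConditionWorthRecording() {
        assertEquals(0.0, new DiscountCalculator().applyDiscount(0.0), 0.001);
    }
}

Once a handful of these pass reliably, the renames and extractions described above become much safer.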
What do you do when you're assigned to work on code that's atrocious and antiquated, to the point where it's almost incomprehensible?
For example: hardware interface code, mixed with logic, AND user interface code, ALL in the same functions?
We see bad code all the time, but what do you actually do about it?
Do you try to refactor it?
Try to make it OO if it's not?
Or do you try to make some sense of it, make the necessary changes and move on?
Depends on a few factors for me:
Will I be maintaining this code in the future, or is it a one-off fix?
How long until this system is replaced entirely?
How busy am I at the moment?
Ideally, I'd refactor all bad code I had to maintain, but the reality is there are only so many hours in the day.
As is frequently the case, "It Depends".
I tend to ask myself some of the following questions:
Are there unit tests for the existing code?
Is refactoring the code an acceptable risk for my project?
Is the author still available to clarify any questions I might have about the code?
Will my employer consider the time spent on changing existing, functioning code to be an acceptable use of my time?
And so on...
But assuming that I have the capacity to do so, refactoring is preferable, as the up-front cost of fixing the code now will likely save me a lot of time and effort later in maintenance and development.
There are other benefits as well, including the fact that the more clean and well maintained you keep your code base, the more likely other developers are to keep it that way. The Pragmatic Programmer calls this the Broken Window Theory.
Developers have an instinct to assume that code is always ugly because of other, inferior developers. Sometimes, code is ugly because the problem space is ugly. All that ugliness isn't just ugliness - it is sometimes institutional memory. Each line of ugly in your code probably represents a bug fix. So think very carefully before you rip it all out.
Basically, I would say that you shouldn't touch code like this unless you actually have to. If there's a real bug that you can solve, refactoring is reasonable, provided you can be sure you're maintaining the same functionality. But refactoring for the sake of refactoring (e.g., "make the code OO") is what I would generally classify as a classic newbie mistake.
The book Working Effectively with Legacy Code discusses the options you can do. In general the rule is not to change code until you have to (to fix a bug or add a feature). The book describes how to make changes when you can't add testing and how to add testing to complex code (which allows more substantial changes).
You try to refactor it, in the strict sense of the word, where you're not changing the behaviour.
The first target is usually to break up giant methods.
Given the strength of some of the adjectives you use, i.e. atrocious, antiquated and incomprehensible, I'd bin it!
If it is in such a state, like the example you give, it's probably not got any test code for it either. Refactoring is mentioned in many of the other answers but, sometimes, it is not appropriate. I always find that, when refactoring, you generally need a clear path through which the old code can be gradually morphed into the new in a number of well defined steps.
When the old code is so far removed from how you want it to look, such as in the extreme cases you seem to be describing, you could probably redesign, rewrite and test the new code in less time than it would take to refactor the old.
Scrap it and start over, using the compiled legacy application as a business requirements document.
And spend time in analysis with the users to see what they want changed.
Post it to www.worsethanfailure.com!!!
If no modifications are needed, I don't touch it.
If at all possible, I write automated unit tests first, especially focused on the areas that need modification.
If automated unit tests are not possible, I do what I can to document manual unit tests.
I am just using the tests to document "current" behavior at this point.
If possible, I always keep a version of the code and executable environment that runs things the "original" way (before I touched it) so I can always add new "behavior documentation" tests and better detect regressions I may have caused later.
Once I start changing things, I want to be very careful not to introduce regressions. I do this by continually rerunning (and adding to) the tests I wrote before I started writing code.
When possible, I leave bugs as-is if there is no business need for them to be fixed. Those bugs may be "features" to some users and may have unclear side effects that wouldn't be clear until the code was redeployed to production.
As far as refactoring goes, I do it as aggressively as possible, but only in the code that I need to change anyway. I may refactor more aggressively in my own personal copy of the code that will never be checked in, just to improve the readability of the code for me personally. It's often difficult to properly test changes that are made only for readability, so for safety reasons I generally don't check those changes in or deploy them unless I can confidently verify that they are completely safe (it's really bad to introduce bugs with changes that are unnecessary for anything but readability).
Really, it's a risk management problem. Proceed with caution. The users do not care if the code is atrocious, they just care that it gets better without getting worse. Your need for beautiful code is not important in this scenario, get past it.
Just like any other code, you leave it slightly better when you leave than it was when you arrived. You do not ever, ever rewrite the whole thing. If for some reason that really is the work it takes, then you start a project (small or large) for it.
I am assuming we are talking about a substantial amount of code here.
Not every day is a great day at work you know :)
The first question to ask is: does it work?
If the answer is yes, that would be a huge disincentive to simply ditch it and start over. There may be thousands of man-hours in that code which address edge cases and nasty bugs. Worse yet, there may be other modules in the system that depend on the current incorrect (but known and possibly documented) behavior. Don't mess with it if it isn't broken.
If you are keen on cleaning it up, start by writing test cases for the current behavior. When you run across an instance where the behavior differs from the specification, you must decide whether to accept the behavior as "correct" or go with what the spec says it ought to do.
Only once you have written test cases that all pass should you begin to refactor. The tests will tell you whether your efforts are breaking anything.
I'd talk to my manager and describe the code. Most managers would not want a program held together by baling wire and duct tape, so to speak. If the code is really that bad, there are sure to be business logic errors, hardcoded values, etc. stuffed in there that will eventually just destroy productivity.
I've come across some pretty bad code before (single-letter variable names, no comments, everything crammed onto one line, etc.) and once I mentioned or showed it to my manager they almost always said "go ahead and re-write it", because not only are you taking the hit for reading and changing the code, but future co-workers will have to go through the same pain. Better that you take a longer period of time just once to rewrite it than that each person who touches the code in the future has to comprehend and decipher it first.
There is an old saying: if it isn't broke, do not fix it. If you have to maintain it, then reverse engineer it and document it so that the next time you come across it you will know what it does.
You do not know the situation the developer was in when he or she wrote the code. He or she may have been under a time crunch when it was written (management all over the developer, etc.).
There are also situations where he or she wrote the code to the spec, the spec then changed several times, and the developer had to patch the code, as a rewrite was out of the question due to time constraints. This happens all of the time.
If the code impacts the performance or robustness of the application and is modular, then you can refactor or rewrite. Document the situation to assist future programmers in understanding it.
Also, many programmers consider reverse engineering another developer's code to be beneath them; they would rather rewrite it without considering the ramifications of doing so.
If you have never done so, try it sometime; it will make you a better developer.
Thanks
Joe
Kill it with fire.
Depends on your time frame and how important that code is to you. If you have to "just make it work" then do that and rewrite the module when time allows.
If it's an important or integral part of what you do, then refactor, refactor, refactor.
Then find the guy/girl who wrote it and send them a rude postcard!
The worst offender (in my experience) behind really AWFUL code is the ease with which people can cut & paste these days. Cut & paste should be used rarely. If you think it's the right solution, it's generally better to step back and generalize the problem a little.
Anytime you see code that is "nearly incomprehensible", PROCEED WITH CAUTION. You need to assume that any major re-factoring will result in new bugs being introduced that you'll need to find and correct.
Additionally, I've seen this scenario many times (even fell victim to it myself once or twice): a programmer inherits legacy code, decides the code is ancient and unmaintainable and decides to refactor it, ends up deleting key "fixes" or "business rules" subtly patched in over the years, and then spends a lot of time tracking down and re-introducing similar code when users complain that "a problem fixed years ago is happening again".
Re-factoring (and debugging) almost always takes longer than expected and should never be considered as a "freebie" that comes along with whatever task you're supposed to be doing.
"If it ain't broke, don't 'fix' it" still has a lot of truth.
In my company we always Refactor Mercilessly, so we still come across atrocious code, but LESS and Less and less ...
We write a lot of in-house code, and the company has been run for about 100 years by the same family. Management usually tells us we have to maintain the code base (evolve it) for another 50 years or so. In this setting, having code you don't dare to touch is considered a bigger risk to the long-term survival of the company than the prospect of downtime because some under-tested code broke during refactoring.
I run a copy-paste detector and FindBugs on all legacy code that comes my way.
I then plan my initial refactoring:
remove unused code, unused variables and unused methods
refactor duplicated code
set up a single step build
build a basic functional test
By that point the code meets the basic minimum for maintainability. It can be easily built and basic errors can be found via an automated test.
I often add code like this:
log.debug("is foo null? " + (foo == null));
log.debug("is discount < raw price ? " + (foo.getDiscount() < foo.getRawPrice()));
Some of that code will be recovered for unit tests when I get the chance to refactor.
I've worked places where we ship that kind of code.
I try to make sense of it, make the necessary changes, and move on.
Of course, making sense of it usually involves some changes; at the very least, I move around the whitespace and line up corresponding braces in the same column like so:
if(condition){
doSomething(); }
// becomes...
if (condition)
{
    doSomething();
}
I'll also often change variable names.
And very often, "the necessary changes" require refactoring. :)
Get an idea of what the code is doing and of the deadline to finish. With a larger deadline, I typically rebuild much of the code from the ground up, as I find it a very worthwhile experience not only to decipher terrible code and make it legible and documented, but also because somewhere in your brain those neurons are pushed to avoid similar mistakes in the future.
I want to know if there are methods to quickly find bugs in a program.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
How do programmers improve their ability to find bugs?
Logging, and unit tests. The more information you have about what happened, the easier it is to reproduce it. The more modular you can make your code, the easier it is to check that it really is misbehaving where you think it is, and then check that your fix solves the problem.
Divide and conquer. Whenever you are debugging, you should be thinking about cutting down the possible locations of the problem. Every time you run the app, you should be trying to eliminate a possible source and zero in on the actual location. This can be done with logging, with a debugger, assertions, etc.
Here's a prophylactic method for after you have found a bug: I find it really helpful to take a minute and think about the bug.
What was the bug, exactly, in essence?
Why did it occur?
Could you have found it earlier, or more easily?
Is there anything else you learned from the bug?
I find taking a minute to think about these things will make it far less likely that you will produce the same bug in the future.
I will assume you mean logic bugs. The best way I have found to catch logic bugs is to implement some sort of testing scheme. Check out JUnit as the standard. Essentially, you define a set of accepted outputs for your methods. Every time you build your system, it runs all of your test cases. If you have introduced new logic that breaks your tests, you will know about it instantly and know exactly what you have to fix.
Test driven design is a pretty big movement in programming right now. You will be hard pressed to find a language that doesn't support some kind of testing. Even JavaScript has a multitude of test suites.
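For instance, a small JUnit 4 test that locks in the accepted outputs of a method might look like this (the totalWithTax method here is a made-up stand-in, not from any particular codebase):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TotalsTest {

    // Stand-in for a real method whose accepted outputs we want to lock in.
    static int totalWithTax(int cents) {
        return cents + cents / 10;  // 10% tax on integer cents
    }

    @Test
    public void acceptedOutputsStayAccepted() {
        assertEquals(110, totalWithTax(100));
        assertEquals(0, totalWithTax(0));
        assertEquals(121, totalWithTax(110));
        // If someone later changes the tax logic in a way that breaks these
        // expectations, the test run reports it immediately.
    }
}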
Experience makes you a better debugger. Pay close attention to the bugs that you AND others commonly make. Try to figure out if/how these bugs apply to ALL code that affects you, not the single instance of where the bug was seen.
Raymond Chen is famous for his powers of psychic debugging.
Most of what looks like psychic debugging is really just knowing what people tend to get wrong.
That means that you don't necessarily have to be intimately familiar with the architecture / system. You just need enough knowledge to understand the types of bugs that apply and are easy to make.
I personally take the approach of thinking about where the bug may be in the code before actually opening up the code and taking a look. When you first start with this approach, it may not actually work very well, especially if you are pretty unfamiliar with the code base. However, over time someone will be able to tell you the behavior they are experiencing and you'll have a good idea where the problem is located or you may even know what to fix in the code to remedy the problem before even looking at the code.
I was on a project for several years that was maintained by a vendor. They were not very good debuggers, and most of the time it was up to us to point them to the area of the code that had the problem. What made our problem worse was that we didn't have a nice way to view the source code, so a lot of our "debugging" was done by feel.
Error checking and reporting. The #1 newbie coder debugging mistake is to turn off error reporting, avoid checking whether what's going on makes sense, etc. In general, people feel that if they can't see anything going wrong then nothing is going wrong, which of course could not be further from the truth.
Instead, your code should be chock full of error conditions that will make lots of noise, with detailed reporting, someplace you will see it. (This doesn't mean inside a production web page.) Then, instead of having to trace an error all over the place because it got passed through sixteen layers of execution before it finally got someplace that broke, your errors start happening proximately to the actual issue.
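As a sketch of that fail-fast idea (the OrderIntake class and its parameters are invented for illustration): validate the input where it enters and complain loudly, instead of letting a bad value travel through sixteen layers first.

import java.util.logging.Logger;

public class OrderIntake {

    private static final Logger LOG = Logger.getLogger(OrderIntake.class.getName());

    public void accept(String customerId, int quantity) {
        if (customerId == null || customerId.isEmpty()) {
            throw new IllegalArgumentException("accept(): customerId is missing");
        }
        if (quantity <= 0) {
            // Detailed, visible report close to the actual issue.
            LOG.severe("accept(): suspicious quantity " + quantity + " for customer " + customerId);
            throw new IllegalArgumentException("quantity must be positive, got " + quantity);
        }
        // ... hand off to the rest of the pipeline ...
    }
}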
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
After understanding the architecture, one's ability to find bugs in the application increases with their ability to identify and write extensive tests.
Know your tools.
Make sure that you know how to use conditional breakpoints and watches in your debugger.
Use static analysis tools as well - they can point out the more obvious issues.
Sleep and rest.
Use programming methods that produce fewer bugs in the first place.
If implementing a single stand-alone functional requirement takes N separate point-edits to the source code, the number of bugs put into the code is roughly proportional to N, so find programming methods that minimize N. Ways to do this: DRY (don't repeat yourself), code generation, and DSLs (domain-specific languages).
Where bugs are likely, have unit tests.
Obviously. IMHO, the best unit tests are Monte Carlo.
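A Monte Carlo style test, in that spirit, might look like this sketch; the encode/decode pair is invented, and the point is random inputs checking an invariant, with a fixed seed so any failure stays reproducible:

import java.util.Random;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class RoundTripMonteCarloTest {

    // Stand-in functions whose invariant we can check: decode(encode(x)) == x.
    static String encode(String s) { return new StringBuilder(s).reverse().toString(); }
    static String decode(String s) { return new StringBuilder(s).reverse().toString(); }

    @Test
    public void randomInputsRoundTrip() {
        Random rnd = new Random(42);  // fixed seed keeps failures reproducible
        for (int i = 0; i < 1000; i++) {
            StringBuilder sb = new StringBuilder();
            int len = rnd.nextInt(20);
            for (int j = 0; j < len; j++) {
                sb.append((char) ('a' + rnd.nextInt(26)));
            }
            String input = sb.toString();
            assertEquals("failed for input: " + input, input, decode(encode(input)));
        }
    }
}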
Make intermediate results visible.
For example, compilers have intermediate representations, in the form of 4-tuples. If there is a bug, the intermediate code can be examined. That tells if the bug is in the first or second half of the compiler.
P.S. Most programmers are not aware that they have a choice of how much data structure to use. The less data structure you use, the less are the chances for bugs (and performance issues) caused by it.
I find tracepoints to be an invaluable debugging tool. They are a bit like logging, except you create them during a debugging session to solve a particular issue, like breakpoints.
Printing the stacktrace in a tracepoint can be especially useful. For example, you can print the hash code and stacktrace in the constructor of an object, and then later on when the object is used again you can search for its hashcode to see which client code created it. Same for seeing who disposed it or called a certain method etc.
They are also great for debugging issues related to window focus changes etc., where the debugger would interfere if you dropped into break mode.
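If you want to capture the same information with plain logging instead of a debugger tracepoint, a sketch could look like this (the Widget class is invented, and java.util.logging stands in for whatever logger you use):

import java.util.logging.Level;
import java.util.logging.Logger;

public class Widget {

    private static final Logger LOG = Logger.getLogger(Widget.class.getName());

    public Widget() {
        // Log an identity plus the allocation stack trace, so that when this object
        // shows up later you can search the log for who created it.
        LOG.log(Level.FINE,
                "created Widget@" + Integer.toHexString(System.identityHashCode(this)),
                new Throwable("allocation site"));
    }

    public void dispose() {
        LOG.log(Level.FINE,
                "disposed Widget@" + Integer.toHexString(System.identityHashCode(this)),
                new Throwable("dispose site"));
    }
}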
Static code tools like FindBugs
Assertions, assertions, and assertions.
Some areas of our code have 4 or 5 assertions for each line of real code. When we get a bug report, the first thing that happens is that the customer data is processed in our debug build; 99 times out of a hundred an assert will fire near the cause of the bug.
Additionally, our debug build performs redundant calculations to ensure that an optimized algorithm is returning the correct result, and debug functions are used to examine the sanity of data structures.
The hardest thing new developers have to contend with is getting their code to survive the assertions of the code they are calling.
Additionally, we do not allow any code to be put back to the top level that causes any integration or unit test to fail.
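A tiny sketch of that style (the fast/slow sum pair is invented): assertions guard the inputs, and a redundant, slower calculation cross-checks the optimized one, but only when assertions are enabled (java -ea), so release builds pay nothing.

public class FastSum {

    // Optimized path used in production.
    static long sumOfFirstN(int n) {
        assert n >= 0 : "n must be non-negative, got " + n;
        long result = (long) n * (n + 1) / 2;
        // The redundant calculation only runs when assertions are enabled,
        // cross-checking the optimized formula against the obvious loop.
        assert result == slowSum(n) : "fast and slow sums disagree for n=" + n;
        return result;
    }

    private static long slowSum(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }
}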
Stepping through the code, examining flow/state where unexpected behavior is occurring. (Then develop a test for it, of course).
Writing Debug.Write(message) in your code and using DebugView is another option. Then run your application to find out what is going on.
"Architecture" in software means something like:
Several components
The components interact across clearly-defined interfaces
Each component has a well-defined responsibility
The responsibility of one component is unlike the responsibilities of other components
So, as you said, the better the architecture the easier it is to find bugs.
First: knowing the bug, you can decide which functionality is broken, and therefore which component implements that functionality. For example, if the bug is that something isn't being logged properly, then the bug should be in one of 3 places:
In the component that's responsible for logging (your logging library)
Or, above that in the application code which is using this library
Or, below that in the system code which this library is using
Second: examine the data transfered across the interfaces between components. To continue the previous example above:
Set a debugger breakpoint on the application code which invokes the logger API, to verify whether the logger API is being used correctly (e.g. whether it's being invoked at all, whether parameters are as-expected, etc.).
Doing this tells you whether the bug is in the component above this interface, or in the component that's below this interface.
Repeat (perhaps using binary search if the call stack is very deep) until you've found which component is at fault.
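If you would rather record what crosses that boundary than watch it at a breakpoint, a throwaway check in the calling code can answer the same questions (the names here are hypothetical, not from the answer above):

import java.util.logging.Logger;

public class AuditTrail {

    private static final Logger LOG = Logger.getLogger("audit");

    // Hypothetical application code invoking the logging component. The temporary
    // trace and check answer the breakpoint questions: is the API being invoked
    // at all, and are the parameters sane?
    void recordShipment(String orderId, String message) {
        System.err.println("recordShipment called: orderId=" + orderId + ", message=" + message);
        assert message != null && !message.isEmpty() : "empty audit message for order " + orderId;
        LOG.info(message);
    }
}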
When you come to the point where you think there must be a bug in the OS, check your assumptions -- and put them into the code with "assert" statements.
Conversely, as you are writing the code, think of the range of valid inputs for your algorithms and put in assertions to make sure you have what you think you have. Same goes for output: Check that you produced what you think you produced.
E.g. if you expect a non-empty list:
l = getList(input)
assert l, "List was empty for input: %s" % str(input)
I'm part of the QA team at work, and knowing as much as possible about the product and how it is developed helps a lot in finding bugs. Also, when I make new QA tools I pass them to our dev team to test; finding bugs in your own code is just plain hard!
Some people say programmers are tainted and cannot see the bugs in their own product; we are not talking about code here, we are beyond that: usability and the functionality itself.
Meanwhile, unit testing seems to be a nice way to find bugs in your own code, but it's totally pointless if you're wrong even before writing the unit test. How are you going to find the bugs then? You don't! Let your co-worker find them; hire a QA guy.
Scientific debugging is what I always used, and it greatly helps.
Basically, if you can replicate a bug, you can track down its origin. You then run some experiments, observe the results, and form hypotheses about why the bug happens.
Writing about all your hypotheses, attempts, expected results and observed results can help you track down the bugs, particularly if they're nasty.
There are automated tools that can help you with that process, particularly git-bisect (and similar bisection tools on other revision systems) to quickly find which change introduced the bug, unit testing to reproduce a bug and prevent regressions in your code (can be used in combination with bisect), and delta debugging to find the culprit in your code (similar to git-bisect but whereas git-bisect works on the code history, delta debugging works on the code directly).
But whatever the tools you are using, the most important benefit is in the scientific methodology, as this is the formalization of what most experienced debuggers do.
When inheriting applications at a new job, do you tend to stick to the original developer's coding practices, or do you start applying your own?
I work in a small shop with no guidelines and have always wondered what the rule is here. Some applications are written very well but do not follow the standards I use (variable names, etc.), and I do not want to "dirty" them up. I find myself taking a little extra time to be consistent.
Others are written very poorly and it looks like the developer was changing his mind every keystroke...
ADDITIONAL THOUGHT
What about when I start my own projects? So now I have introduced a new coding standard to the mix:
The good code - but not my style
The bad code with bad practices and lack of standards
My own standards
If there are standards evident in the code, you should stick to them. If there aren't, start introducing your own.
If there are multiple developers who work on the same module, don't change the style.
If you will hand it off to another developer in the near future (this role is temporary), don't change the style.
If you are taking complete, exclusive, permanent ownership of the module, change it, but follow these rules:
One change at a time.
Fix all indentation to your liking at once, and commit that change.
Fix all brace placement to your liking at once, and commit that change.
Fix all other formatting to your liking at once, and commit that change.
Fix all naming to your liking at once, and commit that change.
Don't spend a lot of time on it.
If it takes more than an hour or two, then cut back.
Make the commit description clear.
So you can quickly ignore these changes when analyzing change history.
Use automated tools
to make sure the result is consistent and complete, so you don't have to mess with it again.
Run your tests
Just because your changes shouldn't affect behavior doesn't mean they won't. (Triple negative, ouch!)
Make sure everyone knows what you're doing
Someone might have a change hanging around that they want to commit now, and it'll be painful to merge with your changes. Also, you don't want anyone to get surprised and go tell your boss before you do.
Don't do it again
This is a one-time thing.
Publish a style guide that follows best practices, and build consensus around following it. Refactor old code as you need to maintain it.
I'm in the same boat as you: a lone developer who inherited some apps from the last guy.
I've been sticking to what appear to be his standards for existing projects for consistency, and using my own preferences for new stuff.
I've noticed that most people think whoever came before them had no idea how to write code. Then whoever comes after them thinks the same thing. Some things are common sense, but most things are just personal preference.
For major problems, i.e. using comments vs. not using comments, updating the code will probably make it easier for you to work with, and easier for anyone else to work with. Even then, your time is probably best spent updating the code as you come across it, instead of embarking on a huge project to refactor everything (introducing new problems in the process).
For things like indentation, line spacing, variable names, and one-line ifs vs. multi-line ifs, the reality is that your coding style is likely just as bad as you think theirs is.
I think it depends on what you mean by "coding practices". If you mean things like code formatting, naming conventions and other things I would personally consider "cosmetic", then stick with what's already there. If you mean things like coding best practices and writing code correctly in the first place, then go back and fix the problems if possible, but at the very least make your new code follow best practices.
Given that most of the applications I've inherited have been hacked together by "cowboy coders" who didn't apply even the most basic of coding practices, my opinion is a little biased.
I say introduce coding standards if there are none or the ones that exist are blatantly wrong and/or stupid (e.g. "All variables must be no more than 4 characters in length", "Every database column is varchar(255) null", etc.). Obviously if you have a team then you'll need to come to an agreement as to what practices to implement, but if you're a solo dev then you have free reign and IMO you should introduce order to the chaos.
If the code works and has a reasonably clean format, don't waste time changing the style.
If the code is badly written, by all means change it when you have some down time, or the next time you work on the project.
For new projects, do them your way, since there is no standard. As with the other well-written programs, yours should be easy enough to maintain.
composition is often preferred over inheritance
:-P
If it's just you, go for it. If it's a team, especially if any of the original developers are still around (or likely to be called in for consulting), keep with the existing style and practices as much as you can. Don't follow them down a rat hole - if you think they're doing something stupid, change it, but if it's just a stylistic thing, keep to their style as much as you can.
On several jobs I've been on, we had no rule on coding style other than "if you're making changes to an existing file/class, use the existing style, even for new code."
I follow company standards if there are any.
If there aren't any and the changes are small, I adapt to the existing coding style.
If there are larger changes to be made and I don't like the coder's style, I will use my own.
And if the existing code is bad I will change that too.
Will you ever have a better opportunity to update existing code with a standard style? Probably not. When you are new to the code you are going to have the best chance of taking some extra time to make non-new-feature and non-bug-fix changes. The lack of standards may be discouraging but you are unlikely to have a better chance to standardize than when you first inherit the code.
It sounds like we're talking about a situation with no official style guides / best practices. In that case, as Sean said, I'd take the lead on establishing some. But... if at all possible, pick an existing, widely-used standard. It's more likely to be accepted, all the arguments are done with, and the odds of out-of-the-box tool support (editors, code review tools, etc.) greatly increase.
Getting others to adopt it will often work best from the bottom up -- write new code to the new standards, mention to others that you've done so, ask for feedback. Much easier than trying to get approval and buy-in in advance.
Within the existing, ugly project, avoid wholesale changes to existing modules. For one thing, diffs and version control will get quite confusing if a file is suddenly reindented.
If the chunk you're working on is so bad as to be unreadable, I'd do an initial checkin just to reformat it; follow that up with actual code changes.
I would apply the same refactoring standards to the code as I would if it DID match my style standards. That is, I'd ignore the style and just go on about my business.
If it's not terribly difficult to follow the style that is in the code - with regards to naming conventions, I'd go ahead and use those for new code.
However, I wouldn't bother trying to follow stuff like 'tabs should not be used', 'every line should be indented 2 spaces', etc. There are plenty of editors out there where you can 'pretty' the code whenever you need it these days.
G-Man
I think it depends highly on the specific case.
If you are a consultant on a project for a short time, you should stick to the way things are.
If you are on for a long time, try to refactor bad code into your own scheme.
If you are on for a short time but you are working on an isolated module, then use your own scheme.
Short answer is, "It depends." Here are a few factors that I'd consider important in determining whether to keep the old style or not:
1) Scope of changes. If it is close to a total re-write of the application, then it may make more sense to put in a new standard if you have one that you feel works well for you.
2) Likelihood of future changes. Will this be changed over and over again? If so, then taking some time early on may well be worth it in the end. This does require a bit of judgement and predicting the future, but it may be easy in some cases to see that there will be changes over and over again for some systems that are fairly complex.
3) How much of the code is a customization of a 3rd-party codebase (e.g. a company's specific customizations of Oracle products for their business processes) compared to a completely home-grown application. The impact here is how much pain there may be from what breaks when new versions are released and an upgrade is requested, since it was customized so much.
When starting your own projects, put in the best standard that you know.
If I inherit code that has obviously never been refactored, I would take that as an opportunity to impose some of my own structure.
If people expect me to make time and cost estimates for adding functionality to the code, I'll need to be intimately familiar with it, and I'll want to make sure it lives up to my standards.
If the code is already well-written, that would be a blessing that I would not mess with. But in my experience, this hasn't happened very often.
As you work in a legacy codebase what will have the greatest impact over time that will improve the quality of the codebase?
Remove unused code
Remove duplicated code
Add unit tests to improve test coverage where coverage is low
Create consistent formatting across files
Update 3rd party software
Reduce warnings generated by static analysis tools (e.g. FindBugs)
The codebase has been written by many developers with varying levels of expertise over many years, with a lot of areas untested and some untestable without spending a significant time on writing tests.
Read Michael Feathers' book "Working Effectively with Legacy Code"
This is a GREAT book.
If you don't like that answer, then the best advice I can give would be:
First, stop making new legacy code[1]
[1]: Legacy code = code without unit tests and therefore an unknown
Changing legacy code without an automated test suite in place is dangerous and irresponsible. Without good unit test coverage, you can't possibly know what effect those changes will have. Feathers recommends a "stranglehold" approach where you isolate areas of code you need to change, write some basic tests to verify basic assumptions, make small changes backed by unit tests, and work out from there.
NOTE: I'm not saying you need to stop everything and spend weeks writing tests for everything. Quite the contrary, just test around the areas you need to test and work out from there.
Jimmy Bogard and Ray Houston did an interesting screen cast on a subject very similar to this:
http://www.lostechies.com/blogs/jimmy_bogard/archive/2008/05/06/pablotv-eliminating-static-dependencies-screencast.aspx
I work with a legacy 1M LOC application written and modified by about 50 programmers.
* Remove unused code
Almost useless... just ignore it. You won't get a big Return On Investment (ROI) from that one.
* Remove duplicated code
Actually, when I fix something I always search for duplicates. If I find some, I either extract a generic function or add a comment at every occurrence of the duplication (sometimes the effort of extracting a generic function isn't worth it). The main idea is that I hate doing the same action more than once. Another reason is that there's always someone (could be me) who forgets to check for the other occurrences...
* Add unit tests to improve test coverage where coverage is low
Automated unit tests are wonderful... but if you have a big backlog, the task itself is hard to promote unless you have stability issues. Go with the part you are working on and hope that in a few years you have decent coverage.
* Create consistent formatting across files
IMO the difference in formatting is part of the legacy. It gives you a hint about who wrote the code, or when. This can give you some clues about how to behave in that part of the code. Doing the job of reformatting isn't fun, and it doesn't deliver any value to your customer.
* Update 3rd party software
Do it only if there are really nice new features, or if the version you have is not supported by the new operating system.
* Reduce warnings generated by static analysis tools
It can be worth it. Sometimes a warning can hide a potential bug.
I'd say 'remove duplicated code' pretty much means you have to pull code out and abstract it so it can be used in multiple places - this, in theory, makes bugs easier to fix because you only have to fix one piece of code, as opposed to many pieces of code, to fix a bug in it.
Add unit tests to improve test coverage. Having good test coverage will allow you to refactor and improve functionality without fear.
There is a good book on this written by the author of CPPUnit, Working Effectively with Legacy Code.
Adding tests to legacy code is certainly more challenging than creating them from scratch. The most useful concept I've taken away from the book is the notion of "seams", which Feathers defines as
"a place where you can alter behavior in your program without editing in that place."
Sometimes it's worth refactoring to create seams that will make future testing easier (or possible in the first place). The Google testing blog has several interesting posts on the subject, mostly revolving around dependency injection.
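As a hedged sketch of one common kind of seam, constructor injection (the names are invented, not the book's example): the hard-wired collaborator is lifted behind an interface so a test can substitute it without editing the method under test.

// Before: the method news up its collaborator, so a test cannot alter that
// behavior without editing this code:
//
//   class InvoiceMailer {
//       void send(String message) {
//           new SmtpGateway("mail.example.com").send(message);
//       }
//   }

// After: the collaborator arrives through the constructor, which creates a seam.
interface Gateway {
    void send(String message);
}

class InvoiceMailer {
    private final Gateway gateway;

    InvoiceMailer(Gateway gateway) {
        this.gateway = gateway;
    }

    void send(String message) {
        gateway.send(message);
    }
}

// In a test, a fake gateway slips in through the seam and records what was sent.
class RecordingGateway implements Gateway {
    String lastMessage;
    public void send(String message) { lastMessage = message; }
}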
I can relate to this question, as I currently have in my lap one of 'those' old-school codebases. It's not really legacy, but it certainly hasn't followed the trends of the years.
I'll tell you the things I would love to fix in it as they bug me every day:
Document the input and output variables
Refactor the variable names so they actually mean something other than a Hungarian notation prefix followed by a three-letter acronym with some obscure meaning. CamelCase is the way to go.
I'm scared to death of changing any code as it will affect hundreds of clients that use the software and someone WILL notice even the most obscure side effect. Any repeatable regression tests would be a blessing since there are zero now.
The rest is really peanuts. These are the main problems with a legacy codebase, they really eat up tons of time.
I'd say it largely depends on what you want to do with the legacy code...
If it will indefinitely remain in maintenance mode and it's working fine, doing nothing at all is your best bet. "If it ain't broke, don't fix it."
If it's not working fine, removing the unused code and refactoring the duplicate code will make debugging a lot easier. However, I would only make these changes on the erring code.
If you plan on a version 2.0, add unit tests and clean up the code you will bring forward.
Good documentation. As someone who has to maintain and extend legacy code, that is the number one problem. It's difficult, if not downright dangerous to change code you don't understand. Even if you're lucky enough to be handed documented code, how sure are you that the documentation is right? That it covers all of the implicit knowledge of the original author? That it speaks to all of the "tricks" and edge cases?
Good documentation is what allows those other than the original author to understand, fix, and extend even bad code. I'll take hacked yet well-documented code that I can understand over perfect yet inscrutable code any day of the week.
The single biggest thing that I've done to the legacy code that I have to work with is to build a real API around it. It's a 1970's style COBOL API that I've built a .NET object model around, so that all the unsafe code is in one place, all of the translation between the API's native data types and .NET data types is in one place, the primary methods return and accept DataSets, and so on.
This was immensely difficult to do right, and there are still some defects in it that I know about. It's not terrifically efficient either, with all the marshalling that goes on. But on the other hand, I can build a DataGridView that round-trips data to a 15-year-old application which persists its data in Btrieve (!) in about half an hour, and it works. When customers come to me with projects, my estimates are in days and weeks rather than months and years.
As a parallel to what Josh Segall said, I would say comment the hell out of it. I've worked on several very large legacy systems that got dumped in my lap, and I found the biggest problem was keeping track of what I already learned about a particular section of code. Once I started placing notes as I go, including "To Do" notes, I stopped re-figuring out what I already figured out. Then I could focus on how those code segments flow and interact.
I would say just leave it alone for the most part. If it's not broken then don't fix it. If it is broken then go ahead and fix and improve the portion of the code that is broken and its immediately surrounding code. You can use the pain of the bug or sorely missing feature to justify the effort and expense of improving that part.
I would not recommend any wholesale kind of rewrite, refactor, reformat, or putting in of unit tests that is not guided by actual business or end-user need.
If you do get the opportunity to fix something, then do it right (the chance of doing it right the first time might have already passed, but since you are touching that part again you might as well do it right this time around), and this includes all the items you mentioned.
So in summary, there's no single or just a few things that you should do. You should do it all but in small portions and in an opportunistic manner.
Late to the party, but the following may be worth doing where a function/method is used or referenced often:
Local variables often tend to be poorly named in legacy code (often owing to their scope expanding when a method is modified, and not being updated to reflect this). Renaming these in line with their actual purpose can help clarify legacy code.
Even just laying out the method slightly differently can work wonders - for instance, putting all the clauses of an if on one line.
There might be stale/confusing code comments there already. Remove them if they're not needed, or amend them if you absolutely have to. (Of course, I'm not advocating removal of useful comments, just those that are a hindrance.)
These might not have the massive headline impact you're looking for, but they are low risk, particularly if the code can't be unit tested.