Dealing with god objects - debugging

I work in a medium-sized team and run into these painfully large class files on a regular basis. My first tendency is to go at them with a knife, but that usually just makes matters worse and puts me in a bad state of mind.
For example, imagine you were just given a Windows service to work on. Now there is a bug in this service, and you need to figure out what the service does before you can have any hope of fixing it. You open the service up and see that someone decided to just use one file for everything. The Start method is in there, the Stop method, timers, all the handling and functionality. I am talking thousands of lines of code. Methods under a hundred lines of code are rare.
Now assuming you cannot rewrite the entire class and these god classes are just going to keep popping up, what is the best way to deal with them? Where do you start? What do you try to accomplish first? How do you deal with this kind of thing without just wanting to get all stabby?
If you have some strategy just to keep your temper in check, that is welcome as well.
Tips Thus Far:
Establish test coverage
Code folding
Reorganize existing methods
Document behavior as discovered
Aim for incremental improvement
Edit:
Charles Conway recommended a podcast which turned out to be very helpful. link
Michael Feathers (the guy in the podcast) begins with the premise that we are too afraid to simply take a project out of source control and just play with it directly and then throw away the changes. I can say that I am guilty of this.
He essentially said to take the item you want to learn more about and just start pulling it apart. Discover its dependencies and then break them. Follow it through everywhere it goes.
Great Tip
Take the large class that is used elsewhere and have it implement an empty interface. Then take the code using the class and have it reference the interface instead. This will give you a complete list of all the dependencies on that large class in your code.
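A rough Java sketch of how that plays out (all names here are hypothetical stand-ins for your god class and its callers):

```java
// Hypothetical names: MessageService stands in for the god class.

// Step 1: an interface that starts out completely empty.
interface IMessageService {
    // Members get pulled up here one compile error at a time.
    void start();   // added after the compiler flagged the call in ServiceHost
}

// Step 2: the god class implements it; its body stays untouched.
class MessageService implements IMessageService {
    @Override
    public void start() { /* thousands of existing lines... */ }
    public void stop()  { /* ... */ }
}

// Step 3: client code declares the interface type instead of the class.
// Every member the clients actually use now fails to compile until it is
// added to the interface -- that list of compile errors is your
// dependency inventory.
class ServiceHost {
    private final IMessageService service = new MessageService();

    void run() {
        service.start();     // compiles: start() is on the interface
        // service.stop();   // would not compile yet -> a discovered dependency
    }
}
```

Each member you are forced to pull up into the interface is one more dependency you now know about.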

Ouch! Sounds like the place I used to work.
Take a look at Working Effectively with Legacy Code. It has some gems on how to deal with atrocious code.
DotNetRocks recently did a show on working with legacy code. There is no magic pill that is going to make it work.
The best advice I've heard is start incrementally wrapping the code in tests.

That reminds me of my current job and when I first joined. They didn't let me rewrite anything even though I made the same argument: "These classes are so big and poorly written! No one could possibly understand them, let alone add new functionality to them."
So the first thing I would do is make sure there is comprehensive testing behind the areas that you're looking to change. At least then you will have a chance of changing the code and not having (too many) arguments (hopefully). And by tests, I mean testing the components functionally with integration or acceptance tests and making sure the area is 100% covered. If the tests are good, then you should be able to confidently change the code by splitting up the big class into smaller ones, getting rid of duplication, and so on.
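For example, a characterization-style test in JUnit might look like this (hypothetical names; the expected value is simply whatever the legacy code produces today):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Stand-in for the legacy class under test (hypothetical).
class LegacyMessageFormatter {
    String format(String text, int retries) {
        return "[" + retries + "] " + text;
    }
}

// A characterization test: it pins down what the code currently does,
// not what we think it should do.
public class MessageFormatterCharacterizationTest {

    @Test
    public void formatsAnOrdinaryMessageTheWayItDoesToday() {
        LegacyMessageFormatter formatter = new LegacyMessageFormatter();

        // The expected value was captured by running the legacy code once
        // and copying its output verbatim; the test locks that behavior in.
        assertEquals("[3] hello", formatter.format("hello", 3));
    }
}
```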

Even if you cannot refactor the file, try to reorganize it. Move methods/functions so that they are at least organized within the file logically. Then put in lots of comments explaining each section. No, you haven't rewritten the program, but at least now you can read it properly, and the next time you have to work on the file, you'll have lots of comments, written by you (which hopefully means that you will be able to understand them) which will help you deal with the program.

Code Folding can help.
If you can move stuff around within the giant class and organize it in a somewhat logical way, then you can put folds around various blocks.
Hide everything, and you're back to a C paradigm, except with folds rather than separate files.

I've come across this situation as well.
Personally I print out the code first (yeah, it can be a lot of pages). Then I draw a box around sections of code that are not part of any "main loop" or are just helper functions, and make sure I understand these things first. The reason is that they are probably referred to many times within the main body of the class, and it's good to know what they do.
Second, I identify the main algorithm(s) and decompose them into their parts using a numbering system that alternates between numbers and letters (it's ugly but works well for me). For example you could be looking at part of an algorithm 4 "levels" deep and the numbering would be 1.b.3.e or some other god awful thing. Note that when I say levels, I am not referring directly to control blocks or scope necessarily, but where I have identified steps and sub-steps of an algorithm.
Then it's a matter of just reading and re-reading the algorithm. When you start out it sounds like a lot of time, but I find that doing this develops a natural ability to comprehend a great deal of logic all at once. Also, if you discover an error attributed to this code, having visually broken it down on paper ahead of time helps you "navigate" the code later, since you have a sort of map of it in your head already.
If your bosses don't think you understand something until you have some form of UML describing it, a UML sequence diagram could help here if you pretend the sub-step levels are different "classes" represented horizontally, and start-to-finish is represented vertically from top-to-bottom.

I feel your pain. I tackled something like this once for a hobby project involving processing digital TV data on my computer. A fellow on a hardware forum had written an amazing tool for recording shows, seeing everything that was on, and more. Plus, he had done the incredibly vital work of working around bugs in real broadcast signals that were in violation of the standard. He'd done amazing work with thread scheduling to be sure that no matter what, you wouldn't lose those real-time packets: on an old Pentium, he could record four streams simultaneously while also playing Doom and never lose a packet. In short, this code incorporated a ton of great knowledge. I was hoping to take some pieces and incorporate them into my own project.
I got the source code. One file, 22,000 lines of C, no abstraction. I spent hours reading it; there was all this great work, but it was all done badly. I was not able to reuse a single line or even a single idea.
I'm not sure what the moral of the story is, but if I had been forced to use this stuff at work, I would have begged permission to chip pieces off it one at a time, build unit tests for each piece, and eventually grow a new, sensible thing out of the pieces. This approach is a bit different than trying to refactor and maintain a large brick in place, but I would rather have left the legacy code untouched and tried to bring up a new system in parallel.

The first thing I would do is write some unit tests to box the current behavior, assuming that there are none already. Then I'd start in the area where I need to make the change and try to get that method cleaned up -- i.e. refactor working code before introducing changes. Use common refactoring techniques to extract and reuse methods from existing long methods to make them more understandable. When you extract a method, look for other places in the code where similar code exists, box that area, and reuse the method you've just extracted.
Look for groups of methods that "hang together" that can be broken out into their own classes. Write some tests for how those classes should work, build the classes using the existing code as a template if need be, then substitute the new classes into the existing code, removing the methods that they replace. Again, using your tests to make sure that you're not breaking anything.
Make enough improvement to the existing code so that you feel you can implement your new feature/fix in a clean way. Then write the tests for the new feature/fix and implement to pass the tests. Don't feel that you have to fix everything the first time. Aim for gradual improvement, but always leave the code better than you found it.
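A small before/after sketch of the Extract Method step described above, on a hypothetical example:

```java
// Before: one long method doing validation, pricing and logging inline
// (hypothetical domain, heavily abridged).
class OrderProcessorBefore {
    double process(int quantity, double unitPrice) {
        if (quantity <= 0 || unitPrice < 0) {
            throw new IllegalArgumentException("invalid order");
        }
        double total = quantity * unitPrice;
        if (quantity > 100) {
            total = total * 0.9;   // bulk discount
        }
        System.out.println("processed order, total=" + total);
        return total;
    }
}

// After: each step extracted into a named method that can be reused
// wherever the same copy-pasted logic turns up elsewhere in the class.
class OrderProcessorAfter {
    double process(int quantity, double unitPrice) {
        validate(quantity, unitPrice);
        double total = applyBulkDiscount(quantity, quantity * unitPrice);
        log(total);
        return total;
    }

    private void validate(int quantity, double unitPrice) {
        if (quantity <= 0 || unitPrice < 0) {
            throw new IllegalArgumentException("invalid order");
        }
    }

    private double applyBulkDiscount(int quantity, double total) {
        return quantity > 100 ? total * 0.9 : total;
    }

    private void log(double total) {
        System.out.println("processed order, total=" + total);
    }
}
```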

Related

good programming practice

I'm reading Randall Hyde's Write Great Code (Volume 2) and I found this:
[...] it’s not good programming practice to create monolithic applications, where all the source code appears in one source file (or is processed by a single compilation) [...]
I was wondering, why is this so bad?
Thanks everyone for your answers. I really wish I could accept more of them, but I've chosen the most concise, so that whoever reads this question finds the essentials immediately.
Thanks guys ;)
The main reason, more important in my opinion than the sheer size of the file in question, is the lack of modularity. Software becomes easy to maintain if you have it broken down into small pieces whose interactions with each other are through a few well defined places, such as a few public methods or a published API. If you put everything in one big file, the tendency is to have everything dependent on the internals of everything else, which leads to maintenance nightmares.
Early in my career, I worked on a Geographic Information System that consisted of over a million lines of C. The only thing that made it maintainable, the only thing that made it work in the first place, is that we had a sharp dividing line between everything "above" and everything "below". The "above" code implemented user interfaces, application-specific processing, etc., and everything "below" implemented the spatial database. And the dividing line was a published API. If you were working "above", you didn't need to know how the "below" code worked, as long as it followed the published API. If you were working "below", you didn't care how your code was being used, as long as you implemented the published API. At one point, we even replaced a huge chunk of the "below" side that had stored stuff in proprietary files with tables in SQL databases, and the "above" code didn't have to know or care.
Because everything gets so crammed.
If you have separate files for separate things then you'll be able to find and edit them much faster.
Also, if there's an error, you can locate it more easily.
Because you quickly get (tens of) thousands of lines of unmaintainable code.
Impossible to read, impossible to maintain, impossible to extend.
Also, it's good for team work, since there are a lot of small files which can be edited in parallel by different developers.
Some reasons:
It's difficult to understand something that's all jammed together like that. Breaking the logic into functions makes it easier to follow.
Factoring the logic into discrete components allows you to re-use those components elsewhere. Sometimes "elsewhere" means "elsewhere in the same program."
You cannot possibly unit-test something that's built in a monolithic fashion. Breaking a huge function (or program) into pieces allows you to test those pieces individually (see the sketch after this list).
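Here is the sketch mentioned above, with hypothetical names: once a piece of the monolith is its own function, it can be exercised directly.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Once the parsing step is its own function instead of a stretch of a
// monolithic main(), it can be tested on its own (hypothetical example).
class ConfigParser {
    static int parsePort(String line) {
        return Integer.parseInt(line.split("=")[1].trim());
    }
}

public class ConfigParserTest {
    @Test
    public void readsThePortNumberFromALine() {
        assertEquals(8080, ConfigParser.parsePort("port = 8080"));
    }
}
```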
While I understand that HTML and CSS aren't 'programming languages,' I suppose you could look at it in terms of having all of the .css, .js and .html on one page: it makes it difficult to recycle or debug code.
Including all the source code in a single file makes it hard to manage the code. A better approach is to divide your program into separate modules. Each module should have its own source file, and all the modules are finally linked together to create the executable. This helps a lot in maintaining code and makes it easy to find module-specific bugs.
Moreover, if you have 2 or more people working simultaneously on the same project, there is a high probability that you will be wasting lots of your time in resolving code conflicts.
Also if you have to test individual components (or modules), you can do it easily without bothering about the other independent modules (if any).
Because decomposition is a computer science fundamental: Solve large problems by decomposing them into smaller pieces. Not only are the smaller problems more manageable, but they make it easier to keep a picture of the entire problem in your head.
When you talk about a monolithic program in a single file, all I can think of is the huge spaghetti code that found its way into COBOL and FORTRAN way back in the day. It encourages a cut-and-paste mentality that only gets worse with time.
Object-oriented languages are an attempt to help with this. OO languages decompose problems into software components, with data and functions encapsulated together, that map to our mental models of how the world works.
Think about it in terms of the Single Responsibility Principle. If that one large file, or one monolithic application, or one "big something" is responsible for everything concerning that system, it's a catch-all bucket of functionality. That functionality should be separated into its discrete components for maintainability, re-usability, etc.
Programmers handle thousands of lines of code just fine if things are kept in separate files; you can easily manage and locate your files.
Apart from all the above-mentioned reasons, typically all the text from a compilation unit is included in the final program. However, if you split things up and it turns out the code in a particular file is not getting used, it will not be linked in. Even if it is getting linked, it might be easy to turn it into a DLL should you decide to load the functionality at run time. This facilitates dependency management and reduces build times by compiling only the modified source files, leading to greater maintainability and productivity.
From some really bad experience I can tell you that it's unreadable. Function after function of code, you can never understand what the programmer meant or how to find your way out of it.
You cannot even scroll in the file because a little movement of the scroll bar scrolls two pages of code.
Right now I'm rewriting an entire application because the original programmers thought it a great idea to create 3000-line code files. It just cannot be maintained. And of course it's totally untestable.
I think you have never worked in a big company that has 6000 lines of code in one file... and every couple of thousand lines you see comments like /** Do not modify this block, it's critical */ and you think the bug assigned to you is coming from that block. And your manager says, "Yeah, look into that file, it's somewhere in it."
And the code breaks all those beautiful OOP concepts, has dirty switches. And like fifty contributors.
Lucky you.

How do you decide which parts of the code shall be consolidated/refactored next?

Do you use any metrics to decide which parts of the code (classes, modules, libraries) shall be consolidated or refactored next?
I don't use any metrics which can be calculated automatically.
I use code smells and similar heuristics to detect bad code, and then I'll fix it as soon as I have noticed it. I don't have any checklist for finding problems - mostly it's a gut feeling that "this code looks messy", then reasoning about why it is messy and figuring out a solution. Simple refactorings like giving a more descriptive name to a variable or extracting a method take only a few seconds. More intensive refactorings, such as extracting a class, might take up to an hour or two (in which case I might leave a TODO comment and refactor it later).
One important heuristic that I use is the Single Responsibility Principle. It makes the classes nicely cohesive. In some cases I use the size of the class in lines of code as a heuristic for looking more carefully at whether a class has multiple responsibilities. In my current project I've noticed that when writing Java, most of the classes will be less than 100 lines long, and often when the size approaches 200 lines, the class does many unrelated things and it is possible to split it up, so as to get more focused, cohesive classes.
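As a hypothetical illustration of that kind of split: a class that both parses reports and emails them has two reasons to change, and pulling the responsibilities apart yields two small, cohesive classes.

```java
// Before (hypothetical): one class, two unrelated responsibilities.
class ReportManager {
    String parse(String raw)  { return raw.trim().toUpperCase(); }
    void email(String report) { System.out.println("sending: " + report); }
}

// After: each class has a single reason to change.
class ReportParser {
    String parse(String raw)  { return raw.trim().toUpperCase(); }
}

class ReportMailer {
    void send(String report)  { System.out.println("sending: " + report); }
}
```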
Each time I need to add new functionality I search for already existing code that does something similar. Once I find such code I think about refactoring it to solve both the original task and the new one. Of course I don't decide to refactor every time - most often I reuse the code as it is.
I generally only refactor "on-demand", i.e. if I see a concrete, immediate problem with the code.
Often when I need to implement a new feature or fix a bug, I find that the current structure of the code makes this difficult, such as:
too many places to change because of copy&paste
unsuitable data structures
things hardcoded that need to change
methods/classes too big to understand
Then I will refactor.
I sometimes see code that seems problematic and which I'd like to change, but I resist the urge if the area is not currently being worked on.
I see refactoring as a balance between future-proofing the code, and doing things which do not really generate any immediate value. Therefore I would not normally refactor unless I see a concrete need.
I'd like to hear about experiences from people who refactor as a matter of routine. How do you stop yourself from polishing so much you lose time for important features?
We use cyclomatic complexity to identify the code that needs to be refactored next.
I use Source Monitor and routinely refactor methods when the complexity metric goes above around 8.0.
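To give a feel for the metric (this is a hypothetical example, not Source Monitor output): each if, else-if, case, or loop adds one to a method's cyclomatic complexity, so extracting the branchy parts is usually the quickest way to get back under a threshold like 8.

```java
// Hypothetical example: every decision point adds one to the metric,
// so methods like this creep past the threshold quickly.
class ShippingCalculator {
    double cost(String region, double weight, boolean express, boolean fragile) {
        double cost = weight * baseRate(region);   // branchy part extracted below
        if (express)      cost += 10;
        if (fragile)      cost += 5;
        if (weight > 30)  cost += 7;
        return cost;
    }

    // Extracting the region lookup keeps each method's complexity low;
    // a table lookup (e.g. a Map) would lower it further.
    private double baseRate(String region) {
        switch (region) {
            case "EU": return 1.5;
            case "US": return 2.0;
            default:   return 3.0;
        }
    }
}
```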

How to refactor rapidly evolving code?

I have some research code that's a real rat's nest, with code duplication everywhere, and clearly needs to be refactored. However, the code base is evolving as I come up with new variations on the theme and fit them into the codebase. The reason I've put off refactoring so long is because I feel like the minute I spend a few days coming up with good abstractions, seeing what design patterns fit where, etc., I'll want to try out some new unforeseen idea that makes my abstractions completely inadequate. In other words, because of the rate at which the code is evolving, I really have no idea where abstraction lines belong, even though there is no shortage of (approximate) duplication and the general messiness of the code makes adding stuff to it a real pain. What are some general best practices for coping with this kind of situation?
Don't spend so long refactoring!
When you're about make a change in a piece of code, consider refactoring it to make the change easier.
After making the change, refactor again to clean up the damage done by that change.
In both cases, make the refactorings small and do them quickly, and move on.
You don't have to keep your code pristine at all times, but remember that it's easier to go fast if you have well-factored code to work in (and if you have good unit tests, of course).
Test Driven Development:
Red, Green, Refactor. Rinse, repeat.
Since it's one of the steps in every single cycle, you'll notice there's a LOT of usually minor refactoring taking place. That's the way it should be.
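One turn of that cycle in JUnit, with hypothetical names: write a failing test (red), make it pass with the simplest thing that works (green), then clean up while the test keeps you honest (refactor).

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Red: this test is written first and fails until Plural exists.
public class PluralTest {
    @Test
    public void addsAnSByDefault() {
        assertEquals("messages", Plural.of("message"));
    }
}

// Green: the simplest implementation that passes.
// Refactor: once more cases arrive, the rule can be generalized or
// cleaned up while the test keeps the behavior pinned down.
class Plural {
    static String of(String word) {
        return word + "s";
    }
}
```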
Your situation is pretty familiar to me. While doing investigative coding, you often have no idea what the "right" abstraction will be, and as you say it can change with every new idea. Other posters have suggested:
Continuous small refactoring, which helps to avoid getting into the rats-nest situation
Test-Driven Development, which helps to find good, re-usable abstractions. It's important to note that TDD is less about testing than about doing good designs!
However, for investigative research code there is another strategy: the prototype. This seems to be what you are currently doing: coding as quickly as possible to prove a concept. There's nothing wrong with that, but a prototype should always be throw-away. Tweak it until you have all the necessary input and knowledge, then throw away the code and start over with TDD and continuous refactoring, and all your other "doing the things right" strategies.
Don't keep any of the code. Don't copy-paste anything. Don't refer back to it. Just start over with your new knowledge.
Clean up the code a little bit at a time. Whenever you touch a class, try to leave it cleaner than it was before you touched it ("the boy scout rule"). Refactoring is best done in very small steps, but very often.
Things like renaming a variable, splitting a method, etc. take only seconds or minutes. Large refactorings, such as splitting or joining classes, may take an hour or two (and you do them in small steps, so that all tests pass at least every five minutes - otherwise you have entered Refactoring Hell and should revert to the last known working state). If it takes you days or weeks to refactor something, then it's no longer "refactoring" - it's more like rewriting.
An article about this topic:
http://blog.objectmentor.com/articles/2007/07/20/whats-your-unit-of-measure
Put it in a distributed SCM like Git at least; that way, when you break something while refactoring, you can bisect the history to find the commit prior to the change, as well as work on changes and commit them in branches without interfering with others' work.
Git's branch merging is great for things like this, and you'll easily know if two people made incompatible changes in parallel, without having to worry about the rest of the code.
For the above reasons, I would also create a separate branch in the repository just for refactoring, and keep it updated regularly. This way, not only will others not interfere with your progress, but they can keep an eye on it and see changes that will eventually hit the main branch, so they can preemptively code around those changes.
If you already know where there is duplication, you don't need several days to refactor it away.
Sometimes a rewrite is the only choice. This seems to be the case.
The CloneDR finds duplicate code, both exact copies and near-misses, across large source systems, parameterized by language syntax. It supports Java, C#, COBOL, C++, PHP and many other languages.
When it shows a parameterized abstraction of a set of found clones, it is essentially proposing that you refactor the code with that abstraction implemented (as a method, a function, a class, ...).
So running the CloneDR gets you a list of potential abstractions to be added to your code, and replacing the clone instances with calls on the abstraction refactors your code, thus cleaning it up (somewhat).
Even more remarkably, when it shows the parameter bindings used at each clone site to invoke the abstraction, it often exposes a bungled clone instance, easily recognized when the bound parameters are conceptually inconsistent. If a parameter is bound to variables named YYYY-MM-DD, and one of them is YY-MM-DD, the "it's a 4-digit year" parameter type looks violated, and in this case there's a broken Y2K remediation. So examining the clone bindings often finds bugs.
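A hypothetical Java rendering of that idea (not CloneDR output): two near-miss clones, the abstraction they suggest, and a binding that gives a bug away.

```java
// Two near-miss clones (hypothetical example): same shape, different values,
// and one of them quietly uses a two-digit year.
class InvoiceReport {
    String quarterOneHeader() { return "Report for " + "2024-01-01" + " to " + "2024-03-31"; }
    String quarterTwoHeader() { return "Report for " + "2024-04-01" + " to " + "24-06-30"; }
}

// The abstraction a clone detector effectively proposes: the clones become
// calls with bound parameters, and the inconsistent "24-06-30" binding
// (every other clone passes a YYYY-MM-DD value) is exactly the kind of bug
// that inspecting the bindings exposes.
class InvoiceReportRefactored {
    String header(String from, String to) {
        return "Report for " + from + " to " + to;
    }
}
```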
This is a very common problem in scientific computing. Some of the most effective ideas for reducing the size and complexity of code require leveraging assumptions, and science demands that you constantly change those assumptions.
All you can do is try to refactor your code as you go, and try not to write yourself into any corners. Also work with good people who understand the value of not making a mess.

Improving really bad systems

How would you begin improving on a really bad system?
Let me explain what I mean before you recommend creating unit tests and refactoring. I could use those techniques but that would be pointless in this case.
Actually the system is so broken it doesn't do what it needs to do.
For example, the system should count how many messages it sends. It mostly works, but in some cases it "forgets" to increase the value of the message counter. The problem is that so many other modules with their own workarounds build upon this counter that if I corrected the counter, the system as a whole would become worse than it is currently. The solution could be to modify all the modules and remove their own corrections, but with 150+ modules that would require so much coordination that I cannot afford it.
Even worse, there are some problems that have workarounds not in the system itself, but in people's heads. For example, the system cannot represent more than four related messages in one message group. Some services would require five messages grouped together. The accounting department knows about this limitation, and every time they count the messages for these services, they count the message groups and multiply by 5/4 to get the correct number of messages. There is absolutely no documentation about these deviations, and nobody knows how many such things are present in the system now.
So how would you begin working on improving this system? What strategy would you follow?
A few additional things: I'm a one-man army working on this, so hiring enough people and redesigning/refactoring the system is not an acceptable answer. And in a few weeks or months I really should show some visible progress, so doing the refactoring myself over a couple of years is not an option either.
Some technical details: the system is written in Java and PHP, but I don't think that really matters. There are two databases behind it, an Oracle one and a PostgreSQL one. Besides the flaws mentioned before, the code itself smells too; it is really badly written and badly documented.
Additional info:
The counter issue is not a synchronization problem. The counter++ statements are added to some modules and not added to some other modules. A quick and dirty fix is to add them where they are missing. The long solution is to make it a kind of aspect for the modules that need it, making it impossible to forget later. I have no problems with fixing things like this, but if I made this change I would break over 10 other modules.
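A minimal sketch of what "making it an aspect" could look like without a full AOP framework (hypothetical names): route every send through one decorator so the increment cannot be forgotten.

```java
// Hypothetical sketch: every module sends through MessageSender, and the
// counting decorator makes it impossible to forget counter++ again.
interface MessageSender {
    void send(String message);
}

class CountingSender implements MessageSender {
    private final MessageSender delegate;
    private long counter = 0;

    CountingSender(MessageSender delegate) {
        this.delegate = delegate;
    }

    @Override
    public void send(String message) {
        delegate.send(message);   // the real work
        counter++;                // counted in exactly one place
    }

    long count() {
        return counter;
    }
}
```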
Update:
I accepted Greg D's answer. Even though I like Adam Bellaire's more, it wouldn't help me to know what would be ideal to know. Thanks all for the answers.
Put out the fires. If there are any issues of critical priority, whatever they are, you've got to handle them first. Hack it in if you must; with a smelly codebase, that's OK. You know you'll improve it going forward. This is your sales technique, targeted at whomever you're reporting to.
Pick some low-hanging fruit. I assume you're relatively new to this particular software and that you were re-tasked to deal with it. Find some apparently easy problems in a related subsystem of the code that shouldn't take more than a day or two to resolve apiece, and fix them. This may involve refactoring, or it may not. The goal is to familiarize yourself with the system and with the style of the original author. You may not get really lucky (One of the two incompetents who worked on my system before me always post-fixed his comments with four punctuation marks instead of one, which made it very easy to distinguish who wrote the particular segment of code.), but you'll develop insight into the author's weaknesses so you know what to look out for. Extensive, tight coupling with global state vs poor understanding of language tools, for example.
Set a big goal. If your experience parallels mine, you'll find yourself in a particular bit of spaghetti code more and more often as you perform the prior step. This is the first knot you need to untangle. With the experience you've gained understanding the component and knowledge about what the original author likely did wrong (and thus, what you need to watch out for), you can start envisioning a better model for this subset of the system. Don't worry if you still have to maintain some messy interfaces to maintain functionality, just take it one step at a time.
Lather, rinse, repeat! :)
Given time, consider adding unit tests for your new model one level underneath your interfaces with the rest of the system. Don't engrave the bad interfaces in code via tests that use them; you'll be changing them in a future iteration.
Addressing the particular issues you mention:
When you run into a situation that users are working around manually, talk with the users about changing it. Verify that they'll accept the change if you provide it before sinking the time into it. If they don't want the change, your job is to maintain the broken behavior.
When you run into a buggy component that multiple other components have worked around, I espouse a parallel component technique. Create a counter that works how the existing one should work. Provide a similar (or, if practical, identical) interface and slide the new component into the codebase. When you touch external components that work around the broken one, try to replace the old component with the new one. Similar interfaces ease porting of the code, and the old component is still around if the new one fails. Don't remove the old component until you can.
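A sketch of that parallel-component idea with hypothetical names: old and new counters sit behind the same interface, so callers can be moved over one at a time and moved back if the new one misbehaves.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the parallel-component technique: old and new
// implementations share one interface, so callers can be migrated (and
// rolled back) one at a time.
interface MessageCounter {
    void increment();
    long total();
}

class LegacyMessageCounter implements MessageCounter {
    private long total;
    public void increment() { /* sometimes "forgets" -- existing behavior left as-is */ }
    public long total()     { return total; }
}

class ReliableMessageCounter implements MessageCounter {
    private final AtomicLong total = new AtomicLong();
    public void increment() { total.incrementAndGet(); }
    public long total()     { return total.get(); }
}
```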
What is being asked of you right now? Are you being asked to implement functionality, or fix bugs? Do they even know what they want you to do?
If you don't have the manpower, time, or resources to "fix" the system as a whole, then all you can do is bail water. You're saying you should be able to make some "visible progress" in a few months' time. Well, with the system being as bad as you described, you may actually make the system worse. Under pressure to do something noticeable, you'll simply add code and make the system even more convoluted.
You need to refactor, eventually. There is no way around it. If you can find a way to refactor that is visible to your end users, that would be ideal, even if it takes 6-9 months or a year instead of "a few months." But if you can't, then you have a choice to make:
Refactor, and risk being viewed as "not accomplishing anything" despite your efforts
Don't refactor, accomplish "visible" goals, and make the system more convoluted and more difficult to refactor one day. (Maybe after you find a better job, and hope the next developer to come along can never find out where you live.)
Which one is most beneficial to you personally depends on your company's culture. Will they one day decide to hire more developers, or replace this system completely with some other product?
Conversely, if your efforts to "fix things" actually break other things, will they be understanding about the monstrosity you're being asked to tackle single-handedly?
No easy answers here, sorry. You have to evaluate based on your unique, individual situation.
This is a whole book that will basically say "unit test and refactor", but with more practical advice on how to do it:
http://ecx.images-amazon.com/images/I/51RCXGPXQ8L._SL500_AA240_.jpg
http://www.amazon.com/Working-Effectively-Legacy-Robert-Martin/dp/0131177052
You open the directory that contains this system with Windows Explorer. Then, press Ctrl-A, and then Shift-Delete. That sounds like an improvement in your case.
Seriously though: that counter sounds like it's got thread-safety issues. I'd put a lock around the increasing functions.
And regarding the rest of the system, you can't do the impossible so try to do the possible. You need to attack your system from two fronts. Take care of the more visibly problematic issues first, so you can show progress. At the same time, you should deal with the more infrastructural problems, so that you have a chance at actually fixing this thing some day.
Good luck, and may the source be with you.
Pick one area that would be of medium difficulty to refactor. Create a skeleton of the original code with only the method signatures of the existing ones; maybe even use an interface. Then start hacking away. You can even point the "new" methods to the old ones until you get to them.
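For instance (hypothetical names), the skeleton can start as an interface whose first implementation just forwards to the old code, so callers can switch over before anything is rewritten:

```java
// Hypothetical sketch: the skeleton's methods simply point at the old
// implementation until each one is rewritten.
interface ReportService {
    String buildDailyReport(String date);
}

class LegacyReportService {
    String buildDailyReportOldWay(String date) { return "legacy report for " + date; }
}

class ReportServiceSkeleton implements ReportService {
    private final LegacyReportService legacy = new LegacyReportService();

    @Override
    public String buildDailyReport(String date) {
        // TODO: replace with the rewritten logic; for now, delegate.
        return legacy.buildDailyReportOldWay(date);
    }
}
```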
Then, testing, testing, testing. Since there aren't any unit tests, maybe just use good old fashioned Voice-Activated-Unit Tests (people)? Or write your own tests as you go.
Document your progress as you go in some kind of repository, including frustrations and questions, so that the next poor schmuck who gets this project won't be where you are :).
Once you get the first part done, move on to the next. The key is to build on top of incremental progress, that's why you shouldn't start with the hardest part first; it'll be too easy to get demoralized.
Joel has a couple of articles on rewriting/refactoring:
http://www.joelonsoftware.com/articles/fog0000000069.html
http://www.joelonsoftware.com/articles/fog0000000348.html
I've been working with a legacy system with the same characteristics for almost three years now, and there are no shortcuts that I'm aware of.
What bothers me most with our legacy system is that I'm not allowed to fix some bugs, since many other functions could break if I fixed them. This calls for ugly workarounds, or for creating new versions of old functions. Calls to the old functions can then be replaced with the new ones one at a time (while testing).
I'm not sure what the goal of your task is, but I strongly advise you to touch as little of the code as possible. Only do what you need to do.
You may want to get as much as possible documented by interviewing people. This is a huge task, since you don't know which questions to ask, and people will have forgotten a lot of details.
Other than that: make sure you're getting paid and enough moral support. There will be weeping and gnashing of teeth...
Well you need to start somewhere, and it sounds like there are bugs that need fixing. I would work through those bugs, making quick win refactorings, and writing any unit tests possible along the way. I would also use a tool like SourceMonitor to identify some of the most 'complex' parts of code in the system and see if I could simplify their design in any way. Ultimately, you just have to accept that it will be a slow process, and make small steps towards a better system.
I would try to pick a part of the system that could be extracted and rewritten in isolation fairly quickly. Even if it doesn't do much, you could show progress pretty quickly, and you don't have the problem of interfacing with the legacy code directly.
Hopefully, if you could pick off a few such tasks, they will see you making visible progress, and you could put forward an argument for hiring more people to rewrite the bigger modules. When parts of the system rely on broken behaviour, you don't have much choice but to separate before you fix anything.
Hopefully, you could gradually build a team capable of rewriting the whole lot.
All of this would have to go hand in hand with some decent training, otherwise people's old habits will stick, and your work will get the blame when things don't work as expected.
Good luck!
Deprecate everything that currently exists that has problems, and write new ones that work correctly. Document as much as you can about what will change and put big red flashing signs all over the place pointing to this documentation.
By doing it that way, you can keep your existing bugs (the ones that are being compensated for somewhere else) around without slowing down your progress towards getting an actual working system.
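In Java, for example, the big red flashing signs can be backed by the compiler (hypothetical names):

```java
// Hypothetical example: the quirky method stays for the modules that
// depend on it, but every new caller gets a compiler warning pointing
// at the replacement.
class MessageStats {
    /**
     * @deprecated Undercounts in some modules; use {@link #recordSent()} instead.
     */
    @Deprecated
    void incrementCounterMaybe() { /* old, quirky behavior preserved */ }

    void recordSent() { /* correct implementation */ }
}
```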

Legacy Code Nightmare [closed]

I've inherited a project where the class diagrams closely resemble a spider web on a plate of spaghetti. I've written about 300 unit tests in the past two months to give myself a safety net covering the main executable.
I have my library of agile development books within reach at any given moment:
Working Effectively with Legacy Code
Refactoring
Code Complete
Agile Principles Patterns and Practices in C#
etc.
The problem is everything I touch seems to break something else.
The UI classes have business logic and database code mixed in. There are mutual dependencies between a number of classes. There's a couple of god classes that break every time I change any of the other classes. There's also a mutant singleton/utility class with about half instance methods and half static methods (though ironically the static methods rely on the instance and the instance methods don't).
My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes.
I'm sometimes tempted to think they used some form of weak obfuscation for either job security or as a last farewell before handing the code over.
Are there any good resources for detangling this mess? The books I have are helpful but only seem to cover half the scenarios I'm running into.
It sounds like you're tackling it in the right way.
Test
Refactor
Test again
Unfortunately, this can be a slow and tedious process. There's really no substitute for digging in and understanding what the code is trying to accomplish.
One book that I can recommend (if you don't already have it filed under "etc.") is Refactoring to Patterns. It's geared towards people who are in your exact situation.
I'm working in a similar situation.
If it is not a small utility but a big enterprise project then it is:
a) too late to fix it
b) beyond the capabilities of a single person to attempt a)
c) can only be fixed by a complete rewrite of the stuff, which is out of the question
Refactoring can in many cases only be attempted in your private time, at your personal risk. If you don't get an explicit mandate to do it as part of your daily job, then you're likely to not even get any credit for it. You may even be criticized for "pointlessly wasting time on something that has worked perfectly for a long time already".
Just continue hacking it the way it has been hacked before, receive your paycheck and so on. When you get completely frustrated or the system reaches the point of being non-hackable any further, find another job.
EDIT: Whenever I attempt to address the question of the true architecture and doing things the right way, I usually get laughed at directly by the responsible managers, who say something like "I don't give a damn about good architecture" (attempted translation from German). I have personally brought one very bad component to the point of non-hackability, while of course having given warnings months in advance. They then had to cancel some features promised to customers because they were no longer doable. No one touches it anymore...
I've worked this job before. I spent just over two years on a legacy beast that is very similar. It took two of us over a year just to stabilize everything (it's still broken, but it's better).
First thing -- get exception logging into the app if it doesn't exist already. We used FogBugz, and it took us about a month to get reporting integrated into our app; it wasn't perfect right away, but it was reporting errors automatically. It's usually pretty safe to implement try-catch blocks in all your events, and that will cover most of your errors.
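The try-catch-in-every-event idea can be kept thin by routing handlers through one wrapper; here is a minimal sketch (the logging line stands in for whatever reporting tool you integrate):

```java
// Hypothetical sketch: every UI event handler runs through this wrapper,
// so unhandled exceptions get logged instead of silently killing the app.
final class SafeEvents {
    static void run(String eventName, Runnable handler) {
        try {
            handler.run();
        } catch (Exception e) {
            // Stand-in for the real error-reporting integration.
            System.err.println("Unhandled exception in " + eventName + ": " + e);
        }
    }
}

// Usage inside an event handler (hypothetical call):
//   SafeEvents.run("saveButtonClicked", () -> saveOrder(currentOrder));
```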
From there fix the bugs that come in first. Then fight the small battles, especially those based on the bugs. If you fix a bug that unexpectedly affects something else, refactor that block so that it is decoupled from the rest of the code.
It will take some extreme measures to rewrite a big, critical-to-company-success application, no matter how bad it is. Even if you get permission to do so, you'll be spending too much time supporting the legacy application to make any progress on the rewrite anyway. If you do many small refactorings, eventually either the big ones won't be that big or you'll have really good foundation classes for your rewrite.
One thing to take away from this is that it is a great experience. It will be frustrating, but you will learn a lot.
I have (once) come across code that was so insanely tangled that I couldn't fix it with a functional duplicate in a reasonable amount of time. That was sort of a special case though, as it was a parser and I had no idea how many clients might be "using" some of the bugs it had. Rendering hundreds of "working" source files erroneous was not a good option.
Most of the time it is eminently doable, just daunting. Read through that refactoring book.
I generally start fixing bad code by moving things around a bit (without actually changing implementation code more than required) so that modules and classes are at least somewhat coherent.
When that is done, you can take your more coherent class and rewrite its guts to perform the exact same way, but this time with sensible code. This is the tricky part with management, as they generally don't like to hear that you are going to take weeks to code and debug something that will behave exactly the same (if all goes well).
During this process I guarantee you will discover tons of bugs, and outright design stupidities. It's OK to fix trivial bugs while recoding, but otherwise leave such things for later.
Once this is done with a couple of classes, you will start to see where things can be modularized better, designed better, etc. Plus it will be easier to make such changes without impacting unrelated things because the code is now more modular, and you probably know it thoroughly.
Mostly, that sounds pretty bad. But I don't understand this part:
My predecessors even thought it would be clever to use all the datasets backwards. Every database update is sent directly to the db server as parameters in a stored procedure, then the datasets are manually refreshed so the UI will display the most recent changes.
That sounds pretty close to a way I frequently write things. What's wrong with this? What's the correct way?
If your refactorings are breaking code, particularly code that seems to be unrelated, then you're trying to do too much at a time.
I recommend a first-pass refactoring where all you do is ExtractMethod: the goal is simply to name each step in the code, without any attempts at consolidation whatsoever.
After that, think about breaking dependencies, replacing singletons, consolidation.
If your refactorings are breaking things, then it means you don't have adequate unit test coverage - as the unit tests should have broken first. I recommend you get better unit test coverage second, after getting exception logging into place.
I then recommend you do small refactorings first - Extract Method to break large methods into understandable pieces; Introduce Variable to remove some duplication within a method; maybe Introduce Parameter if you find duplication between the variables used by your callers and the callee.
And run the unit test suite after each refactoring or set of refactorings. I'd say run them all until you gain confidence about which tests will need to be rerun every time.
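As a tiny hypothetical example of the Introduce Parameter step: when the callee re-derives a value its caller already has, compute it once and pass it in.

```java
// Before (hypothetical): the callee re-derives a value its caller already has.
class InvoicePrinterBefore {
    void print(Order order) {
        double total = order.quantity * order.unitPrice;   // computed here...
        printTotal(order);
        System.out.println("grand total: " + total);
    }
    private void printTotal(Order order) {
        double total = order.quantity * order.unitPrice;   // ...and duplicated here
        System.out.println("total: " + total);
    }
}

// After Introduce Parameter: compute once, pass it in.
class InvoicePrinterAfter {
    void print(Order order) {
        double total = order.quantity * order.unitPrice;
        printTotal(total);
        System.out.println("grand total: " + total);
    }
    private void printTotal(double total) {
        System.out.println("total: " + total);
    }
}

class Order {
    int quantity;
    double unitPrice;
}
```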
No book will be able to cover all possible scenarios. It also depends on what you'll be expected to do with the project and whether there is any kind of external specification.
If you'll only have to do occasional small changes, just do those and don't bother starting to refactor.
If there is a specification (or you can get someone to write it), consider a complete rewrite if it can be justified by the foreseeable amount of changes to the project
If "the implementation is the specification" and there are a lot of changes planned, then you're pretty much hosed. Write LOTS of unit tests and start refactoring in small steps.
Actually, unit tests are going to be invaluable no matter what you do (if you can write them to an interface that's not going to change much with refactorings or a rewrite, that is).
See blog post Anatomy of an Anti-Corruption Layer, Part 1 and Anatomy of an Anti-Corruption Layer, Part 2.
It cites Eric Evans, Domain-Driven Design: Tackling Complexity in the Heart of Software:
Access the crap behind a facade
You could extract and then refactor some part of it, to break the dependencies and isolate layers into different modules, libraries, assemblies, or directories. Then you re-inject the cleaned parts into the application with a strangler application strategy. Lather, rinse, repeat.
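A minimal sketch of that facade idea, with hypothetical names: new code talks to a clean interface, and the ugly calls into the legacy system are quarantined inside one adapter that the strangler gradually replaces.

```java
// Hypothetical sketch of an anti-corruption layer: the clean interface is
// what the rest of the new code sees; the adapter is the only place that
// touches the legacy API.
interface CustomerDirectory {
    String emailFor(String customerId);
}

class LegacyCrm {                                    // stands in for the old system
    String FETCH_CUST_REC(String id) { return id + "|Jane|jane@example.com"; }
}

class LegacyCrmAdapter implements CustomerDirectory {
    private final LegacyCrm crm = new LegacyCrm();

    @Override
    public String emailFor(String customerId) {
        // All knowledge of the legacy record format is trapped in here.
        return crm.FETCH_CUST_REC(customerId).split("\\|")[2];
    }
}
```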
Good luck, that is the tough part of being a developer.
I think your approach is good, but you need to focus on delivering business value (number of unit tests is not a measure of business value, but it may give you an indication if you are on or off track). It's important to have identified the behaviors that need to be changed, prioritize, and focus on the top ones.
The other piece of advice is to remain humble. Realize that if you wrote something so large under real deadlines and someone else saw your code, they would probably have problems understanding it as well. There is a skill in writing clean code, and there is a more important skill in dealing with other people's code.
The last piece of advice is to try to leverage the rest of your team. Past members may know information about the system that you can learn. Also, they may be able to help test behaviors. I know the ideal is to have automated tests, but if someone can help by verifying things for you manually, consider getting their help.
I particularly like the diagram in Code Complete, in which you start with just legacy code, a rectangle of fuzzy grey texture. Then when you replace some of it, you have fuzzy grey at the bottom, solid white at the top, and a jagged line representing the interface between the two.
That is, everything is either 'nasty old stuff' or 'nice new stuff'. One side of the line or the other.
The line is jagged, because you're migrating different parts of the system at different rates.
As you work, the jagged line gradually descends, until you have more white than grey, and eventually it's all white.
Of course, that doesn't make the specifics any easier for you. But it does give you a model you can use to monitor your progress. At any one time you should have a clear understanding of where the line is: which bits are new, which are old, and how the two sides communicate.
You might find the following post useful:
http://refactoringin.net/?p=36
As it is said in the post, don't dismiss a complete rewrite that easily. Also, if at all possible, try to replace whole layers or tiers with a third-party solution (for example, an ORM for persistence) or with new code. But most important of all, try to understand the logic (problem domain) behind the code.
