What criteria would a buggy feature have to meet in order to be scrapped as opposed to fixing it? - debugging

I am crowdsourcing some ideas on criteria for scrapping a buggy feature instead of fixing it. When is it worth keeping and improving on the feature and when is it worth starting over?
I am reading other SO posts about refactoring in small bits vs a complete rewrite. While these have been helpful, they're not quite hitting the nail on the head re: my question.
Hoping to get some pros and cons for fixing vs scrapping.

Related

Premature refactoring? [closed]

We have all heard of premature optimization, but what do you think about premature refactoring? Is there any such thing in your opinion? Here is what I am getting at.
First off, reading Martin Fowler's seminal work "Refactoring" quite literally changed my life in regards to programming.
One thing that I have noticed, however, is that if I start refactoring a class or framework too quickly, I sometimes find myself coded into a corner so-to-speak. Now, I suspect that the issue is not really refactoring per se, but maybe premature/poor design decisions/assumptions.
What are your thoughts, insights and/or opinions on this issue? Do you have any advice or common anti-patterns related to this issue?
EDIT:
From reading your answers and reflecting on this issue more, I think I have come to the realization that my problem in this case is really an issue of "premature design" and not necessarily "premature refactoring". I have been guilty of assuming a design and refactoring in that direction too early in the coding process. A little patience on my part to maintain a level of design agnosticism and focus on refactoring towards clean code would keep me from heading down these design rabbit trails.
I actually think the opposite.
The earlier you start thinking about whether or not your design needs refactoring, the better. Refactor constantly, so it's never a large issue.
I've also found that the more I refactor early on, the better I've gotten about writing code more cleanly up front. I tend to create fewer large methods, and have fewer problems.
However, if you find yourself "refactoring" yourself into a corner, I'd expect that is more a matter of a lack of initial design or a lack of planning for the scope of use of a class. Try writing out how you want to use the class or framework before you start writing the code - it may help you avoid that issue. I think this is also one advantage of test-driven development: it forces you to look at using your object before it's written.
Remember, refactoring technically should NEVER lock you into a corner - it's about reworking the internals without changing how a class is used. If you're trapping yourself by refactoring, it means your initial design was flawed.
Chances are you'll find that, over time, this issue gets better and better. Your class and framework design will probably end up more flexible.
We have all heard of Premature Optimization, but what do you think about Premature Refactoring? Is there any such thing in your opinion?
Yes, there is. Refactoring is a way of paying down technical debt that has accrued over the life of your development process. However, the mere accrual of technical debt is not necessarily a bad thing.
To see why, imagine that you are writing tax-return analysis software for the IRS. Suddenly, new regulations are introduced at the last minute which break several of your original assumptions. Although you designed well, your domain model has fundamentally shifted from under your feet in at least one important place. It's April 14th, and the project must go live tomorrow, come hell or high water. What do you do?
If you implement a nuts-and-bolts solution at the cost of some moderate technical debt, your system will become more rigid and less able to withstand another round of these changes. But the site can go live and proceed onward, and there will be no risk of delivering late; you're confident you can make the required changes.
On the other hand, if you take the time to refactor the solution so that it now supports the new design in a more sophisticated and flexible way, you'll have no trouble adapting to future changes. But you run the risk of your company's flagship product running up against the clock; you're not sure whether the redesign will take longer than the one day you have.
In this case, the first option is the better choice. Assuming you have little previous technical debt, it's worth it to take your lumps now and pay it down later. This is, of course, a business decision, and not a design one.
I think it is possible to refactor too early.
At the nuts-and-bolts end of design is the code itself. This final stage of the design comes into existence as you code; it will at times be flawed, and you'll see that as the code evolves. If you refactor too early it makes it harder to change the flawed design.
For example, it's much easier to delete a single long function when you realise it's rubbish or going in the wrong direction than it is to delete a nice well-formed function and the functions it uses and the functions they use, etc., whilst ensuring you're not breaking something else that was part of the refactor.
It could be said that perhaps you should have spent more time designing, but a key element in an agile process is that coding is part of the design process and in most cases, having put some reasonable effort into design, it's better to just get on with it.
Edit, in response to comments:
Design isn't done until you've written code. We can't solve all problems in pre-coding design; the whole point behind Agile is that coding is problem solving. If the non-code design solved all problems up front before coding, there would be no need to refactor - we would simply convert the design to well-factored code in one step.
Anyone remember the late 1980s and early 1990s structured design methods, the ones where you got all the problems solved in clever diagrams before you wrote a line of code?
Premature refactoring is refactoring without unit tests. At that point you are simply not ready for refactoring. First get some unit tests in place and then start thinking about refactoring. Otherwise you will (or at least might) hurt the project more than help it.
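As a rough illustration of "first get some unit tests" (the class, method, and values below are invented for the example, not taken from the answer), a characterization test simply pins down what the code does today so a later refactoring can be checked against it:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A characterization test written *before* refactoring: it records the current
// behaviour of the legacy code, not the behaviour we wish it had, so the
// refactoring can be verified against the same observable result.
public class PriceCalculatorCharacterizationTest {

    @Test
    public void recordsCurrentBehaviourOfTotalFor() {
        PriceCalculator legacy = new PriceCalculator();
        // 29.97 is simply whatever the legacy code returns today for this input.
        assertEquals(29.97, legacy.totalFor(3, 9.99), 0.001);
    }
}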
I am a strong believer in constant refactoring. There is no reason to wait until some specific time to start refactoring.
Anytime you see something that should be done better, Refactor.
Just keep this in mind: I know a developer (a pure genius) who refactors so much (he is so smart he can always find a better way) that he never finishes a project.
I think any "1.0" project is susceptible to this kind of ... let's call it "iterative design". If you don't have a clear spec before you start designing you're objects, you'll likely think of many designs and approaches to problems.
So, I think overcoming this specific problem is to clearly design things before you start writing code.
There are a couple of promising solutions to this type of problem, depending on the situation.
If the problem is that you decide something can be optimized in a certain way and you extract a method or something, then realize that because of that decision you are forced to code everything else in a convoluted way, the problem is probably that you didn't think far enough in the design process. If there had been a well-written and well-planned spec, you would have known about this problem ahead of time (unless you didn't read the spec, but that's another issue :) )
Depending on the situation, rapid prototyping can also address this problem, since you'll have a better idea of these implementation details when you start working on the real thing.
The reason why premature optimization is bad is that optimization usually leads to a worse design - unlike refactoring, which leads to a better and cleaner design if done thoughtfully and right. What I've found useful for judging whether a refactoring is worthwhile is first looking at our UML diagram to visualize the change, and then writing the code-doc (e.g. Javadoc) for the class and adding stubs ahead of any real code. Of course experience helps a lot with that; if in doubt, ask your favorite architect ;)
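For instance, the "code-doc first, stubs before real code" habit might look roughly like this (the class and method are invented for the example, not taken from the answer above):

/**
 * Resolves the shipping cost for an order from its weight and destination.
 * <p>
 * The Javadoc and the stub below are written before any real implementation,
 * which forces the class's responsibility and its callers' expectations to be
 * spelled out while the design is still cheap to change.
 */
public class ShippingCostResolver {

    /**
     * @param weightKg    total parcel weight in kilograms
     * @param destination ISO country code of the delivery address
     * @return the shipping cost in the shop's base currency
     */
    public double costFor(double weightKg, String destination) {
        // Stub only: documented intent, no behaviour yet.
        throw new UnsupportedOperationException("not implemented yet");
    }
}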

How to justify to your colleagues that they produce crappy code?

I am finding somewhat difficult to carry on working in my current job.
The codebase has become a bit wild lately (but definitely not the worst I've seen), and I'm having a hard time dealing with some parts of the code. I could be stupid, but most likely it's just that it demotivates me a lot to start working on something that is hard to reason about.
My boss is already aware of my thoughts - I expressed what it feels like to work like this. He asked me to provide examples of what was wrong. When I pointed out two or three small issues, he said "yeah, ok", but also that refactoring costs him a lot of money and that we have to get the product out (not the first time I've heard this).
I have to admit that the examples were not the most compelling, but the problem is actually tough to explain. It's made up of a lot of tiny "bad decisions" throughout the codebase (and, admittedly, a lot of this is subjective): bad naming, dealing with nulls, boilerplate, not making code reusable (or the opposite), and so on. It can be tiring to re-think someone else's code over again just to justify why I would have done it differently.
Do you have thoughts on how to deal with this?
I am a bit fed up with having to hack around a quick 'n' dirty codebase every time!
Sometimes your fellow programmers do things very differently than you, and things you might feel are way wrong might actually have positive aspects. We all have our schools we come from. I've come across programmers complaining about things I saw no problem with about as often as I've felt something needed to be complained about myself.
Make sure you can tie whatever you complain about to a concrete disadvantage, if for no other reason than so that you can motivate middle management about the improvements to make. Things that are hard to reduce to measurable facts usually originate from differences in taste/style rather than quality (there are books to read about this subject). The answer posted by smacl has good and concrete advice!
If you can tie your concern to a real disadvantage, then I really do not agree with people who say that one has to "accept" situations like this. I've been exposed to this problem more than once, and let me tell you, refactoring is not the solution to the problem. Refactoring only fixes the symptoms.
Accepting a situation like this is the same as saying "bad-quality product lines and expensive, frustrating maintenance are something my company can live with". This is of course seldom the case. However, management (i.e., those with the go/no-go on which projects to prioritize) are very often not technically aware of what the problems are, or why development is expensive. They shouldn't have to be, for that matter.
That's why you need a development organization with technical leads, chief architects, a good organisational structure and tiered model etc. Experienced software professionals who have seen where the road leads to if you ignore certain aspects of development. It's about changing the "culture" of your team(s).
Either you stick with your company and try to change how you do things from the roots, or you find another place to work and make sure you find out during the interview exactly how they work in every-day development.
Good luck
I recently faced a very similar problem and a friend gave me some advice that helped a great deal. He said: "keep yourself out of it."
What he meant was that you must communicate the problems, because they are real, costly problems with consequences in terms of time and money. But when you do communicate, talk only about the consequences for the organization. Do not mention the consequences to you, because then it just sounds like whining and will be ignored.
For example:
Not keeping yourself out of it:
"The other developers use these obscure, misleading identifiers and then I have to spend hours going over the code trying to discover what they meant. It's taking up a lot of my time."
Keeping yourself out of it:
"It would be very helpful and cost effective to do some refactoring of class and variable names and also establish some coding standards around identifiers. The immediate payoff will be an easier-to-understand codebase for everyone, leading to better productivity. The longer-term payoff will be that later we'll be able to modify the code and fix things faster. If a critical bug is discovered right before a release, an understandable codebase will be really important."
I hope that helps.
1) Make the problem more visible and get management buy-in
Keep a very detailed diary of the time spent on various coding tasks over the period of about a month. At the end of the month analyse and summarise the contents for your boss, i.e. time wasted and hence money wasted, to illustrate that change of some form is necessary.
2) Think of a cost effective way of moving forward
For example: rather than refactoring the entire code base, separate interfaces from implementations and enforce tighter standards, including unit tests, naming conventions, etc., at the interface layer (a small sketch follows this list). Thus each programmer can have confidence in using code that they have not written. While this is sweeping the crap under the carpet to a certain extent, it is a good way of preparing for larger-scale refactoring.
It is important from a management perspective that workflow is not interrupted, and positive results are visible, so plan accordingly.
3) Agree on longer-term improvements with your co-workers
Sit down and agree on reasonable coding standards for future code with the other programmers.
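A minimal sketch of the interface-versus-implementation split suggested in point 2 above. The names (CustomerRepository, LegacyDb, Customer) are invented for the illustration; the point is only that other programmers code against the clean interface while the untidy code stays wrapped behind it:

// CustomerRepository.java - the agreed contract; this is where naming
// standards, documentation, and unit tests are enforced.
public interface CustomerRepository {
    Customer findById(long id);
    void save(Customer customer);
}

// LegacyCustomerRepository.java - wraps the existing, messy code instead of
// rewriting it; callers never touch LegacyDb directly, so it can be cleaned
// up or replaced later without breaking them.
public class LegacyCustomerRepository implements CustomerRepository {

    @Override
    public Customer findById(long id) {
        return LegacyDb.fetchCust(id);
    }

    @Override
    public void save(Customer customer) {
        LegacyDb.writeCust(customer);
    }
}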
Perhaps you could set up monthly meetings, and at those meetings you could demonstrate good and bad code. Obviously you don't want to point fingers, so you'd want to use generic code examples based on stuff you saw in your project. This way you can constructively gather support from others for your style. You might want to compile these after the meetings so people can easily reference them.
I think it is really easy to point out issues and complain, but mentoring people and helping them change requires effort. It isn't an easy task, but if you are having trouble staying motivated in your job, perhaps this would give you a nice burst of motivation. You might learn some things along the way.
You'll find that this is common-place. What you can do is accept that things are done differently by different people. As you fix bugs or add features, you'll get a brief window into a sub-section of the application that you can improve. When you work on the code, you can make it better, and they don't need to know that you're piecemeal improving the code.
Be very careful though. Sometimes code is written in a way that looks 'hacked', but solves a bug that is not easy to discern. Especially if it is older code which has been tried and tested.
On another note, complaining will only get you viewed as a complainer. Think about what outcome you want, and what actions will most likely produce that outcome. You will always hear the answer 'No' when you ask, 'Can I do X days of work for absolutely no noticeable result?'
You could quit and hope to find something better.
Or, you could stick it out and try to improve the code that you can control, when you can control it. No matter how well intentioned the developers are, if there is more than one developer the code base will be "ugly" by a competent developer's standards. Work with the other developers to improve their abilities and refactor code as you make enhancements.
For starters:
Enforce the use of static code analysis tools. Every language has a few well known tools.
Show some before and after refactored code examples, and explain why you think it's better. Try not to put any one person on the spot.
Code reviews by experienced developers.
Keep in mind, some developers can't be helped no matter how much you try...
If someone critiques your code be polite and open minded, you might learn something.
Track cyclomatic complexity and the number of changesets/bugs. Complex code is more likely to break, causing more bugs, which cause more changes, which cost more money!
99% of the time you never get to choose the people you work with. Not all relationships work out, be they work or otherwise.
It would be best if your project was broken up enough so that each developer can contribute to a spec of what the other needs, so programmers don't step on each other's toes.
Getting people to change their coding style is hard. It takes a cast-iron technical lead who is committed to such things and will back you up when you bring it up. Management types can't do this; the leadership needs to provide the technical details.
It sounds to me like you don't have a problem with the code so much as with your coworkers. It will probably be very difficult for you to force the changes you want to see. Your best bet would probably be to start updating your resume and keep your eyes open for other opportunities.
I think that once you're in the middle of the weeds, you do not really have a good chance of getting things done right, you just have to get them done. I would say most developers do not like firefighting and want the ideal code base, but in my opinion this requires you to spend the time up front planning the system out.
I'd recommend trying to work with your manager to ensure that the areas you feel are lacking now are not lacking in the next project. Maybe that means putting you in the lead role, having more code reviews with peers, or further training for the entire team.
Either way, I think this is something that most of us go through. I do agree with the other person advising some caution on this. I know that code I wrote yesterday seemed great at the time, and looking back on it I can probably find 10 other ways to do it and make it look cleaner.
Have you considered maybe adding FxCop to the automated builds to enforce coding style? Other than that, you could try suggesting TDD, which gives whoever writes the tests the power to enforce that the interfaces for each class are structured in a particular way (a small test-first sketch follows below).
Off the top of my head, that's all that I can think of.
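To illustrate that TDD point: a test written before the class exists effectively fixes the shape of its interface, since the implementer has to satisfy the calls the test makes. The InvoiceFormatter names here are hypothetical:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Written first, before InvoiceFormatter exists: whoever writes this test has,
// in effect, decided the constructor and method signature the class must expose.
public class InvoiceFormatterTest {

    @Test
    public void formatsTotalWithTwoDecimalPlaces() {
        InvoiceFormatter formatter = new InvoiceFormatter("EUR");
        String line = formatter.formatTotal(1234.5);
        assertTrue(line.contains("1234.50"));
    }
}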
Things in life are not perfect and if you start nitpicking, feathers will be ruffled and relationships soured.
The best method is to pick your battles carefully. If something is small enough ignore it and live with it. If it is big and worthwhile (i.e. the management sees ROI in backing you) go for it.
This is apt for your situation...
God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
One thing I try to do, and it may help you: if a part of the code is bad, and the idea you propose to fix it is agreed to be best but the "no time" excuse is given, why don't you rewrite it, say, on your own time? If you decide to stick around at that job for a while it will only help you, and you will learn and become a better programmer.
Note that it is a good idea, and I would even say required, to do a complete code review of that change before check-in, and you should try to time the check-in so that it happens before a complete regression test cycle for a release. That way your refactoring is completely tested out. Over a period of 6 months or so, it will start showing a beneficial impact, and you can then ask for time allocation for this, with proof to back it up.
The only thing that has a chance of convincing management is demonstrating that the things you are citing as perceived problems become actual problems.
To try to take advantage of this, try to keep the "complainer" tone down to a minimum, that is, focus on how this affects the bottom line rather than how it makes you feel. Point out possible consequences of poor decisions that you see being made. If those consequences come to pass, and they cost more than an up-front fix would have, gently remind management that you foresaw the difficulty and provide a helpful suggestion as to how future similar costs can be avoided with a little up-front effort.
The problem is, in many organizations, the problems will never cause enough of a problem for management to care, or if they do, they won't see the connection between your perception of the problem and the actual problem the way it occurs. In these cases, you end up seeming like a needlessly persnickety technical person, which isn't a reputation you want to have.
So my advice is, pick your battles. If there is something very egregious that others are about to let slip, then you can speak up and perhaps be vindicated later. For the little details that just grind away at you, I'm afraid there's not much you can do but put up with it.
Show them their own forgotten code disguised as yours for critique.
Take an old piece of their code they have forgotten about
Pretend you wrote it
Ask them to figure out something with it
Make sure they point out how bad the code is for whatever reason
Add your own items. Brainstorm what should be done since it's your fault.
Let them know you didn't know how to bring it up without offending them, but that it's their code.
If they recall that they wrote it, they might catch on..
If you have a good relationship with your manager, you might be able to use this to work yourself into a "Senior" or "Lead" Developer role. You could propose that it would be best if one person on the team takes technical leadership of the code base. It would be your job to review the code of others and ask them to make improvements when you feel it is necessary. If you go this route, just make sure to take it slowly. If you ask for a lot very quickly, then you could end up pissing off all the other developers.

How not to rush yourself? [closed]

I often find that I do less than complete work on a feature, especially in the design phase. I detect several reasons:
I'm over-optimistic
I feel the need to provide quick solutions, so sometimes I fool myself into thinking the design is fool-proof when in fact it's still full of holes, just to get the job done faster. Of course I end up paying dearly later.
I'm aware of this behavior of mine for some time, yet I still find I don't manage to compensate. Have you encountered similar problems? How do you approach solving them?
I use a couple of techniques. The first is a simple paper to-do list. In the morning I write down my tasks for the day. I try to work on a task until I can cross it off. I cross it off only when I'm done to my own satisfaction. My to-do list helps me stay focused. When an interruption comes in, I can consciously choose whether it is important enough to interrupt what I'm doing now.
The second technique I use is to give up on the idea of "done" for a design. Instead, I focus on what I've started calling "successions", where a design goes through predictable stages. Each stage supports the current functionality well and will be succeeded at some point by the next stage. This lets me do a good job, a job I can be proud of, without over-designing.
I have the intuition that there is a small catalog of such successions (like http://www.threeriversinstitute.org/FirstOneThenMany.html) that would cover most of design. In the meantime, I try to remember that "sufficient to the day are the troubles thereof".
I run into this problem a lot.
My solution is a notebook. (The old fashioned paper kind).
I write out how I'm planning on implementing the solution as a bulleted overview list, and then I try to flesh out each point on the list.
Often, during that process, I come across issues I hadn't thought of.
Of course, the 80/20 rule still applies... I still come across things when I'm actually doing the implementation that hadn't occurred to me, but with experience these tend to diminish.
EDIT: If I'm still not sure at the end of this process, I put together a throwaway prototype testbed... It's important to make sure it's throwaway, because otherwise you run the risk of including some nasty hacks in your real codebase.
It's very common to miss edge-cases and detail when you're in the planning phase of a project, especially in the software development field. Please don't feel that this is a personal failing; it's something endemic.
To counter this, many software development methodologies have emerged. Most recently there has been a shift by many development teams to 'agile' methods, where there is a focus on rapid development with little up-front technical design (after all, many complexities are only discovered when you actually begin developing). I'm currently using the Scrum system, which has been excellent in my small team:
http://en.wikipedia.org/wiki/Agile_methods
http://en.wikipedia.org/wiki/Scrum_%28development%29
If you find that your organisation will not accept what they may regard as a radical shift in approach, it may be worth investigating whether they will agree to the development of a prototype system. This means that you could code up a feature to investigate the technologies involved and judge whether it's feasible, without having to commit to full development, a quality bar, testing schedules etc. The prototype should be thrown away once the feasibility has been proved or disproved, then proper development may begin, including all that you've learned in the process.
If your problem is more related to time management, then I'd recommend the Getting Things Done approach (http://en.wikipedia.org/wiki/Getting_things_done). This is pragmatic and simple, concentrating on making you productive without overloading you with information that isn't immediately relevant to your current work. I've found that I get overwhelmed with project/feature ideas at times and it really helps to write everything down and file it for a later time when I have the resources available to work effectively.
I hope this helps and best of luck!
Communication.
The best way to not rush yourself into programming mistakes is communication. Yes, good ol' fashioned accountability. The more another person in the office is involved in the process, the better the outcome tends to be. If a programmer just takes on the task without any concern for anybody else, then there is a higher possibility of mistakes.
Accountability Checklist:
How do we support this?
Who needs to know what has changed?
Why are we doing this in the first place?
Will there be anybody who doesn't want this changed?
Will someone else understand how I did this?
How will the user perceive and use this change?
A skeptical comrade is usually good enough to help. Functional specifications are good; they usually answer all of these thoughts. But sometimes a conversation with another person can help you with it, and you can get changes out the door faster.
I have learned, through years of mistakes (though still making them), that almost anything I want to use repeatedly, or distribute, needs to be designed properly. So getting burned enough times will end your optimism.
When getting pressure from management, I tell them I will have to put in the thought anyway, so I should do it when it's cheap. I think on paper as well, so I can actually prove that I'm doing something and it keeps my fingers on the keyboard, both of which provides a soothing effect to management. ;-)
At the risk of sounding obvious - be pessimistic. I had a few experiences where I thought "that should take a few hours" and it ended up taking a couple days because of all the little things that pop up unexpectedly.
By far the best way I've found to manage things is to (much like Andrew's answer) write out the design and requirements as a starting point. Then I go through and look for weak points in the design, gotchas and additional use cases etc. I try to look at this as a critical exercise - there's no code written yet, so this is the time to be totally ruthless and look for every weak point. Look for error conditions you'll have to handle, and whatever amount of time you think it will take to complete each feature/function, pad that amount by a lot. I've had times where I've doubled my initial estimate and still not been that far off the mark.
It's very hard as a programmer to realistically project debugging time - writing the code is easy to estimate, but debugging that into functioning, valid code is something else entirely. Therefore I find there's no exact science to it but I just pad tasks by a whole bunch, so that I have plenty of breathing room for debugging.
See also Evidence Based Scheduling which is a fascinating concept in scheduling developed by FogCreek for their FogBugz product.
You and the rest of the world.
You need a more detailed design, a more accurate estimate, and the willingness to accept that sometimes the optimal solution is not necessarily the best solution (e.g., you could code some loop in assembler to get optimal performance, but that's going to take a lot longer than just doing
for (int i = 1; i <= 10; i++) {}
). Is the time spent doing it really worth it for an accounting package, as opposed to a missile system?
I like designing, but over time I've found that doing much design up front is a lot like building castles in the sky - it's too much speculation, however well-educated, missing the critical feedback from actually implementing and using the design.
So today I'm much more into accepting that while implementing a design I will learn a lot of new stuff about it, and I need to feed that learning back into the design. Doing that is a skill that is fun to learn: keeping a design flexible by keeping it simple, free of duplication, cohesive and decoupled; changing the design in small, controlled steps (= refactoring); and writing the necessary extensive suite of automated tests that makes this kind of change safe.
This seems to be a much more effective approach to me than getting better at "up-front design speculation" - and additionally it makes me equally well prepared for the inevitable moment when the design needs to be changed due to a simply unforeseeable change in the requirements.
Divide, divide, divide. List all the steps that will be required to finish the project, then list all the steps those steps will require to be concluded, and so on until you reach atomic items you are absolutely sure you can finish in a day or less. Add up the durations of all these items to arrive at a length of time.
Then double it. Now you have a number that, if depressing, is at least somewhat realistic.
If possible "Sleep on your design" before publishing it. I find after I leave work, I usually think of things I have missed. This usually happens while I am lying in bed before falling asleep or even while showering the next day.
I also find it valuable to have a peer/friend that I trust review what I have before distributing it. Somebody else almost always sees something I didn't think of or miscommunicated.
I like to do as others stated here: write down in pseudo code what the flow of your app will be. This immediately highlights some detailed areas that may require further attention and that were not apparent up front.
Pseudo code is also readable to business users who can verify your approach meets their needs.
Using pseudo code also creates a nice set of methods that could be put to use as an interface in the final solution (a brief sketch of this follows the list below). Once the pseudo code is fairly tight, look for patterns and review some common GoF patterns. They do not have to be perfect, but using them will shield you from having to rewrite the code later during the revisions that are bound to come along.
Just taking an hour or two to write pseudo code yields some invaluable time-saving pieces later on:
1. An object model emerges
2. The program's flow is clearly defined for others
3. It can be used as documentation of your design with some refinement
4. Comments are easier to add and will be clearer for someone else reviewing your code.
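As promised above, a brief sketch of pseudo code turning into an interface. The order-processing steps and names are invented for the example; the point is only that each bulleted step maps almost directly onto a method signature:

// Pseudo code written before any implementation:
//   - load the order
//   - validate it against stock levels
//   - take payment
//   - queue the confirmation email
//
// Tightened up, the same steps become a usable interface:
public interface OrderProcessor {
    Order loadOrder(long orderId);
    void validateAgainstStock(Order order);
    PaymentResult takePayment(Order order);
    void queueConfirmationEmail(Order order, PaymentResult payment);
}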
Best of luck to you!
I've found that the best way to make sure you've chosen a good design is to make sure that you understand the problem, know the limitations you have, and know what things are must-haves vs. nice-to-haves.
Understanding the problem will involve talking to the people who have the need and keeping them anchored to what needs to get done first instead of how they think it ought to get done. Once you know what actually has to happen, you can go back and talk over requirements about how.
Knowing your limitations may be quite easy: needs to run on the iPhone; has to be a web application; needs to integrate with the already-existing Java code and deployment setup; and so on. It may be quite difficult: you don't know what the potential size of your user base is (hundreds? thousands? millions?); you don't know whether you'll need to localize it (though if you're not sure, assume you will have to).
Must-haves vs. nice-to-haves: this is possibly the most difficult part. Users very often have emotional attachments to "requirements" ("It should look just like Excel") that are not actually part of the "has to happen" stuff. You often have to juggle functionality vs. desires to get an acceptable implementation. You can't always give everyone a pony.
Make sure you write all this down! Even if it evolves along the way, or the design is small, having a "this is what we're planning to do now" guide to refer to when you need to make a decision about committing resources makes it easier to restrain yourself from implementing a really cool whiz-bang feature instead of a boring must-do.
Since you recognize that you feel the need to provide a quick solution, perhaps it will slow you down to realize that you can probably solve the problem faster and deliver it sooner if you spend more upfront time on design. For instance, if you spend 3 hours designing and 30 hours writing code, it probably means that if you spend 6 hours designing you might only need to spend 10 hours writing code (these are not actual figures, just examples). You might try to quantify this for yourself on the next few projects you do. Do a couple where you behave as you normally would and see what ratio of design / code writing / testing and debugging you actually end up with. Then on the next project deliberately increase the percentage of time you spend on the design phase and see if it shortens the time needed for the other phases. You will have to try this on several projects to get a true baseline, since the projects may be quite different. Do it as a test to see if you can improve your performance on the other phases, and thus deliver a faster product, if you spend 20%, 50%, or 100% more time on design.
Remember the later in the process you find the problem with a design the harder (and more time-consuming) it is to fix.

When is the right time and the wrong time to do the quick and dirty solution? [closed]

When is it the right time and when is it the wrong time to take the quick and dirty approach versus the proper elegant solution?
This started from the comments on my question: Decode Base64 data in Java.
I wanted a way to do something using only internal Java or something I wrote. The only built-in way is to use the sun.* package, which is unsupported. So when is it the right time and when is it the wrong time to take that approach?
In my opinion the only "right time" to use a quick and dirty solution (aka hack) is when...
There is a mission-critical bug that needs immediate fixing and the proper solution is significantly complicated to implement.
It's an internal application and some higher ups need a quick feature implemented (hey, neither you nor your customers use it).
In any situation, you have to balance the time that will be spent on solving the problem, and then the time that will be spent in maintaining that solution. If the lifetime of whatever you do is expected to be lengthy, then the extra time spent up front to do things the right way will, presumably, be easier to maintain and will save you in the long run.
However, if there are time constraints, the penalties for going over may far outweigh the maintenance cost. For example, if you MUST get a product out the door, do a quick-and-dirty solution now, and then log it as a bug to be fixed later in a patch, or the next version. That will require you to re-do the work, but like I said, it is all a balancing act, and often driven by cost at its root.
The right answer is that it's never time to do the quick and dirty solution.
But sometimes (when you have a big customer base and lots of programs out there) it's better to ship a Q&D fix that maybe breaks something, but FOR SURE fixes more.
I don't know if I understood the question right, but I'll leave it at this.
If there exists a simple but less elegant solution that closely matches the style and techniques used elsewhere in the code base (and consequently, is more easily understood by your colleagues), it may be safer to take that path -- compared to the solution that is perfect from an academic point of view, but difficult to understand if you never happened to read that specific paper.
Similarly, a small change may be considerably safer than a major refactoring effort that the elegant solution would basically require. This is especially relevant in any world that lacks unit tests.
If the project is a "proof of concept" work rather than a production implementation, it may be the right time for a quick and dirty solution that gets the job done of illustrating how easy or hard some particular part is of a system. I'm thinking of where one may want to test out a new ERP or CRM system and just want to see the mechanics of it without spending a ton of time and money to get a more realistic prototype working.
In most other cases, it is a question of trade-offs. How many defects can something that gets released into the wild have? Is it enough that there are no showstopper bugs, or do there have to be only a few minor or cosmetic bugs left? If some project managers or business units want something quick but not necessarily fully working, then the quick and dirty solution may work if those asking understand the risks. In a way this is like the question of how many tests a doctor should give before making a diagnosis: should you be examined head to toe with a variety of different waves of energy, e.g. X-ray, CT scan, MRI, etc.? Or can the doctor just look at what you are doing and know what is wrong, like Dr. House does from time to time?
It seems to me that a quick and dirty solution is more often necessitated by a lack of planning somewhere than by a sudden change of business rules (not to say that doesn't happen though!). Either someone wasn't given appropriate tools to do their job, someone vastly underestimated the amount of time something would take, or someone got too attached to a feature that is overly complicated. In most of those cases, it's best to push back your release or scale back the features in the release.
In the case that it was necessitated by a business priority that you have no control over (like a legal issue), then you need to get it out the door as quickly as possible. But you should make the fix better for the next release!
It's the right time to use a quick and dirty solution when:
You need the solution to be quick (assuming no professional solution exists that is approximately as quick), and
You don't suffer consequences for it being dirty.
In some ways, that's a flippant answer, but it does cover it.
In your situation, you can't introduce a new library quickly. And you do have a process to mitigate the risk of using experimental code. As long as that process is reliable, and you are confident that production code won't include your quick & dirty solution, then go ahead.
Once my company was struggling to get some code working for a demo at a trade show. They were running out of time, the demo just one day away. I suggested they hard-code some options instead of making them configurable and dynamic.
It worked -- they got the demo running for the next day's show. Then when they got back to the office, they immediately removed the hard-coded part and finished their feature.
If you can rely on replacing the quick & dirty hack being top priority, then quick & dirty can be appropriate. It's just so seldom the case that it's top priority, that using this solution is usually a bad idea.
As noted by tj111 and others, there are times when the risk of an elegant solution is greater than that of the quick and dirty. I.e., you deliver something beautiful, but by that time the customers no longer want it, want something different, or are pissed off.
To be sure though, there is always a long-term cost to doing a quick and dirty solution, and it is important to deliver a firm and forceful "by the way" caveat to management. Otherwise the quick and dirty becomes like crack to management, who often do not know or care about the difference between the elegant and the dirty (especially when it is under the hood). That is, until things start to go to hell, and even then the fix-it-now stress is offloaded to the hapless (often new) technical team.
So yeah, sometimes you have to do it because of money and time considerations, but don't cloak it or forget it, and strive and push to refactor soon when the crisis wanes.
Remember, a lot of applications get built off of their prototypes.
We always want to think it's throwaway code. But it will grow!
In my case I work in a research and development position where we have a long process for getting approval when adding external libraries.
If code is marked as experimental it will be cleaned up before moving to production. I needed this for a quick experiment to see if something was even going to make sense for us to work on and needed to get the data processing quickly. So the quick and dirty approach was the right one to get things done asap. If we decide to work on the experiment I will have time to replace the functionality properly.
In the standard case, as kdgregory put it in the comments on one answer:
"Bzzt. In a professional environment, using an unsupported, undocumented feature is never the correct decision. And in a corporate environment, 'experiments' become 'production code' with no chance to fix the hacks."
I think he is correct that it is not the appropriate approach when you work in a position where you have even a doubt that it may go into production. The only exception with this is if you absolutely do not have the time for a proper fix but can mark it as being a high priority issue in some visible public place (bug tracker) and will definitely be able to fix it in a future release.
Sometimes a quick and dirty solution is synonymous with an unmaintainable solution. Other times it just means solving the (simple) problem you actually have, not the (more complex) generalization of that problem. I'm a grad student doing research in bioinformatics and I often need to write small programs that do simple things like reformat or summarize a few hundred megs of data. Occasionally I need to write several of these apps per day. If I wrote "proper" solutions to these I'd never get anything done.
Solving the general case (not quick and dirty) would mean making an app that handled errors robustly, was usable by people other than me, was configurable, etc. This would solve lots of problems that I don't have. I'm probably the only one that will ever use these programs. All the configurability I need is a few command line options and the ability to edit the source file as needed. All the error handling I need is displaying a decent error message and exiting if there's a problem.
Of course, sometimes these programs do grow into something more important and sometimes I do end up reusing code from them. Maintainability counts. Even in a quick and dirty solution, magic numbers should not be hard coded, variables should not be named "foo", "bar", and "stuff", and programs should not be written as one giant 400-line main() function. I learned these the hard way.
Sometimes you have to do things in a Q&D way. What you need to do in that circumstance is protect yourself by hiding the dirty implementation behind an interface or an adapter.
To use the cited example, the sun.* package works for now, but may not at some time in the future. Fine. Then wrap it in an adapter that implements the desired interface (a brief sketch follows). Then, when Sun changes the API (hopefully providing something permanent like Java.BASE64), you can quickly update your implementation to cope with the change.
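A rough sketch of that adapter idea, assuming the Base64 case from the question. The interface and class names are invented; the legacy sun.* adapter is shown only as a comment because those classes are unsupported and removed in later JDKs, while java.util.Base64 (available since Java 8) shows how only the adapter changes once a permanent API appears:

// Base64Codec.java - the interface the rest of the code depends on.
public interface Base64Codec {
    String encode(byte[] data);
    byte[] decode(String text);
}

// The original quick & dirty adapter around the unsupported classes would have
// looked something like this (kept as a comment; sun.misc is gone in modern JDKs):
//
//   public class SunBase64Codec implements Base64Codec {
//       public String encode(byte[] data) { return new sun.misc.BASE64Encoder().encode(data); }
//       public byte[] decode(String text) {
//           try { return new sun.misc.BASE64Decoder().decodeBuffer(text); }
//           catch (java.io.IOException e) { throw new IllegalArgumentException(e); }
//       }
//   }

// StandardBase64Codec.java - when the supported API arrived, only the adapter
// needed to change; every caller of Base64Codec was untouched.
public class StandardBase64Codec implements Base64Codec {

    @Override
    public String encode(byte[] data) {
        return java.util.Base64.getEncoder().encodeToString(data);
    }

    @Override
    public byte[] decode(String text) {
        return java.util.Base64.getDecoder().decode(text);
    }
}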
I do a quick and dirty fix when three conditions are met:
There isn't enough time to do a proper solution
The fix and its scope are clear and contained (it's easy to come back, know what the fix addressed, and have it easily removed and replaced by a proper implementation)
The fix is atomic or has no dependencies
"Quick and Dirty", is often synonymous with "cheap yet actually works", which is generally all you need to get the job done. If anybody criticizes your code, pretty much the best defence is to say "yeah- it was just a quick and dirty solution". That way your coder colleagues can feel superior to you, whilst you pointy haired boss looks on, rubbing his chin in silent approval.

Rewrite or repair?

I'm sure you have all been there, you take on a project where there is a creaky old code base which is barely fit for purpose and you have to make the decision to either re-write it from scratch or repair what already exists.
Conventional wisdom tends to suggest that you should never attempt a re-write from scratch as the risk of failure is very high. So what did you do when faced with this problem, how did you make the decision and how did it turn out?
It really depends on how bad it is.
If it's a small system, and you fully understand it, then a rewrite is not crazy.
On the other hand, if it's a giant legacy monster with ten million lines of undocumented mystery code, then you're really going to have a hard time with a full rewrite.
Points to consider:
If it looks good to the user, they won't care what kind of spaghetti mess it is for you. On the other hand, if it's bad for them too, then it's easier to get agreement (and patience).
If you do rewrite, try to do it one part at a time. A messy, disorganized codebase may make this difficult (i.e., replacing just one part requires a rewrite of large icebergs of dependency code), but if possible, this makes it a lot easier to gradually do the rewrite and get feedback from users along the way.
I would really hesitate to take on a giant rewrite project for a large system without being able to release the new edition one part at a time.
Just clean up the code a little bit every time you work with it. If there isn't one already, set up a unit testing framework. All new code should get tests written. For any old code you fix as a result of bugs, try to slide in tests too.
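A small sketch of "sliding in" a test alongside a bug fix (the class and scenario are invented for the example): the reported failure is captured as a unit test, the fix makes it pass, and the test then protects that behaviour during later clean-ups:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Regression test added with the bug fix: it reproduces the reported failure,
// proves the fix, and guards the behaviour while the surrounding legacy code
// is gradually cleaned up.
public class DiscountCalculatorRegressionTest {

    @Test
    public void zeroQuantityOrderGetsNoDiscountInsteadOfCrashing() {
        DiscountCalculator calculator = new DiscountCalculator();
        // Bug report: a zero-quantity order used to throw ArithmeticException.
        assertEquals(0.0, calculator.discountFor(0, 100.0), 0.001);
    }
}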
As the cleanups progress, you'll be able to sweep more and more of the nasty code into encapsulated bins. Then you can pick those off one by one in the future.
A tool like javadoc or doxygen, if not already in use, can also help improve code documentation and comprehensibility.
The arguments against a complete rewrite are pretty strong. Those tons of "little bugs" and behaviors that were coded in over the time frame of the original project will sneak right back in again.
See Joel Spolsky's essay Things You Should Never Do. In summary, when you rewrite you lose all the lessons you learned to make your current code work the way it needs to work.
See also: Big Ball of Mud
It is rare for a re-write of anything complex to succeed. It's tempting, but it's a low-percentage strategy.
Get legacy code under unit tests and refactor it, and/or completely replace small portions of it incrementally when opportune.
Refactor unless it is very bad indeed.
Joel has a lot to say on this...
At the very least, rewrite the code with the old code in front of you and don't just start over from scratch. The old code may be terrible, but it is the way it is for a reason and if you ignore it you'll end up seeing the same bugs that were probably fixed years ago in the old code.
One reason for rewriting at one of my previous jobs was an inability to find developers with enough experience to work on the original code base.
The decision was made to first clean up the underlying database structure, then rewrite in something that would make it easier to find full-time employees and/or contractors.
I haven't heard yet how it worked out :)
I think people have a tendency to go for rewrites because it seems more fun on the surface.
We get to rebuild from scratch!
We'll do it right this time!
etc.
There is a new book coming out, Brownfield Application Development in .NET by Baley and Belcham. The first chapter is free, and talks about these issues from a mostly platform agnostic perspective.
Repair, or more importantly, refactor. Both because Joel said so and also because, if it's your code, you've probably learned a ton more stuff since you touched this code last. If you wrote it in .NET 1.1, you can upgrade it to 3.5 SP1. You get to go in and purge all the old commented out code. You're 100x better as a developer now than when you first wrote this code.
The one exception I think is when the code uses really antiquated technologies - in which case you might be better served by writing a new version. If you're looking at some VB6 app with 10,000 lines of code with an Access database backend obviously set up by someone who didn't know much about how databases work (which could very well be you eight years ago) then you can probably pull off a quicker, C#/SQL-based solution in a fraction of the time and code.
It's not so black and white... it really depends on a lot of factors (the most important being "what does the person paying you want you to do").
Where I work we re-wrote a development framework; on the other hand, we keep modifying some old systems that cannot be migrated (because of the client's technology and time restrictions). In that case, we try to maintain the coding style, and sometimes you have to implement a lot of workarounds because of the way it was built.
Depending on your situation, you might have another option: in-license third-party code.
I've consulted at a couple of companies where that would be the sensible choice, although seemingly "throwing away IP" can be a big barrier for management. At my current company, we seriously considered the viable option of using third-party code to replace our core framework, but that idea was ultimately rejected more for business reasons than technical reasons.
To directly answer your question, we finally chose to rewrite the legacy framework - a decision we didn't take lightly! 14 months on, we don't regret this choice at all. Just considering the time spent fixing bugs, our new framework has nearly paid for itself. On the negative side, it is not quite feature-complete yet so we are in the unenviable position of maintaining two separate frameworks in parallel until we can port the last of our "front-end" applications.
I highly recommend reading "Working Effectively with Legacy Code" by Michael Feathers. It's coaching advice on how to refactor your code so that it is unit testable.

Resources