I'm a member of a software development team, working on a small project.
We think that we can release a beta-quality product after two or three months of continuous work.
Since this is our first time working as a team, I decided to ask: which software development methodology would you suggest for a small project with a small number of developers (fewer than 10)?
There are two approaches to software development:
Write down what you are going to do, do it, then agree that you have done it.
Start developing stuff, agree that what you have done is good, repeat until finished.
Both have their adherents and both pop up repeatedly under a variety of names. Each new generation of software developers (i.e. about every two years; this is a fast-changing industry and software developers have the lifespan of a mayfly) rejects the previous generation's approach, rediscovers the approach used by the generation before last, renames it something funky and declares it to be the ONE TRUE WAY.
The choice between the approaches ought to depend on the culture of (a) the customer organisation and (b) to a lesser extent, the culture of the supplier organisation (i.e. your software development team).
So, if you work for a buttoned-down conservative enterprise approach 1 is indicated. If you look down and see that you are wearing surf shorts and came to work this morning on your skateboard, go with approach 2.
And, in case you have read this far, the most serious bit is the paragraph before the one before this final one, i.e. the one starting 'The choice ...'. This is a cultural/organisational issue rather than a technical one. Both approaches have been used on many, many successful projects, and neither has a monopoly on unsuccessful ones.
This really does depend on what you intend to build. If the project is something you want to build upon and release at regular intervals, something like Agile/Scrum would be well suited. But the nature of the project really determines the release iterations and the like.
I think you need to start with the Joel Test and try to implement most of its list:
http://en.wikipedia.org/wiki/The_Joel_Test
And for product development, use KISS (Keep It Simple, Stupid) for the first release.
The book Getting Real is also a really good start, available free from 37signals:
http://gettingreal.37signals.com/toc.php
This really does depend on your customer.
If the customer can accept fixed time, fixed resources, fixed quality (100% working code), and slightly variable scope, I recommend choosing an agile methodology.
If the customer cannot accept the above, i.e. the precondition for using an agile methodology is not present, I recommend choosing any methodology you like.
The important thing is that you do have a methodology, learn what is working as you go, and use the knowledge to adapt the methodology.
Don't do waterfall; it never worked and never will. Thinking waterfall is a working methodology is like thinking banging your head against the wall is good, because even the sturdiest wall MUST crumble at some point.
I'd go with a reasonable agile methodology like Scrum (XP is a bit harsh). Also, introduce practices like TDD (test-driven development), DDD (domain-driven design) and DbC (design by contract), and you should be fine.
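To make the TDD part concrete, here is a minimal red-green sketch in Python (the function and test are hypothetical, not from the question): the test is written first and fails, then just enough code is added to make it pass, and refactoring happens with the test as a safety net.

    # A minimal TDD sketch (hypothetical example).
    # Red: write this test before apply_discount exists, and watch it fail.
    # Green: implement apply_discount just well enough to make it pass.
    # Refactor: clean up freely, re-running the test after every change.

    def apply_discount(price, percent):
        """Return price reduced by the given percentage, rounded to cents."""
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount():
        assert apply_discount(100.0, 20) == 80.0
        assert apply_discount(49.99, 0) == 49.99

    if __name__ == "__main__":
        test_apply_discount()
        print("all tests pass")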
I won't suggest this as THE best answer without having a better idea of the context and circumstances, but I am personally becoming a fan of the Lean/Kanban approach. In general I find a lot of the agile/scrum methods can be fairly developer-focused, and almost anti-manager at times, which is sometimes appropriate but not always. The lean approaches tend to address the entire value stream rather than just the development itself.
You can read more about it at: http://www.limitedwipsociety.org/
I have a question which is not strictly-speaking programming related, but nevertheless caused by being an analyst and a programmer at the same time.
It's about starting new projects which seem unfeasible because they have an unknown domain, lack specifications, and/or require technology I am not familiar with. I get some sort of panic when I approach such a project, and then relax as my understanding of the domain and technology grows.
Is this something you experience? How do you cope with it?
The best way that I know of to try to contain and control the human factors in a project is to have a clear idea of your own processes.
Start with some Domain Driven Design: work with the users and help them understand their domain and the business processes that surround it. Developers are often far better at abstraction than managers and business people, so we can help them understand their own domain.
Build up a set of acceptance criteria; these form your tests, which in turn form your spec.
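For instance, an acceptance criterion agreed with the users can be written directly as an executable test, so the spec and the test suite are the same artifact. A minimal sketch (the shipping rule and all names are illustrative assumptions):

    # An acceptance criterion as an executable test; the test is the spec.
    def shipping_cost(order_total):
        """Rule agreed with the users: orders over $50 ship free."""
        return 0.0 if order_total > 50 else 5.0

    def test_orders_over_50_ship_free():
        assert shipping_cost(50.01) == 0.0

    def test_small_orders_pay_flat_rate():
        assert shipping_cost(20.00) == 5.0

    if __name__ == "__main__":
        test_orders_over_50_ship_free()
        test_small_orders_pay_flat_rate()
        print("acceptance criteria met")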
Once you have an idea of the above, you know much more about feasibility and how long it will take (and even whether the technology that has been specified is the right one).
As for approaching new technologies: start small, build a proof of concept, and make your mistakes there rather than in production code. There is a huge amount of best practice on the web, and places like Stack Overflow are good places to start.
I would suggest working in an agile fashion: get the project owners to prioritise the work that needs to be done, work out what is needed for the next two-week sprint, and deliver it (which may mean stubbing out a lot of functionality). They'll tell you when it is wrong, and it may influence their own decision making.
Don't view the entire project as one nasty whole; break it down into deliverable sections and take it one step at a time.
Calm down.
If the project is initially infeasible (even if only in your own mind) then start with a feasibility study. This is a sub-project with which you will define the project (or at least the next sub-project).
You've already defined several major tasks within the feasibility study: learn about the domain, write some specifications, learn enough about the new technologies.
As for me, no, I never panic about this sort of situation. I love starting with a blank sheet of paper, and experience has taught me how to start filling it in real quick.
So, take a few deep calming breaths and jump in.
Yep, I get this feeling all the time. But I always think of technologies as tools: once you've got the hang of handling them, the rest will be easy.
Whenever I don't feel like that is when disaster lurks! It's like eating an elephant: just do it one bite at a time. Do some part you do understand, and that gives you a handle on the next bit.
unfeasible,
unknown domain,
lack specifications,
require technology which I am not familiar with
I think that's how we start our lives too. As long as you are confident that you can pull it off, just stick with it and you will see that things work in your favor, provided:
You understand the importance of being a self-starter
You take responsibility for who you are
You ask the right questions at the right time
All the best!!!
Often the trouble with these infeasible projects is that the client is on a limited budget and will go bust before you complete your feasibility study. In this case it might be worth taking a step back from the technology and looking at the economics. Maybe sub-contracting to someone with the required knowledge will ease the pain.
I posted this question on Reddit Programming and did not get a single response. So I am hoping that Stack Overflow community will have an opinion.
Have any of you ever been on a software project that had fallen behind, where 'Crashing' or 'Fast-Tracking' the project schedule actually brought the project back on track? I have never seen either of these project management techniques actually work. And all the articles on software development that I have read state that these two techniques do not work and actually push the project further behind (for example, the literature on The Mythical Man-Month). So who has seen it work?
Thanks, Bill.
I have only ever seen it work once. It was a three or four month long project that was projected to run an extra two months over the original delivery date. The project got fast-tracked and things ended up getting back on track for the release.
...keep in mind, though, that was only once. I've been on many more projects where the PM tried to use one of those two methods and they failed miserably, dragging the project out for months beyond the already-extended date.
It can work. But there's a price to be paid: lower quality (more bugs, less testing) and turnover of burned-out programmers.
And in many cases, a fast-tracked project will both fail to deliver on time and still pay the full negative price, for the reasons stated in The Mythical Man-Month.
I've seen it work but it's not the norm.
Things I'd want to see before I thought it might be feasible:
1) Staff available with suitable skills and approach. By that I don't mean ".NET programmer"; I mean detailed technical skills, business domain skills (so they understand the problem), personality fit, and an understanding of the tools and the approach (source control, methodology and so on). This can happen in large companies where there are common tools, standards and knowledge, but you need to be sure that they're ticking pretty much all the boxes.
2) Tasks must be nicely divisible. The best situation is where there are whole modules, applications or tasks unstarted, and you can put new people on those. It minimises upskilling, additional communication and so on. If you can't separate out what the new people will do, you're likely to majorly disrupt the existing team.
3) The whole team must have bought into the approach. If the existing team don't agree that bringing people on board is right, they'll likely fight it and you're doomed.
4) You need to be sure you've addressed why it was running late in the first place. If it was just bad estimates then are you confident the new estimates are good? If it was scope creep have you got the scope and change control in hand now? If it was because the deadline moved, are you sure it won't move again?
If you can't tick all four of those off, it isn't going to work.
Crashing and Fast-Tracking are two very different things...
Fast-tracking is where you take something (tasks or work packages) out of sequence and do it early. This may be because of hardware delivery lead times, availability of resources, risk, or whatever. So you might do things in parallel where originally you had planned to do them sequentially. I've fast-tracked a lot of projects... and yes, it works.
Crashing a project is different in that you typically throw more resources at a problem to get it done quicker... this can be tricky. If it's done as a crisis response it can be painful adding extra people as you are already under the pump. In some situations you just add more problems.
Another alternative to crashing is to reduce scope. This is not always possible, but it should be considered.
With fast-tracking or crashing, the sooner you know you need to make a schedule change, the easier it is to manage. This is why early deadlines are so important: they indicate how the rest of the project will go.
Both of these project management techniques can work well to maintain a schedule, but they should be used intelligently, by judiciously analyzing the network diagram (a minimal sketch of that analysis follows this list):
study the variance,
study leads and lags;
decide what suits your project: ‘Crashing’ or ‘Fast-Tracking’.
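To make the network-diagram analysis concrete, here is a minimal critical-path sketch in Python (the tasks and durations are made-up assumptions): activities with zero float are on the critical path, while activities with positive float are the natural candidates for fast-tracking or for lending resources to a crash.

    from functools import lru_cache

    # task -> (duration in days, prerequisite tasks); all values assumed.
    tasks = {
        "spec":    (5,  []),
        "backend": (10, ["spec"]),
        "ui":      (7,  ["spec"]),
        "test":    (4,  ["backend", "ui"]),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(task):
        """Forward pass: earliest finish given all prerequisites."""
        duration, preds = tasks[task]
        return duration + max((earliest_finish(p) for p in preds), default=0)

    project_end = max(earliest_finish(t) for t in tasks)

    # Backward pass: latest each task may finish without slipping the end.
    latest_finish = {t: project_end for t in tasks}
    for t in sorted(tasks, key=earliest_finish, reverse=True):
        for p in tasks[t][1]:
            latest_finish[p] = min(latest_finish[p], latest_finish[t] - tasks[t][0])

    for t in tasks:
        slack = latest_finish[t] - earliest_finish(t)
        print(f"{t:8} float={slack}d {'<- critical' if slack == 0 else ''}")

With these numbers, "ui" comes out with three days of float, so it can slip or run in parallel safely, whereas "spec", "backend" and "test" form the critical path that any crashing effort has to target.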
There is a software management principle (Brooks's law) that says adding manpower to a late software project makes it later.
That said, as long as the measures taken are sensible, it should be OK. Don't expect too much of your staff, provide reasonable incentives, and don't take shortcuts. It won't make miracles happen, but if you're practical and want to push things just that little bit faster, it can definitely be done.
When people have a stake in the potential success of something it's amazing how much more effort they're willing to put in.
It depends on what you mean by "work". I don't think I've ever seen it make a way late project deliver on time, if that's what you are asking.
However, I have seen it make way late projects deliver only a bit late. From the fuzzy perspective of management, that might be called "working". I've also seen it significantly lower the customer-based pressure on the company. Some might also call that "working".
Of course the price is rather high. Employees burn out, develop health problems or big problems in their neglected personal lives, etc. All of that has large financial repercussions for the company. So I doubt the company comes out ahead in the long run. Is that "working"?
I have been in the IT industry for 10 years now but have worked in "traditionally" managed project teams (both well managed and badly managed ones).
I have heard of the "new" scrum or XP type of project management and have yearned to be part of one (as s/w folks we always like anything new, I guess) but have not had the opportunity.
My question is this: what are your experiences in moving to the "new" way? Was it significantly better, worse, or not any different? Has there been any improvement in project success rates when using the XP way of development, or is it the same as with any well-managed traditional project?
This should not be a political question, just your experiences as you have moved to the new world (or tried it at least once and gone back).
Thanks in advance
Before I ever heard of XP, I had a really good manager (Mike) at an early job I had. He was used to managing engineers and transitioned to managing software. After a few bad working experiences I looked back at his style versus typical project management I had before and after working with him.
Met with everyone at least once a day but gave us space to work
Used a whiteboard with two columns, people and what they were working on; anyone could look at that board and see if something had been done or was being done
Had everyone cross-train. I learned RCS and then CVS there, and how to use makefiles
Ran a productive "post-mortem" when a task was completed. He would ask questions like "would it have helped if X?" or "next time, can we try to..."
Kept everyone working on short tasks and managed our time so we were always working on something but never had a ton of stuff piled up
Mike did everything on paper. He would keep notebooks and index cards with him. He insisted that anything asked of him by management be converted into manageable tasks, often written on note cards. He refused to have anyone work on anything that couldn't be clearly explained or didn't have a clear objective. He would ask the VPs "what do you mean by faster?", "What kinds of metrics are the reports meant to show?", "Why should this be a priority?" He seemed to have near-infinite patience in writing out what needed to be done and what was meant by "done".
When I first read the XP book, I was amazed by how much was familiar as "the way Mike worked".
It seems that Agile is just about implementing a set of best practices and evaluating how they work in your environment. When they don't work, change them. When they do work, stick to them.
I think the real problem with traditional project management is that more often than not, it doesn't really exist. I'm amazed by how many shops claim to use RUP or Code Complete or even Agile and don't actually have anything recognizable as project management. Sure, there are meetings. And people called project managers. But ask a simple question like "what has been done on project X" or "what is left to do on project Y" and no one has an answer. They have to dig through emails or point to a comically inaccurate MS Project file.
If a person claimed to be on a diet and couldn't answer questions on what they were eating or how they were exercising; would you accept that they were really on a diet?
You take your old baggage with you when you go. Meaning that any project management bad practices you had before will still linger.
However, I will say that things improved greatly when we began to close the loop between us and the customer. Greater and more frequent feedback and prototyping with the customer means far fewer moments of the customer saying, "This is not what I wanted."
I've used (a slightly modified) Scrum before at work and here are my thoughts:
The daily meetings and burn-down provided motivation to make progress on tasks (a minimal burn-down sketch follows this list).
Our manager could talk to colleagues across the pond and show them "this is what we're working on this month."
You knew exactly what tasks you needed to get done, and had already estimated the time required to complete.
When priorities changed (new tasks, important bugs added), there was a well-defined process to handle adding them to the sprint or simply pushing them to the backlog.
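For anyone who hasn't used one: a burn-down chart is just the remaining work at each stand-up compared against an ideal straight line from the sprint total down to zero. A minimal sketch in Python (the sprint length and daily numbers are made-up assumptions):

    # Burn-down sketch: remaining story points per day vs. the ideal line.
    sprint_days = 10
    total_points = 40
    remaining = [40, 38, 36, 35, 30, 28, 24, 18, 10, 4, 0]  # assumed data

    for day, left in enumerate(remaining):
        ideal = total_points * (1 - day / sprint_days)
        status = "ahead" if left < ideal else "behind" if left > ideal else "on track"
        print(f"day {day:2}: remaining={left:3}  ideal={ideal:5.1f}  ({status})")

The motivating effect mentioned above comes from that per-day comparison: everyone can see immediately whether the sprint is ahead of or behind the line.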
These are lovely answers, but I think everyone's confusing project management with development/design methodologies.
I'm on a team that started Scrum a few months ago and we seem to be getting things done faster and with much less "waste" (projects that are scrapped). Just my observations from our small team (4 devs).
I've found the overall move to Agile/XP practices very positive; in many ways it front-loads quality into the project/development process. You'll need buy-in from management and from the team to really see success... a few suggestions:
trial any change with a small project (2-3 people)
understand what areas your current team can most improve (quality? productivity? time-to-market?) and incorporate a few Agile/XP/Scrum (whatever) processes to match... don't incorporate them all at the same time, and understand which processes address which issues prior to any change
if possible, track those areas you're looking to change and compare to another project running at the same time (the mere focus of improving something is often enough to improve it; the term for this is the Hawthorne effect)
sometimes you'll see a dip in performance as you begin a new process; this is part of the learning curve
never assume that a good change today will remain a good change tomorrow, always review your project areas and be ready to change any process at any time
no change remains good forever, just like refactoring code, refactor your processes
ensure you get buy-in from the team and management; you can't force success
I like some of the things the agile approaches do, but I also value some of the things traditional approaches do.
Both can work, as can a mixture of the two, which is what I find works best for my team now. I have implemented incremental development and it really helps us; iterative development is a little harder and we're still working on that. However, we have a variety of constituents, and many of our stakeholders (and PMs) prefer traditional artifacts and milestones. So we have to keep finding the right balance.
I have also found that even more important than the methodology is the people implementing it. Good people find a way to do good work and get things done regardless of the methodology, although certainly the methodology can have effects on efficiency (and morale :) ). Poorly aligned resources, however, can use the finest methodology and find ways to deliver poor results.
For developers, the great lessons of XP & Co. are shorter release cycles and a more evolutionary approach, in the sense that change of requirements is accepted as a natural part of any project. Also, customers suggest solutions, but designers and developers need to understand the problems.
Lessons for managers: developers are not exchangeable spec-to-code converters; their individual strengths and weaknesses can make a productivity difference of a factor of 10 or more on a given topic. Knowledge and experience are the most valuable skills in your team, and developers can teach each other. Managers need not understand what developers do in order to enforce the desired results.
XP & Co. usually mix solutions to these problems with the challenge of making a company change. The heroic XP consultant single-handedly saving a doomed, delayed and derailed project acts in large part as a buffer between development and management. But if you are looking at what to learn, you have to separate these aspects.
What I learnt in recent years is that bugs aren't a personality fault, and that the sky doesn't fall when specs change. I've learnt that while design errors are still the most expensive to make, there isn't a single "perfect" design. Instead of getting one thing right, we need to implement safeguards so that none of the many details goes wrong, and I've learnt to use the leeway between "right" and "not wrong" to our advantage.
My experience has been that I prefer to use Scrum over traditional approaches, since it has rarely happened that requirements could stay unchanged for the length of a project; projects usually seem to run at least six months, and my current one is over a year.
There can also be the case where there isn't any project management and everyone just scrambles to "make it work", so having some formal structure is better than nothing. There is also something to the question of how well the team comes together; egos rarely appear, as it isn't one person's code but rather the team's code, and there is a kind of groupthink where, while each person has their own view, no one tries to make everyone else see things that way.
At times it seems to me that some Scrum and Agile approaches I've used end up being like rapids instead of one big waterfall. What I mean is that the cycle of gather requirements, analyse and design, implement, test, deploy, and gather updated requirements is repeated over and over, so that what comes out in the end would have been extremely hard to state at the beginning of the project, unless the project sponsor could give very detailed requirements that would never change.
When looking at the myriad types of software written at our company, I instantly jump to conclusions about the quality of the entire product based on the UI. If I find misspellings, weird tab orders, fields not lined up, or odd colors, I assume that the entire application is of poor quality.
I'm assuming that if the programmer doesn't care enough to make the outside look good, they don't care enough at all. I am NOT assuming that if the UI looks good the application does what it should, although I am not immediately down on it -- it gets more leeway when it's being evaluated.
Is this a valid decision to make? For commercial software as well?
It may or may not be. But that's not really relevant. To your end user, crappy UI = bad code.
I think it's a good indicator of the care that a developer has for their work - basically a sense of professional pride.
It's a given that most devs don't make fantastic UI designers, but there are a basic set of rules that should be followed when developing professional software and these apply as much to the UI as they do to the internals.
So, basically I agree with you.
If the application was written by one developer, it's not an unfair assumption that a slovenly UI is indicative of the underlying code quality.
However, if it was written by a team of 5 or 7 or 13, there will likely be a wide range of quality under the hood (it just might be that the newbie was given the UI).
Also, if the app is 5+ years into its lifecycle, with maintenance being performed by FBN contractors or interns or whoever is handy, you may find a lot of good code under the hood that's slowly rotting because of indifferent management and undisciplined developers who just throw a "patch" at it, compile it, check it back in, and throw it over the wall to production.
A crappy UI can be indicative of a lot of things, none of them good, some worse than others.
In my opinion it is a valid decision. And you are right when you say that good looking software is not necessarily good software internally.
But definitely, if the programmers don't care about the usability of the program, most likely they won't care about its functionality.
If the UI is riddled with typos and inconsistencies, it is probably fair to say that the QA process and project management were a bit lacking. It doesn't really imply that the codebase is riddled with bugs.
In a commercial product, it most likely means that less people will buy it, so whilst sales are not really a quality metric, they're pretty important in the overall scheme of things.
People are more likely to buy things that look good, behave as they expect them to, and "Don't make them think".
Many programmers suck at UI design, and that's not their fault; it does not mean they suck at coding. They're just generally more interested in the internal beauty of what they make; otherwise, they'd be liberal arts majors instead.
It really depends. I know of software developers who are excellent at just about all aspects of design and implementation but have lousy UI skills. Many times the UI is an afterthought as a nod to the user. In the cases of scientific software or other software where the processing is central or key, it might not be a good idea to judge the quality of the rest of the code by the UI. However, overall - it might be a good indicator that the software company has not done its job well.
It all really depends on each case, but if the UI is not usable or a pain in the neck, then the underlying code is harder to use and not worth the time I suppose.
The opposite is also not true - flashy, beautiful UIs do not mean that the underlying code is good at all. Anyone can wrap a piece of junk with a nice UI.
I'd agree with the masses here. A poor UI means that the product development team dropped the ball. That said, I consider myself a good coder. Great at math, but dyslexic and with attention deficit disorder. Give me a set of earphones and some code and I'm on my way. Don't, however, expect me to mock up a great GUI. Lining things up, that I do.
Now ADD to that the fact that as the "programmer", even when I see things in the GUI that bug the crap out of me (as a person who uses it), I don't get to fix them. Hell, when I do fix them I get QA asking me for the design document and the approval from on high. After a while I stopped caring about the GUI.
I write solid code that works. It's fast, clean and small. That's where I get to have an impact. The GUI is beyond my pay grade. :(
In my experience it's usually the other way around. You get good-quality UIs by having people who spend "huge" amounts of time focusing on widget behavior and look-and-feel instead of the domain model or automated tests.
Some of the best quality systems I've worked with had auto-generated UIs, that were rather unpleasant to use.
As much as I really want to say, "Yes, absolutely," it's not always a valid conclusion. The programmer or QA team may have an excellent understanding of the application but a terrible grasp of the language or presentation.
Some people simply focus on what they consider to be important—and get it fairly close to perfect—and all but ignore what they consider "fluff" or "window dressing."
But I do have a very strong tendency to pre-judge the overall quality of the software based on first impressions.
No, the UI is not indicative of the internal code. Many a time we come across things that are shiny and look cool but serve no purpose. Think of it as seeing a Ferrari parked at the store. It looks awesome and you wonder what it would be like to get behind the wheel -- only to find out it's a body kit slapped on a 1980 late-model Acura with 500k miles on it.
A personal example: at my current employer, we have stellar code in our software (and I say this subjectively, since I was not there for 99% of its creation). But when you look at our UI, it can seem a bit old and rusty. That, and many a time the UI developers don't even touch much of the internal code.
"I'm assuming that if the programmer doesn't care enough to make the outside look good, that they don't care enough at all" - I don't believe this to be true. I think most programmers look for functionality as opposed to shininess as they tend to be creatures of logic, not artists.
Take Linux as an example -- stellar internal code, but the UI was lacking for a long time, which is why no one in the mainstream has used it extensively, as opposed to Windows or Mac.
Short version: !UI.Equals(InternalQuality)
You've just written a pile of code to deliver some important feature under pressure. You've cut a few corners; you've mashed some code into some over-bloated classes with names like SerialIndirectionShutoffManager.
You tell your boss you're going to need a week to clean this stuff up.
"Clean what up?"
"My code - it's a pigsty!"
"You mean there's some more bug fixing?"
"Not really, it's more like..."
"You're gonna make it run faster?"
"Perhaps, but that's not..."
"Then you should have written it properly when you had the chance. Now I'm glad you're here, yeah, I'm gonna have to go ahead and ask you to come in this weekend..."
I've read Martin Fowler's book, but I'm not sure I agree with his advice on this matter:
Encourage regular code reviews, so refactoring work is encouraged as a natural part of the development process.
Just don't tell; you're the developer and it's part of your duty.
Both these methods squirm out of the need to communicate with your manager.
What do you tell your boss?
It's important to include refactoring time in your original estimates. Going to your boss after you've delivered the product and then telling him that you're not actually done is lying about being done. You didn't actually make the deliverable deadline. It's like a surgeon doing surgery and then not making sure he put everything back the way it was supposed to be.
It is important to include all the parts of development (e.g. refactoring, usability research, testing, QA, revisions) in your original schedules. Ultimately this isn't so much a management problem as a programmer problem.
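As a sketch of what budgeting those parts up front can look like (the line items and day counts below are assumptions, purely illustrative), refactoring shows up as a quoted line item rather than as an apology after delivery:

    # An estimate that budgets refactoring up front; every number is assumed.
    estimate_days = {
        "design": 3,
        "implementation": 10,
        "testing_and_qa": 4,
        "refactoring": 2,   # part of "done", not an optional extra
        "revisions": 2,
    }
    print(f"quoted delivery: {sum(estimate_days.values())} days")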
If, however, you've inherited a mess, then you will have to explain to the boss that the last set of programmers, in a rush to get the project out the door, cut corners, and that the product has been limping along. You can band-aid the problem for a while (as they likely did), but each band-aid just delays the problem and ultimately makes it that much more expensive to fix.
Be honest with your boss and understand that a project isn't done until it's done.
Speak in a language he can understand.
Refactoring is paying design debt.
Ask your boss why he pays the company credit card bill every month vs not paying it until there is a collections notice. Tell him refactoring is like making your monthly payment.
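You can even put toy numbers on the analogy. A sketch (every parameter is an assumption, chosen only to show the compounding shape): paying a little each sprint versus letting the "interest" on a messy codebase drag on all future work.

    # Toy model of design debt; all parameters are illustrative assumptions.
    feature_cost = 10        # days per feature on a clean codebase
    drag_per_sprint = 0.08   # assumed 8% slowdown per sprint the debt is unpaid
    sprints = 10

    # Option A: spend one extra day per sprint keeping the code clean.
    pay_as_you_go = sprints * (feature_cost + 1)

    # Option B: skip refactoring; each sprint's feature costs a bit more.
    defer = sum(feature_cost * (1 + drag_per_sprint) ** s for s in range(sprints))

    print(f"pay as you go:  {pay_as_you_go:.0f} days")
    print(f"defer the debt: {defer:.0f} days")

With these made-up numbers the deferred path costs roughly 145 days against 110, and the gap widens every sprint; the exact figures matter far less than the shape of the curve.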
Just do it and schedule it into your normal process. Estimate refactoring time into starting a new change or, ideally, into finishing one.
I always refactor while I'm initially exploring new code (extracting methods, etc).
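To show how small that kind of exploratory refactoring is, here is a minimal extract-method sketch in Python (the invoice code is hypothetical): behavior stays identical, but each step gains a name you can test in isolation.

    # Before: one function mixing iteration, formatting, and totalling.
    def print_invoice_before(items):
        total = 0
        for name, price in items:
            total += price
            print(f"{name:20} {price:8.2f}")
        print(f"{'TOTAL':20} {total:8.2f}")

    # After extract-method: each step is small, named, and testable.
    def invoice_total(items):
        return sum(price for _, price in items)

    def format_line(name, price):
        return f"{name:20} {price:8.2f}"

    def print_invoice(items):
        for name, price in items:
            print(format_line(name, price))
        print(format_line("TOTAL", invoice_total(items)))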
Lie. Tell him it's research into a new technology. Then tell him you decided the cost didn't justify the benefits. He'll think you did a great job.
lol @ the people downmodding / marking this offensive.
Really, if it's a penny-pinching boss who can't tell good software from cheap software, what he doesn't know will ultimately make him happier. If it was me, I would leave the company and go someplace where they respect their developers' ability to write good code. But then again, this is why I'm in a senior position.
Tell him 80% of the costs associated with a software project come in the maintenance phase of the lifecycle. Any refactoring done now to alleviate future problems (and have some examples ready) will net substantial cost benefits later on, when the need arises to maintain that code.
This is assuming you are refactoring for a reason and not for programmer vanity.
Refactoring you should do all the time.... so you shouldn't have to justify it.
Cleaning up big messes / redesign may include refactoring in order to get things under control; however, that's not "Refactoring".
Refactoring should be a matter of moments...or if you have no tool support, minutes.
In one of Robert Glass's recent books (I'll have to look up the reference) he mentioned a study on the cost of well-maintained code. What they found is that well-maintained code was edited more often than poorly maintained code. That sounds counterintuitive, but when they dug deeper they discovered the reason:
Well maintained code has more features added to it in the same time frame than poorly maintained code.
Does your boss like features? Sure, they all do. The more you improve the maintainability of the code, the more features you will be able to deliver within that limited budget.
I like the answer given in "Refactoring" by Martin Fowler. Tell your boss that you are going to develop software the fastest way that you know how. It happens that in most cases the fastest way to develop software is to refactor as you go.
The other thing to tell your boss is you are reducing the cost to make future improvements.
Less money now for me to refactor...
or more money later to fix whatever goes wrong and for me to refactor.
Sometimes, it's just time to get a new job. There are certain people who just want you to "get it done". If you are ever in one of those situations, and I've been there, then just leave.
But yeah, all that other stuff about future costs and such is a good idea. I just think that most bosses lie to themselves because they want what they want when they want it, and they are just not able to see what's going to happen in the future.
So, good luck with your boss. Hopefully he or she is reasonable.
Don't... just go get a new job in a place that's more in sync with you.
I think you should just start working on it without telling your boss. This is truly how I've done my best work. I just don't tell my boss what I'm doing and slowly replace bad/legacy code when I have time.
It has actually saved my ass on more than one occasion.
If your boss doesn't understand the need to refactor or clean up code, then you have to wonder if he has enough engineering knowledge to be an engineering manager.
It's rare to find a boss who will give you time to refactor...just do it as you go along.
In my opinion, the simplest case to make for refactoring is fixing overly complex code. Measure the McCabe cyclomatic complexity of the source code in question (Source Monitor is an excellent tool for such a problem). Source code with high cyclomatic complexity has a strong correlation with defects and bad fixes. What this means in simple terms is that complex code is harder to fix and more likely to have bad fixes. What this means to a manager is that the quality of the product will likely be worse, the bugs harder to fix, and the schedule for the project ultimately worse. However, in refactoring out the complexity, you are improving the transparency of the code, reducing the likelihood of obscure/difficult bugs, and making it easier to maintain (e.g. a maintenance programmer can take on a larger maintenance scope because of this).
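If you'd rather measure from code than from a GUI tool, here is a minimal sketch using the third-party radon package for Python (my choice of tool is an assumption; Source Monitor works just as well):

    # Measuring McCabe cyclomatic complexity with radon (pip install radon).
    from radon.complexity import cc_visit

    SOURCE = """
    def classify(order):
        if order.rush:
            if order.international:
                if order.weight > 20:
                    return "freight"
                return "air"
            return "courier"
        if order.international:
            return "surface"
        return "ground"
    """

    for block in cc_visit(SOURCE):
        # One point per decision (if/elif/boolean operator); values much
        # above ~10 are the usual "too complex" warning threshold.
        print(block.name, block.complexity)

Armed with a number, "this module's complexity dropped from 24 to 8 after the refactor" is the kind of concrete statement a manager can act on.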
Additionally, you can make the case (if it isn't a dead product in maintenance cycle) that decreasing complexity makes the application easier to extend when new requirements are added to the project.
The boss has to trust the dev to make correct technical decisions (including when to refactor).
Establish that trust or replace the boss or replace the dev.
Another good analogy is the maintenance of a tidy building site. The only catch here is that a programmer does not represent a construction worker, and a manager does not represent a foreman. If that were the case, his counter of "do it right first time" would still apply, since a competent and conscientious construction worker is responsible for maintaining good order on their workspace as they go.
Really the code itself represents the labourers, and the development process is the foreman. The mess is generated by various trades going about their business around one another (i.e. by different code features interacting, where each feature does its job well, but the seams between them are disorganised) and is cleaned up by the foreman taking a firm hand and keeping an eye on where disorder is setting in, and acting to get it cleaned up (i.e. the software process demanding refactoring).
What I did recently was explain to my business counterpart that refactoring helps us develop new features faster and decreases the probability of new bugs, because the code gains better order and structure; it is even possible to make some speed improvements, because you can inspect the code more easily than before.
When the business guys get that, if they are smart, they will encourage you to do constant refactoring.
You can explain it with a building metaphor: if you don't refactor, you will end up with a crappy building with a bad core, and you will have problems with the pipes, windows and doors.