Calculating Project Programming Times [closed] - project-management

As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!

Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percentage of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8-hour) chunks, taking into account everything from testing to communication to documentation. At the end I add a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
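For illustration, here is a minimal sketch of that spreadsheet roll-up in Python; the task names and hours are made up, and the 20% buffer matches the multiplier described above:

```python
# Hypothetical task breakdown: each chunk is under 8 hours and covers
# coding, testing, communication, and documentation alike.
tasks = {
    "admin area: user list": 6,
    "admin area: permissions": 4,
    "write deployment docs": 3,
    "client status calls": 2,
}

BUFFER = 0.20  # padding for unexpected items and the free 30-day bug-fix window

base = sum(tasks.values())
total = base * (1 + BUFFER)
print(f"Base estimate: {base}h; with {BUFFER:.0%} buffer: {total:.1f}h")
# Base estimate: 15h; with 20% buffer: 18.0h
```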
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling, which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn-down report and it adapts to your scheduling inaccuracies, so it could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.

There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development process variables as well. And even if you cope with all this, there is a big chance you will miss something out.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one side of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and the user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leaves (sometimes it is better to wait for the guy than to assign the task to another one), planning sessions, coffee breaks.
Here is an example: it takes only 3 minutes for the egg to become fried once you drop it on the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
taking the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in the queue for this frying pan?)
making the fire (do you have a stove? will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it to the plate (have any ready? clean? wash it? buy it? wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the frying pan dirty, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description of several estimation techniques, and it can get you up to speed in a couple of hours of reading.

Good estimation comes with experience, or sometimes not at all.
At my current job, my two co-workers (who apparently had a lot more experience than me) usually underestimated times by a factor of 8 (yes, EIGHT). I, OTOH, have only once in the last 18 months gone over an original estimate.
Why does it happen? Neither of them actually appeared to know what they were doing, code-wise, so they were literally thumb-sucking.
Bottom line:
Never underestimate; it is much safer to overestimate. With the latter you can always 'speed up' development if needed. If you are already on a tight timeline, there is not much you can do.

If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.

This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution to this yet, but the closest thing to solving it I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically you break your milestones into small tasks and guess how long it will take you to complete them. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into FogBugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption, and uses that information to predict a window (with probabilities) in which you will have fulfilled your next few milestones.
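The core idea can be sketched as a Monte Carlo simulation over your historical actual-to-estimate ratios. The numbers below are invented, and FogBugz's real algorithm is surely more sophisticated, but it shows how a probability window falls out of the data:

```python
import random

history = [1.1, 0.9, 1.6, 1.0, 2.2, 1.3]  # past actual/estimate ratios
open_tasks = [4, 8, 2, 6]                  # remaining task estimates, in hours

def one_run():
    # Scale each open task by a randomly chosen historical ratio.
    return sum(t * random.choice(history) for t in open_tasks)

totals = sorted(one_run() for _ in range(10_000))
print(f"50% confidence: finish within {totals[4_999]:.0f}h")
print(f"90% confidence: finish within {totals[8_999]:.0f}h")
```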

Overestimating is better than underestimating. That's because we don't know the "unknowns" and (in most cases) specifications do change during the software development life cycle.
In my experience, we use iterative steps (just like in agile methodologies) to determine our timeline. We break projects down into components and overestimate on those components. We take the sum of these estimates and add extra time for regression testing, deployment, and all the other good work....
I think you have to go back over your past projects and learn from those mistakes to see how you can estimate wisely.


Is it possible to manage developers with high turnover if you can't lower the turnover rate? [closed]

I lead a small group of programmers in a university setting, having just moved into this position last year. While the majority of our team are full time employees, we have a couple of people who are traditionally graduate assistants.
The competition for these assistantships is fairly intense, as they get free graduate school tuition on top of their salary while they have the job. We require that they sign up for at least a year, though we consider ourselves lucky if they stay for two. After that, they get their master's degree and move on to bigger and better things.
As you can imagine, hiring and re-training these positions is time- and resource-intensive. To make matters worse, up to now they have typically been the sole developer working on their respective projects, with me acting in an advisory and supervisory role, so wrangling the projects themselves to fight the entropy as we switch from developer to developer is a task unto itself.
I'm tempted to bring up to the administrators the possibility of hiring a full-time (and long-haul) developer to replace these two positions, but for a school in a budget crisis, paying for two half-time graduate assistants is far cheaper (in terms of salary and benefits) than paying for one full-time developer. Also, since I'm new to this position, I'd like to avoid seeming as though I'm not able to deal with what I signed up for. For the foreseeable future, I don't think the practice of hiring short-term graduate assistants is going to change.
My question: What can I do to create an effective training program considering that the employees may be gone after as little as a year on the job?
How much time should I invest in training them, and how much would simply be a waste of time?
How much time should they take simply getting acclimated to our process and the project?
Are there any specific training practices or techniques that can help with this kind of situation?
Has anyone dealt with a similar situation before?
Do I worry too much, or not enough?
By the way, and for the record, we do the vast majority of our development in Perl. It's hard to find grad students who know Perl, while on the other hand everybody seems to have at least an academic understanding of Java. Hence this question which I asked a while back.
Why don't you ask the students what they find difficult and make cheat sheets, lectures, etc. for the parts of the job that they have trouble with? Maybe you need to create some introductory Perl lectures or purchase some dead trees. How about a Safari subscription at O'Reilly? I'd ask the students how they prefer to learn, though, before embarking on a training project. Everyone has different learning styles.
I'd also spend some time and capital creating a culture of professional software development at work. It'll be tough since academic programmers are often neophytes and used to kludging up solutions (I'm an academic programmer, btw) but the students will thank you in the long run. Maybe you can all go out to lunch once a week to discuss programming and other topics. You might also want to take some time to do code reviews so people can learn from each other.
With high turnover you definitely need to ensure that knowledge transfer occurs. Make sure you are using source code control and that your students understand proper commenting. I'd also make the students create brief documentation for posterity. If they are getting credit, make them turn in a writeup of their progress once a semester. You can put this in a directory in the project's repository for anyone who inherits it. As mentioned in other posts, a group wiki can really help with knowledge transfer. We use Mediawiki in our group and like it a lot.
One last thing I should add is that I find it helps to keep a list of projects for new developers that are relatively easy and can be completed in a month or so. They are a great way for new people to get acclimated to your development environment.
This is a relative question, and should be taken on a case-by-case basis. If the new hire already knows Perl, you don't need to go over that piece of training (yes, you could make Perl a mandatory prerequisite, but that would significantly limit your applicant pool), and their first bit of training should be something like fixing a bug in an existing application or walking them through an application they will maintain. Though, given that the developers are only there for a year, I'd expect the development styles to vary some (if not a lot).
Getting the new person up to speed with your process is very important, as long as your process works. In this high turnover environment, you should put a strong emphasis on documentation in your process. A Wiki is a great thing to have for this documentation, since it's centralized and any of the developers can access it. Having them try to figure out how a project works by themselves (with little to no documentation) is a waste of both their time and your time.
Perhaps I'm reading too much into the question, but if your university teaches Java, why are you using Perl? Wouldn't it make more sense to use the tools that your students already know? That alone would cut the learning curve significantly. [Once you eliminate the legacy code, of course.]
other than that, try:
break the projects up into month-sized bites
overlap the internships by at least 2 months, if not 6, so the new guy can work with and be trained by the old guy(s)
document whatever repeatable processes you have (as was suggested by Mark Nold)
if the grad students are cheaper than full-time pros, quit whinin' ;-) If not, go for the pros.
Have you considered making a "three-ring binder" like McDonald's and many other high-turnover industries have? Have one folder which you can print out and hand to the new hire, showing them the basics of getting up and running with Perl in your environment. This should be a "hello world", plus some basic regex and array manipulation. Lastly, your manual should go on to show examples of the five things you find yourself doing all the time.
The example code may be authenticating users against an external security system, walking through recordsets or using ghostscript to create PDFs. Whatever they are, they should cover the basics of what you meet 80% of the time. More importantly the examples should show users how you expect the code to be written for clarity (eg: naming and approach), and give them some insight into servers and software in use and other practicalities which a generic book won't show them.
You won't get the binder right the first time, but since you have high staff turnover, you'll have plenty of time to test and improve it.
On top of this I would pick a single Perl programming book and give the new user their own copy of your three-ring binder, plus "Programming Perl", to keep on their first day. At a cost of $50 per hire, I'm sure it's a lot cheaper than the alternatives, and you'll have them flipping burgers.... I mean cutting code in no time.
My initial couple of thoughts are that you should:
hire for the position, i.e., it's Perl-centric, so make that a big part of the prerequisites. That way you don't need that piece of training as well.
invest time in the on-boarding process; maybe use a wiki so that you can easily update it to help bring them on board.
Edit: Some extra points:
maybe have a chat and see if Perl can be introduced into the curriculum? If not, then make it known six months before the ads go up that applicants need to know Perl. This way you'll get people who have Perl experience and who have actively demonstrated their motivation.
can you open up some small projects so that they could be done by potential candidates during this six months?
approach the design of your large-scale projects so that they can be done in a piecemeal manner. This is how the ACE components have been done, IIRC.
allow a specific period for documentation and review of the work done by the departing grad student.
allow an overlap period of at least a couple of weeks where the new grad student can work with the departing grad student. They can learn the development environment and they can be guinea pigs for any updates to your wiki.
Still more to come...
HTH
cheers
That is a pickle, but it is not as uncommon in the commercial sector as you would think. I heard a statistic once that the average tenure of a programmer industry-wide is about 18-24 months. Normally I would suggest getting more experienced programmers who require less ramp-up time and only need to be trained on the problem domain and technology updates, not the basics.
I think your best bet is just to ask for about 30-50% more grad students than it will take to actually perform the job to account for the learning and ramp-up time and invest in some additional resources for testing as this environment is a recipe for mistakes since everyone is learning on the job. Also, this is probably difficult given the academic schedule, but try to stagger the start dates as much as possible to maximize the overlap between employees. Pair-programming teams of new-hires/old-hires might also help increase consistency and supplement the training without sacrificing too much productivity.
How much time should I invest in training them, and how much would simply be a waste of time?
Answer: It's not the amount of time or the amount of waste, but perhaps the approach. Would it be possible to video-train: record yourself training one person and provide the video as training for subsequent students/developers. You can add to it over time, and it reduces the time you need to spend going through the same process.
How much time should they take simply getting acclimated to our process and the project?
Answer: This is all dependent on the person... a minimum of a day, a maximum of 2-3 weeks, on average, I would guess.
Are there any specific training practices or techniques that can help with this kind of situation?
Answer: Video training (home made), having the current student/developer create/update a wiki of needed info, points of interest, etc.
Has anyone dealt with a similar situation before?
Answer: It used to be a 12-18 month average turnover. I would imagine that has changed now (gotten longer), but every company has turnover, just perhaps not forced like yours, due to the resources being students.
Do I worry too much, or not enough?
Answer: Knowledge lost through transition is a key risk area...
Is the application something you could consider open-sourcing?

Do you inflate your estimated project completion dates? [closed]

If so why? How much?
I tend to inflate mine a little because I can be overly optimistic.
Hofstadter's Law: Any computing project will take twice as long as you think it will — even when you take into account Hofstadter's Law.
If you inflate your estimate based on past experiences to try and compensate for your inherent optimism, then you aren't inflating. You are trying to provide an accurate estimate. If however you inflate so that you will always have fluff time, that's not so good.
Oh yes, I've learnt to always multiply my initial estimate by two. That's why FogBugz's Evidence-Based Scheduling tool is so useful.
Any organization that asks its programmers to estimate time for coarse-grained features is fundamentally broken.
Steps to unbreak:
Hire technical program managers. Developers can double as these folks if needed.
Put any feature request, change request, or bug into a database immediately when it comes in. (My org uses Trac, which doesn't completely suck.)
Have your PMs break those requests into steps that each take a week or less.
At a weekly meeting, your PMs decide which tickets they want done that week (possibly with input from marketing, etc.). They assign those tickets to developers.
Developers finish as many of their assigned tickets as possible. And/or, they argue with the PMs about tasks they think are longer than a week in duration. Tickets are adjusted, split, reassigned, etc., as necessary.
Code gets written and checked in every week. QA always has something to do. The highest priority changes get done first. Marketing knows exactly what's coming down the pipe, and when. And ultimately:
Your company falls on the right side of the 20% success rate for software projects.
It's not rocket science. The key is step 3. If marketing wants something that seems complicated, your PMs (with developer input) figure out what the first step is that will take less than a week. If the PMs are not technical, all is lost.
Drawbacks to this approach:
When marketing asks, "how long will it take to get [X]?", they don't get an estimate. But we all know, and so do they, that the estimates they got before were pure fiction. At least now they can see proof, every week, that [X] is being worked on.
We, as developers, have fewer options for what we work on each week. This is indubitably true. Two points, though: first, good teams involve the developers in the decisions about what tickets will be assigned. Second, IMO, this actually makes my life better.
Nothing is as disheartening as realizing at the 1-month mark that the 2-month estimate I gave is hopelessly inadequate, but can't be changed, because it's already in the official marketing literature. Either I piss off the higher-ups by changing my estimate, risking a bad review and/or missing my bonus, or I do a lot of unpaid overtime. I've realized that a lot of overtime is not the mark of a bad developer, or the mark of a "passionate" one - it's the product of a toxic culture.
And yeah, a lot of this stuff is covered under (variously) XP, "agile," SCRUM, etc., but it's not really that complicated. You don't need a book or a consultant to do it. You just need the corporate will.
The Scotty Rule:
make your best guess
round up to the nearest whole number
quadruple that (thanks Adam!)
increase to the next higher unit of measure
Example:
you think it will take 3.5 hours
round that to 4 hours
quadruple that to 16 hours
shift it up to 16 days
Ta-daa! You're a miracle worker when you get it done in less than 8 days.
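As a throwaway sketch, the rule is mechanical enough to code; the unit ladder below is a simplified assumption:

```python
import math

UNITS = ["minutes", "hours", "days", "weeks"]

def scotty(value, unit):
    """Best guess -> round up -> quadruple -> bump to the next unit."""
    rounded = math.ceil(value)
    quadrupled = rounded * 4
    bumped = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return quadrupled, bumped

print(scotty(3.5, "hours"))  # (16, 'days'), matching the example above
```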
Typically yes, but I have two strategies:
Always provide estimates as a range (i.e. 1d-2d) rather than a single number. The difference between the numbers tells the project manager something about your confidence, and allows them to plan better.
Use something like FogBugz's Evidence-Based Scheduling, or a personal spreadsheet, to compare your historical estimates to the time you actually took. That'll give you a better idea than always doubling, not least because doubling might not be enough!
I'll be able to answer this in 3-6 weeks.
It's not called "inflating" — it's called "making them remotely realistic."
Take whatever estimate you think appropriate. Then double it.
Don't forget that you (an engineer) actually estimate in ideal hours (a scrum term),
while management works in real hours.
The difference is that ideal hours are time without interruption (with a 30-minute warm-up after each interruption). Ideal hours don't include time in meetings, time for lunch, normal chit-chat, etc.
Take all these into consideration and your ideal hours will tend towards real hours.
Example:
Estimated time 40 hours (ideal)
Management will assume that is 1 week real time.
If you convert those 40 hours to real time:
Assume one meeting per day (duration 1 hour)
one break for lunch per day (1 hour)
plus 20% overhead for chit-chat, bathroom breaks, getting coffee, etc.
An 8-hour day is now 5 hours of work time (8 - meeting - lunch - warm-up).
Times 80% efficiency = 4 hours of ideal time per day.
Thus your 40 ideal hours will take 80 real hours to finish.
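Here is that arithmetic as a small Python sketch; the overhead figures are the assumptions from this example, not universal constants:

```python
DAY_HOURS = 8
MEETING, LUNCH, WARM_UP = 1, 1, 1  # hours lost per day, per the example
EFFICIENCY = 0.80                  # minus chit-chat, coffee, bathroom breaks

work_hours = DAY_HOURS - MEETING - LUNCH - WARM_UP  # 5 hours at the desk
ideal_per_day = work_hours * EFFICIENCY             # 4 ideal hours per day

ideal_estimate = 40
real_days = ideal_estimate / ideal_per_day          # 10 working days
print(f"{ideal_estimate} ideal hours -> {real_days * DAY_HOURS:.0f} real hours")
# 40 ideal hours -> 80 real hours
```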
Kirk : Mr. Scott, have you always multiplied your repair estimates by a factor of four?
Scotty : Certainly, sir. How else can I keep my reputation as a miracle worker?
A good rule of thumb is to estimate how long it will take and add half again as much time to cover the following problems:
The requirements will change
You will get pulled onto another project for a quick fix
The new guy at the next desk will need help with something
The time needed to refactor parts of the project because you found a better way to do things
<sneaky> Instead of inflating your project's estimate, inflate each task individually. It's harder for your superiors to challenge your estimates this way, because who's going to argue with you over minutes.
</sneaky>
But seriously, through using EBS I found that people are usually much better at estimating small tasks than large ones. If you estimate your project at 4 months, it could very well be 7 months before it's done; or it might not. If your estimate of a task is 35 minutes, on the other hand, it's usually about right.
FogBugz's EBS system shows you a graph of your estimation history, and from my experience (looking at other people's graphs as well) people are indeed much better at estimating short tasks. So my suggestion is to switch from doing voodoo multiplication of your projects as totals, and start breaking them down upfront into lots of very small tasks that you're much better at estimating.
Then multiply the whole thing by 3.14.
A lot depends on how detailed you want to get, but additional 'buffer' time should be based on a risk assessment at the task level, where you put in various buffer amounts such as:
High Risk: 50% to 100%
Medium Risk: 25% to 50%
Low Risk: 10% to 25% (all dependent on prior project experience).
Risk areas include:
est. of requirement coverage (#1 risk area is missing components at the design and requirement levels)
knowledge of technology being used
knowledge/confidence in your resources
external factors such as other projects impacting yours, resource changes, etc.
So, for a given task (or group of tasks) covering component A, the initial estimate is 5 days and it's considered high risk based on requirements coverage: you could add between 50% and 100%.
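A rough sketch of that task-level padding; the bands mirror the list above, and the tasks and numbers are hypothetical:

```python
RISK_BUFFER = {
    "high":   (0.50, 1.00),
    "medium": (0.25, 0.50),
    "low":    (0.10, 0.25),
}

tasks = [("component A", 5.0, "high"), ("component B", 3.0, "low")]

for name, days, risk in tasks:
    low, high = RISK_BUFFER[risk]
    print(f"{name}: {days * (1 + low):.1f} to {days * (1 + high):.1f} days")
# component A: 7.5 to 10.0 days, matching the 5-day high-risk example
```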
Six weeks.
Industry standard: every request will take six weeks. Some will be longer, some will be shorter, everything averages out in the end.
Also, if you wait long enough, it no longer is an issue. I can't tell you how many times I've gone through that fire drill only to have the project/feature cut.
I wouldn't say I inflate them, so much as I try to set more realistic expectations based on past experience.
You can calculate project durations in two ways - one is to work out all the tasks involved and figure out how long each will take, factor in delays, meetings, problems etc. This figure always looks woefully short, which is why people always say things like 'double it'. After some experience in delivering projects you'll be able to tell very quickly, just by looking briefly at a spec how long it will take, and, invariably, it will be double the figure arrived at by the first method...
It's a better idea to add specific buffer time for things like debugging and testing than to just inflate the total time. Also, by taking the time up front to really plan out the pieces of the work, you'll make the estimation itself much easier (and probably the coding, too).
If anything, make a point of recording all of your estimates and comparing them to actual completion time, to get a sense of how much you tend to underestimate and under what conditions. This way you can more accurately "inflate".
I wouldn't say I inflate them but I do like to use a template for all possible tasks that could be involved in the project.
You find that not all tasks in your list are applicable to all projects, but having a list means that I don't let any tasks slip through the cracks with me forgetting to allow some time for them.
As you find new tasks are necessary, add them to your list.
This way you'll have a realistic estimate.
I tend to be optimistic about what's achievable, so I tend to estimate on the low side. But I know that about myself, so I add on an extra 15-20%.
I also keep track of my actuals versus my estimates, and make sure the time involved does not include other interruptions; see the accepted answer to my SO question on how to get back into the flow.
HTH
cheers
I wouldn't call additional estimated time on a project "inflated" unless you actually do complete your projects well before your original estimate. If you make a habit of always completing the project well before your original estimated time, then project leaders will get wise and expect it earlier.
What are your estimates based on?
If they're based on nothing but a vague intuition of how much code it would require and how long it would take to write that code, then you had better pad them a LOT to account for subtasks you didn't think of, communication and synchronization overhead, and unexpected problems. Of course, that kind of estimate is nearly worthless anyway.
OTOH, if your estimates are based on concrete knowledge of how long it took last time to do a task of that scope with the given technology and number of developers, then inflation should not be necessary, since the inflationary factors above should already be included in the past experience. Of course there will probably be new factors whose influence on the current project you can't foresee; such risks justify a certain amount of additional padding.
This is part of the reason why agile teams estimate tasks in story points (an arbitrary and relative measurement unit), then track the team's velocity (story points completed per day) as the project progresses. With this data you can then theoretically compute your completion date with accuracy.
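A minimal sketch of that projection, with invented numbers:

```python
remaining_points = 120
completed_per_iteration = [8, 5, 13, 8, 5]  # story points per past iteration
iteration_days = 10                         # working days per iteration

velocity = sum(completed_per_iteration) / (len(completed_per_iteration) * iteration_days)
print(f"Velocity: {velocity:.2f} points/day")
print(f"Projected completion: {remaining_points / velocity:.0f} working days")
```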
I take my worst case scenario, double it, and it's still not enough.
Under-promise, over-deliver.
Oh yes, the general rule from long hard experience is give the project your best estimate for time, double it, and that's about how long it will actually take!
We have to, because our idiot manager always reduces them without any justification whatsoever. Of course, as soon as he realizes we do this, we're stuck in an arms race...
I fully expect to be the first person to submit a two-year estimate to change the wording of a dialog.
sigh.
As a lot said, it's a delicate balance between experience and risk.
Always start by breaking the project down into manageable pieces; in fact, into pieces you can easily imagine yourself starting and finishing in the same day.
When you don't know how to do something (like when it's the first time) the risk goes up
When your risk goes up, that's where you start with your best guess, then double it to cover some of the unexpected. But remember, you are doing that on a small piece of the project, not the whole project itself.
The risk goes up also when there's a factor you don't control, like the quality of an input or that library that seems it can do everything you want but that you never tested
Of course, when you gain experience on a specific task (like connecting your models to the database), the risk goes down
Sum everything up to get your subtotal...
Then, on the whole project, always add about another 20-30% (that number will change depending on your company) for all the answers/documents/okays you will be waiting for, the meetings we always forget, the changes of mind during the project, and so on... that's what we call the human/political factor.
And again add another 30-40% to account for tests and corrections that go beyond the tests you usually run yourself... such as when you first show it to your boss or to the customer.
Of course, if you look at all this, it ends up that you can simplify it with the magical "double it" formula, but the difference is that you'll know what you can squeeze into a tight deadline, what you can commit to, which tasks are dangerous, how to build your schedule around the important milestones, and so on.
I'm pretty sure that if you note the time spent on each pure "coding" task and compare it to your estimates in relation to its riskiness, you won't be so far off. The thing is, it's not easy to think of all the small pieces ahead of time and to be realistic (versus optimistic) about what you can do without hitting any hurdles.
I say when I can get it done. I make sure that change requests are followed up with a new estimate, rather than a "Yes, I can do that" that never mentions it will take more time. The person requesting the change will not assume it will take longer.
Of course, you'd have to be an idiot not to add 25-50%.
The problem is when the idiot next to you keeps coming up with estimates which are 25-50% lower than yours and the PM thinks you are stupid/slow/swinging it.
(Has anyone else noticed project managers never seem to compare estimates with actuals?)
I always double my estimates, for the following reasons:
1) Buffer for Murphy's Law. Something's always gonna go wrong somewhere that you can't account for.
2) Underestimation. Programmers always think things are easy to do. "Oh yeah, it'll take just a few days."
3) Bargaining space. Upper Management always thinks that schedules can be shortened. "Just make the developers work harder!" This allows you to give them what they want. Of course, overuse of this (more than once) will train them to assume you're always overestimating.
Note: It's always best to put buffer at the end of the project schedule, and not for each task. And never tell developers that the buffer exists, otherwise Parkinson's Law (Work expands so as to fill the time available for its completion) will take effect instead. Sometimes I do tell Upper Management that the buffer exists, but obviously I don't give them reason #3 as justification. This, of course depends on how much your boss trusts you to be truthful.

Why do many software projects fail today? [closed]

For as long as there have been software projects, the world has wondered why they fail so often.
I would like to know if there is a list or something equivalent that shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years.
You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existent", which also includes "No (real) customer/user involved".
EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time.
In my opinion, +/- 10% is a good range for an offer / tender.
EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of project management (whatever "fail" means). But IMHO it comes out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the supposedly lazy, incompetent developer teams?
When I read the posts, I can also hear that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time/budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes...).
I'm asking myself: How can we improve it? Do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high-quality requirements" and "realistic / iteration-based time schedules"?
EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Regardless of whether a developer has a degree or tons of years of experience, you can be part of this movement.
Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work. And he has an internal value system which helps him to improve and become better.
Check it out on: Clean Code Developer
EDIT: Our company is doing at the moment a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get a feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.
Bad management.
Projects are not successes or failures based on some underlying feature of the project, but on whether they fulfill the needs of the users. (They can fail altogether, in which case there was a gross misstatement of what was possible.) It is mostly in the process of evaluating the feasibility and cost-benefit ratio of the project, and establishing goals, that software projects tend to fail or succeed.
There's a disconnect between people who deal with facts and things (like programmers) and people who deal with other people (like sales types and managers). To a programmer, the facts are the facts, and have to be dealt with. To a sales person, the facts are what other people think, and are changeable.
There's also differences between tangible and intangible facts. Nobody thinks that workers could build a large bridge in a month if they were really motivated; they can see all the steel and concrete and other stuff that has to be moved and fixed into position. Software is much less tangible, and lacks the physical restrictions: while it is not even theoretically possible to build the bridge within a month, it is conceivable that a team could create a large project within a month, as "all" they have to do is get everything right the first time. It is physically possible to type thousands of lines of code a day; it's just that the chance that they're usable as is is so close to zero it doesn't matter. The actual productivity of a top developer is actually pretty unimpressive in word count, compared to (say) the productivity of a journalist.
Therefore, those who are used to flexible facts don't have the imposing physical limits to remind them that things can be pushed only so far, no appreciation for what programming actually requires, and no good feel for how much productivity is realistically possible. Moreover, they know how to get their way in negotiations, much more than the average developer, so in negotiations about what's possible they tend to assume more than they can, in the real world, get.
In the meantime, software development is inherently fuzzy, because producing the physical product is trivial. I can produce a copy of software quickly and cheaply, once it's been developed. Software development is design work, pure and simple. Anything corresponding to manufacturing is ruthlessly eliminated with such things as compilers and wizards and code generation. The developer, faced with the manager who wants the impossible, finds it hard to say the impossible is actually impossible, because there's no way to say it's actually impossible. Given facts that are unknown enough to feel flexible, the person with strong negotiating skills and determination will typically get the answer he or she wants.
Given this disconnect, one might ask whose responsibility it is to bridge it. The answer is, in my opinion, clear. The responsibility for understanding how different people think belongs to the people who specialize in dealing with other people. The responsibility for coordinating different types of people belongs to the people whose job it is to coordinate these things. Therefore, managers.
Managers who do understand software development and developers, and can deal well with other managers, will do well, and their projects will generally succeed. There are still far too many of the other type in the world.
Not a direct answer, but I found the Virtual Case File to be a fascinating case study on how a big government-backed well-funded project can still tank.
You can also add your top reason why a software project fails.
Another IEEE Spectrum Online article "Why Software Fails" examines this very question. It summarizes the major points as follows:
Unrealistic or unarticulated project goals
Inaccurate estimates of needed resources
Badly defined system requirements
Poor reporting of the project's status
Unmanaged risks
Poor communication among customers, developers, and users
Use of immature technology
Inability to handle the project's complexity
Sloppy development practices
Poor project management
Stakeholder politics
Commercial pressures
Poor planning.
Hofstadter's Law
It always takes longer than you expect, even when you take Hofstadter's Law into account.
Mismanagement.
SW projects get started by throwing developers at a perceived problem. Business requirements crystallize as the project progresses. New functionality gets added while deadlines stay put. More developers are thrown in. Original project members quit or get fired. By this point too much time, money and resources have been invested in the project, so it cannot be canceled. As the deadline passes, the project is declared finished and successful despite the obvious lack of a finished product.
Come to think of it - I've yet to see a SW project fail...
Honestly, I think it's because most programmers are not very good at what they do (and I don't mean just cranking out code). People on stackoverflow are probably the exception. I don't know about the rest of you, but as a consultant/contract programmer I have worked in or around many places, and the ratio of mediocre or poor programmers to good ones is about 10 to 1.
One of my strengths has always been estimating accurately and then delivering on time and on or under budget - I always aim for coming in 10% under cost and on time. Then I like to tell my client that because I got things done for less money than expected, they can pick which of the "extras" they'd like to add in.
Even a perfectly functioning product that is late and/or over budget will be considered a failure by many business managers. Programmers often focus on just the technical aspects of what they do, with little regard for the cost or deadline. You really need to do all three well for it to be deemed successful project. There are many other programmers that could code circles around me without a doubt, but for the person paying for the project, that is rarely enough.
It is because no-one seems to read anymore.
Every single reason why projects fails has been analyzed time and time again.
You only have to read three books to know why 80% of projects fail:
The Deadline: A Novel About Project Management (Tom Demarco, published 1997)
It's a great introduction and it's pretty entertaining.
Peopleware: Productive Projects and Teams (Tom DeMarco and Timothy Lister, published 1987)
The Mythical Man-Month: Essays on Software Engineering (Fred Brooks, published 1975)
We as a profession simply seem to forget everything every 3-5 years (see "centralised computing is inefficient; let the clients handle it" vs cloud computing).
(From a programmer's point of view - I'm not a project manager, but I've often been involved in the process.)
A number of people have mentioned that bad programmers are endemic. But I think this is true in another sense as well - we're all bad programmers in that we find it difficult to anticipate complexity, an unavoidable issue that 50 years of magic bullet estimation and planning schemes have failed to solve.
Anticipating the side effects of large projects gets exponentially more difficult as projects grow. This is a dull truism, for sure, but for me it means that on any project I've worked on where I've been involved in the estimating process I've run into some case where there's an unanticipated consequence of a design decision that causes everything to come to a grinding halt, or at least a few days of bugfixing - just something that nobody foresaw, not any sort of malpractice or stupidity. It's just the nature of a complex enough system.
Aside from the built-in uncertainty, there's also a tendency to underestimate things whose outline is known, because the fact that they have less uncertainty makes them seem simpler to implement.
So the uncertain stuff gets magnified, the clear stuff gets minimized, and what really kills you is the thing that you didn't think would be uncertain.
The number one reason: a failure of project management.
A PM's raison d'être is to make a project succeed; ergo, a project failure is their failure. Certainly there are factors beyond their control, but it's still within the PM's job description to manage that risk, and the only get-out clauses should be someone higher up the food chain taking decision control (which is a terrible thing to do to a PM) or acts of God.
In my experience, failures mostly occur when PM work has been fast and loose or non-existent, including when decisions start to flow from salespeople and when the client starts decreeing change control. A good PM is priceless.
Failure is a judgement -- more of an accusation, really.
"The project was more than 10% over budget and time."
Which budget? Which schedule?
6 months ago, I wrote a plan saying it would take 6 months.
3 months ago, the users asked for more stuff. I gave them a plan that said it would take 9 more months.
Last month I was told that the project was 6 months over budget and therefore a "failure".
But wait. It delivered what the users wanted. It was over the "original" estimate. It was under the revised estimate. The users want more. IT wants less.
I'll approach it from a different aspect than most the rest here.
I've watched a project slowly fail over a period of time. Sure, it's gotten better in that time too -- but it still isn't profitable. In this market, profitability (being in the black) means success.
Why is it failing? I think it's simple: you get what you pay for.
Software is like a bank account, not primordial ooze. If you don't put resources into it (time, money, focus, effort) then you won't get anything out of it except failure and cost. So you must invest in your project, and sometimes the earliest work sets the stage for years to come. You can't throw mud at your computer and expect a new mouse two years and $10 million later, so likewise there must be effort expended.
One of the biggest problems today is "budget developers" in a third-world country. I don't begrudge them their part of the market, but for a well-funded Silicon Valley startup to seek them out and get a budget foundation (framework or even prototype) is to make a poor investment in the future. This very same budget framework is what is causing my friends so much hassle today. It works now; it worked when it was written, but it wasn't written well and few even take the time to maintain it. Were the company to stop and rewrite the software the way it should have been written in the first place, they wouldn't have all this trouble. Can they afford the time? Nope. They have to make it profitable before they can even think of it.
As the saying goes, "I can make it: cheap, fast or good. Now, pick any two of those." Everyone wants all three, myself included. But if you don't invest the time, planning, and work required to make your project a success from the start ... then don't expect anything you can be proud of later. It'll feel like a forged Mona Lisa where you, and every other engineer like you, can see a defect here and there that shouldn't have been there from the start.
So:
Don't undertake what you cannot afford in: time, money, effort, focus, etc.
Don't skip planning!
Don't be afraid to rewrite early when it counts the most. (Later it'll be worse than a trip to the dentist, believe me.)
Don't underestimate the power of bureaucracy to prevent you from doing it right.
And don't be cheap where you should spend the most of your time. It will cost you later, guaranteed. And if not you, then someone else will take the bullet for you.
One common mistake is that salespeople and technical people do not communicate sufficiently. So the salespeople sell things that are not technically feasible within budget. (And then they run off with their bonus :) )
Being over budget and time is not a good definition of failure (and actually being on budget and time doesn't always mean success). Take the following examples provided by Hugh Woodward, PMP, in Expert Project Management - Project Success: Looking Beyond Traditional Project Metrics:
Sydney Opera House: With its graceful sails dominating Sydney Harbor, the Sydney Opera House is arguably one of the most recognized buildings in the world. Yet, from a project management perspective, it was a spectacular failure. When construction started in 1959, it was estimated to cost $7 million, and take four years to build. It was finally completed in 1973 for over $100 million.
[...]
Project Orion: This massive effort to develop Kodak's new Advantix photographic system was reputedly very well managed from a project management perspective. PMI recognized it as the 1997 International Project of the Year, and Business Week selected the system as one of the best new products of 1996 (Adams, 1998). But Kodak's stock price has fallen 67% since the introduction of the Advantix system, in part because it failed to anticipate the accelerating switch to digital photography.
Corporate Intranet: Finch describes a project that involved the implementation of a corporate intranet to globalize and improve communications. From a traditional project perspective, it failed to meet its success criteria, but not significantly. It was one month late and believed to have been accomplished with a small budget overrun. But both the project manager and senior management viewed the project as successful. The hardware and software had been installed successfully with a minimum of disruption, thereby providing all staff members with access to the corporate intranet. Following implementation, however, employees made only limited use of the intranet facilities. The main objective of the project was therefore not achieved. In this case, both the project manager and senior management focused on an objective that was too narrow.
[...]
Manufacturing Plant Optimization: A paper manufacturing company with five plants across North America decided to increase its manufacturing capacity by embarking on a de-bottlenecking program. A project team was formed to install the necessary equipment, and charged with completing the work in 18 months at a cost of $26 million. But almost immediately, the project team was asked to defer major expenditures until an unrelated cash flow problem was resolved. Rather than stop work completely, the team adopted a strategy of prototyping the technologies on which the de-bottlenecking program was based, and actually developed some cheaper and more effective solutions. Even when the project was authorized to proceed, the team continued this same approach. The project eventually spanned five years, but the resulting capacity increase was three times the initial commitment. Not surprisingly, the company immediately appropriated another $40 million to continue the program.
[...]
So in these examples, which ones were truly successful? Examples like the 2002 Winter Olympics and the Batu Hijau Copper Concentrator would suggest that these are truly successful because they not only met the traditional project managers' definition of success, but also met the project sponsors' perception of success.
As we start to look at the examples like Project Orion, the Corporate Intranet and the Laptop Upgrade, we notice that the traditional metrics start to fail. These projects are considered successes by the project managers' definition of success, but failed at meeting the sponsors' success criteria. The Project Orion example is quite astounding, as this project was recognized by PMI (the Project Management Institute) in 1997 as the International Project of the Year. Yet it did not increase Kodak's revenue, because they did not foresee the adoption of digital cameras.
Most interesting are the examples of the Manufacturing Plant Optimization and the Sydney Opera House. They both failed to meet the traditional project managers' success metrics but were in fact considered successes. This is particularly shocking when you see that the Sydney Opera House had a "cost overrun of 1300%" and a "schedule overrun of 250%".
Once we realize that projects can fail to meet the traditional metrics of success, but still be successful to the stakeholders, this creates a quandary for the project manager. How does one really define success? Is it possible that a "Challenged" project could be canceled that would have met the sponsors' needs? Is it also possible to identify a project that should be canceled that is currently on time, on budget and meeting the defined needs?
What do you think of that conclusion? How do you really define success?
My answer is rather unusual compared to the rest of these, but quite common around here: killer bugs. I had a project go an extra two weeks (50% extra time) because of one switch in the source that I didn't know about until I dug through the source code (there was nothing documented in the help or on the web).
People and companies do not proudly shout about their failures, so many cases won't ever be heard of.
Poor use of practices and software development methods. In my experience, one of the big reasons projects fail is that the development team uses the wrong method to approach the software development process. Choosing a methodology without a good understanding of how it works and what it takes can bring time-consuming issues to a project, like poor planning.
Another common problem is the use of technologies without a prior evaluation, to understand how they can be applied and whether they bring any value to the project.
There have been some good studies done on this. I recommend this link from the Construx website (Steve McConnell's company).
The Construx link above is really good!
Many projects are managed on a rosy picture of reality. Managers tend to power-talk developers into optimistic estimates. Say a 20-week project gets "talked down" to 10 weeks. The requirements phase will now be 1 week instead of 2. After 1 week, the requirements aren't finished, but the project moves on. Now you're working on shaky ground -- and on a stretched schedule!
This can be funny. Once there was this old guy in a room opposite mine. His job title was system administrator. But the system he was supposed to administer wasn't there. It had never been finished, although management thought it had been. The guy played games for about a year before he got bored and moved on.
The funniest part? Management put up a new job opening after he left!
IT Project Failures is a blog about project failures that may have a few answers here if one wants to read about it.
Myself, I think a large part of this lands on the question of being able to state exactly what is expected in x months at $y when really the answer is much more open-ended. For example, if a company is replacing an ERP or CRM system, there is a good chance that it isn't going to get all the requirements exactly right, and thus there will be some changes, bug fixes and extras that come with taking on such a large project. For an analogy, consider how many students entering high school or university could state their precise schedule for all four years, before taking any classes, and actually stick to it in the end. My guess is that very few do, as some people change majors, courses they wanted to take aren't offered, or other things happen that change the expected result. But how is that captured in a project plan? We started here, and while we thought we were going there, we ended up way over in spot number three.
The last statistic I heard was that 90% of projects come in either over time or over budget. So if you consider that failing, quite a bit.
The reason projects fail mainly lies in process. We as software engineers don't do a good job of gathering requirements and managing the customer. Because building software is a "mysterious" job to people outside of IT, it is difficult for them to understand why last-minute changes are hard. It's not like building a house, where you can clearly show them why it isn't possible to quickly add another door to the back of a brick wall.
It's not only software projects that go over budget or take longer than scheduled to complete. Just open the newspaper and look at public projects like bridges.
But a software project is far easier to cancel outright. If a bridge or building is half finished, there is no way back: half the infrastructure is in place, it is very visible, and it takes money to remove it.
For a software project you can press Shift-Delete and nobody notices.
It's just a very hard job to do an accurate cost analysis.
Taking the FBI's Virtual Case File system as an example, it comes down to poor program management. The program had pre-9/11 requirements and post-9/11 expectations. Had government management done their job, there would have been contract mods.
http://government.zdnet.com/?p=2518
"Because of an open-ended contract with few safeguards, SAIC reaped more than $100 million as the project became bigger and more complicated, even though its software never worked properly. The company continued to meet the bureau’s requests, accepting payments despite clear signs that the FBI’s approach to the project was badly flawed, according to people who were involved in the project or later reviewed it for the government."
Then again, $105,000,000 for 700,000 lines of code comes to $150 per line of code. Not too bad.
To truly gauge whether a project is successful, the original estimate/budget is probably a poor yardstick. Most engineers tend to underestimate how much time things will take because of political pressure, and because they don't know how to estimate anyway. Many managers are poor planners because they want unrealistic deadlines to please their bosses, and they often don't understand what's involved; a plan is also something they can look at, understand, and use as a stick in meetings, as opposed to actually helping solve problems. Practically, it helps businesses make rough projections of expense for budgeting purposes; at least it gives them something to go by.
A better measurement of project success would be - are the users happy? Does it help the business make money? How fast will the money gained help recover the cost of the system? These are harder to gauge than a simple plan, though.
In the end, I've found you're better off delivering on deadline, even if it means working some overtime. It sucks, but that is the reality of it.
As stated previously, the vast majority of people involved in software development do not actually understand how to:
ask the right questions to learn about the problem
appreciate the user's goals and judge expectations
understand the technology available and the related ecosystem
Most of the team, directly or indirectly involved, are poorly skilled, and do not recognize when they are wrong so that they can take action.
Even with perfect requirements and related definitions too many developers don't know what they are doing.
Just look at the types of questions asked here. Would you go to a doctor who asked the equivalent kind of question? The scary thing is that they ask and don't know how to learn or reason.
Different agendas
Management do not really understand what a developer does, how they produce code, or the difficulties encountered. All management cares about is the final product delivered on time. But developers care about the technical aspects: well-produced code in a solution they are proud of.
Deadlines
I've often heard developers say they wish they could produce better code, and that deadlines often push them into producing something that just works rather than good code. When those deadlines are unrealistic, the problems are exacerbated.
I think this thread managed to gather the biggest group of talented, unhappy software developers, engineers, project managers, etc. that it is possible to gather in one place.
I share the point of view of most of the posts here, and I think they come out of a lot of suffering from watching co-workers not do a good job, when the job (programming) and success at it are among the most important parts of our lives.
http://www.clean-code-developer.de/
They have an incredible cause! Their philosophy, if carried forward, could create a new layer of heroes, since the mainstream of developers and IT professionals is so rotten these days.
I'm working on something similar here in Brazil, because I love our profession of bringing software to life, both as a PM and as a software developer (.NET), and I can't stand people who treat programming as nothing but a way to make big money with almost no effort.
Sure, I don't consider pulling all-nighters in front of the computer a mark of genius, but the few who matter have done it more than once.

Giving estimates for large scale projects in an Agile Environment [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 5 years ago.
Improve this question
My firm just got its first large-scale development project inquiry and I would like to use an Agile process. The client has a vision for the application but openly admits to having very few requirements and recognizes that we will have to charge by the hour. Because of this, I've all but sold him on an Agile approach.
The problem is that he wants a figure to budget around. I've read a number of articles that pretty much advocate against giving up an estimate because the client will budget for that number and even as requirements change, the number in their head and in the books doesn't.
I've read there are a number of ways to factor in pricing in the contract, but almost all of them (save one) incorporate an up-front number. This just seems to violate the entire set of principles of Agile development.
So my question is, if you're an Agile shop, how do you manage to circumvent this Catch-22 of Agile development?
Here's the fundamental question.
When will the client think they're done?
If they think they'll be done by June, then you put an Agile team in place. That's 4-6 people for 6 months. That's the budget. Essentially, you do the multiplication for them. team * rate * 6 months.
If they think they'll be mostly done by June, but there will be more work after that, then you're possibly looking at 9 months of work. Again, you're just doing a multiply that they could do for themselves. team * rate * 9 months.
If they think that you'll be their development team for the foreseeable future, give them a price that will get the project through to the end of the year. team * rate * 12 months.
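To make the multiplication concrete, here's a trivial sketch; the team size, hourly rate, and hours per month are all hypothetical placeholders:

    # Budget envelope: team size * blended hourly rate * duration.
    # (All figures below are hypothetical.)
    def budget(team_size, hourly_rate, months, hours_per_month=160):
        return team_size * hourly_rate * months * hours_per_month

    print(f"Done by June:     ${budget(5, 100, 6):,}")   # $480,000
    print(f"Mostly by June:   ${budget(5, 100, 9):,}")   # $720,000
    print(f"Through the year: ${budget(5, 100, 12):,}")  # $960,000

The point is that the client could do this arithmetic themselves; you're just making the cost of each time horizon explicit.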
Since each release is an opportunity to reprioritize, you should be pricing each release as a separate piece of work, based on the things you will get done in that release. Since it's their priority scheme, they control what you build and they control the budget, phase by phase.
Often your client really wants to know how much a particular feature set will cost. Instead of asking that, they ask about the overall budget (which is silly). Spend a lot of time on the first release showing them what they get and how much that first release will cost.
Eventually, they'll see the fundamental truth.
They're buying the features from most important to least important. If they prioritize correctly, they can stop spending money at any time and have something useful.
Done is a relative term. Some projects are "done" because there's no more money. Others are done because there's no more time. Rarely (at least in software development) is a project ever done because we ran out of things to do.
For this situation, we've provided an estimate for the first chunk of work, and then let the client purchase more sprints as required to complete the work to the desired level.
As you get more of the system developed, and incorporate feedback from the client into the sprint deliverables, you will both get a better feeling for the amount of work involved, and hence the costs involved.
Edit: Cool. Way excellent! Given that you've sold them on an Agile approach (BTW, good one!), chances are you will be able to sell them on approaching it from an agile point of view in terms of features to be implemented. Maybe have a listen to this "Intro to Scrum" podcast. You might be able to sell them on the fact that they probably won't need all the bells and whistles that they think they need right now.
HTH
cheers,
Rob
Look, there's a core fact here. You will be asked to estimate cost, contract for a particular delivery date, and commit to a full set of delivered features.
You can't do all three.
Not "you shouldn't" or "it would be wise not to"; you (for all practical purposes) can't. By which I mean that the probability of successfully doing all three is extremely small.
The best answer is to commit to a cost and schedule, and to an iterative process with quick iterations and regular feedback, and then write the agreement so that what is done under the fixed cost and schedule is what will be delivered. That is, you keep delivering new features (and modifications) until the time and money run out.
The truth is, even if you do sign up to all three, the best you'll ever be able to do is two out of three anyway. Might as well be open about it.
Here's how a crusty old government contractor I know recently put it: "As the prostitutes say, first you gotta get them upstairs."
That doesn't answer your question, but it's worth remembering. If they won't come upstairs without a number up front (and they won't), you have to give them a number.
Anybody sponsoring a software project needs to have an idea of what kind of return they're going to get on their investment, so that they can evaluate whether or not the investment is worth making. A project may be worth doing if it costs $1m and takes 12 months. It may not be worth doing if it costs $2m and takes 24 months.
The real question is: can you do this project within a time frame and budget that makes it possible for the client to realize an appropriate return on their investment? If your answer to that question is anything but "yes," then they shouldn't hire you to do the project. Otherwise, they're just spending money and rolling the dice.
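As a toy illustration of that go/no-go arithmetic (the monthly return figure is invented):

    # Toy go/no-go check: months from kickoff until the project pays
    # for itself. (The monthly return figure is invented.)
    monthly_return = 150_000   # value the system generates per month once live

    for cost, build_months in [(1_000_000, 12), (2_000_000, 24)]:
        break_even = build_months + cost / monthly_return
        print(f"${cost:,} over {build_months} months -> "
              f"break-even ~{break_even:.0f} months from kickoff")

At the same monthly return, the $1m/12-month version breaks even in about 19 months, while the $2m/24-month version takes over three years; that difference is what decides whether the investment is worth making.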
If your current answer to that question is "we don't know yet," then they shouldn't hire you to do the project. But that doesn't mean that they shouldn't hire you to find out whether or not the project is worth doing. A good consulting-firm buzzword for this is "preliminary scoping exercise."
Agile development is about managing a curve in three dimensions: money spent, calendar time, and feature set. It's a given that if the budget and schedule are fixed, the feature set must be variable. You can't know what the final feature set will be, given those constraints. But you can know if a feature set that you'll be able to produce within those constraints falls inside a range that your client will find acceptable.
You can know this, and you can find it out. Finding that out is something that your firm can do and your client can't. It's a service that you can provide to them and that they should pay you for. Avoid the temptation to do this for free and call it a cost of sales. Project scoping is every bit as much a part of professional services as software development. You're not doing this to close a sale; you're doing this to help your client make a business decision that they don't yet have enough information to make.
But first you gotta get them to come upstairs.
These answers are really great. I don't have a lot of experience to share, but I thought of an analogy:
Last year my wife and I remodeled our kitchen. The contractor proposed billing on time & materials instead of giving an estimate, because we didn't know what we'd discover with respect to plumbing, subfloor damage, and so on. We agreed, because I'm working from home and I was available continuously to answer questions. The contractor and I had a good rapport so when something came up, he felt free to knock on my office door and we'd discuss it. Decisions got made quickly and the work was done the way I wanted it. That seems similar to the Agile priority on frequent customer review.
By contrast, my wife's cousin is also remodeling his house. He's an ER doctor and he is not available on site during the remodel. So he described being regularly surprised by the contractors making major decisions without consulting him ("What? I didn't want that picture window blocked off! Tear out the wall and reframe the window the way it was!"). This is of course painfully expensive and blows the schedule out of the water. Definitely not Agile.
So one way to convince a client that a time & materials model will work may be to assure them you'll make frequent progress reports and solicit their feedback. I believe this is already a core recommendation of most Agile methodologies. To begin with, of course, this requires that the customer trust the consultant.
As a separate resource, I searched and found a book on this subject: "Agile Estimating and Planning" by Mike Cohn. Read the praise quotes on that page, from an impressive set of Agile experts.
First of all, I want to say I think it's great you've sold him on an agile approach and per-hour pricing. That's very hard to do.
Regarding your dilemma, I think you should present him with a rough pricing estimate and the features / scope it includes. During the development process, if you see something important coming up that will change the scope / cost, you say to the client "stop - this is great, but understand that it changes the original scope we discussed."
At this point the client has the privilege of agreeing, aware that they will pay more than originally thought, or halting the new development for now. Many agile projects are built in many mini-stages, for which you can give pretty accurate estimates of price and time. Before any new mini-stage commences, it's important that you and the client see eye to eye on what is up next and how much time and cost will be spent on it.
As always, your main parameters are quality, functionality, and time (= resources). We're not messing with quality any more, so there are only features and resources left. Fairly often you can convince the customer that a certain amount of resources will, at minimum, deliver a certain feature set. So as long as you can find an amount of resources that delivers a plausible feature set, I believe this is enough for quite a few customers. The upside is that they can control progress while on the move, and there's always the option of backing out.
The way that I have done this is to use my current velocity and estimate a range based on rough estimates of complexity (number of stories). You would update this as you begin planning the application so that the range estimate gradually becomes better and better.
For example, give an estimate from 6 months to 2 years (if you're sure it will take less than 2 years). As you break the work down into features and then plan those features, you come up with better and better estimates, so that after 6 months you can reliably say (absent any other changes) we'll be done in another 2-3 months (a total of 8-9 months). Eventually, you get to the point where you can say we'll be done after the next iteration (2 weeks to 1 month).
You need to pair this with letting the customer know that adding features will increase the time to completion, then let them decide how to proceed: either drop the feature or increase the time. For any given project, scope and time are really the only variables you can effectively change; quality and resources are pretty much fixed. Actually, you can add resources, but you generally see an increase in time and a decrease in quality at first, so this only works if your time horizon is long.
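A minimal sketch of that kind of rolling range estimate; the backlog size and velocity figures are invented for illustration:

    # Rolling range estimate from observed velocity (all numbers invented).
    remaining_points = 120            # rough size of the remaining backlog
    velocities = [18, 22, 20]         # points finished in recent iterations

    fastest, slowest = max(velocities), min(velocities)
    best_case = remaining_points / fastest    # fewest iterations left
    worst_case = remaining_points / slowest   # most iterations left
    print(f"Roughly {best_case:.0f} to {worst_case:.0f} iterations remain")

Re-running this after each iteration is what gradually narrows the range.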
One approach you could take is to give the client a general estimate that you believe is relatively close. Also give a percentage. The percentage will be an increase or decrease allotment in the budget due to changing requirements.
Example: a general estimate of $10,000, but since the requirements are vague and the project could change course easily, there is a +/- 30% price adjustment option (so anywhere from $7,000 to $13,000).
This way the client can have a general idea of what to budget and you have some flexibility in charging to the project.
This is a really tough issue. See what Robert Glass has to say on the subject in his book "Facts and Fallacies of Software Engineering". (Note that Amazon has books available new for less than they're available second-hand! [as of 2009-01-05 12:20 -08:00]; also at Google books.)
If you have successfully sold the customer on an Agile approach, and on billing by the hour, do not give them an estimate for how much it will cost to complete the project! Even if they say they want it only for budgeting purposes, do not do it!
The reason is that any estimate of cost will come to be treated as a definite commitment; no matter how many times you say it isn't, and how many times the customer says they understand that, it will be treated as a commitment. Even if the customer understands, their boss won't, and although they won't be able to legally force you to deliver for the price you quoted, you will come to be known as a company that doesn't deliver what it promised. Providing an estimate throws away all the advantages of being agile in the first place.
What I suggest is telling them that you will spend no more than such-and-such an amount before the end of the financial year (or whatever other billing period your customer is interested in). That should be enough for them to plan financially without in any way saying that the project will be finished by then.
This was my answer to a closed question related to story points. I use this all the time for macro estimation.
When I switched to points, I decided to do it only if I could meet the following two conditions: 1) find an argument that justified the switch and would convince the team; 2) find an easy method to use it.
Convincing
It took me a lot of reading on the subject, but I finally found the argument that convinced me and my team: it's nearly impossible to find two programmers who will agree on the time a task will take, but the same two programmers will almost always agree on which task is the biggest when shown two different tasks.
This is the only skill you need to ‘estimate’ your backlog. Here I use the word ‘estimate’ but at this early stage it’s more like ordering the backlog from tough to easy.
Putting Points in the Backlog
This step is done with the participation of the entire scrum team.
Start dropping the stories one by one into a new spreadsheet, keeping the following order: the biggest story at the top and the smallest at the bottom. Do that until all the stories are in the list.
Now it's time to put points on those stories. Personally I use the Planning Poker scale (1/2, 1, 2, 3, 5, 8, 13, 20, 40, 100), so this is what I will use for this example. At the bottom of that list you'll probably have micro tasks (things that take 4 hours or less). Give every micro task the value 1/2. Then continue up the list, giving the value 1 (next in the scale) to stories until it is clear that a story is much bigger (2 instead of 1, so twice as big). Now using the value 2, continue up the list until you find a story that should clearly have a 3 instead of a 2. Continue this process all the way to the top of the list.
NOTE: Try to keep the vast majority of the points between 1 and 13. The first time, you might have a bunch of big stories (20, 40 and 100), and you'll have to break them down into chunks smaller than or equal to 13.
That is it for the points and the original backlog. If you ever have a new story, compare it to that list to see where it fits (bigger/smaller process) and give it the value of its neighbors.
Velocity & Estimation (macro planning)
To estimate how long it will take you to go through that backlog, do the first sprint planning. Add up the points attached to the stories the team picked and voilà, that's your first velocity measure. You can then divide the total points in the backlog by that velocity to know how many sprints will be needed.
That velocity will change and settle over the first 2-3 sprints, so it's always good to keep an eye on that value.
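A tiny sketch of that division (the backlog size and velocity are invented):

    # Macro planning: total backlog points / measured velocity -> sprints.
    # (Backlog size and velocity below are invented.)
    import math

    backlog_points = 180     # sum of story points across the backlog
    velocity = 24            # points the team completed in the first sprint

    print(f"About {math.ceil(backlog_points / velocity)} sprints needed")
    # Recompute after every sprint; velocity usually settles after 2-3 sprints.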

How long does it really take to do something? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
Improve this question
I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't plan for that from the start; maybe the tighter estimates are there for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling (essential reading). He explains that the other primary aspect is breaking your work down into bite-sized (16-hour max) tasks, estimating those, and adding them together to arrive at your final project total.
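Here's a minimal sketch of the core idea, not Joel's exact algorithm; the history and task estimates are invented:

    # Evidence Based Scheduling, roughly: scale each new estimate by a
    # ratio sampled from your own history, and simulate many project totals.
    # (History and estimates below are invented.)
    import random

    history = [0.6, 0.8, 1.0, 1.1, 1.5, 2.0]   # past actual/estimate ratios
    estimates = [4, 8, 6, 12, 3]               # hours for the new tasks

    totals = sorted(
        sum(est * random.choice(history) for est in estimates)
        for _ in range(10_000)
    )
    print(f"50% confidence: {totals[5000]:.0f}h, 90% confidence: {totals[9000]:.0f}h")

Instead of a single number, you get a distribution of ship dates, which is much harder to mistake for a promise.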
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects lasting 1-6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that make the metrics from one inapplicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates will have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration, as advocated by the agile crowd: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being spent effectively. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available. No matter how ruthlessly you cut unnecessary features, you would have been more brutal if the deadlines were even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
The thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems do arise, you are still within the estimates.
I believe Joel has written an article on this. What you can do is ask each developer on the team to lay out their tasks in detail (all the steps that need to be done) and estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project starts, ask them to estimate again, and multiply by each developer's bias to get values close to what's really expected.
After a few projects, you should have very good estimates.
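A small sketch of that per-developer bias correction; the names, estimates, and actuals are all invented:

    # Per-developer bias: scale new estimates by each developer's
    # historical actual/estimate ratio. (All data below is invented.)
    past = {
        "alice": [(8, 10), (5, 9)],    # (estimated hours, actual hours)
        "bob":   [(6, 6), (10, 14)],
    }
    bias = {
        dev: sum(actual for _, actual in hist) / sum(est for est, _ in hist)
        for dev, hist in past.items()
    }
    new_estimates = {"alice": 12, "bob": 20}
    for dev, est in new_estimates.items():
        print(f"{dev}: {est}h estimated -> {est * bias[dev]:.1f}h expected")

The bias is computed over totals rather than per-task averages, so one wildly misestimated small task doesn't dominate the correction.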

Resources