My firm just got its first large-scale development project inquiry and I would like to use an Agile process. The client has a vision for the application but openly admits to having very few requirements and recognizes that we will have to charge by the hour. Because of this, I've all but sold him on an Agile approach.
The problem is that he wants a figure to budget around. I've read a number of articles that pretty much advocate against giving up an estimate because the client will budget for that number and even as requirements change, the number in their head and in the books doesn't.
I've read that there are a number of ways to factor pricing into the contract, but almost all of them (save one) incorporate an up-front number. That just seems to violate the core principles of Agile development.
So my question is, if you're an Agile shop, how do you manage to circumvent this Catch-22 of Agile development?
Here's the fundamental question.
When will the client think they're done?
If they think they'll be done by June, then you put an Agile team in place. That's 4-6 people for 6 months. That's the budget. Essentially, you do the multiplication for them. team * rate * 6 months.
If they think they'll be mostly done by June, but there will be more work after that, then you're possibly looking at 9 months of work. Again, you're just doing a multiply that they could do for themselves. team * rate * 9 months.
If they think that you'll be their development team for the foreseeable future, give them a price that will get the project through to the end of the year. team * rate * 12 months.
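If it helps to make the arithmetic concrete, here's a minimal sketch of that multiplication; the team size, rate and hours per month below are placeholder assumptions, not figures from any real engagement.

```python
# A minimal sketch of the "do the multiplication for them" budget.
# Team size, hourly rate and hours/month are illustrative assumptions.
def release_budget(team_size, hourly_rate, months, hours_per_month=160):
    """Budget for keeping an Agile team in place for a given duration."""
    return team_size * hourly_rate * hours_per_month * months

# e.g. 5 people at $100/hour through June (6 months)
print(release_budget(team_size=5, hourly_rate=100, months=6))  # 480000
```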
Since each release is an opportunity to reprioritize, you should be pricing each release as a separate piece of work based on the things you will get done in that release. Since it's their priority scheme, they control what you build, and they control the budget, phase by phase.
Often your client really wants to know how much a particular feature set will cost. Instead of asking that, they ask about the overall budget (which is silly). Spend a lot of time on the first release showing what they get and how much that first release will cost.
Eventually, they'll see the fundamental truth.
They're buying the features from most important to least important. If they prioritize correctly, they can stop spending money at any time and have something useful.
Done is a relative term. Some projects are "done" because there's no more money. Others are done because there's no more time. Rarely (at least in software development) is a project ever done because we ran out of things to do.
For this situation, we've provided an estimate for the first chunk of work, and then let the client purchase more sprints as required to complete the work to the desired level.
As you get more of the system developed, and incorporate feedback from the client into the sprint deliverables, you will both get a better feeling for the amount of work involved, and hence the costs involved.
Edit: Cool. Way excellent! Given that you've sold them on an Agile approach (nice work, BTW), chances are you will also be able to sell them on approaching the features to be implemented from an agile point of view. Maybe have a listen to this "Intro to Scrum" podcast. You might also be able to sell them on the fact that they probably won't need all the bells and whistles they think they need right now.
HTH
cheers,
Rob
Look, there's a core fact here. You will be asked to estimate cost, contract for a particular delivery date, and commit to a full set of delivered features.
You can't do all three.
Not "you shouldn't" or "it would be wise not to"; you (for all practical purposes) can't. By which I mean that the probability of successfully doing all three is extremely small.
The best answer is to commit to a cost and schedule, and to an iterative process with quick iterations and regular feedback, and then write the agreement so that what is done under the fixed cost and schedule is what will be delivered. That is, you keep delivering new features (and modifications) until the time and money run out.
The truth is, even if you do sign up to all three, the best you'll ever be able to do is two out of three anyway. Might as well be open about it.
Here's how a crusty old government contractor I know recently put it: "As the prostitutes say, first you gotta get them upstairs."
That doesn't answer your question, but it's worth remembering. If they won't come upstairs without a number up front (and they won't), you have to give them a number.
Anybody sponsoring a software project needs to have an idea of what kind of return they're going to get on their investment, so that they can evaluate whether or not the investment is worth making. A project may be worth doing if it costs $1m and takes 12 months. It may not be worth doing if it costs $2m and takes 24 months.
The real question is: can you do this project within a time frame and budget that makes it possible for the client to realize an appropriate return on their investment? If your answer to that question is anything but "yes," then they shouldn't hire you to do the project. Otherwise, they're just spending money and rolling the dice.
If your current answer to that question is "we don't know yet," then they shouldn't hire you to do the project. But that doesn't mean that they shouldn't hire you to find out whether or not the project is worth doing. A good consulting-firm buzzword for this is "preliminary scoping exercise."
Agile development is about managing a curve in three dimensions: money spent, calendar time, and feature set. It's a given that if the budget and schedule are fixed, the feature set must be variable. You can't know what the final feature set will be, given those constraints. But you can know if a feature set that you'll be able to produce within those constraints falls inside a range that your client will find acceptable.
You can know this, and you can find it out. Finding that out is something that your firm can do and your client can't. It's a service that you can provide to them and that they should pay you for. Avoid the temptation to do this for free and call it a cost of sales. Project scoping is every bit as much a part of professional services as software development. You're not doing this to close a sale; you're doing this to help your client make a business decision that they don't yet have enough information to make.
But first you gotta get them to come upstairs.
These answers are really great. I don't have a lot of experience to share, but I thought of an analogy:
Last year my wife and I remodeled our kitchen. The contractor proposed billing on time & materials instead of giving an estimate, because we didn't know what we'd discover with respect to plumbing, subfloor damage, and so on. We agreed, because I'm working from home and I was available continuously to answer questions. The contractor and I had a good rapport so when something came up, he felt free to knock on my office door and we'd discuss it. Decisions got made quickly and the work was done the way I wanted it. That seems similar to the Agile priority on frequent customer review.
By contrast, my wife's cousin is also remodeling his house. He's an ER doctor and he is not available on site during the remodel. So he described being regularly surprised by the contractors making major decisions without consulting him ("What? I didn't want that picture window blocked off! Tear out the wall and reframe the window the way it was!"). This is of course painfully expensive and blows the schedule out of the water. Definitely not Agile.
So one way to convince a client that a time & materials model will work may be to assure them you'll make frequent progress reports to get their feedback. I believe this is already a core recommendation of most Agile methodologies. Of course, to begin with, this requires the customer to trust the consultant.
As a separate resource, I searched and found a book on this subject: "Agile Estimating and Planning" by Mike Cohn. Read the praise quotes on that page, from an impressive set of Agile experts.
First of all, I want to say I think it's great you've sold him on an agile approach and per-hour pricing. That's very hard to do.
Regarding your dilemma, I think you should present him with a rough pricing estimate and the features / scope it includes. During the development process, if you see something important coming up that will change the scope / cost, you say to the client "stop - this is great, but understand that it changes the original scope we discussed."
At this point the client has the privilege of agreeing, being aware that he will pay more than he originally thought, or of halting the new development for now. Many agile projects are built in mini-stages, for which you can give pretty accurate estimates of price / time. Before any new mini-stage commences, it's important that you and the client see eye to eye on what is up next and how much time / cost will be spent on it.
As always, quality, functionality and time (i.e. resources) are your main parameters. We're not messing with quality any more, so that leaves only features and resources. Fairly often you can get away with promising that a certain amount of resources will, at minimum, deliver a certain feature set. So as long as you can find an amount of resources that delivers a plausible feature set, I believe this is enough for quite a few customers. The upside is that they can control progress while on the move, and there's always the option of backing out.
The way that I have done this is to use my current velocity and estimate a range based on rough estimates of complexity (number of stories). You would update this as you begin planning the application so that the range estimate gradually becomes better and better.
For example, give an estimate from 6 months to 2 years (if you're sure that it will take less than 2 years). As you break it down into features, then plan for those features, you come up with better and better estimates, so that after 6 months you can reliably say (absent any other changes), we'll be done in another 2-3 months (a total of 8-9 months). Eventually, you get to the point where you can say, we'll be done after the next iteration (2 weeks to 1 month).
You need to pair this with letting the customer know that adding features will increase the time to completion, then let them decide how to proceed -- either drop the feature or increase the time. For any given project, scope and time are really the only variables that you can effectively change. Quality and resources are pretty much fixed. Actually, you can add resources, but generally you see an increase in time and decrease in quality initially so this only works if your time horizon is long.
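For what it's worth, here's a rough sketch (with made-up numbers) of how a velocity-based range estimate can be computed and then narrowed as the measured velocity replaces guesses:

```python
# Range estimate from story count and a velocity band (stories per sprint).
# The story count and velocity bounds are invented for illustration.
def schedule_range(total_stories, low_velocity, high_velocity, weeks_per_sprint=2):
    """Return (optimistic, pessimistic) duration in weeks."""
    optimistic = (total_stories / high_velocity) * weeks_per_sprint
    pessimistic = (total_stories / low_velocity) * weeks_per_sprint
    return optimistic, pessimistic

# Early on the band is wide...
print(schedule_range(total_stories=120, low_velocity=3, high_velocity=12))  # (20.0, 80.0)
# ...and it narrows as measured velocity replaces the guesses.
print(schedule_range(total_stories=120, low_velocity=8, high_velocity=10))  # (24.0, 30.0)
```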
One approach you could take is to give the client a general estimate that you believe is relatively close, together with a percentage: an allowance for the budget to increase or decrease as requirements change.
Example: a general estimate of $10,000, but since the requirements are vague and the project could change course easily, there is a +/- 30% price adjustment option (i.e. anywhere from $7,000 to $13,000).
This way the client can have a general idea of what to budget and you have some flexibility in charging to the project.
This is a really tough issue. See what Robert Glass has to say on the subject in his book "Facts and Fallacies of Software Engineering". (Note that Amazon has books available new for less than they're available second-hand! [as of 2009-01-05 12:20 -08:00]; also at Google books.)
If you have successfully sold the customer on an Agile approach, and billing by the hour, do not give them an estimate for how much it will cost to complete the project! Even if they say they want it only for budgeting purposes, do not do it!
The reason is that any estimate of cost will come to be treated as a definite commitment; no matter how many times you say it isn't, and how many times the customer says they understand that, it will be treated as a commitment. Even if the customer understands, their boss won't, and although they won't be able to legally force you to deliver for the price you said, you will come to be known as a company that doesn't deliver what it promised. Providing an estimate throws away all the advantage of being agile in the first place.
What I suggest is telling them that you will spend no more than such-and-such an amount before the end of the financial year (or whatever other billing period your customer is interested in). That should be enough for them to plan financially without in any way saying that the project will be finished by then.
This was my answer to a closed question related to story points. I use this all the time for macro estimation.
When I switched to points, I decided to do it only if I could meet the two following conditions: 1) find an argument that justifies the switch and will convince the team; 2) find an easy method for using it.
Convincing
It took me a lot of reading on the subject, but I finally found the argument that convinced me and my team: it's nearly impossible to find two programmers who will agree on the time a task will take, but the same two programmers will almost always agree on which task is the bigger when shown two different tasks.
This is the only skill you need to ‘estimate’ your backlog. Here I use the word ‘estimate’ but at this early stage it’s more like ordering the backlog from tough to easy.
Putting Points in the Backlog
This step is done with the participation of the entire scrum team.
Start dropping the stories one by one in a new spreadsheet while keeping the following order: the biggest story at the top and the smallest at the bottom. Do that until all the stories are in the list.
Now it's time to put points on those stories. Personally I use the Planning Poker scale (1/2, 1, 2, 3, 5, 8, 13, 20, 40, 100), so this is what I will use for this example. At the bottom of that list you'll probably have micro tasks (things that take 4 hours or less to do). Give every micro task the value of 1/2. Then continue up the list, giving the value 1 (next in the scale) to the stories until it is clear that a story is much bigger (2 instead of 1, i.e. twice as big). Now using the value 2, continue up the list until you find a story that should clearly have a 3 instead of a 2. Continue this process all the way to the top of the list.
NOTE: Try to keep the vast majority of the points between 1 and 13. The first time, you might have a bunch of big stories (20, 40 and 100) and you'll have to break them down into chunks smaller than or equal to 13.
That is it for the points and the original backlog. If you ever have a new story, compare it to that list to see where it fits (bigger/smaller process) and give it the value of its neighbors.
Velocity & Estimation (macro planning)
To estimate how long it will take you to go through that backlog, do the first sprint planning. Total the points attached to the stories the team picked and voilà, that's your first velocity measure. You can then divide the total of points in the backlog by that velocity to know how many sprints will be needed.
That velocity will change and settle over the first 2-3 sprints, so it's always good to keep an eye on that value.
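If it's useful, the macro estimate boils down to a one-line division; here's a small sketch, with the backlog size and velocity invented for illustration:

```python
# Macro estimate from the first measured velocity.
# Backlog size and velocity below are invented for illustration.
import math

def sprints_remaining(backlog_points, velocity_per_sprint):
    """Sprints needed to burn through the backlog at the current velocity."""
    return math.ceil(backlog_points / velocity_per_sprint)

# First sprint planning: the team picked stories totalling 21 points,
# and the whole backlog adds up to 180 points.
print(sprints_remaining(backlog_points=180, velocity_per_sprint=21))  # 9 sprints
```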
As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!
Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percent of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8 hour) chunks, taking into account everything from testing to communication to documentation. At the end I add in a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
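As a rough illustration of that spreadsheet approach (the task names and hours here are placeholders, not a real breakdown):

```python
# Small (< 8 hour) chunks, summed, plus a 20% allowance for unexpected
# items and the bugs we fix for free. All numbers are placeholders.
tasks = {
    "login form": 6,
    "password reset": 4,
    "admin user list": 8,
    "unit tests for auth": 5,
    "deployment + docs": 7,
}

base_hours = sum(tasks.values())      # 30
total_hours = base_hours * 1.20       # add 20% for surprises and bug fixing
print(base_hours, total_hours)        # 30 36.0
```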
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you could look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn down report and it adapts to your scheduling inaccuracies, so that could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.
There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development process variables as well. And even if you cope with all this, there is a big chance you will miss something out.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one side of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and the user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leaves (sometimes it is better to wait for the guy than to assign the task to someone else), planning sessions, coffee breaks.
Here is an example: it takes only 3 minutes for an egg to become fried once you drop it in the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
getting the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in a queue for this frying pan?)
making the fire (do you have an oven? will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it on a plate (have any ready? clean? wash it? buy it? wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the frying pan dirty, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description of several estimation techniques, and it can get you up to speed in a couple of hours of reading.
Good estimation comes with experience, or sometimes not at all.
At my current job, my 2 co-workers (that apparently had a lot more experience than me) usually underestimated times by 8 (yes, EIGHT) times. I, OTOH, have only once in the last 18 months gone over an original estimate.
Why does it happen? Neither of them actually appeared to know what they were doing, code-wise, so they were literally thumb-sucking.
Bottom line:
Never underestimate, it is much safer to overestimate. Given the latter you can always 'speed up' development, if needed. If you are already on a tight time-line, there is not much you can do.
If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.
This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution to this yet, but the closest thing to solving it I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically you are breaking your milestones into small tasks and guess how long it will take you to complete them. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into Fogbugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption and uses that information to predict a window (with probabilities) in which you will have fulfilled your next few milestones.
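I'm not going to reproduce FogBugz's actual algorithm here, but the general idea behind evidence-based scheduling can be sketched in a few lines: sample your own historical actual-to-estimate ratios to turn a list of new estimates into a probability window. The ratios and remaining-task estimates below are invented.

```python
# A conceptual sketch of evidence-based scheduling, not FogBugz itself.
# Sample historical actual/estimate ratios to predict a completion window.
import random

history = [1.0, 1.3, 0.9, 2.0, 1.1, 1.6]   # actual hours / estimated hours (past tasks)
remaining_estimates = [8, 5, 13, 6]        # estimated hours for tasks still to do

def completion_window(trials=10_000):
    totals = sorted(
        sum(est * random.choice(history) for est in remaining_estimates)
        for _ in range(trials)
    )
    return totals[trials // 2], totals[int(trials * 0.95)]

median, p95 = completion_window()
print(f"~50% chance of finishing within {median:.0f}h, ~95% within {p95:.0f}h")
```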
Overestimation is rather better than underestimation. That's because we don't know the "unknown" and (in most cases) specifications do change during the software development lifecycle.
In my experience, we use iterative steps (just like in Agile methodologies) to determine our timeline. We break the project down into components and overestimate on those components. We take the sum of these estimates and add extra time for regression testing and deployment, and all the other good work...
I think that you have to go back from your past projects, and learn from those mistakes to see how you can estimate wisely.
I'm doing a time + cost estimate for a semi-complex software solution that lacks specific requirements for about 75% of its features. I would still like to make as good an estimate as possible by getting additional data from the client. There will still be parts that may end up not being feasible to develop, since there are too many dependencies on other products/technologies and a lack of definition. I also have a very tight schedule in which to produce this estimate.
There will also be other contenders on this project. The client expects a price + duration (and probably also features) and I know everyone will be off. I know that's impossible, but tell that to marketing people. Another problem is that I'm talking to a middle-man and not directly to the client. I can build confidence with the middle-man only, but not with the deciding client. Which is a different problem altogether.
What disclaimer/info can I put in my price plan/contract so as not to kill the team with this project, so that when the project starts slipping (in terms of cost/time/features) we will be covered with some sort of payment? I would of course like to be paid by sprints or releases in relation to time, but I doubt the client could be convinced of this. I'm sure we can finish this product before the deadline and also create a great product, but how can I convince the client to believe me?
Question
What can I do to get this project and avoid a death-march situation at the same time?
Any suggestions welcome!
EDIT: Outcome
In the end we (my co-worker and I) convinced the client we needed at least a week to evaluate the product. So we did. We also pushed for (and got) a slot for a meeting of a few hours with the client to clarify any outstanding requirements questions. So we did. The meeting was held after we made the first estimate draft, so we were sure we had all the questions needed to point out specifics that were either completely misunderstood or too vague to estimate. I hope we get the project, because it would mean 8 months of full-time work for us, plus reasonable pay. We'll know in about a week and a half.
Of course I also pointed out that the way we'll be delivering this product will get them exactly where they want to be with a product that will actually be what they wanted. And also that we only commit to price and time, but not functionality, because it is and will be subject to change. I think we made a good enough impression.
In this economic environment, there are a lot of companies competing for a little work. Someone is bound to give them a very sweet bid that they will:
not be able to deliver on,
kill their team with, or
both.
When they can't deliver at the agreed price, they will start to cut down on the quality in order to deliver something and get paid.
Your challenge is to present that fact to your prospect in a professional manner, and convince them that you will work very hard to deliver at a reasonable cost, but also to deliver exactly what they need. The fact that you're going back for more detail, and the method you approach the project with (agile... but be sure and explain the business benefit to them) helps ensure that they will end up with what they really need.
Remember, they want to get the software delivered that they need at the lowest possible price.
Convince them that you will deliver exactly to their needs, and that your price is reasonable.
Welcome to the world of fixed priced development services :-)
Techniques to win this project and avoid a death-march situation at the same time:
Don't underbid a project. Bid for what you think the project will take and add some percentage for things likely to go wrong.
If you are missing 75% of the detail, odds are the project will be significantly different than you currently expect. Document some reasonable detail assumptions within the outline of the defined work. When the project actually starts and the details don't match the assumption, you have an opportunity to negotiate the costs for the changes. At that time, you may also be in a better position to know how much you are over/under and attempt to compensate with this quote.
Your goal in an SOW (statement of work) should be to define enough details so that it gives you an opportunity to renegotiate the cost of changes when you know more about the project. Write these as positives, as much as possible. Note, it is unlikely that people that actually understand the project will read or understand the SOW...I base this on the point that you are given few details to quote. This means it isn't a consultative sale and neither party is really focused on building the 'right' solution.
If you can get the contract as T&M (time and materials), great. I doubt you'll get it, or you'll only get it with restrictions that essentially defeat the purpose of T&M. Your potential customers look at this as them accepting all of the risk around your abilities.
Hopefully, you aren't the first at your company to do this. Find out how projects have gone historically and the typical result rates. Many software development groups charge an hourly rate that is significantly higher than cost, but their quotes tend to be lower than the actual hours. Customers often will argue more about the hours/days than the actual quote. Enterprises tend to be used to paying high hourly rates.
Figure out your department's expected margin (profit you need to gain from the job). This may help you to understand how much of a 'death march' you may face when your project slips.
In the SOW, specify the level of detail that will be required in a specification before you begin work. While Agile and other customer-focused processes take an approach oriented toward finding the best solution, they aren't designed to keep costs under control in a fixed-bid environment. You will need to take a waterfall approach to requirements and then build in an agile fashion so that you can adjust along the way. The specification, like the SOW, will give you an opportunity to bill for changes. While the customer won't like this, it will put the burden and risks associated with requirements on them and not your team.
Note, to be successful with these negotiations, you need a supportive management, sales and project management team. If you don't have that, you are bound to always be on 'death marches.' Even if you forgo quality, process, testing and other items, you'll find there's never enough time for a project.
EDIT:
Addressing the middle-man situation: I think the best course of action would be to submit a list of risks along with your bid as a courtesy to the customer. Kind of like giving them a heads-up on what their project limitations are. This will cost you some work up front, but I think it could help you win the project.
You have two options:
make a best-guess and double or triple your estimate (your competition is probably doing the same thing.)
explain to the customer that you can't bid work like this, and tell him that everyone else that gives him a fixed estimate is probably not being completely truthful.
At the end of the day, if you can't make money on the work, then there is no point in trying for it.
Personally, I prefer the latter, up-front and honest communication with your customers will take you farther than any bid tricks ever will.
A few things I'd say you should think about:
Assumptions: There's no one disclaimer you can add but you need to fill the gaps in the requirements with sensible assumptions and document them. Nothing major or scary, just a section in your spec/bid with a list of bullet points saying what you assumed to be true which was missing (e.g. users details will be pulled using LDAP and no admin screens will be written to cover user admin).
This gives you clarity in estimating as you now have a full scope to work from, but it also means that if the client comes back with things which are wildly different you have a fair basis to start talking about raising change requests and varying the cost. Alternatively they may come back during the negotiations saying this assumption or that one isn't true and you have more information.
Out of Scope: A specific case of assumptions - list things which you aren't including (e.g. No integration will exist with system X). Again this allows you to have a full scope and a reasonable case for potentially varying cost at a later stage.
Assumptions and out of scope are particularly applicable when things are mentioned in passing but not really followed up, or for things which they say could wait for a second phase. These are often the things the client will believe are being done as part of the main project but the project team don't.
Hopefully the thoroughness and insight from the assumptions and scope you define will help inspire confidence with the ultimate client too.
Contingency: A tricky one but you should add contingency in two ways:
(1) For specific risks: for things which might mean something takes longer than you've estimated, put in an amount to cover that, weighted by the chance of it happening. Add all these up and that's your risk contingency.
(2) Shit happens contingency - unpredictable shit happens on IT projects. Add between 10% and 20% to cover this.
Whether you hide contingency from your commercial people and the client or not depends on your relationship, but if it gets removed they need to understand what that means (essentially you WILL overrun).
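A small sketch of how those two kinds of contingency might be added up; the base estimate, the risks and their probabilities are illustrative only:

```python
# Two kinds of contingency: risk-weighted specific risks, plus a flat
# "shit happens" percentage. All numbers are made up for illustration.
base_days = 60

specific_risks = [
    # (description, extra days if it happens, chance of it happening)
    ("legacy data worse than expected", 10, 0.5),
    ("third-party API needs a workaround", 5, 0.3),
]
risk_contingency = sum(days * prob for _, days, prob in specific_risks)  # 6.5 days

general_contingency = base_days * 0.15   # somewhere in the 10-20% band

print(base_days + risk_contingency + general_contingency)  # 75.5 days
```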
Understand the relationship between effort and cost: As a technologist your role is to provide an estimate of the effort based on the information you have. You need to then communicate that with assumptions, level of contingency and so on to your commercial team who can convert it into a monetary value. The thing to be clear with them on is that if they want to drop the cost that doesn't change the effort.
There are loads of good reasons for writing down the cost to the client (to build a relationship, because you'll end up with stuff you can reuse later and so on) but people need to understand that unless the scope changes the effort stays the same - the reduction comes out of the profit.
I have a blog article which may have a few tips in it for you:
http://pm4web.blogspot.com/2009/06/surviving-under-resourced-project.html
One of the other posters here has a good point too: there will always be someone who will offer a lower price to get the work, and the developer will suffer for it later (i.e. having to do a lot of free work to satisfy the client).
Some clients need to have this experience before it clicks that you can't do IT projects on the cheap without paying some kind of price.
LM
Go for realism. Avoid promising too much, then make a point of it.
A lot of customers out there have been burnt by unrealistic offerers who fail to deliver as promised.
Emphasize the need for a specifications sprint. Convey a focus on core functionality and commitment to deliver rather than a feature bonanza. Offer a primary development phase to deliver core functionality.
Communicate the power and safety of the agile approach. Credit the customer with the ability to see good sense.
In short: Strive to come across as realistic and serious (more so than your competitors). The most important thing for any serious customer in the end is not the price, but a confidence that the product will be delivered on time and budget.
Let's say you have a project that will involve two web applications (that will share DAL/DAO/BO assemblies and some OSS libraries):
a semi complex management application that uses Windows Live ID for authentication and is also capable of communicating with various notifier services (email, sms, twitter etc.), targeted notifiers being about 10% of functionality
a low to semi complex user application with less functionality but more robustness that also uses Windows Live ID for authentication
There are two of us with medium estimation capabilities, and we won't be able to do it in two days even if we wanted to / had to. At best it would be a far-off estimate.
Questions
How long would/does it normally take you to make a reliable/valuable estimate?
What would you suggest to speed-up estimation without sacrificing accuracy?
How much slack (in terms of cost/time) would you add depending on estimation speed (i.e. when you would say: I could estimate it a bit more, because I think it's still quite off)?
Since we use Agile methods (Scrum, specifically) it takes us about an hour longer than it takes the users to prioritize.
More time doesn't lead to more accuracy.
The hard part, then, is getting the users to prioritize. We hear this discussion all the time "if the whole thing isn't completed on time, it's all worthless." "Except for the XYZZY component, which does have some value." That argument can go on for hours until it's resolved that XYZZY should be first.
Generally, we try to create 4-week sprints. The first few are complicated because there's always something new. After the first two (or three) we seem to set a steady pace.
Each use case gets a relatively simple, subjective valuation of the effort it will take to finish it. Anything over one full sprint in duration has to be decomposed. Most times a few use cases are bundled into a single sprint.
There are formal ways of scoring each use case to better handle the cost and schedule issues. We don't use them because the extra effort doesn't help.
After the first two sprints,
There's new and different functionality,
The priorities have all changed,
The details of each use case have been dramatically revised.
What does "accuracy" mean when the thing you're trying to estimate changes at the end of each sprint?
One lesson learned. Parts of my company spend a long time fully defining exactly what will be delivered, and then measuring that they are delivering precisely what they want.
Customers notice this, and one said we "spend a lot of time delivering what the contract says, but it isn't what we needed."
The problem with firm up-front estimates is that they take on a life of their own. The more you "invest" in the estimating, the more the estimates seem to be a useful deliverable. They aren't useful because they're generally totally wrong. They're based on up-front assumptions that are totally wrong.
It's a bad policy to invest more time in estimating. The "accurate" answers aren't more accurate, but they are more treasured by every layer of management. As you and the customer learn, you invalidate numerous assumptions and you absolutely must re-estimate constantly.
Don't do it up front. If your contract requires you to do it up front, then make sure you have a change control provision and tell the customer that you absolutely will make changes as you go forward. As both you and the customer learn, you both must make changes.
Interesting question. I'm afraid the answer is "it really depends!" I know that's not terribly useful (although it IS true) so here are some factors:
1) Quality and completeness of requirements and their specification. This is, to me, most often the project-estimate killer. If you don't have quality requirements, you have no reasonable basis for an estimate. We use a "RUP-lite" style of product development here, so as the engineering manager I won't give anything but the coarsest-grain estimate until we've completed our "elaboration" phase and gotten sign-off from product management that the core 80% of the product features are in fact accurately covered.
2) The scope & nature of the product. Bigger/more expensive/more complicated = substantially longer to estimate. I've spent years working in telco-carrier land delivering solutions which have the normal robust carrier requirements ("5 9's" of uptime required by SLA mean you must really do a good job of solution design and failure recovery!). In that sort of environment with all the moving parts across functional areas of the business, the estimation of work is going to hinge on getting the whole picture...specifically, cross-functional dependencies and external dependencies can be a REAL killer here. That said, I've also built lots of shrink-wrapped and enterprise software, too. In those environments it's much easier as the scope is typically substantially smaller, so thus easier to estimate.
3) How "new" is this project? How "new" is the team to this product or technology set? The newer the product or team, the longer and more buffer you should allocate.
4) How specific do we need to be? If this is a "rough guess" then I'll lean on my engineering leads to provide a conservative estimate, then I'll pad that. If we need a "real" estimate (e.g., one which is used by my boss and which I'll be responsible for hitting), I'd need the input from a number of different managers and team members, who will need time to analyze the requirements and confer amongst themselves.
That can take as little as a couple of days, or weeks... it all depends on the size. "Two or three days" is, frankly, not long enough for sizing anything but the most trivial of projects.
The best thing you can do to improve the quality of your estimates is to improve the quality of your requirements, and be ruthless in identifying hidden dependencies.
One final thing: FWIW, I've been doing this since '81 and I regard accurately estimating a project's duration/cost as the single most difficult/fraught with peril part of engineering management.
To make a reliable estimate you really would need to create a list of the tasks to be done. Break it down into stories/tasks (even if you don't use agile) and evaluate them. This can take a lot of time in itself, especially the research (e.g. looking for libraries to do this notifier stuff to reduce the cost). I would take at least 3 days; however, 1-2 weeks sounds more reasonable to me. This doesn't seem to be a small project.
I wouldn't dare speed up the estimation process if you want somewhat reasonable results. Estimates are never accurate, and you will just make things worse.
An option would be to make a totally rough guess within a few hours and then start to work on it already (for a week or so). At the end of the week you might be able to give a better estimation based on your current progress. However it's important to create a good prototype (which looks like crap but has a little bit of code in all of the regions).
Well, a common quote is "The price will be between 50% and 400% of our initial estimates". The reason that this quote has become widespread is that it's true. Estimation accuracy largely depends on your domain knowledge. If this is the 100th time you've sold a given type of blog engine, then you're pretty sure about the estimates. However, more often than not, you don't have that in-depth knowledge of the domain (if the app already exists, why create a new one?).
Agile development has become popular because people largely recognize that the traditional "waterfall" type of thinking just doesn't work for most real-world projects. You should apply the same way of thinking to your estimates. Obviously, you need some kind of selling point, but be sure to tell your customers that this information is very vague (and that no matter what any competitor will tell them, their estimates are vague as well).
You need to sell iterations, and therefore also estimate iterations. I'm sure some marketing-guy in any company will know how to handle the paperwork and the legal stuff for this, so it shouldn't be that big a deal.
Consider a 5 iterations project:
Start out by putting up milestones. These are subject to change, but they will provide your vague first estimates for the final product.
Plan iteration #1.
Estimate iteration #1, adjust total estimates accordingly.
Do iteration #1
Rinse / repeat till iteration #5
You're done :)
Reflect on your project. How did your estimates evolve? Reasons? Learn by doing :)
Most customers would rather have partial estimates and partial deliveries than some unrealistic goal that some suit sold them :)
Straying slightly from your original question but it's also important to learn from the estimates that you have produced in the past by keeping information about how long it actually took to do the things that you estimated.
We estimated 5 days to produce these pages; it actually took 10 days, and so on.
In the long term information like this will (hopefully!) enable you to produce more accurate estimates.
Do you inflate your time estimates? If so, why? How much?
I tend to inflate mine a little because I can be overly optimistic.
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
If you inflate your estimate based on past experiences to try and compensate for your inherent optimism, then you aren't inflating. You are trying to provide an accurate estimate. If however you inflate so that you will always have fluff time, that's not so good.
Oh yes, I've learnt to always multiply my initial estimate by two. That's why FogBugz's Evidence-Based Scheduling tool is so useful.
Any organization that asks its programmers to estimate time for coarse-grained features is fundamentally broken.
Steps to unbreak:
Hire technical program managers. Developers can double as these folks if needed.
Put any feature request, change request, or bug into a database immediately when it comes in. (My org uses Trac, which doesn't completely suck.)
Have your PMs break those requests into steps that each take a week or less.
At a weekly meeting, your PMs decide which tickets they want done that week (possibly with input from marketing, etc.). They assign those tickets to developers.
Developers finish as many of their assigned tickets as possible. And/or, they argue with the PMs about tasks they think are longer than a week in duration. Tickets are adjusted, split, reassigned, etc., as necessary.
Code gets written and checked in every week. QA always has something to do. The highest priority changes get done first. Marketing knows exactly what's coming down the pipe, and when. And ultimately:
Your company falls on the right side of the 20% success rate for software projects.
It's not rocket science. The key is step 3. If marketing wants something that seems complicated, your PMs (with developer input) figure out what the first step is that will take less than a week. If the PMs are not technical, all is lost.
Drawbacks to this approach:
When marketing asks, "how long will it take to get [X]?", they don't get an estimate. But we all know, and so do they, that the estimates they got before were pure fiction. At least now they can see proof, every week, that [X] is being worked on.
We, as developers, have fewer options for what we work on each week. This is indubitably true. Two points, though: first, good teams involve the developers in the decisions about what tickets will be assigned. Second, IMO, this actually makes my life better.
Nothing is as disheartening as realizing at the 1-month mark that the 2-month estimate I gave is hopelessly inadequate, but can't be changed, because it's already in the official marketing literature. Either I piss off the higher-ups by changing my estimate, risking a bad review and/or missing my bonus, or I do a lot of unpaid overtime. I've realized that a lot of overtime is not the mark of a bad developer, or the mark of a "passionate" one - it's the product of a toxic culture.
And yeah, a lot of this stuff is covered under (variously) XP, "agile," SCRUM, etc., but it's not really that complicated. You don't need a book or a consultant to do it. You just need the corporate will.
The Scotty Rule:
make your best guess
round up to the nearest whole number
quadruple that (thanks Adam!)
increase to the next higher unit of measure
Example:
you think it will take 3.5 hours
round that to 4 hours
quadruple that to 16 hours
shift it up to 16 days
Ta-daa! You're a miracle worker when you get it done in less than 8 days.
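Tongue in cheek, the rule fits in a few lines; the unit ladder below is my assumption about what "next higher unit of measure" means:

```python
# The Scotty Rule as a function. The unit ladder is an assumption.
import math

UNITS = ["hours", "days", "weeks", "months"]

def scotty_estimate(best_guess, unit):
    rounded = math.ceil(best_guess)                                # round up to a whole number
    quadrupled = rounded * 4                                       # quadruple it
    next_unit = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]  # shift to the next unit
    return quadrupled, next_unit

print(scotty_estimate(3.5, "hours"))  # (16, 'days')
```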
Typically yes, but I have two strategies:
Always provide estimates as a range (i.e. 1d-2d) rather than a single number. The difference between the numbers tells the project manager something about your confidence, and allows them to plan better.
Use something like FogBugz's Evidence-Based Scheduling, or a personal spreadsheet, to compare your historical estimates to the time you actually took. That'll give you a better idea than always doubling. Not least because doubling might not be enough!
I'll be able to answer this in 3-6 weeks.
It's not called "inflating" — it's called "making them remotely realistic."
Take whatever estimate you think appropriate. Then double it.
Don't forget you (an engineer) actually estimate in ideal hours (scrum term).
While management work in real hours.
The difference being that ideal hours are time without interruption (with a 30-minute warm-up after each interruption). Ideal hours don't include time in meetings, time for lunch, normal chit-chat, etc.
Take all these into consideration and ideal hours will tend towards real hours.
Example:
Estimated time 40 hours (ideal)
Management will assume that is 1 week real time.
If you convert that 40 hours to real time:
Assume one meeting per day (duration 1 hour)
one break for lunch per day (1 hour)
plus 20% overhead for chit-chat, bathroom breaks, getting coffee, etc.
An 8-hour day is now 5 hours of work time (8 - meeting - lunch - warm-up).
Times 80% efficiency = 4 ideal hours per day.
Thus your 40-hour ideal estimate will take 80 real hours to finish.
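Put as a quick sketch (the meeting, lunch and overhead figures are just the assumptions from the example above, not universal constants):

```python
# Converting ideal (uninterrupted) hours into real calendar time.
# The overhead figures mirror the example above and are assumptions.
def real_days_needed(ideal_hours, workday=8, meetings=1, lunch=1, warmup=1, efficiency=0.8):
    focused_hours_per_day = (workday - meetings - lunch - warmup) * efficiency  # 4 ideal h/day
    return ideal_hours / focused_hours_per_day

days = real_days_needed(ideal_hours=40)
print(days, days * 8)  # 10.0 days, i.e. 80 real hours -- not the 1 week management assumed
```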
Kirk : Mr. Scott, have you always multiplied your repair estimates by a factor of four?
Scotty : Certainly, sir. How else can I keep my reputation as a miracle worker?
A good rule of thumb is estimate how long it will take and add 1/2 again as much time to cover the following problems:
The requirements will change
You will get pulled onto another project for a quick fix
The New guy at the next desk will need help with something
The time needed to refactor parts of the project because you found a better way to do things
<sneaky> Instead of inflating your project's estimate, inflate each task individually. It's harder for your superiors to challenge your estimates this way, because who's going to argue with you over minutes.
</sneaky>
But seriously, through using EBS I found that people are usually much better at estimating small tasks than large ones. If you estimate your project at 4 months, it could very well be 7 months before it's done; or it might not. If your estimate of a task is 35 minutes, on the other hand, it's usually about right.
FogBugz's EBS system shows you a graph of your estimation history, and from my experience (looking at other people's graphs as well) people are indeed much better at estimating short tasks. So my suggestion is to switch from doing voodoo multiplication of your projects as totals, and start breaking them down upfront into lots of very small tasks that you're much better at estimating.
Then multiply the whole thing by 3.14.
A lot depends on how detailed you want to get, but additional 'buffer' time should be based on a risk assessment at the task level, where you put in various buffer percentages:
High Risk: 50% to 100%
Medium Risk: 25% to 50%
Low Risk: 10% to 25% (all dependent on prior project experience).
Risk areas include:
est. of requirement coverage (#1 risk area is missing components at the design and requirement levels)
knowledge of technology being used
knowledge/confidence in your resources
external factors such as other projects impacting yours, resource changes, etc.
So, for a given task (or group of tasks) that covers component A, if the initial estimate is 5 days and it's considered high risk based on requirements coverage, you could add between 50% and 100%.
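A quick sketch of how those bands translate into a buffered estimate; the tasks, day counts and risk labels are placeholders, and the percentage bands are the ones listed above:

```python
# Risk-weighted buffers per task. Tasks and estimates are placeholders;
# the percentage bands come from the answer above.
BUFFER = {"high": (0.50, 1.00), "medium": (0.25, 0.50), "low": (0.10, 0.25)}

tasks = [
    ("component A", 5, "high"),   # weak requirements coverage
    ("component B", 3, "low"),    # well-understood technology
]

for name, days, risk in tasks:
    lo, hi = BUFFER[risk]
    print(f"{name}: {days * (1 + lo):.1f} to {days * (1 + hi):.1f} days")
# component A: 7.5 to 10.0 days
# component B: 3.3 to 3.8 days
```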
Six weeks.
Industry standard: every request will take six weeks. Some will be longer, some will be shorter, everything averages out in the end.
Also, if you wait long enough, it no longer becomes an issue. I can't tell you how many times I've gone through that fire drill only to have the project/feature cut.
I wouldn't say I inflate them, so much as I try to set more realistic expectations based on past experience.
You can calculate project durations in two ways - one is to work out all the tasks involved and figure out how long each will take, factor in delays, meetings, problems etc. This figure always looks woefully short, which is why people always say things like 'double it'. After some experience in delivering projects you'll be able to tell very quickly, just by looking briefly at a spec how long it will take, and, invariably, it will be double the figure arrived at by the first method...
It's a better idea to add specific buffer time for things like debugging and testing than to just inflate the total time. Also, by taking the time up front to really plan out the pieces of the work, you'll make the estimation itself much easier (and probably the coding, too).
If anything, make a point of recording all of your estimates and comparing them to actual completion time, to get a sense of how much you tend to underestimate and under what conditions. This way you can more accurately "inflate".
I wouldn't say I inflate them but I do like to use a template for all possible tasks that could be involved in the project.
You find that not all tasks in your list are applicable to all projects, but having a list means that I don't let any tasks slip through the cracks with me forgetting to allow some time for them.
As you find new tasks are necessary, add them to your list.
This way you'll have a realistic estimate.
I tend to be optimistic about what's achievable, and so I tend to estimate on the low side. But I know that about myself, so I tend to add on an extra 15-20%.
I also keep track of my actuals versus my estimates. And make sure the time involved does not include other interruptions, see the accepted answer for my SO question on how to get back in the flow.
HTH
cheers
I wouldn't call additional estimated time on a project "inflated" unless you actually do complete your projects well before your original estimation. If you make a habit of always completing the project well before your original estimated time, then project leaders will get wise and expect it earlier.
What are your estimates based on?
If they're based on nothing but a vague intuition of how much code it would require and how long it would take to write that code, then you'd better pad them a LOT to account for subtasks you didn't think of, communication and synchronization overhead, and unexpected problems. Of course, that kind of estimate is nearly worthless anyway.
OTOH, if your estimates are based on concrete knowledge of how long it took last time to do a task of that scope with the given technology and number of developers, then inflation should not be necessary, since the inflationary factors above should already be included in the past experiences. Of course there will be probably new factors whose influence on the current project you can't foresee - such risks justify a certain amount of additional padding.
This is part of the reason why Agile teams estimate tasks in story points (an arbitrary and relative measurement unit), then as the project progresses track the team's velocity (story points completed per day). With this data you can then theoretically compute your completion date with accuracy.
I take my worst case scenario, double it, and it's still not enough.
Under-promise, over-deliver.
Oh yes, the general rule from long hard experience is give the project your best estimate for time, double it, and that's about how long it will actually take!
We have to, because our idiot manager always reduces them without any justification whatever. Of course, as soon as he realizes we do this, we're stuck in an arms race...
I fully expect to be the first person to submit a two-year estimate to change the wording of a dialog.
sigh.
As a lot said, it's a delicate balance between experience and risk.
Always start by breaking the project down into manageable pieces; in fact, into pieces you can easily imagine yourself starting and finishing in the same day
When you don't know how to do something (like when it's the first time) the risk goes up
When your risk goes up, that's where you start with your best guess, then double it to cover some of the unexpected, but remember, you are doing that on a small piece of the project, not the whole project itself
The risk also goes up when there's a factor you don't control, like the quality of an input, or that library that seems like it can do everything you want but that you've never tested
Of course, when you gain experience on a specific task (like connecting your models to the database), the risk goes down
Sum everything up to get your subtotal...
Then, on the whole project, always add about another 20-30% (that number will change depending on your company) for all the answers/documents/okays you will be waiting for, meetings we are always forgetting, the changes of idea during the project and so on... that's what we call the human/political factor
And again add another 30-40% that accounts for tests and corrections that goes out of the tests you usually do yourself... such as when you'll first show it to your boss or to the customer
Of course, if you look at all this, it ends up that you can simplify it with the magical "double it" formulae but the difference is that you'll be able to know what you can squeeze in a tight deadline, what you can commit to, what are the dangerous tasks, how to build your schedule with the important milestones and so on.
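To make the arithmetic concrete, here's a rough worked example in Python with invented task estimates; the 2x on risky pieces and the 25%/35% overheads are just midpoints of the ranges above, so adjust them to your own shop.

    # Worked example of the estimate build-up above, with invented numbers.
    # Each task: (name, base_estimate_days, risky); risky pieces get doubled.
    tasks = [
        ("connect models to the database", 1.0, False),
        ("import routine for legacy data", 2.0, True),    # first time doing this -> risky
        ("reporting screen", 1.5, False),
        ("integrate that untested PDF library", 1.0, True),
    ]

    subtotal = sum(days * (2 if risky else 1) for _, days, risky in tasks)

    human_political = subtotal * 0.25   # answers/okays, meetings, changes of mind (~20-30%)
    extra_testing = subtotal * 0.35     # tests and fixes beyond your own testing (~30-40%)

    total = subtotal + human_political + extra_testing
    print(f"subtotal: {subtotal:.1f} days, with overheads: {total:.1f} days")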
I'm pretty sure that if you note the time spent on each pure "coding" task and compare it to your estimates in relation to its riskiness, you won't be far off. The thing is, it's not easy to think of all the small pieces ahead of time and be realistic (rather than optimistic) about what you can do without hitting a hurdle.
I say when I can get it done. I make sure that change requests are followed up with a new estimate, not just a "Yes, I can do that" that never mentions it will take more time. The person requesting the change won't assume it takes longer on their own.
Of course, you'd have to be an idiot not to add 25-50%.
The problem is when the idiot next to you keeps coming up with estimates which are 25-50% lower than yours and the PM thinks you are stupid/slow/swinging it.
(Has anyone else noticed project managers never seem to compare estimates with actuals?)
I always double my estimates, for the following reasons:
1) Buffer for Murphy's Law. Something's always gonna go wrong somewhere that you can't account for.
2) Underestimation. Programmers always think things are easy to do. "Oh yeah, it'll take just a few days."
3) Bargaining space. Upper Management always thinks that schedules can be shortened. "Just make the developers work harder!" This allows you to give them what they want. Of course, overuse of this (more than once) will train them to assume you're always overestimating.
Note: it's always best to put the buffer at the end of the project schedule, not on each task. And never tell the developers that the buffer exists, otherwise Parkinson's Law (work expands so as to fill the time available for its completion) will kick in instead. Sometimes I do tell Upper Management that the buffer exists, but obviously I don't give them reason #3 as justification. This, of course, depends on how much your boss trusts you to be truthful.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
Name a programming project you did and how long it took, please. The boss has never complained, but I sometimes feel like things take too long, though that could just be because I'm impatient. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't just plan for that; maybe it's for motivational purposes.
Ryan
It's best to simply time yourself, record your estimates, and determine the average percentage you're off by. Given that, as long as you're consistent, you can convert the time you believed a task would take into a realistic actual. The point isn't simply to find out how bad you are at estimating, but to account for the regularity of inevitable distractions (both personal and boss/client-driven).
This is based on Joel Spolsky's Evidence Based Scheduling (essential reading), where the other key point is breaking your work down into bite-sized tasks (16 hours max), estimating those, and adding them up to arrive at your final project total.
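Here's a toy sketch of that evidence-based idea (not Joel's actual tool, and all numbers invented): replay your historical estimate accuracy against the new task list many times and read off the percentiles.

    # Toy sketch of the evidence-based idea; all numbers are invented.
    # velocity = estimated / actual for a past task (1.0 = spot on, < 1.0 = ran over)
    import random

    past_velocities = [0.9, 0.6, 1.1, 0.7, 0.8, 0.5]

    # the new project broken into bite-sized estimates, in hours (16h max each)
    new_task_estimates = [4, 8, 16, 6, 12, 10]

    def simulate_total_hours():
        # pick a random historical velocity for each task and scale its estimate
        return sum(est / random.choice(past_velocities) for est in new_task_estimates)

    totals = sorted(simulate_total_hours() for _ in range(10_000))
    print(f"50th percentile: {totals[len(totals) // 2]:.0f}h, "
          f"90th percentile: {totals[int(len(totals) * 0.9)]:.0f}h")

The point of the simulation is that you get a range with probabilities attached rather than a single date, which is a more honest thing to give a boss or client.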
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
I completely agree with the previous posters... but don't forget your team's workload, either. Just because you estimated a project at 3 months doesn't mean it'll be done anywhere near then.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects of between 1 and 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects; too many factors mean that the metrics from one aren't applicable to another (e.g., the specific technologies used, the developers' prior experience, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates will have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration that agile proponents advocate: choose a scope of work that can be delivered within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being spent effectively. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available for it. No matter how ruthlessly you cut unnecessary features, you would have been even more brutal if the deadline were tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this. What you can do is ask each developer on the team to lay out their tasks in detail (all the steps that need to be done) and estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project starts, ask them to estimate again, and multiply those estimates by each developer's bias to get figures close to what you can really expect.
After a few projects, you should have very good estimates.
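A minimal sketch of that per-developer bias bookkeeping, with invented names and numbers, might look like this:

    # Minimal sketch of the per-developer bias bookkeeping; names and numbers invented.
    past = {
        # developer: (estimated_hours, actual_hours) pairs from the last project
        "alice": [(8, 10), (16, 24), (4, 5)],
        "bob":   [(8, 16), (12, 30)],
    }

    bias = {
        dev: sum(actual for _, actual in records) / sum(est for est, _ in records)
        for dev, records in past.items()
    }

    # new project: each developer's summed step estimates, scaled by their bias
    new_estimates = {"alice": 40, "bob": 25}
    for dev, est in new_estimates.items():
        print(f"{dev}: raw {est}h -> adjusted {est * bias[dev]:.0f}h (bias {bias[dev]:.2f})")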