How do "spikes" figure in the schedule / estimation game? [closed] - project-management

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
Might be subjective and/or a discussion.. but here goes.
I've been asked to estimate a feature for the next big thing at work. I break it down, use story points, and come up with an estimate. The feature, however, calls for interfacing with GoDiagrams, a third-party diagramming component, in addition to various other company initiatives (a whole set of 2008_Limited_Edition frameworks/services :). I've been tracking myself using a burn-up chart, and I find that I'm unable to sustain my pace, primarily due to "spikes".
I estimate 2 points a week, and then I find myself working weekends (well, trying to.. I end up neither here nor there) because I can't figure out where to hook in so that I can preview user actions, show a context menu, etc. In the end I spend time making spikes that throw my schedule off-track... which decreases the chart's value.. it doesn't give the right picture.
Spikes are needed to drive nails through the planks of ignorance. But how are they factored into the estimation equation? Doing all the required spikes before the feature seems wrong (some might turn out to be YAGNI). Doing them in between disrupts my flow. Right now it's during pre-iteration planning.. but this is pushing the touchline out on a weekly basis.

I guess you are constantly underestimating:
what you do already know about the 3rd party component
how long it takes you to create usable/helpful spikes for unknown areas
1. Get better at estimating those two things.
So, it's all about experience. No matter what methodology you use, they will help you to use your experience better, not replace it.
2. Try not to lose track when working on those spikes.
They should be short, time boxed sessions. They are not about playing around with all the possible features listed on the marketing slides.
Give them focus, two or three options to explore. Expect them to deliver one concrete result.
Update(Gishu): To summarize
Spikes need to be explicit tasks defined in the iteration planning step.
If a spike exceeds its timebox, stop working on it. Shelve the associated task. Complete the other tasks in the current iteration bucket. Return to the shelved task, or add a more elaborate/broken-down spike to the next iteration along with the associated task. Tag a more conservative estimate to the generation-1 spike the next time.

If you run out of time in your timeboxed spike, you should still stop and complete your other committed work. You should then add another spike to your next iteration to finish the work needed to accurately estimate the task resulting from the original spike.
If there is a concern over spiking things for too long and this becoming a problem - this is one reason I like 1 week iterations. :-)

#pointernil..
It's more a case of no estimation, coupled with an Indiana-Jones, head-first approach to tackling a story. I estimate stories by their content; currently I don't take into account the time required to find the right incantation for the control library to play nice. That sometimes takes more time than my application logic. So to rephrase the original question: should spikes be separate tasks in the iteration plan, added on a JIT basis before you start working on a particular story?
My spikes are extremely focused.. I just can't wait to get back to the "real" problems, e.g. 'How do I show a context menu from this control?' I may be guilty of not reading the entire 150+ page manual or the code samples.. but then time is scarce. The first solution that solves the problem gets the nod and I move on. But when you're unable to find that elusive event or the NIH pattern of notification used by the component, spikes can be time-consuming. How do I timebox something that is unknown? e.g. my timebox has elapsed and I still have no clue how to plug in my custom context menu. How do I proceed? Keep hacking away?
Maybe this comes under the "Buffering Uncertainty" scheme of things.. I'll see if I can find something useful in Mike Cohn's book.

I agree with pointernil. The only issue is that your estimates are incorrect. Which is no big drama, unless you've just blown out a 3-million-dollar project, of course :-)
If it happens once, it's a learning experience. If it happens again and the result is better, then you've got another learning experience under your belt. If you are constantly underestimating and your percentages are getting worse, you need to wise up a bit. No methodology will get you out of this.
Spikes just need to be given the time that they need. The one thing I've seen happen repeatedly in my experience is that people expect to be able to nail a technology within a couple of hours, or a day. That just doesn't happen in real life. The simplest issue, even a bug caused by a typo, can have a developer pulling their hair out for huge chunks of time. Be honest about how competent you or your staff really are, and put it in the budget.

Related

Estimating time to complete tasks [closed]

I know this question has been asked several times on here/programmers and it is quite a common, classic question:
Just how do you accurately give estimates for how long a task will take?
The problem I have is for point-and-click tasks in Windows, I can give accurate estimates. For coding something new (as in using APIs I am not familiar with) I cannot estimate an accurate time to save my life. It's a case of thinking of and saying the first number (of days/weeks/whatever) that comes into my head. For code that uses APIs I am familiar with and apps I can instantly say I can develop (e.g. a notepad-type app) I could give an estimate that is accurate.
Any help appreciated.
Thanks
Focus on the pieces. When you try and estimate a task at a high level, not only is it daunting, but you will fail to accurately factor everything that will comprise the total time.
Instead, do not even try to guess at the total (not only is it not useful, but it can actually bias your estimates of individual tasks), but rather sit down and just try to think of all the sub tasks that comprise the task. If those feel too big, break them down into even smaller sub tasks.
Once you've done this, give each of the sub-tasks an estimate. If any of them are larger than about 4 hours, the sub-task probably hasn't been broken down enough. Add all of these sub-estimates up. This is your estimate.
Using this approach forces you to reason out what is actually required to complete the task and will let you produce better estimates.
Make sure you think of the non obvious steps required to complete a task as well. If you're writing a piece of code, did you include time estimates for writing the associated unit tests? For testing the code? For documenting it?
When converting hours to days, use realistic expectations about how much time you actually spend heads down working. The general consensus is that a developer can complete 4-6 hours of work in any given 8 hour work day. This roughly matches my personal experience.
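The bottom-up roll-up described above can be sketched in a few lines; the sub-tasks, hours, and the 5-productive-hours-per-day focus factor below are invented for illustration:

```python
# Bottom-up estimate roll-up (illustrative numbers only).
# Sub-tasks over 4 hours are flagged as candidates for further breakdown;
# the total is converted to work days using ~5 productive hours per
# 8-hour day (the middle of the 4-6 hour consensus mentioned above).
subtasks = {
    "parse input file": 3.0,
    "write unit tests": 2.0,
    "wire up UI form": 4.0,
    "document the API": 1.5,
}

PRODUCTIVE_HOURS_PER_DAY = 5.0

too_big = [name for name, hours in subtasks.items() if hours > 4.0]
total_hours = sum(subtasks.values())
total_days = total_hours / PRODUCTIVE_HOURS_PER_DAY

print(too_big)                  # []
print(total_hours)              # 10.5
print(round(total_days, 1))     # 2.1
```

Note how a naive 8-hours-per-day conversion would have promised the work in 1.3 days instead of 2.1.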
If you have other team members, you can try a technique called Planning Poker. At its simplest, the idea is to get each member of the team to go off and estimate each of the tasks individually. Once that is done, the team gets together and compares estimates, looking for large deviations. This tends to bring to light whether the task wasn't clear enough, whether there are members of the team who possess relevant information that the others don't, or whether different assumptions are being made by different team members.
When doing your estimations, be aware of your assumptions and document them as part of the estimates. Assuming x, y, & z, task q should take n hours. These could be things like assuming the availability of a QA engineer to test the feature, assuming the availability of a development environment to deploy the feature to for testing, assuming the compatibility of two third-party frameworks that haven't been tested together yet, assuming that feature z that the task is dependent on is ready by a certain date... etc. We make tons of these assumptions all the time without being aware of them. Documenting them forces you to be aware of them and allows other parties to validate that your assumptions are correct. And if the estimates do turn out to be wrong, you have much more information to analyze why.
Even following all of this advice, you will still frequently make inaccurate estimates, but don't feel too bad because our brains are hardwired to produce terrible estimates for abstract tasks. We have a multitude of cognitive biases that affect our ability to gauge task size and effort.
I recommend reading this article for even more information and advice:
http://blog.muonlab.com/2012/04/12/why-you-suck-at-estimating-a-lesson-in-psychology/
I've mostly worked on projects with "smaller" durations, but something that has worked well for me is to break apart the task into small enough sub-tasks that I think I could implement each one in about a day. For some items, this means just two or three sub-tasks, for others, this might mean dozens or hundreds. Add in some overhead percentage that gets wasted on non-productive activities, some overhead for exploring wrong directions, and hopefully you'll get a number that's within a reasonable range of the end result.
What I usually do is break the work down into small tasks; the largest task should not exceed 2 days. Estimating small tasks is not that difficult. Each small task has its test time included in the estimate, so I don't end up with a ton of untested code at the end of the project.
Breaking the task into small pieces is only possible if there is a high-level design in place; otherwise the estimate is just a rough guess, which is usually the famous "2 weeks" most developers use ;)
Adding some time at the end of the project to integrate, stabilize, and bug-fix makes it a reasonable estimate.
Similar to what sarnold mentions, breaking apart the task is key...but it may be more difficult to get it down to fine grained 1-day increments if you don't understand the domain/technologies involved. In your case, I would suggest the following (especially if this clearly doesn't look to be a task that is going to take less than a few days to complete):
1) First, ask around your team/colleagues to see if they can shed some light (and/or whether they've used the same API/technologies you're using). If nobody knows anything, you're going to have to own up to the fact that you simply don't have enough data to hazard a reasonable guess, and you will need to spend X days investigating (the amount of time to investigate needs to be aligned with the complexity of the API/domain you are working with).
2) At the end of the time you've allocated to investigate the new technology, you should be able to write a very basic hello-world-ish application that uses the API (it compiles, links, and runs fine). By this time, you should be in a good position to determine whether your task is going to take days, weeks, or months.
3) The key thing to do next is to, as soon as possible, identify any major roadblocks/hiccups that are going to blow away your predictions...this is key. The absolute worst thing you can do is continually go to your manager/customer on the due date and mention you need a significant amount of additional time to deliver. They need the heads up as soon as possible to help remedy the situation and/or come up with a Plan B.
And that's pretty much it...it's not rocket science but you basically provide an estimate once you're in a position to give one and then make sure that you update that estimate based on new, unanticipated events that have a significant impact on your ability to make anticipated dates.

Calculating Project Programming Times [closed]

As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!
Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percent of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8 hour) chunks, taking into account everything from testing to communication to documentation. At the end I add in a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
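As a worked version of that roll-up (the chunk hours are made up; the 20% multiplier is the one described above):

```python
# Sum the sub-8-hour chunks from the spreadsheet, then add a 20%
# multiplier for unexpected items and the free 30-day bug-fix period.
chunks = [6, 4, 8, 3, 5, 7]   # hours per chunk, each <= 8
BUFFER = 0.20

base = sum(chunks)            # 33 hours of estimated work
quoted = base * (1 + BUFFER)  # ~39.6 hours quoted to the client
print(base, round(quoted, 1))
```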
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you could look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn down report and it adapts to your scheduling inaccuracies, so that could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.
There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development-process variables as well. And even if you cope with all that, there is a big chance you will miss something out.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one side of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and the user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leaves (sometimes it is better to wait for the person than to assign the task to someone else), planning sessions, coffee breaks.
Here is an example: it takes only 3 minutes for an egg to fry once you drop it into the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
taking the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in a queue for this frying pan?)
making the fire (do you have an oven? Will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it on the plate (have any ready? clean? wash it? buy it? wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the frying pan dirty, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description on several estimation techniques. It get get you up in couple of hours of reading.
Good estimation comes with experience, or sometimes not at all.
At my current job, my 2 co-workers (who apparently had a lot more experience than me) usually underestimated times by a factor of 8 (yes, EIGHT). I, OTOH, have gone over an original estimate only once in the last 18 months.
Why does it happen? Neither of them actually seemed to know what they were doing, code-wise, so they were literally thumb-sucking their estimates.
Bottom line:
Never underestimate, it is much safer to overestimate. Given the latter you can always 'speed up' development, if needed. If you are already on a tight time-line, there is not much you can do.
If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.
This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution to this yet, but the closest thing to solving it I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically you break your milestones into small tasks and guess how long it will take you to complete them. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into FogBugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption, and uses that information to predict a window (with probabilities) in which you will have fulfilled your next few milestones.
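The idea behind that kind of adaptive prediction can be sketched with a toy Monte Carlo simulation (an illustration of the concept only, not FogBugz's actual algorithm; the history and estimate figures are invented):

```python
import random

random.seed(42)  # reproducible illustration

# Past actual/estimate ratios: 1.0 means you were spot on, 2.0 means
# the task took twice as long as you guessed.
history = [1.0, 1.3, 0.9, 2.0, 1.1, 1.5]
remaining_estimate_hours = 40.0

# Resample your own track record to turn a point estimate into a
# distribution of plausible completion times.
simulated = sorted(
    remaining_estimate_hours * random.choice(history)
    for _ in range(10_000)
)

p50 = simulated[len(simulated) // 2]        # median outcome
p90 = simulated[int(len(simulated) * 0.9)]  # pessimistic outcome
print(p50, p90)
```

The spread between the median and the 90th percentile is exactly the "window with probabilities" mentioned above.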
Overestimation is rather better than underestimation. That's because we don't know the "unknown", and (in most cases) specifications do change during the software development lifecycle.
In my experience, we use iterative steps (just like in agile methodologies) to determine our timeline. We break projects down into components and overestimate on those components. We then take the sum of these estimates and add extra time for regression testing and deployment, and all the good work....
I think that you have to go back from your past projects, and learn from those mistakes to see how you can estimate wisely.

When is the optimization really worth the time spent on it?

After my last question, I want to understand when an optimization is really worth the time a developer spends on it.
Is it worth spending 4 hours to have queries that are 20% quicker? Yes, no, maybe, yes if...?
Is 'wasting' 7 hours to switch a task to another language to save about 40% of the CPU usage 'worth it'?
My normal iteration for a new project is:
Understand what the customer wants, and what he needs;
Plan the project: what languages and where, database design;
Develop the project;
Test and bug-fix;
Final analysis of the running project and final optimizations;
If the project requires it, a further analysis on the real-life usage of the resources followed by further optimization;
"Write good, maintainable code" is implied.
Obviously, the big 'optimization' part happens at point #2, but often when reviewing the code after the project is over I find some sections that, even if they do their job well, could be improved. This is the rationale for point #5.
To give a concrete example of the last point: say I expect 90% of queries to be SELECTs and 10% to be INSERT/UPDATEs, so I index the DB table accordingly. But after 6 months, I see that in real life there are 10% SELECT queries and 90% INSERT/UPDATEs, so the query speed is not optimized. This is the first example that comes to my mind (and obviously this is more a 'patch' for an initial mis-design than an optimization ;).
Please note that I'm a developer, not a business-man - but I like to have a clear conscience by giving my clients the best, where possible.
I mean, I know that if I lose 50 hours to gain 5% of an application's total speed-up, and the application is used by 10 users, then it maybe isn't worth the time... but what about when it is?
When do you think that an optimization is crucial?
What formula do you usually apply, aware that the time spent optimizing (and the final gain) is not always quantifiable on paper?
EDIT: sorry, but I can't accept an answer like 'until people complain about it, optimization is not needed'. That can be a business view (questionable, imho), but not a developer's answer, nor (imho too) a good-sense one. I know this question is really subjective.
I agree with Cheeso: performance optimization should be deferred until after some analysis of the real-life usage and load of the project, but a small'n'quick optimization can be done immediately after the project is over.
Thanks to all ;)
YAGNI. Unless people complain, a lot.
EDIT : I built a library that was slightly slower than the alternatives out there. It was still gaining usage and share because it was nicer to use and more powerful. I continued to invest in features and capability, deferring any work on performance.
At some point, there were enough features and performance bubbled to the top of the list I finally spent some time working on perf improvement, but only after considering the effort for a long time.
I think this is the right way to approach it.
There are (at least) two categories of "efficiency" to mention here:
UI applications (and their dependencies), where the most important measure is the response time to the user.
Batch processing, where the main indicator is total running time.
In the first case, there are well-documented rules about response times. If you care about product quality, you need to keep response times short. The shorter the better, of course, but the breaking points are about:
100 ms for an "immediate" response; animation and other "real-time" activities need to happen at least this fast;
1 second for an "uninterrupted" response. Any more than this and users will be frustrated; you also need to start thinking about showing a progress screen past this point.
10 seconds for retaining user focus. Any worse than this and your users will be pissed off.
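Those breakpoints can be made explicit in a small helper (the category labels are mine; the thresholds are the 100 ms / 1 s / 10 s marks listed above):

```python
def classify_response(seconds: float) -> str:
    """Rough usability category for a UI response time."""
    if seconds <= 0.1:
        return "immediate"                 # fine for animation, real-time feedback
    if seconds <= 1.0:
        return "uninterrupted"             # user keeps their flow of thought
    if seconds <= 10.0:
        return "needs progress indicator"  # show progress or risk frustration
    return "unacceptable"                  # users give up or switch away

print(classify_response(0.05))   # immediate
print(classify_response(0.5))    # uninterrupted
print(classify_response(12.0))   # unacceptable
```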
If you're finding that several operations are taking more than 10 seconds, and you can fix the performance problems with a sane amount of effort (I don't think there's a hard limit but personally I'd say definitely anything under 1 man-month and probably anything under 3-4 months), then you should definitely put the effort into fixing it.
Similarly, if you find yourself creeping past that 1-second threshold, you should be trying very hard to make it faster. At a minimum, compare the time it would take to improve the performance of your app with the time it would take to redo every slow screen with progress dialogs and background threads that the user can cancel - because it is your responsibility as a designer to provide that if the app is too slow.
But don't make a decision purely on that basis - the user experience matters too. If it'll take you 1 week to stick in some async progress dialogs and 3 weeks to get the running times under 1 second, I would still go with the latter. IMO, anything under a man-month is justifiable if the problem is application-wide; if it's just one report that's run relatively infrequently, I'd probably let it go.
If your application is real-time - graphics-related for example - then I would classify it the same way as the 10-second mark for non-realtime apps. That is, you need to make every effort possible to speed it up. Flickering is unacceptable in a game or in an image editor. Stutters and glitches are unacceptable in audio processing. Even for something as basic as text input, a 500 ms delay between the key being pressed and the character appearing is completely unacceptable unless you're connected via remote desktop or something. No amount of effort is too much for fixing these kinds of problems.
Now for the second case, which I think is mostly self-evident. If you're doing batch processing then you generally have a scalability concern. As long as the batch is able to run in the time allotted, you don't need to improve it. But if your data is growing, if the batch is supposed to run overnight and you start to see it creeping into the wee hours of the morning and interrupting people's work at 9:15 AM, then clearly you need to work on performance.
Actually, you really can't wait that long; once it fails to complete in the required time, you may already be in big trouble. You have to actively monitor the situation and maintain some sort of safety margin - say a maximum running time of 5 hours out of the available 6 before you start to worry.
So the answer for batch processes is obvious. You have a hard requirement that the batch must finish within a certain time. Therefore, if you are getting close to the edge, performance must be improved, regardless of how difficult/costly it is. The question then becomes what the most economical means of improving the process is.
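The safety-margin monitoring described above reduces to a one-line check; the 5-hours-out-of-6 margin from the example is the default here:

```python
def batch_at_risk(run_hours: float, window_hours: float,
                  margin: float = 5.0 / 6.0) -> bool:
    """True when a batch run consumes more than `margin` of its window."""
    return run_hours > window_hours * margin

print(batch_at_risk(4.5, 6.0))   # False - still inside the safety margin
print(batch_at_risk(5.2, 6.0))   # True - start working on performance now
```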
If it costs significantly less to just throw some more hardware at the problem (and you know for a fact that the problem really does scale with hardware), then don't spend any time optimizing, just buy new hardware. Otherwise, figure out what combination of design optimization and hardware upgrades is going to get you the best ROI. It's almost purely a cost decision at this point.
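As a rough sketch of that cost decision (all the dollar figures and speedups below are invented, and the hardware option only applies when you know the workload really scales with hardware):

```python
# Compare cost per unit of speedup for the two options.
dev_hours = 80
dev_rate = 100                             # $/hour
optimization_cost = dev_hours * dev_rate   # $8,000 of developer time
optimization_speedup = 1.5

new_server_cost = 3_000
hardware_speedup = 2.0

cost_per_speedup_opt = optimization_cost / (optimization_speedup - 1)
cost_per_speedup_hw = new_server_cost / (hardware_speedup - 1)
print(cost_per_speedup_opt, cost_per_speedup_hw)   # 16000.0 3000.0
```

In this made-up case the hardware wins by a wide margin, which is the point of the paragraph above: at this stage it is almost purely a cost decision.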
That's about all I have to say on the subject. Shame on the people who respond to this with "YAGNI". It's your professional responsibility to know or at least find out whether or not you "need it." Assuming that anything is acceptable until customers complain is an abdication of this responsibility.
Simply because your customers don't demand it doesn't mean you don't need to consider it. Your customers don't demand unit tests, either, or even reasonably good/maintainable code, but you provide those things anyway because it is part of your profession. And at the end of the day, your customers will be a lot happier with a smooth, fast product than with any of those other developer-centric things.
Optimization is worth it when it is necessary.
If we have promised the client response times on holiday package searches that are 5 seconds or less and that the system will run on a single Oracle server (of whatever spec) and the searches are taking 30 seconds at peak load, then the optimization is definitely worth it because we're not going to get paid otherwise.
When you are initially developing a system, you (if you are a good developer) are designing things to be efficient without wasting time on premature optimization. If the resulting system isn't fast enough, you optimize. But your question seems to be suggesting that there's some hand-wavey additional optimization that you might do if you feel that it's worth it. That's not a good way to think about it because it implies that you haven't got a firm target in mind for what is acceptable to begin with. You need to discuss it with the stakeholders and set some kind of target before you start worrying about what kind of optimizations you need to do.
As everyone said in the other answers: when it makes monetary sense to change something, it needs changing. In most cases, good enough wins the day. If the customers aren't complaining, then it is good enough. If they are complaining, then fix it enough so they stop complaining. Agile methodologies will give you some guidance on how to know when enough is enough. Who cares if something is using 40% more CPU than you think it needs to, if it is working and the customers are happy? Then it is good enough. Really simple: get it working and maintainable, and then wait for complaints that probably will never come.
If what you are worried about was really a problem, NO ONE would ever have started using Java to build mission critical server side applications. Or Python or Erlang or anything else that isn't C for that matter. And if they did that, nothing would get done in a time frame to even acquire that first customer that you are so worried about losing. You will know well in advance that you need to change something well before it becomes a problem.
Good posting everyone.
Have you looked at the unnecessary usage of transactions for simple SELECT(s)? I got burned on that one a few times... I also did some code cleanup and found MANY graphs being returned for maybe 10 records needed.... on and on... sometimes it's not YOUR code per se, but someone cutting corners.... Good luck!
If the client doesn't see a need to do performance optimization, then there's no reason to do it.
Defining a measurable performance SLA with the client (e.g., 95% of queries complete in under 2 seconds) near the beginning of the project lets you know whether you're meeting that goal, or whether you have more optimization to do. Performance testing at current and estimated future loads gives you the data you need to see if you're meeting the SLA.
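Checking such an SLA against measured data is then mechanical; the threshold below matches the "95% under 2 seconds" example, and the timings are invented:

```python
def meets_sla(times_s, threshold_s=2.0, required_fraction=0.95):
    """True if at least `required_fraction` of timings beat the threshold."""
    within = sum(1 for t in times_s if t < threshold_s)
    return within / len(times_s) >= required_fraction

measured = [0.4, 1.1, 0.7, 2.5, 0.9, 1.8, 0.6, 1.2, 0.8, 1.0]
print(meets_sla(measured))   # False - only 9 of 10 queries are under 2 s
```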
Optimization is rarely worth it before you know what needs to be optimized. Always remember that if I/O is basically idle and CPU is low, you're not getting anything out of the computer. Obviously you don't want the CPU pegged all the time and you don't want to be running out of I/O bandwidth, but realize that trying to have the computer basically idle all day while it performs intense operations is unrealistic.
Wait until you start to reach a predefined threshold (80% utilization is the mark I usually use, others think that's too high/low) and then optimize at that point if necessary. Keep in mind that the best solution may be scaling up or scaling out and not actually optimizing the software.
Use Amdahl's law. It tells you the overall improvement you get from optimizing a certain part of a system.
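The law itself is a one-liner; here is a small sketch with illustrative numbers:

```python
# Amdahl's law: overall speedup when a fraction p of the runtime
# is sped up by a factor s.  Speedup = 1 / ((1 - p) + p / s)

def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Making a part that accounts for 40% of runtime 10x faster:
print(overall_speedup(0.4, 10))  # ~1.56x overall; the other 60% dominates
```

The point for optimization planning: a heroic 10x speedup of a minor component barely moves the needle, which is why you profile first.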
Also: If it ain't broke, don't fix it.
Optimization is worth the time spent on it when you get good speedups for spending little time in optimizing (obviously). To get that, you need tools/techniques that lead you very quickly to the code whose optimization yields the most benefit.
It is common to think that the way to find that code is by measuring the time spent by functions, but to my mind that only provides clues - you still have to play detective. What takes me straight to the code is stackshots. Here is an example of a 40-times speedup, achieved by finding and fixing several problems. Others on SO have reported speedup factors from 7 to 60, achieved with little effort.*
*(7x: Comment 1. 60x: Comment 30.)

Do you inflate your estimated project completion dates? [closed]

If so why? How much?
I tend to inflate mine a little because I can be overly optimistic.
Hofstadter's Law: Any computing project will take twice as long as you think it will — even when you take into account Hofstadter's Law.
If you inflate your estimate based on past experiences to try and compensate for your inherent optimism, then you aren't inflating. You are trying to provide an accurate estimate. If however you inflate so that you will always have fluff time, that's not so good.
Oh yes, I've learnt to always multiply my initial estimate by two. That's why FogBugz's Evidence-Based Scheduling tool is so useful.
Any organization that asks its programmers to estimate time for coarse-grained features is fundamentally broken.
Steps to unbreak:
Hire technical program managers. Developers can double as these folks if needed.
Put any feature request, change request, or bug into a database immediately when it comes in. (My org uses Trac, which doesn't completely suck.)
Have your PMs break those requests into steps that each take a week or less.
At a weekly meeting, your PMs decide which tickets they want done that week (possibly with input from marketing, etc.). They assign those tickets to developers.
Developers finish as many of their assigned tickets as possible. And/or, they argue with the PMs about tasks they think are longer than a week in duration. Tickets are adjusted, split, reassigned, etc., as necessary.
Code gets written and checked in every week. QA always has something to do. The highest priority changes get done first. Marketing knows exactly what's coming down the pipe, and when. And ultimately:
Your company falls on the right side of the 20% success rate for software projects.
It's not rocket science. The key is step 3. If marketing wants something that seems complicated, your PMs (with developer input) figure out what the first step is that will take less than a week. If the PMs are not technical, all is lost.
Drawbacks to this approach:
When marketing asks, "how long will it take to get [X]?", they don't get an estimate. But we all know, and so do they, that the estimates they got before were pure fiction. At least now they can see proof, every week, that [X] is being worked on.
We, as developers, have fewer options for what we work on each week. This is indubitably true. Two points, though: first, good teams involve the developers in the decisions about what tickets will be assigned. Second, IMO, this actually makes my life better.
Nothing is as disheartening as realizing at the 1-month mark that the 2-month estimate I gave is hopelessly inadequate, but can't be changed, because it's already in the official marketing literature. Either I piss off the higher-ups by changing my estimate, risking a bad review and/or missing my bonus, or I do a lot of unpaid overtime. I've realized that a lot of overtime is not the mark of a bad developer, or the mark of a "passionate" one - it's the product of a toxic culture.
And yeah, a lot of this stuff is covered under (variously) XP, "agile," SCRUM, etc., but it's not really that complicated. You don't need a book or a consultant to do it. You just need the corporate will.
The Scotty Rule:
make your best guess
round up to the nearest whole number
quadruple that (thanks Adam!)
increase to the next higher unit of measure
Example:
you think it will take 3.5 hours
round that to 4 hours
quadruple that to 16 hours
shift it up to 16 days
Ta-daa! You're a miracle worker when you get it done in less than 8 days.
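The Scotty Rule above, as a sketch. The unit ladder (hours to days, etc.) is my own modeling choice; the rule is tongue-in-cheek and so is the code:

```python
# The "Scotty Rule": round up, quadruple, bump to the next unit of measure.
import math

UNITS = ["minutes", "hours", "days", "weeks", "months"]

def scotty_estimate(value, unit):
    value = math.ceil(value)             # round up to the nearest whole number
    value *= 4                           # quadruple it
    unit = UNITS[UNITS.index(unit) + 1]  # shift to the next higher unit
    return value, unit

print(scotty_estimate(3.5, "hours"))  # (16, 'days'), matching the example
```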
Typically yes, but I have two strategies:
Always provide estimates as a range (e.g., 1d-2d) rather than a single number. The difference between the numbers tells the project manager something about your confidence, and allows them to plan better.
Use something like FogBugz's Evidence-Based Scheduling, or a personal spreadsheet, to compare your historical estimates to the time you actually took. That'll give you a better idea than always doubling. Not least because doubling might not be enough!
I'll be able to answer this in 3-6 weeks.
It's not called "inflating" — it's called "making them remotely realistic."
Take whatever estimate you think appropriate. Then double it.
Don't forget that you (an engineer) actually estimate in ideal hours (a scrum term).
While management works in real hours.
The difference being that ideal hours are time without interruption (with a 30-minute warm-up after each interruption). Ideal hours don't include time in meetings, time for lunch, or normal chit-chat.
Take all these into consideration and ideal hours will tend towards real hours.
Example:
Estimated time 40 hours (ideal)
Management will assume that is 1 week real time.
If you convert that 40 hours to real time:
Assume one meeting per day (duration 1 hour)
one break for lunch per day (1 hour)
plus 20% overhead for chit-chat, bathroom breaks, getting coffee, etc.
An 8-hour day is now 5 hours of work time (8 - meeting - lunch - warm-up).
Times 80% efficiency = 4 hours of ideal time per day.
Thus your 40-hour ideal estimate will take 80 hours of real time to finish.
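The conversion above is simple enough to sketch directly; all the parameters are the answer's own assumptions (1-hour meeting, 1-hour lunch, 1-hour warm-up loss, 80% efficiency):

```python
# Convert an "ideal hours" estimate into real calendar hours.

def real_hours(ideal_hours, day_length=8, meetings=1, lunch=1,
               warmup=1, efficiency=0.8):
    work_time = day_length - meetings - lunch - warmup  # 5 focused hours/day
    ideal_per_day = work_time * efficiency              # 4 ideal hours/day
    days_needed = ideal_hours / ideal_per_day
    return days_needed * day_length

print(real_hours(40))  # 40 ideal hours -> 80.0 real hours
```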
Kirk : Mr. Scott, have you always multiplied your repair estimates by a factor of four?
Scotty : Certainly, sir. How else can I keep my reputation as a miracle worker?
A good rule of thumb is estimate how long it will take and add 1/2 again as much time to cover the following problems:
The requirements will change
You will get pulled onto another project for a quick fix
The New guy at the next desk will need help with something
The time needed to refactor parts of the project because you found a better way to do things
<sneaky> Instead of inflating your project's estimate, inflate each task individually. It's harder for your superiors to challenge your estimates this way, because who's going to argue with you over minutes.
</sneaky>
But seriously, through using EBS I found that people are usually much better at estimating small tasks than large ones. If you estimate your project at 4 months, it could very well be 7 months before it's done; or it might not. If your estimate of a task is 35 minutes, on the other hand, it's usually about right.
FogBugz's EBS system shows you a graph of your estimation history, and from my experience (looking at other people's graphs as well) people are indeed much better at estimating short tasks. So my suggestion is to switch from doing voodoo multiplication of your projects as totals, and start breaking them down upfront into lots of very small tasks that you're much better at estimating.
Then multiply the whole thing by 3.14.
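The break-it-down-then-sum approach can be sketched as follows. The task names and minute figures are made up for illustration, and the 3.14 multiplier is the answer's joke, kept as-is:

```python
# Sum many small-task estimates (which people estimate well),
# then apply the tongue-in-cheek pi multiplier from the answer.
import math

tasks_minutes = {
    "design users page": 45,
    "implement users page": 90,
    "design tags page": 35,
    "implement tags page": 80,
    "write tags query": 30,
}

raw_total = sum(tasks_minutes.values())   # 280 minutes of raw estimates
padded_total = raw_total * math.pi        # ~880 minutes after padding
print(round(padded_total / 60, 1), "hours")
```

The serious takeaway is the summing, not the multiplier: the smaller the tasks, the more the individual estimation errors cancel out.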
A lot depends on how detailed you want to get, but additional 'buffer' time should be based on a risk assessment at the task level, where you put in various buffer times for:
High Risk: 50% to 100%
Medium Risk: 25% to 50%
Low Risk: 10% to 25% (all dependent on prior project experience).
Risk areas include:
est. of requirement coverage (#1 risk area is missing components at the design and requirement levels)
knowledge of technology being used
knowledge/confidence in your resources
external factors such as other projects impacting yours, resource changes, etc.
So, for a given task (or group of tasks) covering component A, if the initial estimate is 5 days and it's considered high risk based on requirements coverage, you could add between 50% and 100%.
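A minimal sketch of that risk-banded buffering; I've taken the midpoint of each range above as the multiplier, which is my own simplification:

```python
# Buffer each task's estimate by a percentage based on its risk band.
# Multipliers are the midpoints of the ranges in the answer:
# high 50-100%, medium 25-50%, low 10-25%.
BUFFER = {"high": 0.75, "medium": 0.375, "low": 0.175}

def buffered_estimate(days, risk):
    return days * (1 + BUFFER[risk])

# Component A: 5 days, high risk -> between 7.5 and 10 days; midpoint:
print(buffered_estimate(5, "high"))  # 8.75 days
```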
Six weeks.
Industry standard: every request will take six weeks. Some will be longer, some will be shorter, everything averages out in the end.
Also, if you wait long enough, it no longer becomes an issue. I can't tell you how many times I've gone through that firedrill only to have the project/feature cut.
I wouldn't say I inflate them, so much as I try to set more realistic expectations based on past experience.
You can calculate project durations in two ways. One is to work out all the tasks involved and figure out how long each will take, factoring in delays, meetings, problems, etc. This figure always looks woefully short, which is why people always say things like 'double it'. After some experience delivering projects you'll be able to tell very quickly, just by looking briefly at a spec, how long it will take, and, invariably, it will be double the figure arrived at by the first method...
It's a better idea to add specific buffer time for things like debugging and testing than to just inflate the total time. Also, by taking the time up front to really plan out the pieces of the work, you'll make the estimation itself much easier (and probably the coding, too).
If anything, make a point of recording all of your estimates and comparing them to actual completion time, to get a sense of how much you tend to underestimate and under what conditions. This way you can more accurately "inflate".
I wouldn't say I inflate them but I do like to use a template for all possible tasks that could be involved in the project.
You find that not all tasks in your list are applicable to all projects, but having a list means that I don't let any tasks slip through the cracks with me forgetting to allow some time for them.
As you find new tasks are necessary, add them to your list.
This way you'll have a realistic estimate.
I tend to be optimistic in what's achievable and so I tend to estimate on the low side. But I know that about my self so I tend to add on an extra 15-20%.
I also keep track of my actuals versus my estimates. And make sure the time involved does not include other interruptions, see the accepted answer for my SO question on how to get back in the flow.
HTH
cheers
I wouldn't call additional estimated time on a project "inflated" unless you actually do complete your projects well before your original estimation. If you make a habit of always completing the project well before your original estimated time, then project leaders will get wise and expect it earlier.
What are your estimates based on?
If they're based on nothing but a vague intuition of how much code it would require and how long it would take to write that code, then you'd better pad them a LOT to account for subtasks you didn't think of, communication and synchronization overhead, and unexpected problems. Of course, that kind of estimate is nearly worthless anyway.
OTOH, if your estimates are based on concrete knowledge of how long it took last time to do a task of that scope with the given technology and number of developers, then inflation should not be necessary, since the inflationary factors above should already be included in the past experiences. Of course there will be probably new factors whose influence on the current project you can't foresee - such risks justify a certain amount of additional padding.
This is part of the reason why Agile teams estimate tasks in story points (an arbitrary and relative measurement unit), then as the project progresses track the team's velocity (story points completed per day). With this data you can then theoretically compute your completion date with accuracy.
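A sketch of the velocity projection the paragraph describes, with made-up numbers:

```python
# Project remaining iterations from measured velocity:
# velocity = story points completed per iteration so far.

def projected_iterations_remaining(points_done, iterations_elapsed,
                                   points_remaining):
    velocity = points_done / iterations_elapsed
    return points_remaining / velocity

# 24 points done in 3 iterations (velocity 8), 40 points left:
print(projected_iterations_remaining(24, 3, 40))  # 5.0 more iterations
```

The accuracy depends entirely on the velocity being stable, which is why teams track it over several iterations before trusting the projection.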
I take my worst case scenario, double it, and it's still not enough.
Under-promise, over-deliver.
Oh yes, the general rule from long hard experience is give the project your best estimate for time, double it, and that's about how long it will actually take!
We have to, because our idiot manager always reduces them without any justification whatever. Of course, as soon as he realizes we do this, we're stuck in an arms race...
I fully expect to be the first person to submit a two-year estimate to change the wording of a dialog.
sigh.
As a lot said, it's a delicate balance between experience and risk.
Always start by breaking down the project in manageable pieces, in fact, in pieces you can easily imagine yourself starting and finishing in the same day
When you don't know how to do something (like when it's the first time) the risk goes up
When your risk goes up, that's where you start with your best guess, then double it to cover some of the unexpected, but remember, you are doing that on a small piece of the project, not the whole project itself
The risk goes up also when there's a factor you don't control, like the quality of an input or that library that seems it can do everything you want but that you never tested
Of course, when you gain experience on a specific task (like connecting your models to the database), the risk goes down
Sum everything up to get your subtotal...
Then, on the whole project, always add about another 20-30% (that number will change depending on your company) for all the answers/documents/okays you will be waiting for, meetings we are always forgetting, the changes of idea during the project and so on... that's what we call the human/political factor
And again add another 30-40% that accounts for tests and corrections that goes out of the tests you usually do yourself... such as when you'll first show it to your boss or to the customer
Of course, if you look at all this, it ends up that you can simplify it with the magical "double it" formula, but the difference is that you'll be able to know what you can squeeze into a tight deadline, what you can commit to, which tasks are dangerous, how to build your schedule with the important milestones, and so on.
I'm pretty sure that if you note the time spent on each pure "coding" task and compare it to your estimations in relation to its riskiness, you won't be so far off. The thing is, it's not easy to think of all the small pieces ahead and be realistic (versus optimistic) on what you can do without any hurdle.
I say when I can get it done. I make sure that change requests are followed-up with a new estimation and not the "Yes, I can do that." without mentioning it will take more time. The person requesting the change will not assume it will take longer.
Of course, you'd have be to an idiot not to add 25-50%
The problem is when the idiot next to you keeps coming up with estimates which are 25-50% lower than yours and the PM thinks you are stupid/slow/swinging it.
(Has anyone else noticed project managers never seem to compare estimates with actuals?)
I always double my estimates, for the following reasons:
1) Buffer for Murphy's Law. Something's always gonna go wrong somewhere that you can't account for.
2) Underestimation. Programmers always think things are easy to do. "Oh yeah, it'll take just a few days."
3) Bargaining space. Upper Management always thinks that schedules can be shortened. "Just make the developers work harder!" This allows you to give them what they want. Of course, overuse of this (more than once) will train them to assume you're always overestimating.
Note: It's always best to put buffer at the end of the project schedule, and not for each task. And never tell developers that the buffer exists, otherwise Parkinson's Law (Work expands so as to fill the time available for its completion) will take effect instead. Sometimes I do tell Upper Management that the buffer exists, but obviously I don't give them reason #3 as justification. This, of course depends on how much your boss trusts you to be truthful.

How long does it really take to do something? [closed]

I mean name off a programming project you did and how long it took, please. The boss has never complained but I sometimes feel like things take too long. But this could be because I am impatient as well. Let me know your experiences for comparison.
I've also noticed that things always seem to take longer, sometimes much longer, than originally planned. I don't know why we don't start planning for it but then I think that maybe it's for motivational purposes.
Ryan
It is best to simply time yourself, record your estimates and determine the average percent you're off. Given that, as long as you are consistent, you can appropriately estimate actual times based on when you believed you'd get it done. It's not simply to determine how bad you are at estimating, but rather to take into account the regularity of inevitable distractions (both personal and boss/client-based).
This is based on Joel Spolsky's Evidence Based Scheduling, essential reading, as he explains that the primary other important aspect is breaking your tasks down into bite-sized (16-hour max) tasks, estimating and adding those together to arrive at your final project total.
Gut-based estimates come with experience but you really need to detail out the tasks involved to get something reasonable.
If you have a spec or at least some constraints, you can start creating tasks (design users page, design tags page, implement users page, implement tags page, write tags query, ...).
Once you do this, add it up and double it. If you are going to have to coordinate with others, triple it.
Record your actual time in detail as you go so you can evaluate how accurate you were when the project is complete and hone your estimating skills.
I completely agree with the previous posters... don't forget your team's workload also. Just because you estimated a project would take 3 months, it doesn't mean it'll be done anywhere near that.
I work on a smaller team (5 devs, 1 lead), many of us work on several projects at a time - some big, some small. Depending on the priority of the project, the whims of management and the availability of other teams (if needed), work on a project gets interspersed amongst the others.
So, yes, 3 months worth of work may be dead on, but it might be 3 months worth of work over a 6 month period.
I've done projects between 1 - 6 months on my own, and I always tend to double or quadruple my original estimates.
It's effectively impossible to compare two programming projects, as there are too many factors that mean the metrics from one aren't applicable to another (e.g., specific technologies used, prior experience of the developers, shifting requirements). Unless you are stamping out another system that is almost identical to one you've built previously, your estimates are going to have a low probability of being accurate.
A caveat is when you're building the next revision of an existing system with the same team; the specific experience gained does improve the ability to estimate the next batch of work.
I've seen too many attempts at estimation methodology, and none have worked. They may have a pseudo-scientific allure, but they just don't work in practice.
The only meaningful answer is the relatively short iteration, as advocated by agile advocates: choose a scope of work that can be executed within a short timeframe, deliver it, and then go for the next round. Budgets are then allocated on a short-term basis, with the stakeholders able to evaluate whether their money is being effectively spent. If it's taking too long to get anywhere, they can ditch the project.
Hofstadter's Law:
'It always takes longer than you expect, even when you take Hofstadter's Law into account.'
I believe this is because:
Work expands to fill the time available to do it. No matter how ruthless you are cutting unnecessary features, you would have been more brutal if the deadlines were even tighter.
Unexpected problems occur during the project.
In any case, it's really misleading to compare anecdotes, partly because people have selective memories. If I tell you it once took me two hours to write a fully-optimised quicksort, then maybe I'm forgetting the fact that I knew I'd have that task a week in advance, and had been thinking over ideas. Maybe I'm forgetting that there was a bug in it that I spent another two hours fixing a week later.
I'm almost certainly leaving out all the non-programming work that goes on: meetings, architecture design, consulting others who are stuck on something I happen to know about, admin. So it's unfair on yourself to think of a rate of work that seems plausible in terms of "sitting there coding", and expect that to be sustained all the time. This is the source of a lot of feelings after the fact that you "should have been quicker".
I do projects from 2 weeks to 1 year. Generally my estimates are quite good, a posteriori. At the beginning of the project, though, I generally get bashed because my estimates are considered too large.
This is because I consider a lot of things that people forget:
Time for bug fixing
Time for deployments
Time for management/meetings/interaction
Time to allow requirement owners to change their mind
etc
The trick is to use evidence based scheduling (see Joel on Software).
Thing is, if you plan for a little extra time, you will use it to improve the code base if no problems arise. If problems arise, you are still within the estimates.
I believe Joel has written an article on this. What you can do is ask each developer on the team to lay out his task in detail (all the steps that need to be done) and estimate the time needed for each step. Later, when the project is done, compare the real time to the estimated time, and you'll get the bias for each developer. When a new project is started, ask them to estimate the time again, and multiply that by each developer's bias to get values close to what's really expected.
After a few projects, you should have very good estimates.
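The per-developer bias idea above can be sketched in a few lines; the history pairs are made-up numbers:

```python
# Derive a personal estimation-bias multiplier from past
# (estimated, actual) pairs, then apply it to new estimates.

def bias(history):
    """history: list of (estimated_hours, actual_hours) pairs."""
    total_est = sum(e for e, _ in history)
    total_act = sum(a for _, a in history)
    return total_act / total_est

def corrected_estimate(raw_estimate, history):
    return raw_estimate * bias(history)

past = [(4, 6), (10, 15), (2, 3)]    # this developer runs ~1.5x over
print(corrected_estimate(8, past))   # 12.0 hours
```

This is essentially the core of Evidence-Based Scheduling: you stop asking developers to be accurate and instead measure how inaccurate they consistently are.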

Resources